The United Nations has created a 40-member scientific panel to study the risks and impacts of artificial intelligence (AI), marking what Secretary-General António Guterres calls a “foundational step toward global scientific understanding of AI.” The move comes despite opposition from the United States and amid growing concern among AI researchers about the technology’s rapid development.
A Worldwide Effort to Monitor AI
The Independent International Scientific Panel on Artificial Intelligence will produce annual reports analyzing AI’s risks, opportunities, and societal effects. The UN General Assembly approved the panel with a vote of 117-2; the U.S. and Paraguay voted against it, while Tunisia and Ukraine abstained. Nations including Russia, China, and European countries supported the initiative.
Panel members were selected from over 2,600 candidates through an independent review involving multiple UN bodies and the International Telecommunication Union. Each will serve a three-year term. Europe holds 12 seats, with representatives from France, Germany, Italy, Spain, Poland, Belgium, Finland, Austria, Latvia, Turkey, and Russia.
Industry Experts Sound the Alarm
The panel’s creation follows a string of warnings from AI experts. Former Anthropic researcher Mrinank Sharma wrote in an open letter that “the world is in peril” due to AI development. Zoe Hitzig, former lead researcher at OpenAI, voiced “deep reservations” about the company’s strategy. High-profile figures such as Dario Amodei, Sam Altman, and Steve Wozniak have also highlighted the potential dangers of unchecked AI development.
U.S. Raises Concerns About UN Authority
The United States criticized the panel, with its representative Lauren Lovelace calling it “a significant overreach of the UN’s mandate and competence” and insisting that “AI governance is not a matter for the UN to dictate.” UN officials maintain that the panel is intended to provide independent scientific guidance rather than enforce global regulations, giving all member states a voice in understanding and managing AI risks.