World Leaders and AI Pioneers Urge UN to Establish Binding Global AI Safeguards by 2026

Update: 2025-09-23 12:54 IST

The opening of the United Nations General Assembly’s High-Level Week in New York has taken a dramatic turn with a coalition of more than 200 global figures — including Nobel laureates, former world leaders, and leading AI scientists — calling for urgent international action to regulate artificial intelligence.

The group issued a joint declaration titled the ‘Global Call for AI Red Lines,’ unveiled by Nobel Peace Prize winner Maria Ressa. The statement warns that AI’s rapid development is exposing societies to “unprecedented dangers” and demands that governments agree on enforceable rules before the end of 2026. The declaration argues that without binding commitments, AI could reshape human society in ways that threaten stability, rights, and even survival.

This marks the first time Nobel Prize winners from multiple disciplines have united on the issue of AI governance. Among the signatories are biochemist Jennifer Doudna, economist Daron Acemoglu, physicist Giorgio Parisi, and AI pioneers Geoffrey Hinton and Yoshua Bengio — two of the most influential researchers behind modern machine learning. Civil society groups have also rallied behind the appeal, with support from over 60 organizations including the UK-based think tank Demos and the Beijing Institute for AI Safety and Governance.

Renowned author Yuval Noah Harari, who co-signed the letter, emphasized the urgency: “For thousands of years, humans have learned, sometimes the hard way, that powerful technologies can have dangerous as well as beneficial consequences. Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”

Concerns about AI misuse have grown steadily in recent years, fueled by real-world harms such as mass surveillance, disinformation campaigns, and reports linking AI to tragic personal outcomes, including a teenager's suicide. Experts warn that the next wave of risks could be even more catastrophic, ranging from massive job displacement and engineered pandemics to systemic human rights abuses.

Political voices have also joined the movement. Former Irish president Mary Robinson and Colombia’s Nobel Peace Prize-winning ex-president Juan Manuel Santos are among the notable leaders supporting the initiative. The campaign is being coordinated by the University of California, Berkeley’s Center for Human-Compatible AI, The Future Society, and France’s Center for AI Safety.

While the declaration does not propose specific legislative frameworks, it identifies urgent areas where prohibitions could be essential. These include outlawing lethal autonomous weapons, preventing AI systems from self-replicating, and banning their role in nuclear command and warfare. Ahmet Üzümcü, former head of the Organisation for the Prohibition of Chemical Weapons, underscored the stakes: “It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.”

The signatories point to past successes in global cooperation — such as international treaties banning biological weapons and agreements to phase out ozone-depleting chemicals — as evidence that enforceable global rules for AI are achievable. They caution, however, that voluntary commitments from AI companies are not sufficient, noting that many corporate pledges remain unfulfilled in practice.

Warnings about AI’s existential risks are not new. In 2023, tech leaders including Elon Musk called for a temporary pause on advanced AI development, while industry statements compared the dangers of uncontrolled AI to those of nuclear conflict and global pandemics. Although some current AI executives, including Sam Altman (OpenAI), Dario Amodei (Anthropic), and Demis Hassabis (Google DeepMind), are not among the signatories this time, other senior figures such as OpenAI co-founder Wojciech Zaremba and former DeepMind scientist Ian Goodfellow have added their voices to the appeal.

The coalition’s message is clear: without immediate, enforceable global red lines, AI could cross thresholds that humanity cannot afford to ignore.
