Over 200 scientists, tech executives, and Nobel laureates issued an urgent call for international AI safeguards as world leaders convened at the United Nations General Assembly this week. The coalition, which includes experts from Anthropic, Google DeepMind, Microsoft, and OpenAI, warned that unchecked artificial intelligence development threatens humanity's future unless governments act before next year's General Assembly.
The Call for Boundaries
Signatories proposed establishing global prohibitions on AI applications deemed catastrophic, including:
- Autonomous weapons systems controlling nuclear arsenals
- Lethal AI-powered military technology
- Mass surveillance networks and social scoring
- Sophisticated cyberattack tools
- Human impersonation systems
A Narrowing Window
The letter emphasizes that AI's rapid evolution could soon outpace human control, risking engineered pandemics, systemic disinformation, and human rights violations. "Many experts warn meaningful intervention may become impossible within years," the document states, urging binding agreements before next year's UNGA summit.
Balancing Progress and Protection
While acknowledging AI's potential to improve healthcare, education, and sustainability, signatories stress that current development trajectories prioritize capability over safety. The campaign aligns with growing UN efforts to establish tech governance frameworks amid rising geopolitical tensions over emerging technologies.
Reference(s):
"Scientists urge global AI 'red lines' as leaders gather at UN," cgtn.com