AI leaders call for a global pause in superintelligence development
AI pioneers and tech leaders call for a halt to superintelligence development, warning of risks ranging from disempowerment to possible human extinction.

More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio, have signed a joint statement urging a global halt to the development of artificial superintelligence.
The open letter warns that unchecked progress could lead to human economic displacement, loss of freedom, and even extinction.
The appeal follows growing anxiety that the rush toward machines surpassing human cognition could spiral beyond human control. Alan Turing predicted as early as the 1950s that machines might eventually dominate by default, a view that continues to resonate among AI researchers today.
Despite such fears, global powers still view the AI race as essential for national security and technological advancement.
Tech firms like Meta have also embraced the superintelligence label to promote their most ambitious models, even as leaders such as OpenAI’s Sam Altman and Microsoft’s Mustafa Suleyman have previously acknowledged the existential risks of developing systems beyond human understanding.
The statement calls for an international prohibition on superintelligence development until there is broad scientific consensus on safety and strong public approval.
Its signatories include technologists, academics, religious figures, and cultural personalities, reflecting a rare cross-sector demand for restraint in an era defined by rapid automation.