The United Nations calls for urgent regulation of military AI
The US is increasing export controls on advanced AI chips to limit China’s technological and military progress. In response, China has strongly condemned these actions as ‘bullying’ and protectionism.

The UN and global experts have emphasised the urgent need for comprehensive regulation of AI in military applications. The UN Secretary-General has called for ‘global guardrails’ to govern the use of autonomous weapons, warning that rapid technological development has outpaced current policies.
Recently, 96 countries met at the UN to discuss AI-powered weapons, expanding the conversation beyond arms control to include human rights, criminal law, and ethics, with a push for legally binding agreements by 2026. Unregulated military AI poses serious risks, such as exposure to cyberattacks and deepening geopolitical divides, as some countries fear losing a strategic advantage to rivals.
However, if properly regulated, AI could reduce violence by enabling less-lethal actions and helping leaders choose non-violent solutions, potentially lowering the human cost of conflict. To address ethical challenges, institutions like Texas A&M University are creating nonprofits that work with academia, industry, and defence sectors to develop responsible AI frameworks.
These efforts aim to promote AI applications that prioritise peace and minimise harm, shifting the focus from offensive weapons toward peaceful conflict resolution. Finally, the UN Secretary-General warned against a future divided into AI ‘haves’ and ‘have-nots’.
He stressed the importance of using AI to bridge global development gaps and promote sustainable progress rather than deepen inequalities, emphasising international cooperation to guide AI toward inclusive growth and peace.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!