AI governance priorities outlined by EU at UN dialogue

The EU said AI governance must prepare institutions for agentic AI, surveillance risks, and harmful AI-generated content.

The European Union has called for the UN Global Dialogue on AI Governance to focus on responsible innovation, human rights, capacity-building and stronger interoperability between AI governance frameworks.

In a statement delivered on behalf of the EU and its member states, the bloc said the dialogue should examine AI’s social, economic, ethical, cultural, linguistic, technical and environmental implications. It also argued that responsible AI innovation should be framed not only as a risk-management challenge, but also as an opportunity for public benefit in areas such as education and government.

The EU urged participants to address who controls the data, compute and value chains behind AI systems. It also highlighted linguistic and cultural diversity, warning that AI systems trained mainly on a limited number of languages can produce less accurate and more costly outputs for speakers of underrepresented languages.

Capacity-building was presented as a core condition for effective AI governance, particularly for developing countries. The EU said countries and institutions need the skills, systems and human capacity to evaluate, question and deploy AI responsibly, and that AI infrastructure should be treated as a matter of public interest rather than solely one of market access or proprietary control.

The statement also identified agentic AI as an emerging governance frontier, arguing that such systems raise new questions around accountability, oversight and control that existing frameworks do not yet adequately address.

On safe and trustworthy AI, the EU called for greater compatibility between governance approaches to prevent regulatory arbitrage and support responsible cross-border deployment. It said trust should not rely only on self-assessment or voluntary disclosure, but also on auditability, traceability, validation mechanisms, certification approaches and evaluation frameworks for high-risk systems.

The EU also urged a human-centric, human rights-based approach grounded in international law. It identified AI-facilitated gender-based violence, harmful AI-generated content affecting children and older persons, manipulative algorithmic systems, data exploitation and AI-enabled surveillance as areas requiring dedicated attention.

The statement called for the UN dialogue to build on existing initiatives, including those led by UNESCO, ITU, UNDP, OHCHR, GPAI, the Council of Europe, the Hiroshima Process and AI summit processes. The EU also supported more interactive thematic sessions, continuity between dialogue editions and a co-chairs’ summary reflecting both converging and diverging views.

Why does it matter?

The EU statement shows how global AI governance debates are moving beyond broad principles towards questions of implementation, institutional capacity and interoperability between frameworks. By linking AI infrastructure, human rights, auditability and agentic AI, the EU is signalling that future international coordination will need to address both today’s deployment risks and the governance challenges posed by more autonomous systems.