Global AI governance and emerging regulatory approaches

As governments and international organisations develop AI frameworks, different regulatory approaches are emerging. Our analysis compares key policy models to assess whether global AI governance is becoming fragmented or converging.

Introduction

In recent years, AI governance has become a central focus of digital policy, prompting governments and international organisations to develop regulatory and governance frameworks. These initiatives address issues in AI systems such as:

  • Risk management;
  • Transparency;
  • Safety;
  • Accountability.

Among the most prominent efforts are the European Union’s Artificial Intelligence Act, policy measures introduced by the United States government, regulatory provisions adopted by China, and ongoing discussions within the United Nations system. While these initiatives share a common focus on governing AI technologies, they reflect different legal traditions, policy priorities, and institutional approaches.

European Union and the risk-based framework under the AI Act

The European Union has established a comprehensive legal framework for AI through the Artificial Intelligence Act (Regulation (EU) 2024/1689), which introduces a risk-based approach to regulating AI systems. The regulation distinguishes between different categories of risk, with specific obligations applying depending on the level of potential impact.

In addition to rules for high-risk systems, the Act includes provisions for general-purpose AI models, recognising their role as foundational technologies that can be integrated into a wide range of downstream applications. According to the European Commission, such models are subject to requirements aimed at ensuring that they are ‘safe and trustworthy’, including obligations related to transparency, documentation, and risk management.

To support the implementation of these provisions, the European Commission has adopted guidelines clarifying the scope of obligations for providers of general-purpose AI models, as well as a voluntary Code of Practice outlining measures related to transparency, copyright compliance, and safety and security. These instruments are intended to facilitate compliance with the Act’s requirements, which began to apply in stages from August 2025.

United States: Executive and sectoral approach to AI governance

In the United States, AI governance has developed through a combination of executive actions, agency-led initiatives, and existing sector-specific regulations, rather than a single comprehensive federal law. In October 2023, the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlines priorities related to safety testing, transparency, privacy protection, and the mitigation of risks associated with advanced AI systems.

The Executive Order directs federal agencies to establish standards and guidance within their respective areas of competence, including requirements for developers of certain high-capability models to share safety test results with the government.

In parallel, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, a voluntary tool designed to support organisations in identifying and managing risks associated with AI systems.

Additional measures have been introduced at the agency level, including guidance from the Federal Trade Commission and sector-specific rules addressing the use of AI in areas such as finance and healthcare. This approach reflects the role of existing regulatory bodies in overseeing AI-related risks within their established mandates.

China and regulatory measures on algorithmic and generative AI services

China has introduced a set of regulatory measures governing the development and use of AI, with a focus on algorithmic recommendation systems and generative AI services.

In 2022, the Provisions on the Administration of Algorithmic Recommendations for Internet Information Services, adopted by the Cyberspace Administration of China (CAC) together with other authorities, entered into force. These provisions set requirements related to transparency, user rights, and the management of content generated or distributed by algorithms.

These provisions include obligations for service providers to ensure that algorithmic systems operate in accordance with applicable laws and regulations.

In 2023, the CAC issued the Interim Measures for the Management of Generative Artificial Intelligence Services, which apply to providers offering generative AI services to the public. The measures include requirements related to the accuracy of generated content, the data sources used for training, and the implementation of security assessments prior to public deployment.

According to the regulation, providers are responsible for ensuring that content generated by AI systems complies with existing legal and regulatory frameworks.

These instruments form part of a broader regulatory approach, in which specific AI applications are addressed through targeted measures adopted by competent authorities.

United Nations processes on AI and digital governance

At the multilateral level, the UN has initiated several processes addressing AI within the broader context of digital cooperation and international security.

In 2024, the UN General Assembly adopted the Global Digital Compact, which outlines principles and commitments related to the development and use of digital technologies, including AI, and refers to the need to promote ‘safe, secure and trustworthy’ systems.

In parallel, the UN has established new institutional processes in the area of information and communications technologies (ICTs) in the context of international security.

In 2025, the UN General Assembly endorsed the creation of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs, following the conclusion of the Open-ended Working Group (OEWG) process. The mechanism is designed as a permanent multilateral forum for dialogue among member states, including discussions on threats, norms, the application of international law, confidence-building measures, and capacity development.

The Global Mechanism held its organisational session on 30–31 March 2026, marking the start of its work as a standing UN platform, with regular plenary meetings and dedicated thematic groups planned as part of its structure. While its mandate focuses on ICT security, the mechanism forms part of a broader set of UN processes that address the governance of digital technologies.

In addition, the UN Secretary-General’s High-level Advisory Body on Artificial Intelligence published its final report in 2024, identifying policy options for international AI governance. Discussions linked to the World Summit on the Information Society (WSIS) process and its 20-year review (WSIS+20) continue to address digital governance issues, including emerging technologies.

Together, these initiatives reflect an effort within the UN system to facilitate dialogue, coordination, and institutional continuity in global discussions on digital governance.

Convergence and divergence in AI governance

A comparison of these approaches indicates both areas of alignment and points of divergence in AI governance frameworks. Across jurisdictions, there is a shared emphasis on addressing risks associated with AI, including concerns related to safety, transparency, and accountability.

For example, the European Union’s Artificial Intelligence Act establishes obligations for high-risk systems, while United States policy measures highlight safety testing and risk management, and China’s regulations include requirements related to the operation and oversight of algorithmic and generative AI services.

Similarly, multilateral processes within the United Nations system refer to the importance of ‘safe, secure and trustworthy’ AI and promote international dialogue on governance issues.

At the same time, these frameworks differ in their legal structure and scope.

The European Union has adopted a comprehensive legislative instrument with binding obligations across member states, whereas the United States relies on a combination of executive actions and sector-specific regulation.

China has introduced targeted regulatory measures addressing specific categories of AI applications, particularly algorithmic recommendation systems and generative AI services.

At the multilateral level, UN processes focus on facilitating coordination, dialogue, and the development of shared principles, rather than establishing binding global rules.

These differences illustrate the variety of institutional and regulatory approaches through which AI governance is being developed.

Conclusion

Current developments in AI governance show that multiple regulatory and policy approaches are being developed across jurisdictions and at the international level.

While these frameworks share common elements, including a focus on risk management and the promotion of ‘safe, secure and trustworthy’ AI, they differ in their legal form, scope, and institutional implementation.

Regional and national measures, such as those adopted by the European Union, the United States, and China, coexist with multilateral processes within the United Nations that aim to support dialogue and coordination.

Together, these developments illustrate how AI governance is evolving through a combination of domestic regulation and international cooperation mechanisms.
