Global AI governance and emerging regulatory approaches

Introduction

In recent years, AI governance has become a central focus of digital policy, prompting governments and international organisations to develop regulatory and governance frameworks. These initiatives address issues such as:

  • risk management;
  • transparency;
  • safety;
  • accountability in AI systems.

Among the most prominent efforts are the European Union’s Artificial Intelligence Act, policy measures introduced by the United States government, regulatory provisions adopted by China, and ongoing discussions within the United Nations system. While these initiatives share a common focus on governing AI technologies, they reflect different legal traditions, policy priorities, and institutional approaches.

European Union and the risk-based framework under the AI Act

The European Union has established a comprehensive legal framework for AI through the Artificial Intelligence Act (Regulation (EU) 2024/1689), which introduces a risk-based approach to regulating AI systems. The regulation distinguishes between different categories of risk, with specific obligations applying depending on the level of potential impact.

In addition to rules for high-risk systems, the Act includes provisions for general-purpose AI models, recognising their role as foundational technologies that can be integrated into a wide range of downstream applications. According to the European Commission, such models are subject to requirements aimed at ensuring that they are ‘safe and trustworthy’, including obligations related to transparency, documentation, and risk management.

To support the implementation of these provisions, the European Commission has adopted guidelines clarifying the scope of obligations for providers of general-purpose AI models, as well as a voluntary Code of Practice outlining measures related to transparency, copyright compliance, and safety and security. These instruments are intended to facilitate compliance with the Act’s requirements, which began to apply in stages from August 2025.

United States: Executive and sectoral approach to AI governance

In the United States, AI governance has developed through a combination of executive actions, agency-led initiatives, and existing sector-specific regulations, rather than a single comprehensive federal law. In October 2023, the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlines priorities related to safety testing, transparency, privacy protection, and the mitigation of risks associated with advanced AI systems.

The Executive Order directs federal agencies to establish standards and guidance within their respective areas of competence, including requirements for developers of certain high-capability models to share safety test results with the government.

In parallel, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, a voluntary tool designed to support organisations in identifying and managing risks associated with AI systems.

Additional measures have been introduced at the agency level, including guidance from the Federal Trade Commission and sector-specific rules addressing the use of AI in areas such as finance and healthcare. This approach reflects the role of existing regulatory bodies in overseeing AI-related risks within their established mandates.

China and regulatory measures on algorithmic and generative AI services

China has introduced a set of regulatory measures governing the development and use of AI, with a focus on algorithmic recommendation systems and generative AI services.

In 2022, the Cyberspace Administration of China (CAC), together with other authorities, adopted the Provisions on the Administration of Algorithmic Recommendation for Internet Information Services, which set requirements related to transparency, user rights, and the management of content generated or distributed by algorithms.

These provisions include obligations for service providers to ensure that algorithmic systems operate in accordance with applicable laws and regulations.

In 2023, the CAC issued the Interim Measures for the Management of Generative Artificial Intelligence Services, which apply to providers offering generative AI services to the public. The measures include requirements related to the accuracy of generated content, the data sources used for training, and the implementation of security assessments prior to public deployment.

According to the regulation, providers are responsible for ensuring that content generated by AI systems complies with existing legal and regulatory frameworks.

These instruments form part of a broader regulatory approach, in which specific AI applications are addressed through targeted measures adopted by competent authorities.

United Nations processes on AI and digital governance

At the multilateral level, the UN has initiated several processes addressing AI within the broader context of digital cooperation and international security.

In 2024, the UN General Assembly adopted the Global Digital Compact, which outlines principles and commitments related to the development and use of digital technologies, including AI, and refers to the need to promote ‘safe, secure and trustworthy’ systems.

In parallel, the UN has established new institutional processes in the area of information and communications technologies (ICTs) in the context of international security.

In 2025, the UN General Assembly endorsed the creation of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs, following the conclusion of the Open-ended Working Group (OEWG) process. The mechanism is designed as a permanent multilateral forum for dialogue among member states, including discussions on threats, norms, the application of international law, confidence-building measures, and capacity development.

The Global Mechanism held its organisational session on 30–31 March 2026, marking the start of its work as a standing UN platform, with regular plenary meetings and dedicated thematic groups planned as part of its structure. While its mandate focuses on ICT security, the mechanism forms part of a broader set of UN processes that address the governance of digital technologies.

In addition, the UN Secretary-General’s High-level Advisory Body on Artificial Intelligence published its final report in 2024, identifying policy options for international AI governance. Discussions linked to the World Summit on the Information Society (WSIS) process and its 20-year review (WSIS+20) continue to address digital governance issues, including emerging technologies.

Together, these initiatives reflect an effort within the UN system to facilitate dialogue, coordination, and institutional continuity in global discussions on digital governance.

Convergence and divergence in AI governance

A comparison of these approaches indicates both areas of alignment and points of divergence in AI governance frameworks. Across jurisdictions, there is a shared emphasis on addressing risks associated with AI, including concerns related to safety, transparency, and accountability.

For example, the European Union’s Artificial Intelligence Act establishes obligations for high-risk systems, while United States policy measures highlight safety testing and risk management, and China’s regulations include requirements related to the operation and oversight of algorithmic and generative AI services.

Similarly, multilateral processes within the United Nations system refer to the importance of ‘safe, secure and trustworthy’ AI and promote international dialogue on governance issues.

At the same time, these frameworks differ in their legal structure and scope.

The European Union has adopted a comprehensive legislative instrument with binding obligations across member states, whereas the United States relies on a combination of executive actions and sector-specific regulation.

China has introduced targeted regulatory measures addressing specific categories of AI applications, particularly algorithmic recommendation systems and generative AI services.

At the multilateral level, UN processes focus on facilitating coordination, dialogue, and the development of shared principles, rather than establishing binding global rules.

These differences illustrate the variety of institutional and regulatory approaches through which AI governance is being developed.

Conclusion

Current developments in AI governance show that multiple regulatory and policy approaches are being developed across jurisdictions and at the international level.

While these frameworks share common elements, including a focus on risk management and the promotion of ‘safe, secure and trustworthy’ AI, they differ in their legal form, scope, and institutional implementation.

Regional and national measures, such as those adopted by the European Union, the United States, and China, coexist with multilateral processes within the United Nations that aim to support dialogue and coordination.

Together, these developments illustrate how AI governance is evolving through a combination of domestic regulation and international cooperation mechanisms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

New Chinese rules restrict digital promotion of financial products

China has introduced new online marketing rules for financial products, further tightening its long-standing restrictions on cryptocurrency-related activity. The new framework limits the promotion of financial products to licensed entities and treats digital currency trading and issuance as illegal financial activity.

Issued by the People’s Bank of China and seven other regulators, the Administrative Measures for Online Marketing of Financial Products will take effect on 30 September 2026. The rules extend responsibility to platforms, intermediaries, and content creators who promote or facilitate financial products online.

Any assistance in promoting or facilitating prohibited financial activity may now be treated as participation in illegal finance, expanding enforcement beyond direct trading bans. In practice, that broadens the focus from financial products themselves to the wider digital promotion layer, including online displays, traffic generation, and other forms of internet-based marketing support.

Authorities say the measures are intended to protect consumers by limiting misleading or aggressive online promotion, including livestream marketing and viral investment content. In that sense, the rules are not only about crypto, but about tighter control over how financial products are marketed in digital environments.

The policy also reinforces China’s existing position, dating back to 2021, when regulators declared all cryptocurrency transactions illegal, while pushing enforcement deeper into the digital advertising and distribution layers of financial markets.

Why does it matter?

Stronger oversight of online financial promotion shows that crypto-related advertising is increasingly being treated as a regulatory risk category, not just a marketing issue. The Chinese move also points to a broader trend in which regulators are extending scrutiny beyond financial products themselves to the digital channels, influencers, and platforms that help distribute them.

China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

China sets standards for AI ethics review and algorithm accountability

The introduction of new AI ethics guidelines by China signals a structured attempt to formalise governance frameworks for rapidly expanding AI systems.

Coordinated by the Ministry of Industry and Information Technology of the People’s Republic of China and multiple state bodies, the policy integrates ethical oversight directly into technological development processes.

A central feature of the framework is the emphasis on operationalising ethical principles such as fairness, accountability, and human well-being through technical review mechanisms.

By focusing on data selection, algorithmic design, and system architecture, the guidelines move towards embedding ethical safeguards at the development stage and protecting intellectual property rights in AI ethics review technologies.

Such an approach reflects a broader shift towards anticipatory governance, where risks such as bias, discrimination, and algorithmic manipulation are addressed before deployment.

The policy also highlights the role of infrastructure in ethical governance, including the development of auditing tools, risk assessment systems, and curated datasets.

Scenario-based evaluation mechanisms indicate an effort to tailor oversight to specific use cases, recognising that AI risks vary significantly across sectors. Instead of relying solely on static compliance rules, the framework promotes adaptive governance aligned with technological complexity.

Ultimately, the outcome is a governance model that seeks to maintain technological competitiveness while addressing societal risks, contributing to wider global debates on how states can regulate AI systems without constraining their development.

China guidelines reshape e-commerce growth and digital trade strategy

New guidelines issued by China reflect an effort to reposition e-commerce as a structural driver of economic development rather than a purely consumer-facing sector.

Coordinated by the Ministry of Commerce of the People’s Republic of China, the policy links digital expansion with broader industrial strategy, aiming to integrate online platforms more deeply into manufacturing, supply chains, and regional economies.

A central policy objective is to extend the benefits of digital commerce to small and medium-sized enterprises and rural regions, where barriers to market access have historically limited growth.

By promoting industrial digitalisation and technological innovation, China seeks to enhance productivity and improve the quality of consumption, while reducing structural inequalities between urban and rural economies.

Instead of focusing solely on platform growth, the approach prioritises systemic economic transformation.

Internationally, China’s framework emphasises cross-border e-commerce and closer alignment with global digital trade rules, signalling an intention to expand participation in global markets while shaping emerging regulatory standards.

Initiatives linked to transnational digital trade corridors further indicate an effort to combine economic openness with strategic influence in rule-setting processes.

Regulatory measures form a parallel pillar of the policy, with clearer platform responsibilities and stronger oversight intended to balance innovation with accountability.

Combined with investments in data utilisation, financial support, and workforce development, the guidelines illustrate a governance model where the state actively structures digital markets to serve long-term economic and policy objectives.

China advances new power grid strategy to support clean energy transition

Chinese Premier Li Qiang has called for accelerated development of a new-type power grid, positioning energy infrastructure reform as central to China’s long-term economic and environmental strategy.

Instead of incremental upgrades, the approach emphasises systemic transformation, linking energy security with decarbonisation and industrial modernisation.

Policy direction highlights the optimisation of the national energy structure through expanded deployment of renewable technologies, particularly solar power.

Continued investment in research and development is framed as essential for overcoming technical constraints and enabling large-scale adoption. The integration of AI into manufacturing and energy systems reflects a broader push towards industrial upgrading and efficiency gains.

The proposed power grid model prioritises resilience, flexibility, and low-carbon performance, indicating a shift towards more adaptive and digitally enabled infrastructure.

Such reforms in China aim to balance rising energy demand with sustainability goals, while reducing dependence on traditional energy sources. The emphasis on smart systems suggests increasing reliance on data-driven governance within the energy sector.

Beyond energy, the policy narrative connects infrastructure development with water management and agricultural modernisation, reinforcing a whole-of-system governance approach.

Long-term impact will depend on implementation capacity, regulatory coordination, and the ability to align technological deployment with environmental and economic objectives instead of isolated sectoral reforms.

EU delegation in China calls for sustainable e-commerce and safety standards

Members of the European Parliament (MEPs) completed a visit to Beijing and Shanghai to address pressing e-commerce challenges affecting the European single market.

The delegation studied local business models and market supervision frameworks, engaging with Chinese regulators, e-commerce platforms, and EU company representatives.

The discussions highlighted the surge of parcels from China, which now account for 91% of small shipments to Europe, and the resulting pressures on fair competition.

MEPs stressed that regulatory compliance must be consistent across all operators, ensuring consumer protection is not compromised by disparities in market practices or enforcement gaps.

The delegation urged representatives of e-commerce platforms to implement preventive measures, reinforcing accountability in areas such as product safety, customs compliance, and the removal of unsafe goods from the market.

MEPs underscored that these standards are essential to maintaining a sustainable and secure e-commerce environment for European citizens.

The visit, the first in eight years, demonstrated the EU’s commitment to safeguarding consumer rights, strengthening international cooperation, and ensuring digital commerce evolves in a manner that is fair, transparent, and safe for all citizens.

World Data Organisation launches in Beijing to advance global data governance

The World Data Organisation (WDO) was formally established in Beijing on 30 March 2026, as the first professional international body focused on global data development and governance. The organisation aims to operate as a non-governmental, non-profit platform for dialogue, rule-making, and international collaboration.

The WDO has three stated goals: bridging the data divide, unlocking data’s value, and powering the digital economy. These priorities are intended to reduce disparities in digital capacity between developed and developing countries.

Global data use has become central to addressing challenges such as poverty reduction, public health, climate change, and AI development. Disparities persist, with digitally deliverable services accounting for over 60% of service exports in advanced economies but only 15% in least developed countries.

China’s digital infrastructure has advanced rapidly, with 4.8 million 5G base stations built by the end of 2025, and computing power ranked second globally. Officials said platforms such as the WDO and the UN will help shape international data governance, promote cooperation, and support secure cross-border data flows.

The WDO seeks to safeguard countries’ rights to develop data while respecting privacy, security, and enterprise interests. By 2030, it is expected to become a globally influential platform and a trusted hub in international data governance.

New China rules broaden 2026 agricultural census and tighten data controls

China has revised its regulation on the national agricultural census ahead of the country’s fourth such survey, with the updated rules due to take effect on 1 May 2026. According to the reported summary, Premier Li Qiang signed a State Council decree publishing the revised regulation.

The changes expand the scope of the agricultural census to include rural industrial development and village construction, alongside more traditional measures of agricultural activity. New data-collection methods, including remote sensing, have also been added to the framework.

Stronger data-quality controls form another part of the revision. The updated regulation introduces a post-census spot-check system and sets out confidentiality obligations for census personnel involved in the process.

Penalties for data falsification have also been tightened. The revised rules say people found to have fabricated or manipulated statistics may face heavier sanctions, including higher fines and possible criminal prosecution.

The fourth national agricultural census aims to provide an updated picture of agricultural development, rural construction, farmers’ living standards, and the outcomes of rural reform in China. Areas listed for coverage include agricultural production conditions, grain output, new quality productive forces in agriculture, rural development, and the living conditions of rural residents.

VTC expands AI training across all programmes in Hong Kong

The Vocational Training Council (VTC) has introduced an ‘AI for All’ strategy to integrate AI training across its programmes, aiming to support Hong Kong’s ambition to strengthen its innovation and technology sector.

The initiative aligns with broader policy priorities, including the ‘AI Plus’ approach outlined in national planning frameworks and Hong Kong’s budget, which emphasise integrating AI across industries while addressing a shortage of skilled professionals.

Under the ‘AI+Professional’ model, all Higher Diploma students are required to study IT modules covering prompt engineering, generative AI, and AI ethics and security, with training adapted to disciplines such as engineering, design, and information technology.

The council has also partnered with technology companies through memorandums of understanding. It provides ongoing training for employees in government and industry, while offering internal AI tools and a ‘Virtual Tutor’ platform to support teaching and learning.
