Global AI governance and emerging regulatory approaches

Introduction

In recent years, AI governance has become a central focus of digital policy, prompting governments and international organisations to develop regulatory and governance frameworks. These initiatives address issues such as:

  • Risk management;
  • Transparency;
  • Safety;
  • Accountability in AI systems.

Among the most prominent efforts are the European Union’s Artificial Intelligence Act, policy measures introduced by the United States government, regulatory provisions adopted by China, and ongoing discussions within the United Nations system. While these initiatives share a common focus on governing AI technologies, they reflect different legal traditions, policy priorities, and institutional approaches.

European Union and the risk-based framework under the AI Act

The European Union has established a comprehensive legal framework for AI through the Artificial Intelligence Act (Regulation (EU) 2024/1689), which introduces a risk-based approach to regulating AI systems. The regulation distinguishes between different categories of risk, with specific obligations applying depending on the level of potential impact.
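
The Act's tiered logic can be illustrated with a short sketch. The mapping below is a simplified, hypothetical lookup for illustration only; actual classification follows the Act's definitions and annexes, and the example use cases and obligation summaries are indicative rather than exhaustive.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative summary of the AI Act's four risk categories."""
    UNACCEPTABLE = "prohibited practices"
    HIGH = "strict obligations (conformity assessment, documentation)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical mapping from example use cases to tiers; real classification
# is governed by the Act's annexes, not a simple lookup table like this.
EXAMPLE_USES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Unlisted uses default to the lowest tier in this sketch.
    return EXAMPLE_USES.get(use_case, RiskTier.MINIMAL)
```

The point of the tiered design is that obligations scale with potential impact: the same regulation covers both prohibited practices and systems with no specific requirements.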

In addition to rules for high-risk systems, the Act includes provisions for general-purpose AI models, recognising their role as foundational technologies that can be integrated into a wide range of downstream applications. According to the European Commission, such models are subject to requirements aimed at ensuring that they are ‘safe and trustworthy’, including obligations related to transparency, documentation, and risk management.


To support the implementation of these provisions, the European Commission has adopted guidelines clarifying the scope of obligations for providers of general-purpose AI models, as well as a voluntary Code of Practice outlining measures related to transparency, copyright compliance, and safety and security. These instruments are intended to facilitate compliance with the Act’s requirements, which began to apply in stages from August 2025.

United States: Executive and sectoral approach to AI governance

In the United States, AI governance has developed through a combination of executive actions, agency-led initiatives, and existing sector-specific regulations, rather than a single comprehensive federal law. In October 2023, the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlines priorities related to safety testing, transparency, privacy protection, and the mitigation of risks associated with advanced AI systems.

The Executive Order directs federal agencies to establish standards and guidance within their respective areas of competence, including requirements for developers of certain high-capability models to share safety test results with the government.


In parallel, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, a voluntary tool designed to support organisations in identifying and managing risks associated with AI systems.

Additional measures have been introduced at the agency level, including guidance from the Federal Trade Commission and sector-specific rules addressing the use of AI in areas such as finance and healthcare. This approach reflects the role of existing regulatory bodies in overseeing AI-related risks within their established mandates.

China and regulatory measures on algorithmic and generative AI services

China has introduced a set of regulatory measures governing the development and use of AI, with a focus on algorithmic recommendation systems and generative AI services.

In 2022, the Cyberspace Administration of China (CAC), together with other authorities, adopted the Provisions on the Administration of Algorithmic Recommendation for Internet Information Services, which set requirements related to transparency, user rights, and the management of content generated or distributed by algorithms.

These provisions include obligations for service providers to ensure that algorithmic systems operate in accordance with applicable laws and regulations.


In 2023, the CAC issued the Interim Measures for the Management of Generative Artificial Intelligence Services, which apply to providers offering generative AI services to the public. The measures include requirements related to the accuracy of generated content, the data sources used for training, and the implementation of security assessments prior to public deployment.

According to the regulation, providers are responsible for ensuring that content generated by AI systems complies with existing legal and regulatory frameworks.

These instruments form part of a broader regulatory approach, in which specific AI applications are addressed through targeted measures adopted by competent authorities.

United Nations processes on AI and digital governance

At the multilateral level, the UN has initiated several processes addressing AI within the broader context of digital cooperation and international security.

In 2024, the UN General Assembly adopted the Global Digital Compact, which outlines principles and commitments related to the development and use of digital technologies, including AI, and refers to the need to promote ‘safe, secure and trustworthy’ systems.

In parallel, the UN has established new institutional processes in the area of information and communications technologies (ICTs) in the context of international security.

In 2025, the UN General Assembly endorsed the creation of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs, following the conclusion of the Open-ended Working Group (OEWG) process. The mechanism is designed as a permanent multilateral forum for dialogue among member states, including discussions on threats, norms, the application of international law, confidence-building measures, and capacity development.


The Global Mechanism held its organisational session on 30–31 March 2026, marking the start of its work as a standing UN platform, with regular plenary meetings and dedicated thematic groups planned as part of its structure. While its mandate focuses on ICT security, the mechanism forms part of a broader set of UN processes that address the governance of digital technologies.

In addition, the UN Secretary-General’s High-level Advisory Body on Artificial Intelligence published its final report in 2024, identifying policy options for international AI governance. Discussions linked to the World Summit on the Information Society (WSIS) process and its 20-year review (WSIS+20) continue to address digital governance issues, including emerging technologies.

Together, these initiatives reflect an effort within the UN system to facilitate dialogue, coordination, and institutional continuity in global discussions on digital governance.

Convergence and divergence in AI governance

A comparison of these approaches indicates both areas of alignment and points of divergence in AI governance frameworks. Across jurisdictions, there is a shared emphasis on addressing risks associated with AI, including concerns related to safety, transparency, and accountability.

For example, the European Union’s Artificial Intelligence Act establishes obligations for high-risk systems, while United States policy measures highlight safety testing and risk management, and China’s regulations include requirements related to the operation and oversight of algorithmic and generative AI services.

Similarly, multilateral processes within the United Nations system refer to the importance of ‘safe, secure and trustworthy’ AI and promote international dialogue on governance issues.

At the same time, these frameworks differ in their legal structure and scope.


The European Union has adopted a comprehensive legislative instrument with binding obligations across member states, whereas the United States relies on a combination of executive actions and sector-specific regulation.

China has introduced targeted regulatory measures addressing specific categories of AI applications, particularly algorithmic recommendation systems and generative AI services.

At the multilateral level, UN processes focus on facilitating coordination, dialogue, and the development of shared principles, rather than establishing binding global rules.

These differences illustrate the variety of institutional and regulatory approaches through which AI governance is being developed.

Conclusion

Current developments in AI governance show that multiple regulatory and policy approaches are being developed across jurisdictions and at the international level.

While these frameworks share common elements, including a focus on risk management and the promotion of ‘safe, secure and trustworthy’ AI, they differ in their legal form, scope, and institutional implementation.

Regional and national measures, such as those adopted by the European Union, the United States, and China, coexist with multilateral processes within the United Nations that aim to support dialogue and coordination.

Together, these developments illustrate how AI governance is evolving through a combination of domestic regulation and international cooperation mechanisms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes Android changes to open AI competition

The European Commission has outlined draft measures requiring Google to improve interoperability on Android as part of ongoing proceedings under the Digital Markets Act. Regulators are focusing on how third-party AI services can interact with hardware and software features controlled by the Android operating system.

The proposed measures are intended to give competing AI services access to key Android features already used by Google’s own AI services, including Gemini. In practice, that could allow rival services to support actions such as sending messages, sharing content, or completing tasks through user-preferred applications rather than being limited by Google’s default ecosystem.

The Commission’s approach could also make it easier for users to activate alternative AI assistants through customised interactions and device-level features, reducing dependence on default system tools. The broader aim is to give third-party providers a more equal opportunity to innovate and compete in the fast-moving market for AI services on mobile devices.

Feedback on the proposed measures is being gathered as part of the Commission’s specification proceedings under the DMA. The consultation forms part of a wider regulatory effort to enforce fair access to core platform features and strengthen digital competition across European markets, including in the AI sector.

Why does it matter?

The move targets one of the most important control points in the digital economy: the operating system layer. Opening Android features to competing AI services could reduce the structural advantage held by Google and shift power towards a more competitive, multi-provider mobile ecosystem. That reading is an inference from the Commission's stated objective of giving third-party AI services access equivalent to that available to Google's own AI tools.

Greater interoperability under the Digital Markets Act could reshape how AI reaches users, turning smartphones into more open platforms rather than tightly controlled default environments. At the same time, the case also shows how strongly the EU is trying to apply competition law to the next phase of AI distribution, not only to search, app stores, and browsers.


Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would be required, from 2027, to block access for users under 15 using age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.
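
The data-minimisation idea behind such tools can be sketched in a few lines. The function name and interface below are hypothetical; a real system like Kids Wallet relies on certified attestations rather than raw dates, but the principle is the same: the platform learns only a yes/no answer, never the birth date.

```python
from datetime import date

def age_eligibility(birth_date: date, threshold: int, today: date) -> bool:
    """Return only a yes/no eligibility signal, never the birth date itself.

    Hypothetical sketch of privacy-preserving age verification: the
    verifier computes eligibility locally and discloses a single boolean.
    """
    years = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return years >= threshold

# The platform receives True or False only; the birth date stays on-device.
```

The design choice matters because sharing a full birth date would let platforms profile users by age, whereas a bare eligibility flag reveals nothing beyond what the rule requires.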

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.


UNIDIR highlights the security implications of the shift from classical to quantum technologies

The United Nations Institute for Disarmament Research (UNIDIR) has outlined the evolution of digital technologies from early internet systems to emerging quantum capabilities, highlighting their growing impact on global systems and security.

In its analysis, UNIDIR traces the progression from dial-up connectivity and classical computing to advanced technologies such as AI and quantum computing, noting that innovation cycles are accelerating and becoming increasingly interconnected. The organisation states that the transition to quantum technologies represents a significant shift in how data is processed, stored and secured.

Unlike classical systems, quantum computing introduces new capabilities that could transform fields ranging from scientific research to communications.

However, UNIDIR warns that these advances also present risks, particularly in cybersecurity. Quantum technologies could challenge existing encryption methods and expose vulnerabilities in digital infrastructure, with implications for governments, businesses and critical systems.

The analysis also links emerging technologies to broader geopolitical dynamics, noting that competition over technological leadership is becoming a key factor in international security. As digital and physical systems converge, technological developments are increasingly shaping strategic stability.

Why does it matter?

UNIDIR emphasises the need for forward-looking governance, international cooperation and policy coordination to manage these challenges. It calls for stronger dialogue among states and stakeholders to ensure that technological progress supports global security rather than undermines it.


Digital euro standards advance with European Central Bank support

The European Central Bank has signed agreements with the European Card Payment Cooperation, nexo standards, and the Berlin Group to support the future rollout of digital euro payments. Existing open technical standards will be reused to process transactions, making implementation more accessible for payment service providers and merchants across Europe.

CPACE, a specification maintained by the European Card Payment Cooperation, supports contactless payments; nexo standards help connect merchants with providers; and the Berlin Group supports account-based transactions using identifiers such as mobile numbers. Together, these standards are intended to create a more consistent technical environment for digital euro transactions across devices and platforms.

Reliance on open standards is designed to reduce costs and limit dependence on proprietary systems controlled by global card schemes and digital wallets. The ECB says this should help European payment providers expand beyond domestic markets without requiring major upgrades to point-of-sale infrastructure, while also improving interoperability and competition.

The final impact still depends on the adoption of the digital euro regulation by the EU co-legislators, which the ECB says is necessary to unlock the initiative’s full potential and provide market actors with greater certainty for future investment.

Why does it matter?

Adoption of open standards by the European Central Bank reduces reliance on global payment providers and lowers costs for banks and merchants. Regulatory clarity on the digital euro would enable European solutions to scale across borders and strengthen control over the payments infrastructure.


AI for Peace Summit highlights push for African-led innovation

A growing push for African-led AI development is shaping discussions on peace, governance, and security across the continent. At the AI for Peace Summit hosted at the Humanitarian Peace Support School in Nairobi, stakeholders called for AI systems better tailored to African governance, security, and resilience challenges.

Brigadier General John Nkoimo, General Officer Commanding Central Command of the Kenya Defence Forces, speaking on behalf of the Chief of the Defence Forces, highlighted AI’s potential to improve situational awareness and strengthen inter-agency coordination in complex security environments.

Participants also called for stronger investment in local innovation ecosystems to ensure AI tools reflect regional realities, particularly in fragile and conflict-affected settings. Discussions also focused on governance gaps, with participants warning that regulatory frameworks need to evolve quickly enough to keep pace with rapid technological deployment.

Security applications such as early warning systems, election monitoring, and other operational uses featured prominently, alongside concerns over human rights protection and institutional accountability. The summit’s broader message was that Africa’s AI future should be shaped locally through stronger governance and sustained investment in homegrown solutions.

Why does it matter?

AI is moving away from a one-size-fits-all model towards systems better adapted to African governance and security realities. Context-specific tools are more likely to be effective in fragile and conflict-affected environments because they can better reflect local risks, institutions, and operational conditions.

It also supports longer-term resilience by prioritising local innovation, reducing dependence on imported technology frameworks, and helping ensure that AI deployment aligns with regional policy goals, ethical standards, and institutional needs.


New Chinese rules restrict digital promotion of financial products

China has introduced new online marketing rules for financial products, further tightening its long-standing restrictions on cryptocurrency-related activity. The new framework limits the promotion of financial products to licensed entities and treats digital currency trading and issuance as illegal financial activity.

Issued by the People’s Bank of China and seven other regulators, the Administrative Measures for Online Marketing of Financial Products will take effect on 30 September 2026. The rules extend responsibility to platforms, intermediaries, and content creators who promote or facilitate financial products online.

Any assistance in promoting or facilitating prohibited financial activity may now be treated as participation in illegal finance, expanding enforcement beyond direct trading bans. In practice, that broadens the focus from financial products themselves to the wider digital promotion layer, including online displays, traffic generation, and other forms of internet-based marketing support.

Authorities say the measures are intended to protect consumers by limiting misleading or aggressive online promotion, including livestream marketing and viral investment content. In that sense, the rules are not only about crypto, but about tighter control over how financial products are marketed in digital environments.

The policy also reinforces China’s existing position, dating back to 2021, when regulators declared all cryptocurrency transactions illegal, while pushing enforcement deeper into the digital advertising and distribution layers of financial markets.

Why does it matter?

Stronger oversight of online financial promotion shows that crypto-related advertising is increasingly being treated as a regulatory risk category, not just a marketing issue. The Chinese move also points to a broader trend in which regulators are extending scrutiny beyond financial products themselves to the digital channels, influencers, and platforms that help distribute them.


EU cybersecurity certification framework gains momentum after Cyprus event

The European Commission and the European Union Agency for Cybersecurity (ENISA) have stepped up efforts to strengthen cybersecurity certification across the EU during the European Cybersecurity Certification Week held in Cyprus. The event brought together policymakers, industry representatives, and national authorities to support the implementation of a more unified certification framework.

Discussions focused on advancing the EU Cybersecurity Certification Framework under the Cybersecurity Act, as well as its interactions with related legislation, including the Cyber Resilience Act, the NIS2 Directive, and the Cyber Solidarity Act. The initiative reflects a broader effort to harmonise standards and strengthen trust in digital products and services across member states.

Progress was also reported on two certification schemes currently under development. One concerns European Digital Identity Wallets, aiming to set high security requirements to protect citizens’ credentials, while the other focuses on Managed Security Services, particularly incident response capabilities under the Cyber Solidarity Act.

Participants also reviewed the peer assessment mechanism intended to support consistent implementation across member states. That process, already underway, is designed to promote equivalent cybersecurity standards throughout the EU and reduce the risk of fragmented national approaches.


New whitepaper aims to streamline virtual asset oversight in Nigeria

A Pan-African industry body, the Virtual Asset Service Providers Association, has introduced Project Green-White-Green, a policy framework designed to bring virtual asset transactions more fully into Nigeria’s formal financial system.

The proposal targets regulatory inefficiencies while seeking to capture an estimated $92.1 billion in annual transaction activity currently operating with limited formal integration.

VASPA Executive Chair Franklin Peters, who also leads Boundlesspay, said the framework addresses overlapping mandates among the Securities and Exchange Commission, Central Bank of Nigeria, and Corporate Affairs Commission. The model proposes more coordinated supervision, alignment of foreign exchange standards, and identity verification through integration with the National Identity Management Commission.

The whitepaper also introduces an API-based system intended to automate VAT and capital gains tax collection at the point of transaction. The aim is to reduce administrative friction, improve compliance, and create clearer regulatory pathways for Web3 businesses operating in Nigeria.
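
The point-of-transaction idea can be sketched as follows. The rates and function below are illustrative assumptions, not figures from the whitepaper; actual Nigerian VAT and capital gains rates are set by law and may differ.

```python
from decimal import Decimal

# Illustrative rates only; the applicable Nigerian VAT and capital gains
# rates are set by law and are not taken from the whitepaper.
VAT_RATE = Decimal("0.075")
CAPITAL_GAINS_RATE = Decimal("0.10")

def taxes_at_settlement(fee: Decimal, realised_gain: Decimal) -> dict:
    """Sketch of automated collection: VAT on the service fee and capital
    gains tax on any realised gain, computed the moment a trade settles."""
    return {
        "vat": (fee * VAT_RATE).quantize(Decimal("0.01")),
        "capital_gains": (
            max(realised_gain, Decimal("0")) * CAPITAL_GAINS_RATE
        ).quantize(Decimal("0.01")),
    }
```

Collecting at settlement rather than at filing time is what removes the administrative friction the whitepaper targets: the exchange's API reports and remits in one step, and losses (a negative realised gain) simply produce a zero capital gains charge in this sketch.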

Although designed for Nigeria, the framework is presented as scalable across other African markets. Its proponents argue that better regulatory coordination and more structured taxation could support wider economic goals, including stronger formalisation and improved public revenue collection.

Why does it matter?

The framework directly tackles regulatory fragmentation that has slowed crypto and Web3 development in Nigeria.

By aligning the roles of the Securities and Exchange Commission of Nigeria, the Central Bank of Nigeria, and the Corporate Affairs Commission of Nigeria, it aims to reduce legal uncertainty and create a clearer path for startups to operate formally.

It also introduces structured taxation and compliance mechanisms, which could improve state revenue collection while bringing virtual asset activity into the formal economy.


OpenAI privacy model sets new standard for AI data protection

The US research company OpenAI has introduced the OpenAI Privacy Filter, a specialised AI system designed to detect and redact personally identifiable information in text with high accuracy.

The model is part of broader efforts to strengthen privacy-by-design practices in AI development, offering developers a practical tool to embed data protection directly into workflows rather than relying on external processing systems.

Unlike traditional rule-based systems, the model applies contextual language understanding to identify sensitive information in unstructured text. It processes inputs in a single pass and supports long-context analysis, enabling efficient handling of large documents.
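
For contrast, a traditional rule-based redactor looks like the sketch below. The patterns are illustrative and are not part of OpenAI's system; they show the limitation the contextual approach addresses: fixed-format identifiers are easy to catch, but PII without a regular shape (names, addresses) slips through.

```python
import re

# Minimal rule-based redactor, shown only to contrast with contextual
# models. Patterns are illustrative; ordering matters because the more
# specific SSN pattern must run before the looser phone pattern.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    # Replace each match with a bracketed label, e.g. "[EMAIL]".
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A sentence like "Dr. Smith lives at 4 Elm Road" passes through this sketch untouched, which is exactly the gap that contextual language understanding is meant to close.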

Local deployment further reduces exposure risks, allowing sensitive data to remain on-device rather than being transmitted to external servers.

Performance benchmarks indicate near frontier-level capability, with strong precision and recall scores across standard evaluation datasets.

The system detects multiple categories of private data, including personal identifiers, financial information, and confidential credentials, while allowing developers to fine-tune detection thresholds according to operational requirements.

Despite its capabilities, the model is positioned as one component within a wider privacy framework instead of a standalone compliance solution.

Human oversight remains necessary in high-risk domains such as legal or financial processing.

The release reflects a shift towards smaller, specialised AI systems designed to address targeted challenges in real-world deployments while maintaining adaptability and transparency.
