European Commission review finds Digital Markets Act strengthening competition and user choice

The European Commission has concluded that the Digital Markets Act remains effective in shaping fairer and more competitive digital markets across Europe. Its first formal review highlights measurable progress in empowering users and opening digital ecosystems to greater competition.

The DMA has strengthened user choice by enabling data portability, alternative browser and search engine selection, and clearer consent over how personal data is used. At the same time, it has facilitated increased interoperability, allowing new entrants such as alternative app stores and messaging services to emerge.

The review also notes that businesses are benefiting from improved access to previously restricted ecosystems, particularly in areas such as connected devices and platform integration. These changes are contributing to a more dynamic and innovative digital environment.

Looking ahead, the Commission identifies AI and cloud computing as key areas for further regulatory focus. Continued enforcement, improved transparency and adaptation to emerging technological trends will be essential to fully realise the DMA’s objectives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global AI governance and emerging regulatory approaches

Introduction

In recent years, AI governance has become a central focus of digital policy, prompting governments and international organisations to develop regulatory and governance frameworks. These initiatives address issues such as:

  • Risk management;
  • Transparency;
  • Safety;
  • Accountability in AI systems.

Among the most prominent efforts are the European Union’s Artificial Intelligence Act, policy measures introduced by the United States government, regulatory provisions adopted by China, and ongoing discussions within the United Nations system. While these initiatives share a common focus on governing AI technologies, they reflect different legal traditions, policy priorities, and institutional approaches.

European Union and the risk-based framework under the AI Act

The European Union has established a comprehensive legal framework for AI through the Artificial Intelligence Act (Regulation (EU) 2024/1689), which introduces a risk-based approach to regulating AI systems. The regulation distinguishes between different categories of risk, with specific obligations applying depending on the level of potential impact.

In addition to rules for high-risk systems, the Act includes provisions for general-purpose AI models, recognising their role as foundational technologies that can be integrated into a wide range of downstream applications. According to the European Commission, such models are subject to requirements aimed at ensuring that they are ‘safe and trustworthy’, including obligations related to transparency, documentation, and risk management.

To support the implementation of these provisions, the European Commission has adopted guidelines clarifying the scope of obligations for providers of general-purpose AI models, as well as a voluntary Code of Practice outlining measures related to transparency, copyright compliance, and safety and security. These instruments are intended to facilitate compliance with the Act’s requirements, which began to apply in stages from August 2025.

United States: Executive and sectoral approach to AI governance

In the United States, AI governance has developed through a combination of executive actions, agency-led initiatives, and existing sector-specific regulations, rather than a single comprehensive federal law. In October 2023, the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlines priorities related to safety testing, transparency, privacy protection, and the mitigation of risks associated with advanced AI systems.

The Executive Order directs federal agencies to establish standards and guidance within their respective areas of competence, including requirements for developers of certain high-capability models to share safety test results with the government.

In parallel, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, a voluntary tool designed to support organisations in identifying and managing risks associated with AI systems.

Additional measures have been introduced at the agency level, including guidance from the Federal Trade Commission and sector-specific rules addressing the use of AI in areas such as finance and healthcare. This approach reflects the role of existing regulatory bodies in overseeing AI-related risks within their established mandates.

China and regulatory measures on algorithmic and generative AI services

China has introduced a set of regulatory measures governing the development and use of AI, with a focus on algorithmic recommendation systems and generative AI services.

In 2022, the Cyberspace Administration of China (CAC), together with other authorities, adopted the Provisions on the Administration of Algorithmic Recommendation for Internet Information Services, which set requirements related to transparency, user rights, and the management of content generated or distributed by algorithms.

These provisions include obligations for service providers to ensure that algorithmic systems operate in accordance with applicable laws and regulations.

In 2023, the CAC issued the Interim Measures for the Management of Generative Artificial Intelligence Services, which apply to providers offering generative AI services to the public. The measures include requirements related to the accuracy of generated content, the data sources used for training, and the implementation of security assessments prior to public deployment.

According to the regulation, providers are responsible for ensuring that content generated by AI systems complies with existing legal and regulatory frameworks.

These instruments form part of a broader regulatory approach, in which specific AI applications are addressed through targeted measures adopted by competent authorities.

United Nations processes on AI and digital governance

At the multilateral level, the UN has initiated several processes addressing AI within the broader context of digital cooperation and international security.

In 2024, the UN General Assembly adopted the Global Digital Compact, which outlines principles and commitments related to the development and use of digital technologies, including AI, and refers to the need to promote ‘safe, secure and trustworthy’ systems.

In parallel, the UN has established new institutional processes in the area of information and communications technologies (ICTs) in the context of international security.

In 2025, the UN General Assembly endorsed the creation of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs, following the conclusion of the Open-ended Working Group (OEWG) process. The mechanism is designed as a permanent multilateral forum for dialogue among member states, including discussions on threats, norms, the application of international law, confidence-building measures, and capacity development.

The Global Mechanism held its organisational session on 30–31 March 2026, marking the start of its work as a standing UN platform, with regular plenary meetings and dedicated thematic groups planned as part of its structure. While its mandate focuses on ICT security, the mechanism forms part of a broader set of UN processes that address the governance of digital technologies.

In addition, the UN Secretary-General’s High-level Advisory Body on Artificial Intelligence published its final report in 2024, identifying policy options for international AI governance. Discussions linked to the World Summit on the Information Society (WSIS) process and its 20-year review (WSIS+20) continue to address digital governance issues, including emerging technologies.

Together, these initiatives reflect an effort within the UN system to facilitate dialogue, coordination, and institutional continuity in global discussions on digital governance.

Convergence and divergence in AI governance

A comparison of these approaches indicates both areas of alignment and points of divergence in AI governance frameworks. Across jurisdictions, there is a shared emphasis on addressing risks associated with AI, including concerns related to safety, transparency, and accountability.

For example, the European Union’s Artificial Intelligence Act establishes obligations for high-risk systems, while United States policy measures highlight safety testing and risk management, and China’s regulations include requirements related to the operation and oversight of algorithmic and generative AI services.

Similarly, multilateral processes within the United Nations system refer to the importance of ‘safe, secure and trustworthy’ AI and promote international dialogue on governance issues.

At the same time, these frameworks differ in their legal structure and scope.

The European Union has adopted a comprehensive legislative instrument with binding obligations across member states, whereas the United States relies on a combination of executive actions and sector-specific regulation.

China has introduced targeted regulatory measures addressing specific categories of AI applications, particularly algorithmic recommendation and generative AI services.

At the multilateral level, UN processes focus on facilitating coordination, dialogue, and the development of shared principles, rather than establishing binding global rules.

These differences illustrate the variety of institutional and regulatory approaches through which AI governance is being developed.

Conclusion

Current developments in AI governance show that multiple regulatory and policy approaches are being developed across jurisdictions and at the international level.

While these frameworks share common elements, including a focus on risk management and the promotion of ‘safe, secure and trustworthy’ AI, they differ in their legal form, scope, and institutional implementation.

Regional and national measures, such as those adopted by the European Union, the United States, and China, coexist with multilateral processes within the United Nations that aim to support dialogue and coordination.

Together, these developments illustrate how AI governance is evolving through a combination of domestic regulation and international cooperation mechanisms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Cybersecurity reform in the EU advances through Spain consultation

Spain has launched a public consultation on the proposed EU Cybersecurity Act 2, inviting input from operators, citizens, and other interested parties on the need for, objectives of, and possible alternatives to the planned reform.

The consultation covers the European Commission’s proposal COM(2026) 11 final, which would repeal and replace Regulation (EU) 2019/881. The proposal is presented as a response to changes in the cyber threat landscape and to new strategic and regulatory challenges that have emerged since the current framework entered into force in 2019.

According to the consultation text, the reform is intended to address four main structural problems: a mismatch between the EU cybersecurity framework and current operational needs, limited practical use of the European Cybersecurity Certification Framework, fragmentation across the wider EU cybersecurity acquis, and growing cybersecurity risks in ICT supply chains.

Regarding ENISA, the proposal argues that the agency’s current functions and resources are insufficient to meet the needs of member states, the EU institutions, and market actors, particularly in policy implementation, operational cooperation, and crisis response. It also says the certification framework created under the current regulation has proved too slow and too complex in practice, with limited market uptake and governance mechanisms that have not delivered at the required speed.

The text also links the proposal to the growing complexity of compliance created by instruments such as NIS2, the Cyber Resilience Act, DORA, and the CER Directive. It says the new regulation would seek greater coherence and interoperability across those frameworks while reducing administrative burdens for companies and competent authorities.

A further objective is to create, for the first time, a horizontal EU-level framework for managing ICT supply-chain cybersecurity risks, including the identification of critical ICT assets, the possible designation of high-risk suppliers, and the adoption of proportionate measures to reduce strategic dependencies.

The proposal would also strengthen ENISA’s mandate and resources, reform and expand the certification framework, and support a more centralised incident-notification model linked to the wider Digital Omnibus simplification agenda.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU advances GPAI framework with focus on forecasting systemic risks

At the third meeting of the Signatory Taskforce, the European Commission advanced discussions on how to strengthen oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.

The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate when future systems may exceed the highest systemic risk tier already reached by any of their existing models, using structured forecasting methods.

The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.

Attention was also directed towards harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.

The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Global Green Bond Initiative Fund unlocks €20 billion for sustainable infrastructure

The European Union and its financial partners have launched the Global Green Bond Initiative Fund to mobilise up to €20 billion for sustainable infrastructure in developing economies.

The initiative reflects a broader shift towards using private capital alongside public investment to accelerate climate and environmental goals.

Moreover, the fund will prioritise green bonds issued by governments, local authorities, and businesses, with a focus on first-time issuers and least developed countries. By supporting both euro and local-currency bonds, the initiative also aims to strengthen domestic capital markets while expanding the international role of the euro.

Backed by major European financial institutions and supported through EU guarantees, the GGBI Fund is designed to reduce investment risk and attract private investors at scale.

Alongside financing, the initiative includes technical assistance and subsidy mechanisms intended to improve access to green finance and lower borrowing costs.

The programme forms part of the EU’s Global Gateway strategy, linking economic development with sustainability goals while promoting high environmental standards and long-term resilience across partner regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU cybersecurity certification framework gains momentum after Cyprus event

The European Commission and the European Union Agency for Cybersecurity (ENISA) have stepped up efforts to strengthen cybersecurity certification across the EU during the European Cybersecurity Certification Week held in Cyprus. The event brought together policymakers, industry representatives, and national authorities to support the implementation of a more unified certification framework.

Discussions focused on advancing the EU Cybersecurity Certification Framework under the Cybersecurity Act, as well as its interactions with related legislation, including the Cyber Resilience Act, the NIS2 Directive, and the Cyber Solidarity Act. The initiative reflects a broader effort to harmonise standards and strengthen trust in digital products and services across member states.

Progress was also reported on two certification schemes currently under development. One concerns European Digital Identity Wallets, aiming to set high security requirements to protect citizens’ credentials, while the other focuses on Managed Security Services, particularly incident response capabilities under the Cyber Solidarity Act.

Participants also reviewed the peer assessment mechanism intended to support consistent implementation across member states. That process, already underway, is designed to promote equivalent cybersecurity standards throughout the EU and reduce the risk of fragmented national approaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine highlights AI strategic shifts

The National Security and Defense Council of Ukraine has published an overview of global AI developments for March 2026, highlighting a shift towards infrastructure and strategic realignment. The report is part of its ‘AI Frontiers’ analytical series.

According to the Council, growing investment in and expansion of data centres to meet AI demand are increasing pressure on energy resources. This is creating new competition not only for computing power but also for energy stability.

The analysis also points to intensifying competition between the US, China and the European Union, extending beyond AI models to supply chains, semiconductors and infrastructure. At the same time, AI is becoming more integrated into defence, cyberspace and information operations.

The Council highlights rising risks linked to disinformation, synthetic content and legal challenges, alongside growing demand for clearer regulation and content labelling as AI adoption expands in Ukraine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission allocates €63.2 million to support AI innovation in health and online safety

The European Commission has announced €63.2 million in funding to support AI innovation, focusing on health, online safety and broader technological development. The initiative aims to accelerate the deployment of AI solutions across key sectors.

According to the Commission, the funding will support projects that improve healthcare systems and strengthen protections in digital environments. It is part of ongoing efforts to expand AI capabilities and adoption.

The programme also seeks to encourage collaboration between research institutions, businesses and public bodies. This approach is intended to foster innovation while addressing societal challenges linked to AI use.

The Commission states that the investment will contribute to strengthening Europe’s digital capacity and advancing AI development across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches protected data register

The European Commission has introduced a European Register of protected data to improve access to public sector information. The initiative is presented through the data.europa.eu platform as part of wider data-sharing efforts.

According to the Commission, the register provides a central point for discovering protected data held by public authorities. It is designed to make such datasets more visible and easier to locate.

The platform helps users identify conditions under which protected data can be accessed and reused. This includes guidance on legal and technical requirements linked to sensitive datasets.

The European Commission states that the register aims to strengthen transparency and data-driven innovation while supporting access to public sector information across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU turns digital strategy into infrastructure diplomacy with partner countries

The European Commission, together with the governments of France and Finland, has hosted a high-level study visit in Brussels on secure, resilient and trusted connectivity and digital infrastructure, bringing policymakers and regulators from Egypt, Indonesia, Jordan, Kenya, the Philippines and Vietnam into direct talks with the EU institutions and industry actors. The visit forms part of the EU’s effort to turn its international digital strategy into practical cooperation with partner countries.

The programme focused on policy frameworks for secure and trusted telecommunications infrastructure, including subsea cable deployment and wider digital infrastructure development. In Brussels, delegates met with the European Commission and the European External Action Service. They were briefed on EU policy tools, including the proposed Digital Networks Act, cybersecurity measures, and the EU’s Submarine Cable Security Toolbox.

The study visit then continued in Aachen, Antwerp, Paris and Helsinki, where participants met major European technology firms and providers of trusted connectivity and digital infrastructure solutions. That industry-facing element matters because the visit was not only about sharing regulatory ideas but also about showcasing European technical and commercial capacity in secure digital infrastructure.

Seen in that context, the initiative is best understood not as a major standalone policy announcement, but as a practical piece of digital diplomacy. The EU’s International Digital Strategy, launched in June 2025, explicitly aims to expand digital partnerships, promote a high level of security for the EU and its partners, and shape global digital governance and standards through cooperation on areas such as secure connectivity, cybersecurity, digital public infrastructure, and emerging technologies.

That wider strategy also includes an ‘EU Tech Business Offer’, combining public and private investment to support the digital transition of partner countries through areas such as AI factories, secure and trusted connectivity, digital public infrastructure and cybersecurity. The Brussels study visit appears to fit squarely within that model, linking diplomacy, regulatory outreach and industrial promotion.

The significance of the visit, therefore, lies less in any immediate policy outcome than in what it says about the EU’s external digital posture. Brussels is trying to position itself not only as a regulator of digital markets at home, but also as a provider of standards, expertise and infrastructure models abroad. At a time of rising geopolitical competition over connectivity, network security and critical infrastructure, such exchanges allow the EU to present European approaches to trusted digital development as an alternative to more fragmented or politically dependent models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!