European Commission review finds Digital Markets Act strengthening competition and user choice

The European Commission has concluded that the Digital Markets Act remains effective in shaping fairer and more competitive digital markets across Europe. Its first formal review highlights measurable progress in empowering users and opening digital ecosystems to greater competition.

The DMA has strengthened user choice by enabling data portability, alternative browser and search engine selection, and clearer consent over how personal data is used. At the same time, it has facilitated greater interoperability, allowing new entrants such as alternative app stores and messaging services to emerge.

The review also notes that businesses are benefiting from improved access to previously restricted ecosystems, particularly in areas such as connected devices and platform integration. These changes are contributing to a more dynamic and innovative digital environment.

Looking ahead, the Commission identifies AI and cloud computing as key areas for further regulatory focus. Continued enforcement, improved transparency and adaptation to emerging technological trends will be essential to fully realise the DMA’s objectives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI research collaboration expands as Google plans campus in South Korea

A major step in global AI expansion is underway as Google prepares to establish its first overseas AI campus, in Seoul, in 2026. The initiative reflects a broader effort to deepen collaboration between global technology firms and regional innovation ecosystems.

The project is being developed in coordination with Google DeepMind and institutions in South Korea, with a dedicated research team expected to support joint development. Around ten specialists will lead technical cooperation, strengthening links between academia, startups and industry.

A central pillar of this collaboration is the K-Moonshot Project, which applies AI to challenges in biotechnology, climate and energy. Alongside this, an agreement with the Ministry of Science and ICT aims to enhance research capabilities and develop specialised human capital in advanced technologies.

The initiative highlights a growing convergence between national innovation strategies and global AI leadership, signalling a shift towards more distributed and collaborative research infrastructures across regions.

Global AI governance and emerging regulatory approaches

Introduction

In recent years, AI governance has become a central focus of digital policy, prompting governments and international organisations to develop regulatory and governance frameworks. These initiatives address issues such as:

  • Risk management;
  • Transparency;
  • Safety;
  • Accountability in AI systems.

Among the most prominent efforts are the European Union’s Artificial Intelligence Act, policy measures introduced by the United States government, regulatory provisions adopted by China, and ongoing discussions within the United Nations system. While these initiatives share a common focus on governing AI technologies, they reflect different legal traditions, policy priorities, and institutional approaches.

European Union and the risk-based framework under the AI Act

The European Union has established a comprehensive legal framework for AI through the Artificial Intelligence Act (Regulation (EU) 2024/1689), which introduces a risk-based approach to regulating AI systems. The regulation distinguishes between different categories of risk, with specific obligations applying depending on the level of potential impact.
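As a rough orientation, the Act's risk-based structure can be summarised in a short lookup table. This is an illustrative simplification for readers, not legal text; the tier labels follow the Act's commonly cited categories, and the function name is ours.

```python
# Simplified summary of the AI Act's risk tiers. Illustrative only;
# consult Regulation (EU) 2024/1689 for the actual provisions.
RISK_TIERS = {
    "unacceptable": "prohibited practices (e.g. social scoring by public authorities)",
    "high": "strict obligations: risk management, documentation, human oversight",
    "limited": "transparency duties (e.g. disclosing that content is AI-generated)",
    "minimal": "no additional obligations under the Act",
}

def obligations_for(tier: str) -> str:
    """Look up the obligation summary for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), "unknown tier")
```

In practice, an AI system's tier depends on its intended use, so a provider's first compliance step is classifying the system against these categories.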

In addition to rules for high-risk systems, the Act includes provisions for general-purpose AI models, recognising their role as foundational technologies that can be integrated into a wide range of downstream applications. According to the European Commission, such models are subject to requirements aimed at ensuring that they are ‘safe and trustworthy’, including obligations related to transparency, documentation, and risk management.

To support the implementation of these provisions, the European Commission has adopted guidelines clarifying the scope of obligations for providers of general-purpose AI models, as well as a voluntary Code of Practice outlining measures related to transparency, copyright compliance, and safety and security. These instruments are intended to facilitate compliance with the Act’s requirements, which began to apply in stages from August 2025.

United States: Executive and sectoral approach to AI governance

In the United States, AI governance has developed through a combination of executive actions, agency-led initiatives, and existing sector-specific regulations, rather than a single comprehensive federal law. In October 2023, the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlines priorities related to safety testing, transparency, privacy protection, and the mitigation of risks associated with advanced AI systems.

The Executive Order directs federal agencies to establish standards and guidance within their respective areas of competence, including requirements for developers of certain high-capability models to share safety test results with the government.

In parallel, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, a voluntary tool designed to support organisations in identifying and managing risks associated with AI systems.

Additional measures have been introduced at the agency level, including guidance from the Federal Trade Commission and sector-specific rules addressing the use of AI in areas such as finance and healthcare. This approach reflects the role of existing regulatory bodies in overseeing AI-related risks within their established mandates.

China and regulatory measures on algorithmic and generative AI services

China has introduced a set of regulatory measures governing the development and use of AI, with a focus on algorithmic recommendation systems and generative AI services.

In 2022, the Cyberspace Administration of China (CAC), together with other authorities, adopted the Provisions on the Administration of Algorithmic Recommendation for Internet Information Services, which set requirements related to transparency, user rights, and the management of content generated or distributed by algorithms.

These provisions include obligations for service providers to ensure that algorithmic systems operate in accordance with applicable laws and regulations.

In 2023, the CAC issued the Interim Measures for the Management of Generative Artificial Intelligence Services, which apply to providers offering generative AI services to the public. The measures include requirements related to the accuracy of generated content, the data sources used for training, and the implementation of security assessments prior to public deployment.

According to the regulation, providers are responsible for ensuring that content generated by AI systems complies with existing legal and regulatory frameworks.

These instruments form part of a broader regulatory approach, in which specific AI applications are addressed through targeted measures adopted by competent authorities.

United Nations processes on AI and digital governance

At the multilateral level, the UN has initiated several processes addressing AI within the broader context of digital cooperation and international security.

In 2024, the UN General Assembly adopted the Global Digital Compact, which outlines principles and commitments related to the development and use of digital technologies, including AI, and refers to the need to promote ‘safe, secure and trustworthy’ systems.

In parallel, the UN has established new institutional processes in the area of information and communications technologies (ICTs) in the context of international security.

In 2025, the UN General Assembly endorsed the creation of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs, following the conclusion of the Open-ended Working Group (OEWG) process. The mechanism is designed as a permanent multilateral forum for dialogue among member states, including discussions on threats, norms, the application of international law, confidence-building measures, and capacity development.

The Global Mechanism held its organisational session on 30–31 March 2026, marking the start of its work as a standing UN platform, with regular plenary meetings and dedicated thematic groups planned as part of its structure. While its mandate focuses on ICT security, the mechanism forms part of a broader set of UN processes that address the governance of digital technologies.

In addition, the UN Secretary-General’s High-level Advisory Body on Artificial Intelligence published its final report in 2024, identifying policy options for international AI governance. Discussions linked to the World Summit on the Information Society (WSIS) process and its 20-year review (WSIS+20) continue to address digital governance issues, including emerging technologies.

Together, these initiatives reflect an effort within the UN system to facilitate dialogue, coordination, and institutional continuity in global discussions on digital governance.

Convergence and divergence in AI governance

A comparison of these approaches indicates both areas of alignment and points of divergence in AI governance frameworks. Across jurisdictions, there is a shared emphasis on addressing risks associated with AI, including concerns related to safety, transparency, and accountability.

For example, the European Union’s Artificial Intelligence Act establishes obligations for high-risk systems, while United States policy measures highlight safety testing and risk management, and China’s regulations include requirements related to the operation and oversight of algorithmic and generative AI services.

Similarly, multilateral processes within the United Nations system refer to the importance of ‘safe, secure and trustworthy’ AI and promote international dialogue on governance issues.

At the same time, these frameworks differ in their legal structure and scope.

The European Union has adopted a comprehensive legislative instrument with binding obligations across member states, whereas the United States relies on a combination of executive actions and sector-specific regulation.

China has introduced targeted regulatory measures addressing specific categories of AI applications, particularly algorithmic recommendation and generative AI services.

At the multilateral level, UN processes focus on facilitating coordination, dialogue, and the development of shared principles, rather than establishing binding global rules.

These differences illustrate the variety of institutional and regulatory approaches through which AI governance is being developed.

Conclusion

Current developments in AI governance show that multiple regulatory and policy approaches are being developed across jurisdictions and at the international level.

While these frameworks share common elements, including a focus on risk management and the promotion of ‘safe, secure and trustworthy’ AI, they differ in their legal form, scope, and institutional implementation.

Regional and national measures, such as those adopted by the European Union, the United States, and China, coexist with multilateral processes within the United Nations that aim to support dialogue and coordination.

Together, these developments illustrate how AI governance is evolving through a combination of domestic regulation and international cooperation mechanisms.

Europol’s IOCTA 2026 shows growing cyber threats across Europe’s digital landscape

Europol has released the 2026 Internet Organised Crime Threat Assessment (IOCTA), outlining the growing complexity of cybercrime across Europe. The report identifies encryption, proxies, and AI as key drivers behind the increasing scale and sophistication of digital threats.

According to Europol, criminal networks are adapting rapidly, using fragmented online environments and encrypted communication channels to evade detection. The report highlights cybercrime enablers, online fraud schemes, cyber-attacks, and online child sexual exploitation as central areas of concern in the EU threat landscape.

AI is playing a growing role in cyber-enabled crime by making fraud, deception, and other forms of online abuse more scalable and more convincing. Europol presents this as part of a wider shift in which digital threats are becoming more adaptive, more accessible, and harder to disrupt through traditional law enforcement methods alone.

The report also points to continued risks in cyber-attacks and online child sexual exploitation, underlining how technological change is affecting both financially motivated crime and harms involving vulnerable users. In that sense, IOCTA 2026 presents Europe’s cyber challenge not as a series of isolated incidents, but as a broader digital threat environment shaped by enabling technologies and rapidly evolving criminal tactics.

These developments reinforce the need for stronger operational cooperation, more advanced investigative capabilities, and continued adaptation across Europe’s law enforcement and regulatory systems. Europol’s overall message is that cybercrime is becoming more sophisticated, more industrialised, and more deeply embedded in the wider digital ecosystem.

China advances AI-driven scientific research platform

The Chinese Academy of Sciences has introduced ScienceOne 100, an advanced AI model system designed to support scientific research across disciplines, including mathematics, physics, and biology.

The platform reflects a broader shift from isolated experimentation towards integrated, collaborative research environments powered by AI. Built on the earlier ScienceOne foundation model, the system combines multiple domain-specific large models and tools to streamline the full research cycle.

Three core components drive its functionality: a literature compass for automated analysis and review writing, an innovation evaluation engine to detect emerging research directions, and an agent factory offering more than 2,000 tools for scientific workflows.

The latest version reports strong performance in scientific reasoning and data interpretation, particularly in image analysis and long-horizon problem solving. Training on specialised scientific datasets allows the system to operate with precision across complex research contexts.

Deployment is already underway across more than 50 institutes, supporting over 100 research scenarios. Early use cases span materials discovery, aerospace modelling, environmental research, and biomedical design, underscoring its potential to accelerate output and reshape research infrastructure.

Why does it matter? 

ScienceOne 100 signals a decisive shift towards AI-led research infrastructure, where discovery becomes faster, more scalable, and less dependent on linear human workflows.

Automated literature analysis, hypothesis testing, and simulation can significantly shorten the path from idea to result, increasing overall scientific productivity and enabling more complex, cross-disciplinary breakthroughs.

Strategic implications extend beyond efficiency gains. Large-scale AI platforms strengthen national innovation capacity, particularly in critical sectors such as biotechnology, materials science, and aerospace.

Wider adoption could reshape global research competition, influence how scientific knowledge is validated, and drive demand for hybrid expertise combining domain knowledge with advanced computational skills.

Intellectual property cooperation launched under EU-Japan IP Action

The European Union Intellectual Property Office has launched the EU-Japan IP Action in Tokyo, marking the first dedicated intellectual property cooperation project between the European Union and Japan.

The initiative is intended to strengthen the protection and promotion of intellectual property rights through technical cooperation, policy dialogue, and industry engagement. The launch also highlighted how AI is reshaping innovation, competition, and IP enforcement in the digital environment.

EUIPO Executive Director João Negrão said: ‘Today’s event marks a milestone: the official launch of the EU-Japan IP Action. As the first dedicated cooperation project on intellectual property between our two regions, organised by the EUIPO and co-funded by the European Union, it carries real promise – for trade, for innovation, and for growth on both sides.’

The launch brought together officials from the EU and Japan, including representatives of the Japan Patent Office and Japan’s Intellectual Property Strategy Headquarters. Speakers described the initiative as a new phase of cooperation focused on streamlining IP processes and ensuring that legal frameworks keep pace with industrial and technological change.

A panel discussion examined the impact of AI and large language models on intellectual property, including questions of authorship, ownership of AI-generated inventions, and copyright enforcement. Industry representatives also discussed practical challenges related to AI governance and anti-piracy.

The event continued with a conference on generative AI, where participants from business, government, and academia examined how IP frameworks should respond to AI-driven change. Discussions included compensation for creators whose works are used in AI training, alongside legal, contractual, and technical mechanisms that could support that goal. Creative sectors, including manga, animation, music, and video games, were also part of the discussion.

UK moves to strengthen sovereignty over critical AI infrastructure

Britain is moving to strengthen its position in the global AI race, with Technology Secretary Liz Kendall calling for greater national control over key parts of the AI stack. In a recent speech, she described artificial intelligence as an increasingly important source of economic strength, security, and geopolitical influence.

Concerns centre on the concentration of power in a small number of companies that control much of the world’s advanced AI computing capacity. The government’s strategy is intended to reduce reliance on external providers while building domestic capabilities across areas such as research, infrastructure, compute, and talent.

Plans include the development of a national AI hardware strategy to improve access to chips and other critical technologies. At the same time, Britain says it will focus on sectors where it believes it holds a competitive edge, while continuing to work with allies on standards, governance, and the international rules shaping AI development.

Officials have stressed that AI sovereignty does not mean technological isolation, but stronger strategic resilience and greater influence over how future systems are built and governed. In that context, support for domestic firms and institutions is being framed as essential if Britain is to remain a serious player in the emerging global AI order.

Why does it matter?

Control over AI infrastructure is quickly becoming a core element of national power, comparable to energy or defence capabilities.

Concentration of computing and advanced chips in a few global players creates strategic vulnerabilities, exposing countries to external decisions that can affect economic stability, security and technological development.

Britain’s push for AI sovereignty reflects a broader global trend towards technological self-determination. Efforts to build domestic capacity and shape international standards could influence global AI governance and access to critical technologies, and could reshape alliances in a more fragmented digital order.

Singapore urges organisations to strengthen AI governance frameworks

GovTech Singapore has argued that stronger AI governance in workplaces is essential for trust, compliance, risk management, and responsible innovation as AI adoption expands across business operations.

The agency leading Singapore’s Smart Nation and digital government efforts defines AI governance as a framework of policies, processes, and responsibilities guiding the ethical, transparent, and accountable development and deployment of AI systems within an organisation. The framework is linked to oversight across the AI lifecycle, from design through to ongoing monitoring.

Key elements identified by GovTech Singapore include transparency and explainability, fairness and bias mitigation, accountability and human oversight, and data privacy and security. Responsible AI is also linked to Singapore’s wider Smart Nation agenda, which the agency describes as a national priority.

The guidance recommends that organisations establish clear internal policies on AI use, build AI literacy across teams, carry out regular audits and assessments, and prioritise secure development practices. It also points to Singapore’s Model AI Governance Framework for Generative AI, developed by the AI Verify Foundation and the Infocomm Media Development Authority, as a reference point for businesses adapting governance frameworks to their own needs.

As part of its effort to support responsible AI use in the public sector, GovTech Singapore also highlights its AI Guardian suite. The suite includes Litmus, a testing platform using adversarial prompts to identify risks and vulnerabilities, and Sentinel, a guardrails service designed to detect and mitigate unsafe or irrelevant content before it affects AI models or users.
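The general pattern behind a guardrails service like Sentinel, screening content before it reaches a model or user, can be sketched in a few lines. This is a toy illustration only, not GovTech's implementation; the blocklist and function name are hypothetical, and a production service would rely on trained classifiers rather than keyword matching.

```python
# Toy prompt-screening guardrail in the spirit of a service like Sentinel.
# The patterns below are hypothetical examples of prompt-injection phrasing;
# a real guardrails service uses far more sophisticated detection.
BLOCKED_PATTERNS = [
    "ignore previous instructions",
    "reveal your system prompt",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the guardrail, False if it is flagged."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)
```

The design point is that screening happens upstream of the model, so unsafe or irrelevant input can be blocked or rewritten before it affects system behaviour.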

Overall, GovTech Singapore presents AI governance not only as a compliance issue, but as part of building a trusted digital environment in which AI can be deployed safely and effectively.

Atos launches digital sovereignty offering for AI and regulated environments

Atos Group has launched an integrated digital sovereignty offering, designed to help organisations retain control and accountability over their data, infrastructure and digital operations.

The proposition combines capabilities across cloud, cybersecurity, AI and digital workplace services, drawing on Atos and Eviden expertise, including Eviden’s fully European data encryption products.

Sovereignty is embedded by design across existing portfolios, with graduated levels tailored to each customer’s workloads. Open standards and interoperability sit at the core, aiming to reduce vendor lock-in.

The offering targets regulated sectors including the public sector, defence, financial services and healthcare. Atos Group digital sovereignty leader Michael Kollar said the initiative helps organisations ‘turn sovereignty into an operational capability.’

The launch complements the recent introduction of Atos Sovereign Agentic Studios, which focused on moving AI deployments into production under sovereign control.

MIT researchers develop tool to estimate energy use of AI workloads

Researchers from the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab have developed a rapid estimation system that calculates the energy consumption of AI workloads in seconds, offering a major improvement over traditional methods that take hours or days.

The tool, known as EnergAIzer, is designed to support data centre operators as AI demand accelerates and electricity consumption rises. With AI infrastructure expected to account for a significant share of US power usage in the coming years, more efficient resource planning has become increasingly critical.

EnergAIzer analyses repeatable workload patterns and GPU behaviour to generate fast predictions of energy use across different hardware setups. After incorporating real GPU measurements, the system achieves high accuracy while remaining lightweight and adaptable to current and future chip designs.

By providing immediate feedback on energy consumption, the tool allows developers and operators to optimise workloads, reduce waste, and test different configurations before deployment. The approach is positioned as a practical step towards improving sustainability across large-scale AI systems.
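The arithmetic underlying any such estimate is simple: energy is average power multiplied by time. The sketch below is a back-of-the-envelope illustration only; EnergAIzer's actual model of workload patterns and GPU behaviour is far more detailed, and the figures used here are hypothetical.

```python
# Back-of-the-envelope GPU energy estimate: energy (kWh) = power (W) * time (s) / 3.6e6.
# Illustrative only; EnergAIzer's workload- and hardware-aware predictions go well
# beyond this simple conversion.

def estimate_energy_kwh(avg_power_w: float, runtime_s: float) -> float:
    """Convert average board power and runtime into kilowatt-hours."""
    return avg_power_w * runtime_s / 3.6e6

# Hypothetical example: a job averaging 350 W per GPU on 8 GPUs for two hours.
per_gpu_kwh = estimate_energy_kwh(350, 2 * 3600)   # 0.7 kWh per GPU
job_kwh = per_gpu_kwh * 8                          # 5.6 kWh for the whole job
```

Fast estimators improve on this baseline by predicting the power and runtime terms per workload and hardware configuration, rather than requiring a full measured run.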

Why does it matter? 

Energy use is becoming one of the defining constraints of AI growth, as large-scale models push data centres towards unprecedented electricity demand. A tool like EnergAIzer directly addresses this bottleneck by making power consumption visible and measurable before deployment.

Faster and more accurate estimation changes how AI systems are designed and scaled. Rather than reacting to energy costs after deployment, developers and operators can optimise workloads in advance, cutting waste and improving efficiency.
