The implementation of the EU AI Act with a focus on general-purpose AI models

Transition from legislation to implementation

The European Union has entered a new phase in the governance of AI, moving from the legislative adoption of the Artificial Intelligence Act (AI Act) towards its practical implementation. This phase places particular emphasis on the obligations of providers of general-purpose AI (GPAI) models, reflecting the increasing role of such systems in the broader digital ecosystem.

The AI Act, adopted in 2024, establishes a comprehensive legal framework for AI within the EU. It introduces a risk-based approach that classifies AI systems into categories ranging from minimal risk to unacceptable risk, with corresponding regulatory requirements.

According to the official text of the regulation, the framework is designed to ensure that AI systems placed on the market in the Union are ‘safe and respect existing law on fundamental rights and Union values.’

While earlier discussions around the Act focused on its legislative negotiation and scope, the current phase centres on how its provisions will be applied in practice.

General-purpose AI models within the AI Act

A key element of this implementation phase concerns general-purpose AI models. These models, which can be integrated into a wide range of downstream applications, occupy a distinct position within the regulatory framework.

The AI Act defines general-purpose AI models as systems that can be used across multiple tasks and contexts and may ‘serve a variety of purposes, both for direct use and for integration into other AI systems.’

That positioning reflects the broad applicability of these models, particularly in areas such as natural language processing, content generation, and data analysis.

The Act also recognises that the widespread deployment of such models may have implications beyond individual use cases, particularly when integrated into high-risk systems.

Obligations for providers of GPAI models

The European Commission, together with the European AI Office, has begun outlining expectations for compliance with provisions related to general-purpose AI.

According to official EU materials, providers of GPAI models are required to ensure that technical documentation is drawn up and kept up to date.

The regulation specifies that providers should ‘draw up and keep up-to-date technical documentation of the model,’ ensuring that relevant information is accessible for compliance and oversight purposes. In addition, transparency obligations require providers to make certain information available to downstream deployers.

These obligations are intended to support the responsible integration of GPAI models into other systems.
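
To illustrate what the documentation and transparency duties might look like inside a provider's own tooling, the following is a minimal, hypothetical sketch in Python. The field names are illustrative assumptions loosely inspired by the kinds of information the regulation describes, not the Act's actual annex structure.

```python
from dataclasses import asdict, dataclass, field
from datetime import date
import json


@dataclass
class GpaiModelDocumentation:
    """Hypothetical record of GPAI technical documentation.

    Field names are illustrative assumptions, not the AI Act's annexes.
    """
    model_name: str
    provider: str
    intended_tasks: list[str]
    training_data_summary: str
    known_limitations: list[str]
    last_updated: date = field(default_factory=date.today)

    def to_json(self) -> str:
        # Serialise for sharing with downstream deployers or oversight bodies.
        record = asdict(self)
        record["last_updated"] = self.last_updated.isoformat()
        return json.dumps(record, indent=2)


doc = GpaiModelDocumentation(
    model_name="example-gpai-1",
    provider="Example Provider",
    intended_tasks=["text generation", "summarisation"],
    training_data_summary="Public web text and curated corpora (illustrative).",
    known_limitations=["may produce inaccurate or outdated output"],
)
print(doc.to_json())
```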

Distinction between GPAI and systemic-risk models

The AI Act introduces a distinction between general-purpose AI models and those considered to pose systemic risk.

Models that meet specific criteria, such as scale, capability, or deployment level, may be classified as having a systemic impact.

For such models, additional obligations apply, including requirements related to evaluation, risk mitigation, and reporting. The European Commission has indicated that further guidance will clarify how systemic risk thresholds are determined, including through delegated acts and technical standards.

Role of the European AI Office in implementation

The European AI Office, established within the European Commission, plays a central role in supporting the implementation of the AI Act.

Its responsibilities include contributing to the consistent application of the regulation, coordinating with national authorities, and supporting the development of methodologies for compliance.

According to the European Commission, the AI Office is tasked with ‘ensuring the coherent implementation of the AI Act across the Union.’ The Office is also expected to contribute to the development of benchmarks, testing frameworks, and guidance documents that support both regulators and providers.

Phased implementation timeline

The implementation of the AI Act is structured as a phased process, with different provisions becoming applicable over time.

That phased approach allows stakeholders to adapt to the regulatory requirements while enabling authorities to establish enforcement mechanisms.

Provisions related to general-purpose AI models are among the earlier elements to be operationalised, reflecting their central role in the current AI landscape.

The European Commission has indicated that additional implementing acts and guidance documents will be issued as part of this process.

Coordination with national authorities

While the European AI Office plays a coordinating role at the EU level, enforcement remains the responsibility of national authorities within member states.

The AI Act establishes mechanisms for cooperation and information-sharing to support a harmonised approach across the European Union.

National authorities are expected to work closely with the AI Office and the European Commission to oversee compliance and address emerging challenges.

Stakeholder engagement and technical guidance

The implementation phase also involves engagement with a range of stakeholders, including industry actors, civil society organisations, and technical experts.

The European Commission has also initiated consultations and workshops to gather input on practical aspects of implementation, such as documentation standards and risk assessment methodologies.

This process supports the development of operational guidance applicable across sectors and use cases.

Interaction with the EU digital regulatory framework

The AI Act forms part of a broader EU digital policy framework that includes instruments such as the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Digital Markets Act (DMA).

These frameworks address different aspects of the digital ecosystem, including data protection, platform governance, and market competition.

The relationship between the AI Act and these instruments is expected to be clarified further during implementation.

International context: OECD and UN approaches

The governance of general-purpose AI models is also being addressed at the international level.

The OECD AI Principles state that AI systems should be ‘robust, secure and safe throughout their entire lifecycle,’ and emphasise accountability for their functioning.

At the UN level, the Global Digital Compact process addresses issues related to transparency, accountability, and oversight of digital technologies, including AI.

These initiatives provide non-binding guidance, in contrast to the legally binding framework established by the EU AI Act.

Ongoing development of technical standards

The development of technical standards is an important component of the implementation process.

The European Commission has indicated that it will work with standardisation organisations to develop specifications related to documentation, evaluation, and risk management.

These standards are expected to support the practical application of the AI Act’s provisions.

From regulatory framework to regulatory practice

The current phase of the EU AI Act marks a transition from legislative design to regulatory practice.

For providers of general-purpose AI models, this involves preparing to meet obligations related to documentation, transparency, and risk management. For regulators, the focus is on ensuring consistent application of the rules across member states, supported by coordination mechanisms and guidance from the AI Office.

The implementation process is expected to evolve as further guidance is issued.

Conclusion

The European Union’s AI Act is entering its implementation phase, with a particular focus on general-purpose AI models.

That phase involves translating the regulation’s legal provisions into operational requirements, supported by guidance from the European Commission and the AI Office.

The development of technical standards, coordination mechanisms, and compliance frameworks will play a central role in this process. As implementation progresses, further clarification is expected through additional guidance and regulatory measures, contributing to the operationalisation of the EU’s approach to AI governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN kicks off Global Mechanism on ICT security, road ahead murky

After almost three decades of stop-start cybersecurity negotiations at the UN, the long-anticipated Global Mechanism on ICT security has finally kicked off.

It is the first permanent forum of its kind since discussions on ICT security began back in 1998, and its mere existence says a lot about how far those talks have come.

But if the launch felt like a breakthrough, the organisational session quickly brought things back down to earth. Beyond what was already sketched out in Annex C and the OEWG’s Final Report, it remained unclear how the Mechanism would actually organise itself in practice.

The session raised plenty of questions—about structure, priorities, and process—but offered few real answers, leaving the sense that while the Mechanism now exists, what it will do and how it will do it is still very much up for grabs.

A new body, a new mandate, and a newly elected Chair, Egriselda López of El Salvador, injected renewed optimism into the Global Mechanism’s first organisational session. Yet, within minutes, it became evident that the Global Mechanism did not start with a blank slate, but rather inherited the OEWG’s long list of disagreements. 

Russia opened the discussion by disputing the legitimacy of the Chair nomination, which they claimed was guided solely by the UNODA and thus limited state participation in the process. They used this opportunity to stress that all decisions under the new process must be based on consensus and be completely intergovernmental. 

The substantive issues on the agenda

For the provisional agenda of the mechanism’s July session, the Chair circulated a draft agenda organised around the five pillars of the framework for responsible state behaviour in the use of ICTs. However, Iran and Russia argued that the wording of agenda item 5 did not precisely reflect paragraph 9 of Annex C of the OEWG final report and called for correction at this session. The EU and Canada rejected this, arguing the draft already referenced all relevant documents and that isolating one paragraph would itself constitute renegotiation. The USA reserved its position entirely, preferring that the July plenary adopt its own agenda. No consensus was reached, and the Chair will continue consultations before July.

The mechanism inherited many unresolved substantive debates from its predecessors. 

On international law, there is widespread agreement that considerable work remains to be done, but little agreement on how to carry it out. The majority of delegations have shown clear support for strengthening the existing normative framework and reaffirming the UN Charter’s application to cyberspace.

A broad majority of states expressed support for ensuring that the mechanism remains action-oriented, with a strong focus on practicality and the implementation of agreed frameworks on international law, norms, CBMs, and capacity-building (Chile, Nauru, Portugal, Switzerland, the United Kingdom, Estonia, Italy, Australia, the Democratic Republic of the Congo, Antigua and Barbuda, Sudan, Vanuatu, Albania, Vietnam, India, Greece, Rwanda, the Dominican Republic, North Macedonia, Kiribati).

In particular, some delegations advocated for applying the framework to concrete scenarios as a way to stimulate implementation (Japan, the Netherlands, the United Kingdom, Sudan).  China was the only delegation to emphasise that further development of the framework is equally important alongside its implementation.

The EU highlighted the norm checklist, a hotly debated issue in the previous mechanism, as an area for further improvement. 

However, to many states, a fundamental concern remains: capacity-building initiatives risk stalling without reliable funding. Many delegations, primarily from developing countries, therefore urged the Global Mechanism to prioritise the operationalisation of the UN Voluntary Fund, which was tabled but left unresolved by the OEWG.

Dedicated thematic groups: Who, what and how

The often broad agenda and long-winded statements of delegations in OEWG plenary sessions left little room for technical depth, leaving many delegations frustrated with the gap between consensus language and concrete action. 

The Dedicated Thematic Groups (DTGs) were created to address precisely this issue by setting up an informal, technical forum to advance practical initiatives already agreed on, such as the Global ICT Security Cooperation and Capacity Building Portal. However, the practicalities of how they should be set up and administered are going to be hotly contested, as they will influence what gets on the agenda, who drives it, and whether this new system is capable of delivering real outcomes over time.

Who will lead DTGs?

The dominant and most contested question of the session was who would appoint the co-facilitators for the two Dedicated Thematic Groups. The Chair proposed appointing two co-facilitators per DTG: one from a developed country, one from a developing country, drawing on GA practice, under which the Chair appoints co-facilitators for intergovernmental processes. She indicated her intention to hold broad informal consultations before making appointments, and committed to geographic balance, gender parity where practicable, and relevant technical expertise as selection criteria. 

Who ends up in these roles matters considerably: the co-facilitators will steer the DTG discussions, shape their agendas, and channel recommendations to the plenary.

A broad coalition of states supported the Chair’s approach, including the EU, speaking on behalf of its member states and several aligned countries such as France, Germany, Australia, the United Kingdom, the Netherlands, Switzerland, Japan, Egypt, Senegal, Nigeria, Malaysia, Moldova, and others. Egypt and Senegal were among the most direct, noting that delays in operationalising the mechanism would waste the intersessional period and erode its credibility, particularly for developing countries eager to move from procedure to substance.

Another group of states, led by Russia and supported by Iran, China, Belarus, Nicaragua, and Cuba, argued that co-facilitator appointments must be approved by member states by consensus rather than made unilaterally by the Chair. Russia contended that DTG co-facilitators handle substantive political matters and therefore constitute officials whose appointment requires a collective agreement. Russia also raised a geographic argument: assigning one developed-country and one developing-country co-facilitator per DTG still disproportionately favours developed states, which represent less than one-fifth of UN membership. Iran added that the early OEWG draft text had explicitly authorised the Chair to appoint DTG facilitators, but that this provision was deliberately removed during negotiations, signalling a lack of agreement on the matter.

The Chair affirmed her intention to consult all member states informally before presenting candidates and called on delegations to show flexibility given the urgency of getting the mechanism’s work underway. Russia subsequently stated its understanding that candidates would be determined through broad consultation, followed by consensus-based approval, but the Chair neither confirmed nor rejected this interpretation. 

The question is effectively deferred to the intersessional period, meaning the composition of the DTG leadership teams remains unresolved and will require continued diplomatic engagement before July.

What will DTGs discuss?

A closely related debate concerned who decides what the DTGs will actually discuss. Several Western and like-minded delegations (e.g., Germany, France, Canada, the United Kingdom, and Australia) highlighted that it is a prerogative of the Chair and co-facilitators, to be exercised in close consultation with states. These delegations proposed ransomware and critical infrastructure protection as natural starting points, citing their frequency across national statements and OEWG discussions. 

Iran and Russia emphasised that topics must be determined by consensus among all member states. Argentina argued that the plenary should maintain control over the agenda rather than ceding too much responsibility to the co-facilitators. 

Morocco instead advocated a bottom-up model in which DTGs define their own priority subtopics from the start, based on member states’ expressed preferences to maintain regional balance and ownership. 

In this sense, the DTGs’ credibility hinges on a delicate balance, having to be ambitious enough to move conversations into action but also focused enough on issues with broad support so that their outputs survive in plenary. 

No decision was taken. For industry and civil society organisations with specific thematic priorities, this remains an active opening: states are currently receptive to input on which topics the DTGs should prioritise.

Colombia put forward a process proposal that drew broadly positive reactions across delegations. It recommended that:

  • DTG mandates be time-limited, with clearly defined and measurable outputs;
  • DTG 1 address specific rotating subjects rather than its entire mandate simultaneously; and
  • DTG outputs systematically distinguish between recommendations on which consensus exists and those still under development.

Senegal made a complementary point: reports should document both areas of agreement and divergence, preserving a record of discussions even when no consensus was reached. Both proposals reflect a wider concern that, without structured outputs and clear timelines, the mechanism risks reproducing the open-ended deliberation of the OEWG without generating implementable results.

How will DTGs feed into the plenary?

Another issue discussed was how DTG work feeds into the plenary. Brazil made it clear that without a defined protocol for elevating DTG reports to the plenary and formally accepting their recommendations, the groups risk becoming talking shops disconnected from the mechanism's official conclusions. Brazil's proposed solution, which has yet to gain broad support, is to keep DTG conversations primarily informal but include a short formal segment for decision-making.

Stakeholder participation

A long-standing point of contention, and possibly the most politically charged, was the role of non-governmental actors in the groups. The effective participation of interested stakeholders remains uncertain.

Some delegations adopted a more accommodating stance, recognising that stakeholders can enhance the quality of deliberations (Sudan, Antigua and Barbuda) and contribute to more practical outcomes (Vietnam, Dominican Republic), while underscoring the importance of preserving the intergovernmental nature of the process (Sudan, Vietnam). 

Canada and like-minded states argued that the July 2025 consensus clearly provides for states to nominate experts for DTG briefings and for the wider stakeholder community to participate throughout DTG discussions. 

Iran contested this, asserting that stakeholder modalities agreed for the mechanism apply equally to DTGs. Russia also argued that expert briefings from external stakeholders are a possibility rather than a standard feature, and that inviting external briefers requires member-state agreement on a case-by-case basis. 

How this is resolved will directly determine the degree of access the private sector, technical community, and civil society organisations have to the DTG process in practice.

What’s next? 

The session closed without resolution on its two most consequential questions: co-facilitator appointments and the provisional plenary agenda. The Chair will convene informal intersessional consultations on both and issue a programme of work document before July in all UN languages. 

The Secretariat will open an annual stakeholder accreditation window in the coming weeks; stakeholders wishing to participate in plenary sessions and review conferences can monitor the Digital Watch Observatory web page, where we track the process, for details. 

The broader tension remains unresolved, and how it is managed in the intersessional period will largely determine whether the July plenary can open with the mechanism’s operational foundations in place.

The Chair also confirmed the two key dates for 2026: 

For stakeholders tracking or seeking to contribute to these discussions, these are the dates to plan around.

Metaverse’s decline and the harsh limits of a virtual future

In 2019, Facebook CEO Mark Zuckerberg announced Facebook Horizon, a VR social experience that allows users to interact, create custom avatars, and design virtual spaces. Zuckerberg saw the platform, later renamed Horizon Worlds, as the beginning of a new era of VR social networks, with users trading face-to-face interactions for digital ones.

To show his confidence in VR, Zuckerberg rebranded Facebook Inc. as Meta Platforms Inc. in October 2021, illustrating the company’s shift toward the metaverse as a broad virtual environment intended to integrate social interaction, work, commerce, and entertainment. Building on this new vision, Meta’s ambitions expanded beyond social interaction and entertainment, with the development roadmap including virtual real estate purchases and collaboration in virtual co-working spaces.

Fast forward to 17 March 2026, and the scale of Meta’s retreat from the metaverse vision has become unmistakable. In an official update, the company said it was ‘separating’ VR from Horizon so that each platform could grow with greater focus, while also making Horizon Worlds a mobile-only experience. Under the plan, Horizon Worlds and Events would disappear from the Quest Store by 31 March 2026, several flagship worlds would no longer be available in VR, and the Horizon Worlds app itself would be removed from Quest on 15 June 2026, ending VR access to Worlds altogether.

Yet Meta soon reversed part of the decision. In an Instagram Stories Q&A, CTO Andrew Bosworth said Horizon Worlds would remain available in VR after user backlash. Even so, the greater shift remained unchanged: Horizon Worlds was no longer a flagship VR project, but a much narrower product that reflected a clear contraction of Meta’s original metaverse ambition.

As it stands, Meta’s USD 80 billion investment seems less like a gateway to a new socio-technological era and more like one of the most expensive strategic miscalculations of the 21st century. The sunsetting of Horizon Worlds was certainly not a decision made on a whim, which raises the question: Why did the metaverse fail in the first place? Does it have a future in the AI landscape, and what does its retreat say about the politics of designing the future through corporate platforms?

Metaverse’s mainstream collapse

The most obvious reason for the metaverse’s failure was that it never became a mainstream social space. Meta’s strategy rested on the belief that large numbers of people would start using immersive virtual worlds as a normal setting for interaction, entertainment, and creative activity. The shift never happened at the scale needed to sustain the company’s ambitions.

One reason was friction. VR headsets were less practical than phones, more isolating than social media, and harder to integrate into everyday routines than the platforms people already used to communicate. Entering the virtual world required extra time, extra hardware, and openness to adapt to a different social environment. Most digital habits, however, are built around speed, familiarity, and ease of access.

Meta’s own March 2026 decision makes that failure difficult to deny. A company still convinced that immersive social VR was on its way to becoming mainstream would not have moved Horizon Worlds away from Quest and towards mobile. The shift suggested that the metaverse had failed to move from technological promise to everyday social practice.

The metaverse’s failure was not just one of convenience. It also struggled because it was never presented simply as a new digital space. It was framed as a future built largely on Meta’s own terms, with access tied to the company’s hardware, platforms, rules, and wider ecosystem. Such decisions made the metaverse feel less like an open evolution of the internet and more like a tightly managed corporate environment.

The distinction mattered because Meta was not merely launching another product. It was promoting a vision of how people might one day work, socialise, shop, and create online. Yet the more expansive that vision became, the more obvious it was that the system behind it remained closed and centralised. A future digital environment is harder to embrace when a single company controls the devices, spaces, distribution, and boundaries of participation.

Meta’s handling of Horizon Worlds clearly exposed that tension. The company could remove features, reshape access, alter incentives, and redirect the platform from the top down. Such a level of control may be standard for a private platform, but it sits uneasily with claims about building the next phase of digital life. In that sense, the metaverse failed not only because people were unconvinced by VR, but because its version of the future felt too corporate, too enclosed, and too disconnected from the openness people still associate with the internet.

Metaverse’s economic contradiction

The metaverse did not fail only as a social project. It also became increasingly difficult to justify on economic grounds. Meta spent heavily on Reality Labs while generating only limited returns from those investments. In its 2025 annual filing, the company said Reality Labs had reduced overall operating profit by around USD 19.19 billion for the year, while warning that similar losses would continue into 2026.

Losses on that scale might still have been acceptable if the metaverse had shown clear signs of momentum. However, there was little evidence of mass adoption, strong retention, or a durable path to monetisation. Virtual land, digital goods, branded experiences, and immersive workspaces never developed into the economic base of a new internet layer.

Instead, the metaverse began to look less like a future growth engine and more like a costly experiment with uncertain returns. The gap between spending and payoff became harder to ignore, especially as Meta continued to frame the metaverse as a long-term strategic priority. What used to be sold as the company’s next major frontier was increasingly difficult to justify in commercial terms.

The broader strategic context also changed. Meta’s own forward-looking statements pointed to increased hiring and spending in 2026, especially in AI. In practice, this meant the company was no longer choosing between the metaverse and inactivity, but between two competing visions of the future. AI was already delivering tangible gains in product development, infrastructure, and investor confidence.

In that competition for attention and capital, the metaverse lost. Meta’s pullback was also not an isolated case. Microsoft moved away from metaverse-first ambitions as well, retiring the Immersive space (3D) view in Teams meetings, Microsoft Mesh on the web, and Mesh apps for PC and Quest in December 2025. The services were replaced by immersive events in Teams, a narrower offering built around specific workplace functions rather than a broad metaverse vision.

The wider retreat matters because it suggests the problem was not limited to Meta’s execution. Another major tech company also stepped back from standalone immersive environments and turned to more limited, use-specific tools instead. A larger pattern appeared from that shift: grand metaverse narratives gave way to practical features, embedded tools, and industry-specific uses. In that sense, the metaverse has not entirely disappeared, but it did lose its status as the next internet.

Metaverse’s afterlife in the age of AI

The metaverse’s decline does not necessarily imply a complete disappearance. What seems more likely is that parts of it will survive in altered form, detached from the sweeping vision that once surrounded it. Rather than continuing as a standalone digital world meant to transform social life, the metaverse may persist as a set of tools, features, and immersive functions folded into other technologies.

AI is likely to play a role in that transition. It can lower the cost of building virtual environments, speed up avatar creation, automate elements of interaction design, and make digital spaces more responsive. In this sense, AI may succeed where the original metaverse struggled, not by reviving the same vision, but by making parts of it more practical and easier to use.

Such a distinction is important because it shifts the focus from ideology to utility. The metaverse was once marketed as the next stage of the internet, yet its more durable applications now appear to lie in narrower settings where immersion serves a clear purpose. Training, design, simulation, and industrial planning are all contexts in which virtual environments can offer measurable value without becoming a universal social destination.

What might survive, then, is not the metaverse as it was originally imagined, but a smaller set of immersive capabilities embedded in gaming, education, industry, and workplace systems. Avatars, digital agents, simulations, and adaptive virtual spaces may all remain relevant, but as components rather than the foundation of a new social order.

The shift also helps explain the political lesson of the metaverse’s collapse. Large-scale investment, aggressive branding, and executive certainty were not enough to secure public legitimacy. Meta tried to present the metaverse as an inevitable horizon, yet users did not embrace it, markets did not reward it in proportion to the spending, and the company itself eventually narrowed the project it had once elevated into a corporate identity.

In that sense, the metaverse matters even in failure. Its retreat does not simply mark the end of an overhyped product cycle. It also reveals the limits of top-down corporate future-making, especially when private platforms try to define the direction of collective digital life before society has decided whether such a future is either desirable or necessary.

Conclusion

The metaverse failed because it asked too much of users, promised too much to investors, and concentrated too much power in a platform model that never convincingly earned public trust. Meta’s retreat from Horizon Worlds makes that failure difficult to ignore, while Microsoft’s parallel narrowing of immersive ambitions suggests the problem extended beyond one company’s misjudgement.

Immersive VR technologies are unlikely to vanish, and AI may even extend some of their useful applications. Yet the metaverse as a universal social future has largely collapsed under the combined weight of weak adoption, unsustainable economics, and an overly corporate vision of digital life. What remains is not the next internet, but a reminder that the future cannot simply be declared into existence by the companies most eager to own it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Edge AI advantages and challenges shaping the future of digital systems

Over the past few years, we have witnessed a rapid shift in the way data is stored and processed across businesses, organisations, and digital systems.

What we are increasingly seeing is that AI itself is changing form as computation shifts away from centralised cloud environments to the network edge. Such a shift has come to be known as edge AI.

Edge AI refers to the deployment of machine learning models directly on local devices such as smartphones, sensors, industrial machines, and autonomous systems.

Instead of transmitting data to remote servers for processing, analysis is performed on the device itself, enabling faster responses and greater control over sensitive information.

Such a transition marks a significant departure from earlier models of AI deployment, where cloud infrastructure dominated both processing and storage.
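
As a concrete illustration of the pattern, the sketch below runs a model locally with ONNX Runtime, a runtime commonly used on edge devices. The model file name, its input shape, and the single-output assumption are all hypothetical; the point is that only the derived decision, never the raw reading, leaves the device.

```python
import numpy as np
import onnxruntime as ort  # lightweight inference runtime often used at the edge

# Hypothetical pre-trained model file already deployed to the device.
session = ort.InferenceSession("anomaly_classifier.onnx")
input_name = session.get_inputs()[0].name

# A sensor reading is processed locally; raw data never leaves the device.
reading = np.random.rand(1, 16).astype(np.float32)  # assumed input shape
outputs = session.run(None, {input_name: reading})  # assumes one output tensor
scores = outputs[0]

# Only this derived result would be transmitted upstream, if anything at all.
print("local decision:", int(scores.argmax()))
```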

From centralised AI to edge intelligence

Traditional AI systems have relied heavily on centralised architectures. Data collected from users or devices would be transmitted to large-scale data centres, where powerful servers would perform computations and generate outputs.

Such a model offered efficiency, scalability, and easier security management, as protection efforts could be concentrated within controlled environments.

Centralisation allowed organisations to enforce uniform security policies, deploy updates rapidly, and monitor threats from a single vantage point. However, reliance on cloud infrastructure also introduced latency, bandwidth constraints, and increased exposure of sensitive data during transmission.

Edge AI improves performance and privacy while expanding cybersecurity risks across distributed systems and devices.

Edge AI introduces a fundamentally different paradigm. Moving computation closer to the data source reduces the reliance on continuous connectivity and enables real-time decision-making.

Such decentralisation represents not merely a technical shift but a reconfiguration of the way digital systems operate and interact with their environments.

Advantages of edge AI

Reduced latency and real-time processing

Latency is significantly reduced when computation occurs locally. Edge systems are particularly valuable in time-sensitive applications such as autonomous vehicles, healthcare monitoring, and industrial automation, where delays can have critical consequences.

Enhanced privacy and data control

Privacy improves when sensitive data remains on-device instead of being transmitted across networks. Such an approach aligns with growing concerns around data protection, regulatory compliance, and user trust.

Operational resilience

Edge systems can continue functioning even when network connectivity is limited or unavailable. In remote environments or critical infrastructure, independence from central servers ensures service continuity.

Bandwidth efficiency and cost reduction

Bandwidth consumption is decreased because only processed insights are transmitted, not raw data. Such efficiency can translate into reduced operational costs and improved system performance.
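
A minimal sketch of that idea, assuming a window of 1,000 raw readings and an arbitrary alert threshold: the device reduces the window to a compact summary before anything is transmitted.

```python
import numpy as np


def summarise_window(samples: np.ndarray) -> dict:
    # Reduce a window of raw readings to a few derived statistics.
    return {
        "mean": float(samples.mean()),
        "max": float(samples.max()),
        "alerts": int((samples > 0.9).sum()),  # threshold is an assumption
    }


window = np.random.rand(1000)       # raw data stays on the device
payload = summarise_window(window)  # only this small summary is sent
print(payload)
```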

Personalisation and context awareness

Devices can adapt to user behaviour in real time, learning from local data without exposing sensitive information externally. In healthcare, personalised diagnostics can be performed directly on wearable devices, while in manufacturing, predictive maintenance can occur on-site.

The dark side of edge AI

However, the shift towards edge computing introduces profound cybersecurity challenges. The most significant of these is the expansion of the attack surface.

Instead of a limited number of well-protected data centres, organisations must secure vast networks of distributed devices. Each endpoint represents a potential entry point for malicious actors.

The scale and diversity of edge deployments complicate efforts to maintain consistent security standards. Security is no longer centralised but dispersed, increasing the likelihood of vulnerabilities and misconfigurations.

Let’s take a closer look at some other challenges of edge AI.

Physical vulnerabilities and device exposure

Edge devices often operate in uncontrolled environments, making physical access a major risk. Attackers may tamper with hardware, extract sensitive information, or reverse engineer AI models.

Model extraction attacks allow adversaries to replicate proprietary algorithms, undermining intellectual property and enabling further exploitation. Such risks are significantly more pronounced compared to cloud systems, where physical access is tightly controlled.

Software constraints and patch management challenges

Many edge devices rely on embedded systems with limited computational resources. Such constraints make it difficult to implement robust security measures, including advanced encryption and intrusion detection.

Patch management becomes increasingly complex in decentralised environments. Ensuring that millions of devices receive timely updates is a significant challenge, particularly when connectivity is inconsistent or when devices operate in remote locations.

Breakdown of traditional security models

The decentralised nature of edge AI undermines conventional perimeter-based security frameworks. Without a clearly defined boundary, traditional approaches to network defence lose effectiveness.

Each device must be treated as an independent security domain, requiring authentication, authorisation, and continuous monitoring. Identity management becomes more complex as the number of devices grows, increasing the risk of misconfiguration and unauthorised access.

Data integrity and adversarial threats

As we mentioned before, edge devices rely heavily on local data inputs to make decisions. As a result, manipulated inputs can lead to compromised outcomes. Adversarial attacks, in which inputs are deliberately altered to deceive machine learning models, represent a significant threat.

In safety-critical systems, such manipulation can lead to severe consequences. Altered sensor data in industrial environments may disrupt operations, while compromised vision systems in autonomous vehicles may produce dangerous behaviour.
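
To make the mechanism concrete, here is a self-contained toy sketch of an FGSM-style perturbation against a made-up linear classifier. Real attacks target real models, but the principle is the same: a small input shift aligned against the model's gradient can flip its output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for an edge model; weights are made up.
w, b = rng.normal(size=8), 0.1


def predict(x: np.ndarray) -> float:
    # Sigmoid score: above 0.5 is treated as the "normal" class.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))


x = rng.normal(size=8)  # a benign input
# For this model the gradient of the score w.r.t. x points along w, so an
# FGSM-style attacker steps each feature against the decision boundary:
epsilon = 0.3  # perturbation budget (an assumption)
x_adv = x - epsilon * np.sign(w)

print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")
```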

Supply chain risks in edge AI

Edge AI systems depend on a combination of hardware, software, and pre-trained models sourced from multiple vendors. Each component introduces potential vulnerabilities.

Attackers may compromise supply chains by inserting backdoors during manufacturing, distributing malicious updates, or exploiting third-party software dependencies. The global nature of technology supply chains complicates efforts to ensure trust and accountability.

Energy constraints and security trade-offs

Edge devices are often designed with efficiency in mind, prioritising performance and power consumption. Security mechanisms such as encryption and continuous monitoring require computational resources that may be limited.

As a result, security features may be simplified or omitted, increasing exposure to cyber threats. Balancing efficiency with robust protection remains a persistent challenge.

Cyber-physical risks and real-world impact

The integration of edge AI into cyber-physical systems elevates the consequences of security breaches. Digital manipulation can directly influence physical outcomes, affecting safety and infrastructure.

Compromised healthcare devices may produce incorrect diagnoses, while disrupted transportation systems may lead to accidents. In energy networks, attacks could impact entire regions, highlighting the broader societal implications of edge AI vulnerabilities.

Regulatory and governance challenges

Existing regulatory frameworks have been largely designed for centralised systems and do not fully address the complexities of decentralised architectures. Questions regarding liability, accountability, and enforcement remain unresolved.

Organisations may struggle to implement effective security practices without clear standards. Policymakers face the challenge of developing regulations that reflect the distributed nature of edge AI systems.

Towards a secure edge AI ecosystem

Addressing all these challenges requires a multi-layered and adaptive approach that reflects the complexity of edge AI environments.

Hardware-level protections, such as secure enclaves and trusted execution environments, play a critical role in safeguarding sensitive operations from physical tampering and low-level attacks.

Encryption and secure boot processes further strengthen device integrity, ensuring that both data and models remain protected and that unauthorised modifications are prevented from the outset.

At the software level, continuous monitoring and anomaly detection are essential for identifying threats in real time, particularly in distributed systems where central oversight is limited.
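
As a sketch of what lightweight on-device monitoring can look like, the following flags readings that deviate sharply from a rolling baseline. The window size and threshold are assumptions; production systems would use more robust statistics.

```python
import numpy as np


def detect_anomalies(values: np.ndarray, window: int = 50, z_thresh: float = 4.0):
    """Flag points that deviate sharply from a rolling baseline (toy example)."""
    flags = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if abs(values[i] - mu) / sigma > z_thresh:
            flags.append(i)  # candidate event for local alerting
    return flags


stream = np.random.default_rng(1).normal(size=500)
stream[400] += 10  # injected spike to illustrate detection
print(detect_anomalies(stream))  # expected to include index 400
```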

Secure update mechanisms must also be prioritised, ensuring that patches and security improvements can be deployed efficiently and reliably across large networks of devices, even in conditions of intermittent connectivity.

Without such mechanisms, vulnerabilities can persist and spread across the ecosystem.
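
A minimal sketch of such a mechanism, using Ed25519 signatures from the widely used Python cryptography package: the device is provisioned with only the vendor's public key and refuses any payload whose signature does not verify. Key generation and signing are collapsed into one script purely for demonstration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key stays with the vendor; only the public key
# is provisioned on the device. Both appear here for the demonstration.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_pubkey = vendor_key.public_key()

firmware = b"...update payload bytes..."
signature = vendor_key.sign(firmware)  # produced at build/release time


def apply_update(payload: bytes, sig: bytes) -> bool:
    # Device-side check: install only updates signed by the vendor.
    try:
        device_trusted_pubkey.verify(sig, payload)
    except InvalidSignature:
        return False  # reject tampered or unsigned payloads
    return True


print(apply_update(firmware, signature))         # True
print(apply_update(firmware + b"x", signature))  # False
```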

At the same time, many enterprises are increasingly adopting a hybrid approach that combines edge and cloud capabilities.

Rather than relying entirely on decentralised or centralised models, organisations are distributing workloads strategically, keeping latency-sensitive and privacy-critical processes on the edge while maintaining centralised oversight, analytics, and security coordination in the cloud.

Such an approach allows organisations to balance performance and control, while enabling more effective threat detection and response through aggregated intelligence.
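
A toy placement policy makes the trade-off explicit; the task attributes and thresholds below are assumptions for illustration, not a recommendation.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    latency_budget_ms: int
    contains_personal_data: bool


def route(task: Task) -> str:
    """Illustrative edge-versus-cloud placement policy."""
    if task.contains_personal_data:
        return "edge"   # keep privacy-critical processing on-device
    if task.latency_budget_ms < 50:
        return "edge"   # tight deadlines cannot absorb a network round trip
    return "cloud"      # everything else benefits from central capacity


for t in (Task("brake-assist", 10, False),
          Task("health-metrics", 500, True),
          Task("fleet-analytics", 5000, False)):
    print(t.name, "->", route(t))
```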

Security must also be embedded into system design from the outset, rather than treated as an additional layer to be applied after deployment. A proactive approach to risk assessment, combined with secure development practices, can significantly reduce vulnerabilities before systems are operational.

Furthermore, collaboration between industry, governments, and research institutions will be crucial in establishing common standards, improving interoperability, and ensuring that security practices evolve alongside technological advancements.

In conclusion, we have seen how the rise of edge AI represents a pivotal shift in both AI and cybersecurity. Decentralisation enables faster, more private, and more resilient systems, yet it also creates a fragmented and dynamic attack surface.

The advantages we have outlined are compelling, but they also introduce additional layers of complexity and risk. Addressing these challenges requires a comprehensive approach that combines technological innovation, regulatory development, and organisational awareness.

Only through such coordinated efforts can the benefits of edge AI be realised while ensuring that security, trust, and safety remain intact in an increasingly decentralised digital landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Advancing global digital cooperation and AI innovation across the UN system

Digital technologies and AI are increasingly shaping economic development, governance and international cooperation. As these technologies expand rapidly, international organisations are working to ensure that innovation is accompanied by responsible governance, inclusive access and coordinated global policies.

Within the United Nations system, a range of initiatives aim to strengthen cooperation on digital transformation and the development of AI. These efforts address issues such as digital infrastructure, data governance, technological innovation and equitable participation in emerging digital ecosystems. International collaboration plays an essential role in ensuring that the benefits of digital technologies support sustainable development while reducing global inequalities in access to digital resources.

Several programmes across the United Nations system reflect these priorities, combining global governance initiatives with practical AI applications in areas such as development, humanitarian response and digital inclusion. The following sections examine selected initiatives that illustrate how AI and digital cooperation are being advanced across different areas of the UN system.

Global Digital Compact

The Global Digital Compact is a comprehensive international framework adopted by United Nations member states to guide global digital cooperation and enhance the governance of AI. It was negotiated by the 193 member states and reflects broad consultations aimed at shaping a shared vision for a digital future that is open, inclusive, safe, and secure for all. The Compact is part of the Pact for the Future, adopted at the 2024 Summit of the Future in New York.

At its core, the Compact seeks to address persistent digital divides by promoting universal connectivity, affordable access and inclusive participation in the digital economy. Governments and stakeholders have committed to connecting all individuals, schools, and hospitals to the internet, increasing investment in digital public infrastructure, and ensuring that technologies are accessible in diverse languages and formats.

The Compact also emphasises human rights and the protection of fundamental freedoms in the digital space, calling for strengthened legal and policy frameworks that uphold international law and protect users from harms such as misinformation and discrimination. It promotes an open, global, stable, and secure internet while supporting access to independent, fact-based information.

The key objective of the Compact is to enhance international cooperation on data governance and AI for the benefit of humanity. It includes commitments to develop interoperable national data governance frameworks, advance responsible and equitable approaches to AI governance, and establish mechanisms for global dialogue and scientific guidance on AI. These elements reflect the need for collaborative, multistakeholder governance that balances innovation with transparency, accountability, and respect for human rights.

Independent International Scientific Panel on AI

The Independent International Scientific Panel on AI is a mechanism called for within the Global Digital Compact to support evidence‑based policymaking in AI governance. Member states requested the establishment of a multi‑disciplinary panel under the United Nations to assess the opportunities, risks and societal impacts of AI, and to promote scientific understanding across geographic and sectoral divides.

The panel is intended to contribute robust, independent scientific analysis to global AI discussions, ensuring that policy decisions are grounded in research rather than short‑term market pressures or fragmented national approaches. Its mandate includes conducting comprehensive risk and impact assessments, developing common methodologies for evaluating AI systems, and advising on interoperable governance frameworks that respect human rights and international law.

By bringing together experts from diverse disciplines and regions, the panel aims to bridge the gap between scientific developments and policymaking. It is a key institutional mechanism for fostering inclusive AI governance, with balanced geographic representation to ensure that insights reflect global needs rather than narrow technological interests.

The panel also complements the broader Global Dialogue on AI Governance, which seeks to engage governments, international organisations, civil society and technical communities in ongoing discussions about normative approaches, standards, and principles for global AI governance.

The UN Digital Cooperation Portal

The UN Digital Cooperation Portal is a central platform designed to support the implementation of the Global Digital Compact by mapping global digital cooperation activities and facilitating coordination among diverse stakeholders. The portal invites governments, UN entities, civil society organisations, researchers, and private sector actors to voluntarily submit information on initiatives related to the Compact’s objectives.

Launched in December 2025, the portal aggregates initiatives across thematic areas, including digital inclusion, AI governance, data governance, digital infrastructure, and the protection of human rights online. By visualising how activities align with agreed international frameworks, the platform supports strategic collaboration, strengthens transparency and highlights opportunities for joint action across regions and sectors.

The portal generates interactive data visualisations that illustrate how digital cooperation initiatives are evolving at the national, regional and global levels. These tools help identify gaps and overlaps in current efforts, enabling stakeholders to coordinate more effectively in pursuit of shared objectives such as closing digital divides and advancing equitable digital development.

As a resource for governments, UN agencies and external partners, the portal also contributes to the preparatory process for the high‑level review of the Global Digital Compact scheduled for 2027, providing an evidence‑based foundation for assessing progress and identifying emerging policy priorities.

Closing the language gap in AI through local language accelerators

Language diversity remains one of the major challenges in global AI development. The world’s population speaks more than seven thousand languages, yet most AI systems currently support only a small number of widely used global languages.

Around 1.2 billion people rely on low-resource languages that remain poorly represented in digital technologies. Limited language representation can restrict access to AI-powered services in sectors such as agriculture, healthcare, education and civic participation.

The Local Language Accelerators programme, developed by the United Nations Development Programme, addresses this challenge by supporting the creation of digital language resources and AI models for underrepresented languages.

The initiative combines technological development with partnerships involving universities, research institutions and local language communities. The technologies involved include optical character recognition systems that digitise written texts, automatic speech recognition tools capable of processing spoken language and text-to-speech technologies that generate digital audio.

Ten projects are currently underway across four continents, including initiatives in Serbia, the Democratic Republic of the Congo, the Republic of the Congo, Namibia, Lesotho, Ghana, Mexico, Peru, Nepal and Iraq. These projects support the creation of new datasets and language resources that can be reused for future AI systems.

Using satellite imagery and AI to improve disaster response

Rapid damage assessment plays a critical role in humanitarian response following natural disasters. Traditional assessment methods often require manual analysis of satellite images and field inspections conducted by experts, a process that can take weeks.

Emergency response operations, however, require reliable information within the first seventy-two hours after a disaster to prioritise rescue operations and humanitarian assistance.

The SKAI platform, developed by the World Food Programme Innovation Accelerator, uses AI-based computer vision to analyse satellite imagery and identify damaged buildings automatically. The system enables humanitarian organisations to assess destruction at the level of individual structures across large geographic areas.

Developed as an open-source project in collaboration with Google Research, the platform can generate prioritised damage assessments within approximately twenty-four hours. Since 2022, the system has analysed more than 3.9 million buildings and identified around 450,000 severely damaged or destroyed structures.

Expanding inclusive participation through the UN Women AI School

Increasing participation in AI development is another priority across the United Nations system. Women remain underrepresented in many AI-related fields, including machine learning engineering and data science.

The UN Women AI School addresses this challenge by providing training programmes designed for policymakers, civil society organisations, UN staff, and young innovators. The initiative aims to strengthen AI literacy and encourage broader participation in shaping the future of digital technologies.

Participants follow structured training tracks combining technical education with discussions on AI governance, ethics, and social impact. Collaborative learning environments encourage participants to develop solutions tailored to the needs of their communities.

More than three thousand participants have taken part in the programme since its launch. A train-the-trainer (ToT) model enables graduates to support future training programmes and expand the initiative to additional regions.

Responsible AI in satellite technologies and earth observation

AI technologies are increasingly integrated into satellite systems and Earth observation platforms. These systems analyse large volumes of geospatial data and generate near-real-time insights about environmental conditions.

Applications include monitoring climate change, analysing natural disasters, and supporting environmental policy planning. Rapid technological progress in this field also raises governance challenges related to transparency and accountability.

Many AI models used in satellite analysis operate as black box systems whose internal decision-making processes are difficult to interpret. Limited transparency can create risks when such systems are used to inform critical policy decisions.

Data bias represents another concern. Training datasets often originate primarily from the Global North, which may lead to inaccurate interpretations of environmental conditions in other regions of the world.

Experts from the United Nations Office for Outer Space Affairs have therefore proposed a framework promoting the responsible use of AI in space technologies. The framework emphasises transparency, accountability, and continued human oversight.

Assessing national readiness for AI governance

UNESCO’s AI Readiness Assessment Methodology helps governments evaluate their capacity to adopt and regulate AI technologies responsibly.

The methodology examines multiple dimensions of national AI ecosystems, including infrastructure, research capacity, institutional readiness and regulatory frameworks. Rather than ranking countries, the assessment identifies strengths and areas requiring further development.

Since its introduction in 2022, the methodology has been implemented in more than seventy countries. More than seventeen thousand stakeholders have participated in consultations associated with the initiative.

Assessment results have contributed to the development of national AI strategies and policy frameworks in several regions. An updated version of the methodology is expected to be released in 2026.

Additionally, UNESCO promotes the ethical development and use of AI through its Recommendation on the Ethics of Artificial Intelligence. The global framework sets out principles on transparency, accountability, fairness, and respect for human rights to guide national policies and international cooperation.

AI for Good and global capacity building

The International Telecommunication Union coordinates the AI for Good initiative, which focuses on applying AI technologies to global challenges while strengthening international cooperation in governance and standards.

The programme operates across multiple areas, including multistakeholder dialogue, technical standard development, governance support and capacity development activities.

More than four hundred AI-related standards have already been developed in areas such as multimedia technologies, energy efficiency and cybersecurity. Governance dialogues organised through the initiative have involved more than one hundred ministers and regulators.

Educational programmes linked to the initiative aim to expand digital skills among young people worldwide through robotics competitions, machine learning challenges and educational partnerships.

The AI for Good Global Summit 2026, set to take place from 7–10 July in Geneva, will convene governments, industry leaders and civil society to advance AI governance, promote responsible innovation, and highlight initiatives that foster inclusive and equitable digital development.

AI tools supporting refugee entrepreneurship

AI technologies are also being used to support economic opportunities for displaced populations. The United Nations Refugee Agency has developed an AI-powered virtual assistant designed to help refugees and asylum seekers transform business ideas into structured business plans.

The platform guides users through financial planning, market analysis and the preparation of investment proposals. The development of the system involved collaboration with NGOs, governments, and entrepreneurial networks across Latin America.

The tool was initially implemented in Paraguay and was designed with input from refugee communities. Remote access allows users to engage with the platform regardless of geographical or institutional constraints.

More than 340 refugee entrepreneurs have used the platform since its launch, with women representing approximately sixty percent of participants. The model is designed to be scalable and could be implemented in additional regions.

Promoting responsible innovation in civilian AI for peace and security

The rapid expansion of AI technologies brings increasing security challenges, particularly due to the potential misuse of civilian AI systems in military, conflict-related, or high-risk contexts. Dual-use applications mean that tools designed for civilian purposes, such as data analysis or autonomous systems, could also be repurposed in ways that threaten international peace, stability or human safety.

The United Nations Office for Disarmament Affairs works to foster responsible innovation practices, ensuring that the development and deployment of AI technologies consider their broader implications for global peace and security. Addressing these risks requires ongoing collaboration and dialogue among policymakers, researchers, industry stakeholders, and civil society, creating a shared framework for understanding and mitigating potential threats.

To support this, the programme organises a comprehensive set of initiatives, including thematic multistakeholder dialogues, academic workshops, public panels, private sector roundtables and in-person training sessions for graduate students. These activities aim not only to raise awareness of emerging security risks, but also to provide practical guidance and tools that promote safe, transparent and accountable AI practices in civilian applications worldwide.

UN 2.0 Communities of Practice

Knowledge sharing and collaboration are strengthened through UN 2.0 Communities of Practice, connecting partners across the United Nations system and beyond. The networks facilitate the exchange of expertise and approaches on digital transformation, data strategy, innovation, and strategic foresight.

Over 18,000 practitioners from more than 160 countries participate, enhancing the collective capacity to address complex AI and digital challenges. Thematic groups, including those focused on digital and data initiatives, support peer-to-peer engagement, professional development, and collaborative problem-solving. Participation allows stakeholders to contribute to a wider ecosystem of expertise and innovation, promoting inclusive digital governance and supporting the Sustainable Development Goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic’s Pentagon dispute and military AI governance in 2026

On 28 February 2026, Anthropic’s Claude rose to No. 1 in Apple’s US App Store free rankings, overtaking OpenAI’s ChatGPT. The surge came shortly after OpenAI announced a partnership with the US Department of Defense (DoD), making its technology available to the US Army. The development prompted discussion among users and observers about whether concerns over military partnerships were influencing the shift to alternative AI tools.

Mere hours before the $200 million OpenAI-DoD deal was finalised, Anthropic was informed that its own potential deal with the Pentagon had fallen through. According to reporting, discussions broke down after Anthropic declined to grant the US government unrestricted control over its models, particularly for potential uses related to large-scale domestic surveillance.

Following the breakdown of negotiations, US officials reportedly designated Anthropic as a ‘supply chain risk to national security’. The decision effectively limited the company’s participation in certain defence-related projects and highlighted growing tensions between AI developers’ safety policies and government expectations regarding national security technologies.

The debate over military partnerships sparked internal and industry-wide discussion. Caitlin Kalinowski, former head of AR glasses hardware at Meta and hardware lead at OpenAI, resigned soon after the US DoD deal was announced, citing ethical concerns about the company’s involvement in military AI applications.

AI has driven recent technological innovation, with companies like Anduril and Palantir collaborating with the US DoD to deploy AI on and off the battlefield. The debate over AI’s role in military operations, surveillance, and security has intensified, especially as Middle East conflicts highlight its potential uses and risks.

Against this backdrop, the dispute between Anthropic and the Pentagon reflects a wider debate on how AI should be used in security and defence. Governments are increasingly relying on private tech companies to develop the systems that shape modern military capabilities, while those same companies are trying to set limits on how their technologies can be used.

As AI becomes more deeply integrated into security strategies around the world, the challenge may no longer be whether the technology will be used, but how it should be governed. The question is: who should ultimately decide where the limits of military AI lie?

Anthropic’s approach to military AI

Anthropic’s approach is closely tied to its concept of ‘constitutional AI’, a training method that guides how the model behaves by embedding a set of principles directly into its responses. These principles are meant to reduce harmful outputs and steer the system away from unsafe or unethical uses. While such safeguards can improve reliability and trust, they can also limit how the technology is deployed in more sensitive contexts, such as military operations.

Anthropic’s Constitution says its AI assistant should be ‘genuinely helpful’ to people and society, while avoiding unsafe, unethical, or deceptive actions. The document reflects the company’s broader effort to build safeguards into model deployment. In practice, Anthropic has set limits on certain applications of its technology, including uses related to large-scale surveillance or military operations.

Anthropic presents these safeguards as proof of its commitment to responsible AI. Reports indicate that concerns over unrestricted model access led to the breakdown in talks with the US DoD.

At the same time, Anthropic clarifies that its concerns are specific to certain uses of its technology. The company does not generally oppose cooperation with national security institutions. In a statement following the Pentagon’s designation of the company as a ‘supply chain risk to national security’, CEO Dario Amodei said, ‘Anthropic has much more in common with the US DoD than we have differences.’ He added that the company remains committed to ‘advancing US national security and defending the American people.’

The episode, therefore, highlights a nuanced position. Anthropic appears open to defence partnerships but seeks to maintain clearer limits on the deployment of its AI systems. The disagreement with the Pentagon ultimately reflects not a fundamental difference in goals, but rather different views on how far military institutions should be able to control and use advanced AI technologies.

Anthropic’s position illustrates a broader challenge facing governments and tech companies as AI becomes increasingly integrated into national security systems. While military and security institutions are eager to deploy advanced AI tools to support intelligence analysis, logistics, and operational planning, the companies developing these technologies are also seeking to establish safeguards for their use. Anthropic’s willingness to step back from a major defence partnership and challenge the Pentagon’s response underscores how some AI developers are trying to set limits on military uses of their systems.

Defence partnerships that shape the AI industry

While Anthropic has taken a cautious approach to military deployment of AI, other technology companies have pursued closer partnerships with defence institutions. One notable example is Palantir, the US data analytics firm co-founded by Peter Thiel that has longstanding relationships with numerous government agencies. Documents leaked in 2013 suggested that the company had contracts with at least 12 US government bodies. More recently, Palantir has expanded its defence offering through its Artificial Intelligence Platform (AIP), designed to support intelligence analysis and operational decision-making for military and security institutions.

Another prominent player is Anduril Industries, a US defence technology company focused on developing AI-enabled defence systems. The firm produces autonomous and semi-autonomous technologies, including unmanned aerial systems and surveillance platforms, which it supplies to the US DoD.

Shield AI, meanwhile, is developing autonomous flight software designed to operate in environments where GPS and communications may be unavailable. Its Hivemind AI platform powers drones that can navigate buildings and complex environments without human control. The company has worked with the US military to test these systems in training exercises and operational scenarios, including aircraft autonomy projects aimed at supporting fighter pilots.

The aforementioned partnerships illustrate how the US government has increasingly embraced AI as a key pillar of national defence and future military operations. In many cases, these technologies are already being used in operational contexts. Palantir’s Gotham and AIP, for instance, have supported US military and intelligence operations by processing satellite imagery, drone footage, and intercepted communications to help analysts identify patterns and potential threats.

Other companies are contributing to defence capabilities through autonomous systems development and hardware integration. Anduril supplies the US DoD with AI-enabled surveillance, drone, and counter-air systems designed to detect and respond to potential threats. At the same time, OpenAI’s technology is increasingly being integrated into national security and defence projects through growing collaboration with US defence institutions.

Such developments show that AI is no longer a supporting tool but a fundamental part of military infrastructure, influencing how defence organisations process information and make decisions. As governments deepen their reliance on private-sector AI, the emerging interplay among innovation, operational effectiveness, and oversight will define the central debate on military AI adoption.

The potential benefits of military AI

The debate over Anthropic’s restrictions on military AI use highlights the reasons governments invest in such technologies: defence institutions are drawn to AI because it processes vast amounts of information much faster than human analysts. Military operations generate massive data streams from satellites, drones, sensors, and communication networks, and AI systems can analyse them in near real time.

In 2017, the US DoD launched Project Maven to apply machine learning to drone and satellite imagery, enabling analysts to identify objects, movements, and potential threats on the battlefield faster than with traditional manual methods.

AI is increasingly used in military logistics and operational planning. It helps commanders anticipate equipment failures, enables predictive maintenance, optimises supply chains, and improves field asset readiness.

Recent conflicts have shown that AI-driven tools can enhance military intelligence and planning. In Ukraine, for example, forces reportedly used software to analyse satellite imagery, drone footage, and battlefield data. Key benefits include more efficient target identification, real-time tracking of troop movements, and clearer battlefield awareness through the integration of multiple data sources.

AI-assisted analysis has been used in intelligence and targeting during the Gaza conflict. Israeli defence systems use AI tools to rapidly process large datasets for surveillance and intelligence operations. The tools help analysts identify potential militant infrastructure, track movements, and prioritise key intelligence, thus speeding up information processing for teams during periods of high operational activity.

More broadly, AI is transforming the way militaries coordinate across land, air, sea, and cyber domains. AI integrates data from diverse sources, equipping commanders to interpret complex operational situations and enabling faster, informed decision-making. The advances reinforce why many governments see AI as essential for future defence planning.

Ethical concerns and Anthropic’s limits on military AI

Despite the operational advantages of military AI, its growing role in national defence systems has raised ethical concerns. Critics warn that overreliance on AI for intelligence analysis, targeting, or operational planning could introduce risks if the systems produce inaccurate outputs or are deployed without sufficient human oversight. Even highly capable models can generate misleading or incomplete information, which in high-stakes military contexts could have serious consequences.

Concerns about the reliability of AI systems are also linked to the quality of the data they learn from. Many models still struggle to distinguish authentic information from synthetic or manipulated content online. As generative AI becomes more widespread, the risk that systems may absorb inaccurate or fabricated data increases, potentially affecting how these tools interpret intelligence or analyse complex operational environments.

Questions about autonomy have also become a major issue in discussions around military AI. As AI systems become increasingly capable of analysing battlefield data and identifying potential targets, debates have emerged over how much decision-making authority they should be given. Many experts argue that decisions involving the use of lethal force should remain under meaningful human control to prevent unintended consequences or misidentification of targets.

Another area of concern relates to the potential expansion of surveillance capabilities. AI systems can analyse satellite imagery, communications data, and online activity at a scale beyond the capacity of human analysts alone. While such tools may help intelligence agencies detect threats more efficiently, critics warn that they could also enable large-scale monitoring if deployed without clear legal and institutional safeguards.

It is within this ethical landscape that Anthropic has attempted to position itself as a more cautious actor in the AI industry. Through initiatives such as Claude’s Constitution and its broader emphasis on AI safety, the company argues that powerful AI systems should include safeguards that limit harmful or unethical uses. Anthropic’s reported refusal to grant the Pentagon unrestricted control over its models during negotiations reflects this approach.

The disagreement between Anthropic and the US DoD therefore highlights a broader tension in the development of military AI. Governments increasingly view AI as a strategic technology capable of strengthening defence and intelligence capabilities, while some developers seek to impose limits on how their systems are deployed. As AI becomes more deeply embedded in national security strategies, the question may no longer be whether these technologies will be used, but who should define the boundaries of their use.

Military AI and the limits of corporate control

Anthropic’s dispute with the Pentagon shows that the debate over military AI is no longer only about technological capability. Questions of speed, efficiency, and battlefield advantage now collide with concerns over surveillance, autonomy, human oversight, and corporate responsibility. Governments increasingly see AI as a strategic asset, while companies such as Anthropic are trying to draw boundaries around how far their systems can go once they enter defence environments.

Contrasting approaches across the industry make the tension even clearer. Palantir, Anduril, Shield AI, and OpenAI have moved closer to defence partnerships, reflecting a broader push to integrate advanced AI into military infrastructure. Anthropic, by comparison, has tried to keep one foot in national security cooperation while resisting uses it views as unsafe or unethical. A divide of that kind suggests that the future of military AI may be shaped as much by company policies as by government strategy.

The growing reliance on private firms to build national security technologies has made governance harder to define. Military institutions want flexibility, scale, and operational control, while AI developers increasingly face pressure to decide whether they are simply suppliers or active gatekeepers of how their models are deployed. Anthropic’s position does not rule out defence cooperation outright, but it does expose how fragile the relationship becomes when state priorities and corporate safeguards no longer align.

Military AI will continue to expand, whether through intelligence analysis, logistics, surveillance, or autonomous systems. Governance, however, remains the unresolved issue at the centre of that expansion. As AI becomes more deeply embedded in defence policy and military planning, should governments alone decide how far these systems can go, or should companies like Anthropic retain the power to set limits on their use?

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI ethics as societal infrastructure in the digital era

In recent days, social media has been alight with discussions about the 2014 series whose portrayal of AI and ethical dilemmas now feels remarkably prophetic: Silicon Valley. Fans and professionals alike are highlighting how the show’s depiction of AI, automated agents, and ethical dilemmas mirrors today’s real-world challenges. 

From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.

While the show dramatises these dilemmas for entertainment, the real world is now facing the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing the show’s predictions to life, raising urgent ethical questions for developers, policymakers, and society alike.

Balancing technological progress with societal values is essential, as intelligent technologies must align with society, guided by AI ethics.
Source: Freepik

The rise of AI ethics: from niche concern to central requirement

The growing influence of AI on society has propelled ethics from a largely academic debate to a central factor in technological decision-making and a guiding force in development. The impact of AI is becoming tangible across society, from employment and finance to online content.

Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks. 

The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values, demonstrating accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.

Functions of AI ethics: trust, guidance, and societal risk

Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build public trust between developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.

For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest. 

By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. Such an approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.

The politics of AI ethics: regulatory theatre and corporate influence

Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.

Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures. 

Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.

AI ethics as a lens for technology and society

The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits. 

AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise; it reflects evolving expectations about the role technology should play in human life.

AI ethics as early-warning governance for social impact

AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust. 

Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.

The bridge between technological power and social legitimacy

AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable. 

Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.

Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.

As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI slop’s meteoric rise and the impact of synthetic content in 2026

In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word of the Year, reflecting a widespread reaction to AI-generated content online, often referred to as ‘AI slop.’ By choosing ‘slop’, typically associated with unappetising animal feed, they captured unease about the digital clutter created by AI tools.

As LLMs and AI tools became accessible to more people, many saw them as opportunities for profit through the creation of artificial content for marketing or entertainment, or through the manipulation of social media algorithms. However, despite advances in video and image generation, there is a growing gap between perceived quality and actual detection: many creators overestimate how easily AI content evades notice, fuelling scepticism about its value online.

As generative AI systems expand, the debate goes beyond digital clutter to deeper concerns about trust, market incentives, and regulatory resilience. How will societies manage the social, economic, and governance impacts of an information ecosystem increasingly shaped by automated abundance? In simplified terms, is AI slop more than a simple digital nuisance, or do we needlessly worry about a transient vogue that will eventually fade away?

The social aspect of AI slop’s influence

The most visible effects of AI slop emerge on large social media platforms such as YouTube, TikTok, and Instagram. Users frequently encounter AI-generated images and videos that appropriate celebrity likenesses without consent, depict fabricated events, or present sensational and misleading scenarios. Comment sections often become informal verification spaces, where some users identify visual inconsistencies and warn others, while many remain uncertain about the content’s authenticity.

However, no platform has suffered the AI slop effect as much as Facebook, and once you take a glance at its demographics, the pieces start to come together. According to multiple studies, Facebook’s user base is mostly populated by adults aged 25-34, but users over the age of 55 make up nearly 24 percent of all users. While seniors do not constitute the majority (yet), younger generations have been steadily migrating to social platforms such as TikTok, Instagram, and X, leaving the most popular platform to the whims of the older generation.

Due to factors such as cognitive decline, positivity bias, or digital (il)literacy, older social media users are more likely to fall for scams and fraud. Such conditions make Facebook an ideal place for spreading low-quality AI slop and false information. Scammers use AI tools to create fake images and videos about made-up crises to raise money for causes that are not real.

The lack of regulation on Meta’s side is the most glaring sore spot, evidenced by the company pushing back against the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA), viewing them as ‘overreaching’ and stifling innovation. The math is simple: content generates engagement, resulting in more revenue for Facebook and other platforms owned by Meta. Whether that content is authentic and high-quality or low-effort AI slop, the numbers don’t care.

The economics behind AI slop

At its core, AI content is not just a social media phenomenon, but an economic one as well. GenAI tools drastically reduce the cost and time required to produce all types of content, and when production approaches zero marginal cost, the incentive to churn out AI slop seems too good to ignore. Even minimal engagement can generate positive returns through advertising, affiliate marketing, or platform monetisation schemes.

AI content production goes beyond exploiting social media algorithms and monetisation policies. SEO can now be automated at scale, thus generating thousands of keyword-optimised articles within hours. Affiliate link farming allows creators to monetise their products or product recommendations with minimal editorial input.

On video platforms like TikTok and YouTube, synthetic voice-overs and AI-generated visuals are on full display, banking on trending topics and using AI-generated thumbnails to garner more views on a whim. Thanks to AI tools, content creators can post relevant AI-generated content in minutes, enabling them to jump on the hottest topics and drive clicks faster than with any other authentic content creation method.

To rub salt in the wound, YouTube content creators share the sentiment that they are victims of the platform’s double standards in enforcing its strict community guidelines. Even the largest YouTube channels are often flagged for a plethora of breaches, including copyright claims, depictions of dangerous or illegal activities, and harmful speech. On the other hand, AI slop videos seem to fly under YouTube’s radar, leading to more resentment towards AI-generated content.

Businesses that rely on generative AI tools to market their services online are also finding AI to be the way to go, as most users make little effort to distinguish authentic content from synthetic content and attach little importance to the difference. Instead of paying voice-over artists and illustrators, it is far cheaper to simply create the desired post in a few minutes, adding fuel to an already raging fire. Some might call it AI slop, but again, the numbers are what truly matter.

The regulatory challenge of AI slop

AI slop is not only a social and economic issue, but also a regulatory one. The problem is not a single AI-generated post that promotes harmful behaviour or misleading information, but the sheer scale of synthetic content entering digital platforms. When large volumes of low-value or deceptive material circulate on the web, they can distort information ecosystems and make moderation a tough challenge. Such a predicament shifts the focus from individual violations to broader systemic effects.

In the EU, the DSA requires very large online platforms to assess and mitigate the systemic risks linked to their services. While the DSA does not specifically target AI slop, its provisions on transparency, content recommendation algorithms, and risk mitigation could apply if AI content significantly affects public discourse or enables fraud. The challenge lies in defining when content volume prevails over quality control, becoming a systemic issue rather than isolated misuse.

Debates around labelling AI slop and transparency also play a large role. Policymakers and platforms have explored ways to flag AI-generated content through disclosures or watermarking. For example, OpenAI’s Sora generates videos with a faint Sora watermark, although it is hardly visible to an uninitiated user. Nevertheless, labelling alone may not address deeper concerns if recommendation systems continue to prioritise engagement above all else: the issue is not only whether users know the content is AI-generated, but how such content is ranked, amplified, and monetised.
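
To make the labelling idea concrete, the sketch below stamps a visible ‘AI-generated’ label onto an image before publication. It is a minimal illustration using the Pillow library; the file names, label text, and placement are assumptions made for the example, not any platform’s actual pipeline.

```python
# Minimal visible-watermark sketch using Pillow (pip install pillow).
# File names, label text, and placement are illustrative assumptions.
from PIL import Image, ImageDraw

def add_ai_label(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    base = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Semi-transparent text in the bottom-left corner; faint by design,
    # which is exactly why critics argue labelling alone is not enough.
    draw.text((10, base.height - 24), label, fill=(255, 255, 255, 110))
    Image.alpha_composite(base, overlay).convert("RGB").save(path_out)

add_ai_label("generated.png", "generated_labelled.png")
```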

More broadly, AI slop highlights the limits of traditional content moderation. As generative tools make production faster and cheaper, enforcement systems may struggle to keep pace. Regulation, therefore, faces a structural question: can existing digital governance frameworks preserve information quality in an environment where automated content production continues to grow?

Building resilience in the era of AI slop

Humans are considered the most adaptable species on Earth, and for good reason. While AI slop has exposed weaknesses in platform design, monetisation models, and moderation systems, it may also serve as a catalyst for adaptation. Unless regulatory bodies unite under one banner and agree to ban AI content for good, it is safe to say that synthetic content is here to stay. However, sooner or later, systemic regulations will evolve to address this new AI craze and mitigate its negative effects.

The AI slop bubble is bound to burst at some point, as online users come to favour meticulously crafted content – whether authentic or artificial – over low-quality material. Consequently, incentives may also evolve along with content saturation, leading to a greater focus on quality rather than quantity. Advertisers and brands often prioritise credibility and brand safety, which could encourage platforms to refine their ranking systems to reward originality, reliability, and verified creators.

Transparency requirements, systemic risk assessments, and discussions around provenance disclosure mechanisms imply that governance is responding to the realities of generative AI. Instead of marking the deterioration of digital spaces, AI slop may represent a transitional phase in which platforms, policymakers, and users are challenged to adjust their expectations and norms accordingly.

Finally, the long-term outcome will depend entirely on whether innovation, market incentives, and governance structures can converge around information quality and resilience. In that sense, AI slop may ultimately function less as a permanent state of affairs and more as a stress test to separate the wheat from the chaff. In the upcoming struggle between user experience and generative AI tools, the former will have the final say, which is an encouraging thought.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The European marathon towards digital sovereignty

Derived from the Latin word ‘superanus’, through the French word ‘souveraineté’, sovereignty can be understood as: ‘the ultimate overseer, or authority, in the decision-making process of the state and in the maintenance of order’ – Britannica. Digital sovereignty, specifically European digital sovereignty, refers to ‘Europe’s ability to act independently in the digital world’.

In 2020, the European Parliament had already identified the consequences of reliance on non-EU technologies: from the economic and social influence of non-EU technology companies, which can undermine user control over personal data, to the slow growth of EU technology companies and limits on the enforcement of European laws.

Today, these concerns persist: from Romanian election interference on TikTok’s platform and Microsoft’s interference with the ICC, to the Dutch government’s authentication platform being acquired by a US firm and American and Chinese LLMs booming while European ones lag behind. The EU is at a crossroads between international reliance and homegrown adoption.

The issue of EU digital sovereignty has gained momentum in the context of recent and significant shifts in US foreign policy towards its allies. In this environment, the pursuit of EU digital sovereignty appears a justified and proportionate response, one that might previously have been perceived as unnecessarily confrontational.

In light of this, this analysis discusses the rationale behind EU digital sovereignty (including dependency, innovation and effective compliance), recent European-centric technological and platform shifts, the steps the EU is taking to become digitally sovereign and, finally, examples of European alternatives.

Rationale behind the move

The reasons for digital sovereignty can be summed up in three main areas: (i) less dependency on non-EU tech, (ii) leading and innovating technological solutions, and (iii) ensuring better enforcement of, and adherence to, data protection laws and fundamental rights.

(i) Less dependency: Global geopolitical tensions between the US, China and Russia are pushing Europe to develop its own digital capabilities and secure its supply chains. An insecure supply chain leaves Europe vulnerable, for example to failing energy grids.

More recently, US giant Microsoft threatened the international legal order by revoking the software access of US-sanctioned International Criminal Court Chief Prosecutor Karim Khan, preventing him from carrying out his duties at the ICC. In light of such scenarios, Europeans are turning to more European-based solutions to reduce upstream dependencies.

(ii) Leaders & innovators: A common argument is that Americans innovate, the Chinese copy, and the Europeans regulate. If the EU aims to be a digital geopolitical player, it must position itself to be a regulator which promotes innovation. It can achieve this by upskilling its workforce of non-digital trades into digital ones to transform its workforce, have more EU digital infrastructure (data centres, cloud storage and management software), further increase innovation spending and create laws that truly allow for the uptake of EU technological development instead of relying on alternative, cheaper non-EU options.

(iii) Effective compliance: Since fines are more difficult to enforce against non-EU companies than EU ones (e.g., Clearview AI), EU-based technology organisations would allow corrective measures, warnings, and fines to be enforced more effectively, enabling greater adherence to the EU’s digital agenda and respect for fundamental rights.

Can the EU achieve digital sovereignty?

The main speed bumps on the road to EU digital sovereignty are: i) a lack of digital infrastructure (cloud storage and data centres), ii) dependency on (critical) raw materials, and iii) the legislative initiatives needed to facilitate the path towards digital sovereignty (innovation procurement and a fragmented compliance regime).

i) Lack of digital infrastructure: In order for the EU to become digitally sovereign, it must have its own digital infrastructure.

In practice, the EU relies heavily on American data centre providers (e.g. Equinix, Microsoft Azure, Amazon Web Services) hosting in the EU. Even though the data is European and hosted in the EU, the company hosting it is non-European. This poses reliance and legislative challenges, such as ensuring adequate technical and organisational measures to protect personal data in transit to the US. Given the EU-US Data Privacy Framework (DPF), there is currently a legal basis for transferring EU personal data to the US.

However, if the DPF were struck down (perhaps over the US CLOUD Act), as its predecessors were in Schrems I and Schrems II, and as a potential Schrems III might do again, there would no longer be a legal basis for transferring EU personal data to a US data centre.

Previously, the EU’s 2022 Directive on the resilience of critical entities required EU countries to identify critical infrastructure and ensure that the technical, security and organisational measures needed to assure its resilience are taken. Part of this Directive covers digital infrastructure, including providers of cloud computing services and data centres. Building on it, the EU has recently developed guidelines for member states to identify critical entities. However, these guidelines do not spell out how resilience is to be achieved, leaving that responsibility with member states.

Currently, the EU is revising legislation to strengthen its control over critical digital infrastructure. Reports indicate that revisions of existing legislation (the Chips Act and the Quantum Act), as well as new legislation (the Digital Networks Act and the Cloud and AI Development Act), are underway.

ii) Raw material dependency: The EU cannot be digitally sovereign until it reduces its dependency on other countries’ raw materials for the hardware that underpins technological sovereignty. In 2025, the EU’s goal was to create a new roadmap towards critical raw material (CRM) sovereignty, relying on its own energy sources and building its own infrastructure.

Thus, the RESourceEU Action Plan was born in December 2025. The plan contains six pillars: securing supply through knowledge; accelerating and promoting projects; using the circular economy and fostering innovation (recycling products which contain CRMs); increasing European demand for European projects (stockpiling CRMs); protecting the single market; and partnering with third countries for long-lasting diversification. Practically speaking, part of the plan is to match European and/or global raw material supply with European demand for European projects.

iii) Legislative initiatives to facilitate the path towards digital sovereignty:

Tackling difficult innovation procurement: the aim is to facilitate the uptake of innovation procurement across the EU. In 2026, the EU is set to reform its public procurement framework for innovation. The Innovation Procurement Update (IPU) team, with representatives from over 33 countries (predominantly law firms, Bird & Bird being the most represented), recommends that innovation procurement reach 20% of all public procurement.

Another recommendation would help more costly innovative solutions win procurement projects that in the past went to cheaper bids. In practice, the lowest-priced bid that meets the remaining procurement conditions wins – but giving price less weight relative to other award criteria would enable companies with more costly innovative solutions to win public procurement bids.
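
A toy calculation illustrates the mechanics. The bids, scores and weights below are invented for the example and do not reflect any actual EU award formula; the point is simply that shifting weight from price to other criteria changes which bid wins.

```python
# Toy weighted bid evaluation; all numbers are invented for illustration.
bids = {
    "cheap_conventional": {"price": 1.0, "innovation": 0.3},  # cheapest bid
    "costly_innovative": {"price": 0.6, "innovation": 0.9},   # pricier, more innovative
}

def winner(price_weight: float) -> str:
    def total(bid: dict) -> float:
        return price_weight * bid["price"] + (1 - price_weight) * bid["innovation"]
    return max(bids, key=lambda name: total(bids[name]))

print(winner(0.8))  # price-dominated scoring  -> cheap_conventional
print(winner(0.4))  # innovation weighted more -> costly_innovative
```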

Alleviating compliance challenges: lowering other compliance burdens whilst maintaining the digital acquis. Recently announced at the World Economic Forum by Commission President Ursula von der Leyen, EU.inc would help cross-border business operations scale up by alleviating company, corporate, insolvency, labour and taxation law compliance burdens. By harmonising these into a single framework, businesses can more easily grow and deploy cross-border solutions that would otherwise face hurdles.

Power through data: another legislative measure that would help facilitate EU digital sovereignty is unlocking the potential of European data. Researching innovative solutions requires data, whether personal or non-personal. The EU’s GDPR regulates personal data and is currently undergoing amendments. If the proposed changes to the GDPR are approved, i.e. a broadening of its scope, data that used to be considered personal (and thus required GDPR compliance) could be deemed non-personal and used more freely for research purposes. The Data Act regulates the reuse and re-sharing of non-personal data, aiming to simplify and bolster its fair reuse. Overall, both personal and non-personal data can provide insights that research can draw on in developing European innovative sovereign solutions.

European alternatives

European companies have already built a network of European platforms, services and apps with European values at heart:

| Category | Currently used | EU alternative | Comments |
|---|---|---|---|
| Social media | TikTok, X, Instagram | Monnet (Luxembourg), ‘W’ (Sweden) | Monnet is a social media app that prioritises connections and non-addictive scrolling. The recently announced ‘W’ positions itself as a replacement for ‘X’ and is gaining major traction with a non-advertising model at its heart. |
| Email | Microsoft’s Outlook and Google’s Gmail | Tuta (Germany; mail/calendar), Proton (Switzerland), Mailbox (Germany), Mailfence (Belgium) | Replace email and calendar apps with privacy-focused business models. |
| Search engine | Google Search and DuckDuckGo | Qwant (France) and Ecosia (Germany) | Qwant has focused on privacy since its launch in 2013. Ecosia runs an eco-friendly business model that helps plant trees when users search. |
| Video conferencing | Microsoft Teams and Slack | Visio (France), Wire (Switzerland), Mattermost (US, but self-hosted), Stackfield (Germany), Nextcloud Talk (Germany) and Threema (Switzerland) | These alternatives are end-to-end encrypted. Visio is used by the French government. |
| Writing tools | Microsoft’s Word & Excel, Google Sheets, Notion | LibreOffice (Germany), OnlyOffice (Latvia), Collabora (UK), Nextcloud Office (Germany) and CryptPad (France) | LibreOffice is compatible with, and provides a free alternative to, Microsoft’s office suite. |
| Cloud storage & file sharing | OneDrive, SharePoint and Google Drive | Pydio Cells (France), Tresorit (Switzerland), pCloud (Switzerland), Nextcloud (Germany) | Most of these options provide cloud storage, and Nextcloud is a recurring alternative across categories. |
| Finance | Visa and Mastercard | Wero (EU) | Not only will it provide an EU-wide digital wallet option, but it will also replace existing national options, allowing for fast adoption. |
| LLM | OpenAI, Gemini, DeepSeek | Mistral AI (France) and DeepL (Germany) | DeepL is already widely used, and Mistral is more transparent, with a partially open-source model that developers can easily reuse. |
| Hardware | – | Semiconductors: ASML (Netherlands); data centres: GAIA-X (Belgium) | ASML is a chip powerhouse for the EU, and GAIA-X sets an example for EU-based data centres with its open-source federated data infrastructure. |

A dedicated website called ‘European Alternatives’ provides exactly what its name suggests: a list of European alternatives spanning more than 50 categories and over 100 services.

Conclusion

In recent years, the Union’s policy goals have shifted towards overt digital sovereignty solutions through diversification of materials and increased innovation spending, combined with a restructuring of the legislative framework to create the necessary path towards European digital infrastructure.

Whilst this analysis does not cover every speed bump or avenue on the road to EU digital sovereignty, it sheds light on the EU’s most recent major policy developments. Key questions remain regarding data reuse, its impact on fundamental data protection rights, and whether this reshaping of the framework will yield the intended results.

Therefore, how will the EU tread whilst it becomes a more coherent sovereign geopolitical player?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook: Inside the experimental AI agent society

Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, Moltbook circulated mostly within tech circles – mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.

Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?

Moltbook reveals how AI agent social networks blur the line between innovation, synthetic hype, and emerging security risk.
Source: Freepik

The rise of an agent-driven social experiment

Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.

The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as everything from a ‘Reddit for AI agents’ to a proto-science-fiction simulation of machine society.

Yet beneath the spectacle, Moltbook was raising more complex questions about autonomy, control, and how much of this emerging machine society was real, and how much was staged.

Moltbook reveals how AI agent social networks blur the line between innovation, synthetic hype, and emerging security risk.
Screenshot: Moltbook.com

How Moltbook evolved from an open-source experiment to a viral phenomenon 

Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.  

Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore. 

The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions. 

Influential figures amplified the spectacle, including prominent AI researcher and OpenAI cofounder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.

The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.

Moltbook and the illusion of an autonomous AI agent society

At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.

Closer inspection, however, revealed a far less autonomous reality.

Most Moltbook agents were not acting independently but were instead executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because they were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms due to human-derived datasets.

Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry: humans directing outputs from behind the curtain.

Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer: an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities; they are remixing ours.

Security risks beneath the spectacle of the Moltbook platform 

If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook exposed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that allowed unauthenticated access to agent profiles, enabling bulk data extraction.

The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems, effectively merging social interaction with operational access.
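
For readers wondering what the missing safeguard looks like, below is a minimal sketch of a per-client token-bucket limit on account creation. The class, limits, and function names are illustrative assumptions, not Moltbook’s actual code.

```python
# Minimal per-client token-bucket rate limiter; limits and names are
# illustrative assumptions, not Moltbook's actual implementation.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int) -> None:
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = float(burst)  # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def try_create_account(client_ip: str) -> bool:
    # Allow roughly one signup per minute per client, with a burst of three,
    # making mass registration of synthetic accounts far harder.
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=1 / 60, burst=3))
    return bucket.allow()
```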

Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.

What comes next for AI agents as digital reality becomes their operating ground? 

Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.

What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.
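
As a deliberately simplified picture of that machine-to-machine coordination, the sketch below has one agent offer a task and another decide whether to accept it based on a structured message. The message schema and decision rule are invented for illustration, not drawn from any real agent platform.

```python
# Toy machine-to-machine negotiation; the message schema and decision
# rule are invented for illustration.
import json

def requester_offer() -> str:
    return json.dumps({"type": "task_offer", "task": "summarise_report", "budget": 5})

def worker_decide(message: str) -> str:
    offer = json.loads(message)
    # Accept only offers the agent judges worthwhile under its own policy.
    decision = "accept" if offer["budget"] >= 3 else "reject"
    return json.dumps({"type": "task_response", "task": offer["task"], "decision": decision})

print(worker_decide(requester_offer()))
# -> {"type": "task_response", "task": "summarise_report", "decision": "accept"}
```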

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!