The European marathon towards digital sovereignty

Derived from the Latin word ‘superanus’, via the French ‘souveraineté’, sovereignty can be understood as ‘the ultimate overseer, or authority, in the decision-making process of the state and in the maintenance of order’ (Britannica). Digital sovereignty, and specifically European digital sovereignty, refers to ‘Europe’s ability to act independently in the digital world’.

In 2020, the European Parliament had already identified the consequences of reliance on non-EU technologies: from the economic and social influence of non-EU technology companies, which can undermine user control over personal data, to the slow growth of EU technology companies and limits on the enforcement of European law.

Today, these concerns persist: from Romanian election interference on TikTok’s platform and Microsoft’s interference with the ICC, to the Dutch government’s authentication platform being acquired by a US firm and American and Chinese LLMs booming while European LLMs lag behind. The EU is at a crossroads between international reliance and homegrown adoption.

The issue of EU digital sovereignty has gained momentum in the context of recent and significant shifts in US foreign policy towards its allies. In this environment, the pursuit of EU digital sovereignty appears a justified and proportionate response, where it might previously have been perceived as unnecessarily confrontational.

In light of this, this analysis discusses the rationale behind EU digital sovereignty (dependency, innovation and effective compliance), recent European-centric technological and platform shifts, the steps the EU is taking to become digitally sovereign and, finally, examples of European alternatives.

Rationale behind the move

The reasons for digital sovereignty can be summed up in three main areas: (i) less dependency on non-EU tech, (ii) leading and innovating technological solutions, and (iii) ensuring better enforcement of, and adherence to, data protection laws and fundamental rights.

(i) Less dependency: Global geopolitical tensions between the US, China and Russia push Europe towards developing its own digital capabilities and securing its supply chains. Insecure supply chains leave Europe vulnerable, for example to failing energy grids.

More recently, US giant Microsoft threatened the international legal order by revoking the software access of US-sanctioned International Criminal Court (ICC) Chief Prosecutor Karim Khan, preventing him from carrying out his duties at the ICC. In light of such scenarios, Europeans are turning to European-based solutions to reduce upstream dependencies.

(ii) Leaders & innovators: A common argument is that Americans innovate, the Chinese copy, and the Europeans regulate. If the EU aims to be a digital geopolitical player, it must position itself to be a regulator which promotes innovation. It can achieve this by upskilling its workforce of non-digital trades into digital ones to transform its workforce, have more EU digital infrastructure (data centres, cloud storage and management software), further increase innovation spending and create laws that truly allow for the uptake of EU technological development instead of relying on alternative, cheaper non-EU options.

(iii) Effective compliance: Given that fines are more difficult to enforce against non-EU companies than against EU companies (e.g. Clearview AI), EU-based technology organisations would allow corrective measures, warnings and fines to be enforced more effectively, enabling greater adherence to the EU’s digital agenda and respect for fundamental rights.

Can the EU achieve Digital Sovereignty?

The main speed bumps on the road to EU digital sovereignty are: i) a lack of digital infrastructure (cloud storage and data centres), ii) dependency on (critical) raw materials, and iii) the need for legislative initiatives to facilitate the path towards digital sovereignty (innovation procurement and a fragmented compliance regime).

i) Lack of digital infrastructure: In order for the EU to become digitally sovereign, it must have its own sovereign digital infrastructure.

In practice, the EU relies heavily on American data centre providers (e.g. Equinix, Microsoft Azure, Amazon Web Services) hosted in the EU. In this case, even though the data is European and hosted in the EU, the company hosting it is non-European. This poses reliance and legislative challenges, such as ensuring adequate technical and organisational measures to protect personal data when it is in transit to the US. Under the EU-US Data Privacy Framework (DPF), there is currently a legal basis for transferring EU personal data to the US.

However, if the DPF were to be struck down (perhaps due to the US CLOUD Act), as its predecessors were in Schrems I and Schrems II and as a potential Schrems III might do, there would no longer be a legal basis for transferring EU personal data to a US data centre provider.

Previously, the EU’s 2022 Directive on critical entities resilience required EU countries to identify critical entities and ensure they take the technical, security and organisational measures needed to ensure their resilience. Part of this Directive covers digital infrastructure, including providers of cloud computing services and data centre providers. Building on this, the EU has recently developed guidelines for member states to identify critical entities. However, these guidelines do not prescribe how resilience is to be achieved, leaving that responsibility with member states.

Currently, the EU is revising legislation to strengthen its control over critical digital infrastructure. Reports indicate that revisions of existing legislation (the Chips Act and the Quantum Act), as well as new legislation (the Digital Networks Act and the Cloud and AI Development Act), are underway.

ii) Raw material dependency: The EU cannot be digitally sovereign until it reduces its dependency on other countries’ raw materials, which it needs to build the hardware required for technological sovereignty. In 2025, the EU’s goal was to create a new roadmap towards critical raw material (CRM) sovereignty, so that it can rely on its own energy sources and build its own infrastructure.

Thus, the RESourceEU Action Plan was born in December 2025. The plan contains six pillars: securing supply through knowledge; accelerating and promoting projects; using the circular economy and fostering innovation (recycling products which contain CRMs); increasing European demand for European projects (stockpiling CRMs); protecting the single market; and partnering with third countries for long-lasting diversification. Practically speaking, part of the plan is to match European and/or global raw material supply with European demand for European projects.

iii) Legislative initiatives to facilitate the path towards digital sovereignty:

Tackling difficult innovation procurement: the argument is to facilitate the uptake of innovation procurement across the EU. In 2026, the EU is set to reform its public procurement framework for innovation. The Innovation Procurement Update (IPU) team, with representatives from over 33 countries (predominantly law firms, Bird & Bird being the most represented), recommends that innovation procurement reach 20% of all public procurement.

Another recommendation would help more costly innovative solutions win procurement projects that in the past went to cheaper bids. In practice, the lowest-priced bid that meets the remaining procurement conditions wins; de-prioritising price in favour of non-price criteria would enable companies offering more costly innovative solutions to win public procurement bids.

Alleviating compliance challenges: lowering other compliance burdens whilst maintaining the digital acquis. EU.inc, recently announced by Commission President Ursula von der Leyen at the World Economic Forum, would help cross-border business operations scale up by alleviating company, corporate, insolvency, labour and taxation law compliance burdens. By harmonising these into a single framework, businesses could more easily grow and deploy cross-border solutions that would otherwise face hurdles.

Power through data: another legislative measure that would help facilitate the path towards EU digital sovereignty is unlocking the potential of European data. Researching innovative solutions requires data, whether personal or non-personal. The EU’s GDPR regulates personal data and is currently undergoing amendments. If the proposed changes to the GDPR are approved, notably to the definition of personal data, data that used to be considered personal (and thus required GDPR compliance) could be deemed non-personal and used more freely for research purposes. The Data Act regulates the reuse and re-sharing of non-personal data, aiming to simplify and bolster its fair reuse. Overall, both personal and non-personal data can provide important insights for research into European innovative sovereign solutions.

European alternatives

European companies have already built a network of European platforms, services and apps with European values at heart:

| Category | Currently used | EU alternative | Comments |
|---|---|---|---|
| Social media | TikTok, X, Instagram | Monnet (Luxembourg), ‘W’ (Sweden) | Monnet is a social media app that prioritises connections and non-addictive scrolling. The recently announced ‘W’ replaces ‘X’ and is gaining major traction with a non-advertising model at its heart. |
| Email | Microsoft’s Outlook and Google’s Gmail | Tuta (mail/calendar, Germany), Proton (Switzerland), Mailbox.org (Germany), Mailfence (Belgium) | These replace email and calendar apps with a privacy-focused business model. |
| Search engine | Google Search and DuckDuckGo | Qwant (France) and Ecosia (Germany) | Qwant has focused on privacy since its launch in 2013. Ecosia runs an eco-friendly business model that helps plant trees when users search. |
| Video conferencing | Microsoft Teams and Slack | Visio (France), Wire (Switzerland), Mattermost (US, but self-hosted), Stackfield (Germany), Nextcloud Talk (Germany) and Threema (Switzerland) | These alternatives are end-to-end encrypted. Visio is used by the French government. |
| Writing tools | Microsoft’s Word & Excel, Google Sheets and Notion | LibreOffice (Germany), OnlyOffice (Latvia), Collabora (UK), Nextcloud Office (Germany) and CryptPad (France) | LibreOffice is compatible with, and provides a free alternative to, Microsoft’s office suite. |
| Cloud storage & file sharing | OneDrive, SharePoint and Google Drive | Pydio Cells (France), Tresorit (Switzerland), pCloud (Switzerland), Nextcloud (Germany) | Most of these options provide cloud storage, and Nextcloud is a recurring alternative across categories. |
| Finance | Visa and Mastercard | Wero (EU) | Not only will it provide an EU-wide digital wallet option, but it will also replace existing national options, allowing for fast adoption. |
| LLM | OpenAI, Gemini and DeepSeek’s LLMs | Mistral AI (France) and DeepL (Germany) | DeepL is already widely used, and Mistral is more transparent, with a partially open-source model that developers can easily reuse. |
| Hardware | – | Semiconductors: ASML (Netherlands); data centres: GAIA-X (Belgium) | ASML is a chip powerhouse for the EU, and GAIA-X sets an example for EU-based data centres with its open-source federated data infrastructure. |

A dedicated website called ‘European Alternatives’ provides exactly what it says: a list of European alternatives covering over 50 categories and more than 100 services.

Conclusion

In recent years, the Union’s policy goals have shifted towards overt digital sovereignty solutions through diversification of materials and increased innovation spending, combined with a restructuring of the legislative framework to create the necessary path towards European digital infrastructure.

Whilst this analysis does not cover every speed bump on, or avenue towards, the road to EU digital sovereignty, it sheds light on the EU’s most recent major policy developments. Key questions remain regarding data reuse, its impact on the fundamental right to data protection and whether this reshaping of the framework will yield the intended results.

How, then, will the EU tread as it becomes a more coherent, sovereign geopolitical player?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media ban for children gains momentum in Germany

Germany’s coalition government is weighing new restrictions on children’s access to social media as both governing parties draft proposals to tighten online safeguards. The debate comes amid broader economic pressures, with industry reporting significant job losses last year.

The conservative bloc and the centre-left Social Democrats are examining measures that could curb or block social media access for minors. Proposals under discussion include age-based restrictions and stronger platform accountability.

The Social Democrats in Germany have proposed banning access for children under 14 and introducing dedicated youth versions of platforms for users aged 14 to 16. Supporters argue that clearer age thresholds could reduce exposure to harmful content and addictive design features.

The discussions align with a growing European trend toward stricter digital child protection rules. Several governments are exploring tougher age verification and content moderation standards, reflecting mounting concerns over online safety and mental health.

The policy debate unfolded as German industry reported cutting 124,100 jobs in 2025 amid ongoing economic headwinds. Lawmakers face the dual challenge of safeguarding younger users while navigating wider structural pressures affecting Europe’s largest economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Government ramps up online safety for children in the UK

The UK government has announced new measures to protect children online, giving parents clearer guidance and support. PM Keir Starmer said no platform will get a free pass, with illegal AI chatbot content targeted immediately.

New powers, to be introduced through upcoming legislation, will allow swift action following a consultation on children’s digital well-being.

Proposed measures include enforcing social media age limits, restricting harmful features like infinite scrolling, and strengthening safeguards against sharing non-consensual intimate images.

Ministers are already consulting parents, children, and civil society groups. The Department for Science, Innovation and Technology launched ‘You Won’t Know until You Ask’ to advise on safety settings, talking to children, and handling harmful content.

Charities such as NSPCC and the Molly Rose Foundation welcomed the announcement, emphasising swift action on age limits, addictive design, and AI content regulation. Children’s feedback will help shape the new rules, aiming to make the UK a global leader in online safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI features disabled on MEP tablets amid European Parliament security concerns

The European Parliament has disabled AI features on the tablets it provides to lawmakers, citing cybersecurity and data protection concerns. Built-in AI tools like writing and virtual assistants have been disabled, while third-party apps remain mostly unaffected.

The decision follows an assessment highlighting that some AI features send data to cloud services rather than processing it locally.

Lawmakers have been advised to take similar precautions on their personal devices. Guidance includes reviewing AI settings, disabling unnecessary features, and limiting app permissions to reduce exposure of work emails and documents.

Officials stressed that these measures are intended to prevent sensitive data from being inadvertently shared with service providers.

The move comes amid broader European scrutiny of reliance on overseas digital platforms, particularly US-based services. Concerns over data sovereignty and laws like the US Cloud Act have amplified fears that personal and sensitive information could be accessed by foreign authorities.

AI tools, which require extensive access to user data, have become a key focus in ongoing debates over digital security in the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines progress in responsible AI development

Google published its latest Responsible AI Progress Report, showing how its AI Principles guide research, product development, and business decisions. Rising model capabilities and adoption have moved the focus from experimentation to real-world industry integration.

Governance and risk management form a central theme of the report, with Google describing a multilayered oversight structure spanning the entire AI lifecycle.

Advanced testing methods, including automated adversarial evaluations and expert review, are used to identify and mitigate potential harms as systems become more personalised and multimodal.

Broader access and societal impact remain key priorities. AI tools are increasingly used in science, healthcare, and environmental forecasting, highlighting their growing role in tackling global challenges.

Collaboration with governments, academia, and civil society is presented as essential for maintaining trust and setting industry standards. Sharing research and tools continues to support responsible AI innovation and broaden its benefits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Shein faces formal proceedings under EU Digital Services Act

The European Commission has opened formal proceedings against Shein under the Digital Services Act over addictive design and illegal product risks. The move follows preliminary reviews of company reports and responses to information requests. Officials said the decision does not prejudge the outcome.

Investigators will review safeguards to prevent illegal products being sold in the European Union, including items that could amount to child sexual abuse material, such as child-like sex dolls. Authorities will also assess how the platform detects and removes unlawful goods offered by third-party sellers.

The Commission will examine risks linked to platform design, including engagement-based rewards that may encourage excessive use. Officials will assess whether adequate measures are in place to limit potential harm to users’ well-being and ensure effective consumer protection online.

Transparency obligations under the DSA are another focal point. Platforms must clearly disclose the main parameters of their recommender systems and provide at least one easily accessible option that is not based on profiling. The Commission will assess whether Shein meets these requirements.

Coimisiún na Meán, the Digital Services Coordinator of Ireland, will assist the investigation as Ireland is Shein’s EU base. The Commission may seek more information or adopt interim measures if needed. Proceedings run alongside consumer protection action and product safety enforcement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EDPS urges stronger safeguards in EU temporary chat-scanning rules

Concerns over privacy safeguards have resurfaced as the European Data Protection Supervisor urges legislators to limit indiscriminate chat-scanning in the upcoming extension of temporary EU rules.

The supervisor warns that the current framework risks enabling broad surveillance instead of focusing on targeted action against criminal content.

The EU institutions are considering a short-term renewal of the interim regime governing the detection of online material linked to child protection.

Privacy officials argue that such measures need clearer boundaries and stronger oversight to ensure that automated scanning tools do not intrude on the communications of ordinary users.

The EDPS is also pressing lawmakers to introduce explicit safeguards before any renewal is approved. These include tighter definitions of scanning methods, independent verification, and mechanisms that prevent the processing of unrelated personal data.

According to the supervisor, temporary legislation must not create long-term precedents that weaken confidentiality across messaging services.

The debate comes as the EU continues discussions on a wider regulatory package covering child-protection technologies, encryption and platform responsibilities.

Privacy authorities maintain that targeted tools can be more practical than blanket scanning, which they consider a disproportionate response.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China boosts AI leadership with major model launches ahead of Lunar New Year

Leading Chinese AI developers have unveiled a series of advanced models ahead of the Lunar New Year, strengthening the country’s position in the global AI sector.

Major firms such as Alibaba, ByteDance, and Zhipu AI introduced new systems designed to support more sophisticated agents, faster workflows and broader multimedia understanding.

Industry observers also expect an imminent release from DeepSeek, whose previous model disrupted global markets last year.

Alibaba’s Qwen 3.5 model provides improved multilingual support across text, images and video while enabling rapid AI agent deployment instead of slower generation pipelines.

ByteDance followed up with updates to its Doubao chatbot and the second version of its image-to-video tool, SeeDance, which has drawn copyright concerns from the Motion Picture Association due to the ease with which users can recreate protected material.

Zhipu AI expanded the landscape further with GLM-5, an open-source model built for long-context reasoning, coding tasks, and multi-step planning. The company highlighted the model’s reliance on Huawei hardware as part of China’s efforts to strengthen domestic semiconductor resilience.

Meanwhile, excitement continues to build for DeepSeek’s fourth-generation system, expected to follow the widespread adoption and market turbulence associated with its V3 model.

Authorities across parts of Europe have restricted the use of DeepSeek models in public institutions because of data security and cybersecurity concerns.

Even so, the rapid pace of development in China suggests intensifying competition in the design of agent-focused systems capable of managing complex digital tasks without constant human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta explores AI system for digital afterlife

Meta has been granted a patent describing an AI system that could simulate a person’s social media activity, even after their death. The patent, originally filed in 2023 and approved in late December, outlines how AI could replicate a user’s online presence by drawing on their past posts, messages and interactions.

According to the filing, a large language model could analyse a person’s digital history, including comments, chats, voice messages and reactions, to generate new content that mirrors their tone and behaviour. The system could respond to other users, publish updates and continue conversations in a way that resembles the original account holder.

The patent suggests the technology could be used when someone is temporarily absent from a platform, but it also explicitly addresses the possibility of continuing activity after a user’s death. It notes that such a scenario would carry more permanent implications, as the person would not be able to return and reclaim control of the account.

More advanced versions of the concept could potentially simulate voice or even video interactions, effectively creating a digital persona capable of engaging with others in real time. The idea aligns with previous comments by Meta CEO Mark Zuckerberg, who has said AI could one day help people interact with digital representations of loved ones, provided consent mechanisms are in place.

Meta has stressed that the patent does not signal an imminent product launch, describing it as a protective filing for a concept that may never be developed. Still, similar services offered by startups have already sparked ethical debate, raising questions about digital identity, consent and the emotional impact of recreating the online presence of someone who has died.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cheating allegation sparks discrimination lawsuit

A University of Michigan student has filed a federal lawsuit accusing the university of disability discrimination after professors allegedly claimed she used AI to write her essays. The student, identified in court documents as ‘Jane Doe,’ denies using AI and argues that symptoms linked to her medical conditions were wrongly interpreted as signs of cheating.

According to the complaint, Doe has obsessive-compulsive disorder and generalised anxiety disorder. Her lawyers argue that traits associated with those conditions, including a formal tone, structured writing, and consistent style, were cited by instructors as evidence that her work was AI-generated. They say she provided proof and medical documentation supporting her case but was still subjected to disciplinary action and prevented from graduating.

The lawsuit alleges that the university failed to provide appropriate disability-related accommodations during the academic integrity process. It also claims that the same professor who raised the concerns remained responsible for grading and overseeing remedial work, despite what the complaint describes as subjective judgments and questionable AI-detection methods.

The case highlights broader tensions on campuses as educators grapple with the rapid rise of generative AI tools. Professors across the United States report growing difficulty distinguishing between student work and machine-generated text, while students have increasingly challenged accusations they say rely on unreliable detection software.

Similar legal disputes have emerged elsewhere, with students and families filing lawsuits after being accused of submitting AI-written assignments. Research has suggested that some AI-detection systems can produce inaccurate results, raising concerns about fairness and due process in academic settings.

The University of Michigan has been asked to comment on the lawsuit, which is likely to intensify debate over how institutions balance academic integrity, disability rights, and the limits of emerging AI detection technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!