The European Commission has proposed changes to the GDPR and the EU AI Act as part of its Digital Omnibus Package, seeking to clarify how personal data may be processed for AI development and operation across the EU.
A new provision would recognise AI development and operation as a potential legitimate interest under the GDPR, subject to necessity and a balancing test. Controllers in the EU would still need to demonstrate safeguards, including data minimisation, transparency and an unconditional right to object.
The package also introduces a proposed legal ground for processing sensitive data in AI systems where its removal is not feasible without disproportionate effort. The Commission claims that strict conditions would apply, requiring technical protections and documentation throughout the lifecycle of AI models in the EU.
Further amendments would permit biometric data processing for identity verification under defined conditions and expand the rules allowing sensitive data to be used for bias detection beyond high-risk AI systems.
Overall, the proposals aim to provide greater legal certainty without overturning existing data protection principles. The EU lawmakers and supervisory authorities continue to debate the proposals before any final adoption.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anthropic and the Government of Rwanda have signed a three-year Memorandum of Understanding to expand AI deployment across health, education and public sector services in Rwanda. The agreement marks Anthropic’s first multi-sector government partnership in Africa.
In Rwanda’s health system, Anthropic will support national priorities, including efforts to eliminate cervical cancer and reduce malaria and maternal mortality. Rwanda’s Ministry of Health will work with Anthropic to integrate AI tools aligned with national objectives.
Public sector developer teams in Rwanda will gain access to Claude and Claude Code, alongside training, API credits and technical support. The partnership also formalises an education programme launched in 2025 that provided 2,000 Claude Pro licences to educators in Rwanda.
Officials in Rwanda have said the collaboration focuses on capacity development, responsible deployment and local autonomy. Anthropic stated that investment in skills and infrastructure in Rwanda aims to enable safe and independent use of AI by teachers, health workers and public servants.
Human-curated knowledge remains central in the AI era, according to Wikipedia co-founder Jimmy Wales. Speaking at the AI Impact Summit 2026, he stressed that editorial judgement, reliable sourcing and community debate are essential to maintaining trust. AI tools may assist contributors, but oversight and accountability must remain human-led.
Wikipedia has become part of the digital infrastructure underpinning AI systems. Large language models are extensively trained on its openly licensed content, increasing the platform’s responsibility to safeguard accuracy. Wales emphasised that while AI is now embedded in global information systems, it still depends on human-verified knowledge foundations.
Concerns about reliability and misinformation featured prominently in the discussion. AI systems can fabricate convincing but inaccurate details, highlighting the continued importance of journalism and source verification. Wikipedia’s model, requiring citations and scrutinising source credibility, positions it as a safeguard against rapidly generated false content.
The conversation also addressed bias and language diversity. AI models trained predominantly on English-language data risk marginalising other linguistic communities. Wikipedia’s co-founder pointed to the importance of multilingual knowledge ecosystems and inclusive data practices to ensure global representation in both AI development and online information governance.
Derived from the Latin word ‘superanus’, through the French word ‘souveraineté’, sovereignty can be understood as: ‘the ultimate overseer, or authority, in the decision-making process of the state and in the maintenance of order’ – Britannica. Digital sovereignty, specifically European digital sovereignty, refers to ‘Europe’s ability to act independently in the digital world’.
In 2020, the European Parliament had already identified the consequences of reliance on non-EU technologies: the economic and social influence of non-EU technology companies, which can undermine users’ control over their personal data; the slow growth of EU technology companies; and limits on the enforcement of European laws.
Today, these concerns persist: Romanian election interference on TikTok, Microsoft’s interference with the ICC, the Dutch government’s authentication platform being acquired by a US firm, and booming American and Chinese LLMs outpacing their European counterparts. The EU stands at a crossroads between international reliance and homegrown adoption.
The issue of EU digital sovereignty has gained momentum amid recent and significant shifts in US foreign policy toward its allies. In this environment, the pursuit of EU digital sovereignty appears a justified and proportionate response, where it might previously have been perceived as unnecessarily confrontational.
In light of this, the analysis below discusses the rationale behind EU digital sovereignty (dependency, innovation and effective compliance), recent European-centric technological and platform shifts, the steps the EU is taking to become digitally sovereign and, finally, examples of European alternatives.
Rationale behind the move
The reasons for digital sovereignty can be summed up in three main areas: (i) less dependency on non-EU tech, (ii) leading and innovating technological solutions, and (iii) ensuring better enforcement of, and adherence to, data protection laws and fundamental rights.
(i) Less dependency: Global geopolitical tensions between the US, China and Russia push Europe towards developing its own digital capabilities and securing its supply chains. Insecure supply chains leave Europe vulnerable, for instance to failing energy grids.
More recently, US giant Microsoft shook the international legal order by revoking the software access of US-sanctioned International Criminal Court Chief Prosecutor Karim Khan, preventing him from carrying out his duties at the ICC. In light of such scenarios, Europeans are turning to European-based solutions to reduce upstream dependencies.
(ii) Leaders & innovators: A common quip holds that Americans innovate, the Chinese copy, and Europeans regulate. If the EU aims to be a digital geopolitical player, it must position itself as a regulator that promotes innovation. It can achieve this by upskilling workers from non-digital trades into digital ones, building more EU digital infrastructure (data centres, cloud storage and management software), further increasing innovation spending, and creating laws that genuinely enable the uptake of EU technological development instead of reliance on cheaper non-EU options.
(iii) Effective compliance: Fines are more difficult to enforce against non-EU companies than EU ones (e.g. Clearview AI), so EU-based technology organisations would allow corrective measures, warnings and fines to be enforced more effectively. This would enable greater adherence to the EU’s digital agenda and respect for fundamental rights.
Can the EU achieve Digital Sovereignty?
The main speed bumps on the road to EU digital sovereignty are: i) a lack of digital infrastructure (cloud storage and data centres), ii) dependency on (critical) raw materials, and iii) the need for legislative initiatives to facilitate the path towards digital sovereignty (innovation procurement and a fragmented compliance regime).
i) Lack of digital infrastructure: For the EU to become digitally sovereign, it must have its own sovereign digital infrastructure.
In practice, the EU relies heavily on American data centre providers (e.g. Equinix, Microsoft Azure, Amazon Web Services) hosting within the EU. Even though the data is European and hosted in the EU, the company hosting it is non-European. This poses reliance and legislative challenges, such as ensuring adequate technical and organisational measures to protect personal data in transit to the US. Under the EU-US Data Privacy Framework (DPF), there is currently a legal basis for transferring EU personal data to the US.
However, if the DPF were struck down (perhaps over the US CLOUD Act) in a potential ‘Schrems III’, as its predecessors Safe Harbour and Privacy Shield were in Schrems I and Schrems II, there would no longer be a legal basis for transferring EU personal data to a US data centre provider.
The EU’s 2022 Directive on the resilience of critical entities already allows EU countries to identify critical infrastructure and ensure that it takes the technical, security and organisational measures needed to be resilient. The Directive covers digital infrastructure, including providers of cloud computing services and data centres. Building on it, the EU has recently developed guidelines for member states to identify critical entities. However, these guidelines do not spell out how to achieve resilience, leaving that responsibility with member states.
ii) Raw material dependency: The EU cannot be digitally sovereign until it reduces its dependency on other countries’ raw materials for the hardware needed to be technologically sovereign. In 2025, the EU set out to create a new roadmap towards critical raw material (CRM) sovereignty, relying on its own energy sources and building its own infrastructure.
Thus, the RESourceEU Action Plan was born in December 2025. The plan rests on six pillars: securing supply through knowledge; accelerating and promoting projects; using the circular economy and fostering innovation (recycling products that contain CRMs); increasing European demand for European projects (stockpiling CRMs); protecting the single market; and partnering with third countries for long-lasting diversification. Practically speaking, part of the plan is to match European and global raw material supply with European demand for European projects.
iii) Legislative initiatives to facilitate the path towards digital sovereignty:
Tackling difficult innovation procurement: the aim is to facilitate the uptake of innovation procurement across the EU. In 2026, the EU is set to reform its public procurement framework for innovation. The Innovation Procurement Update (IPU) team, with representatives from over 33 countries (predominantly law firms, Bird & Bird being the most represented), recommends that innovation procurement reach 20% of all public procurement.
Another recommendation would help more costly innovative solutions win procurement projects that in the past went to cheaper bids. In practice, the lowest-priced bid that meets the remaining procurement conditions wins; de-prioritising price relative to other criteria would enable companies with more costly innovative solutions to win public procurement bids.
Alleviating compliance challenges: lowering other compliance burdens whilst maintaining the digital acquis. Recently announced at the World Economic Forum by Commission President Ursula von der Leyen, EU.inc would help businesses scale up cross-border operations by alleviating compliance burdens in company, corporate, insolvency, labour and taxation law. By harmonising these into a single framework, businesses can more easily grow and deploy cross-border solutions that would otherwise face hurdles.
Power through data: another legislative measure to facilitate the path towards EU digital sovereignty is unlocking the potential of European data. Research into innovative solutions requires data, whether personal or non-personal. The EU’s GDPR regulates personal data and is currently undergoing amendment. If the proposed changes are approved, notably a narrowing of the definition of personal data, data that used to be considered personal (and thus required GDPR compliance) could be deemed non-personal and used more freely for research purposes. The Data Act regulates the reuse and re-sharing of non-personal data, aiming to simplify and bolster its fair reuse. Overall, both personal and non-personal data can give research the insight needed to develop European innovative sovereign solutions.
European alternatives
European companies have already built a network of European platforms, services and apps with European values at heart:
| Category | Currently used | EU alternative | Comments |
| --- | --- | --- | --- |
| Social media | TikTok, X, Instagram | Monnet (Luxembourg), ‘W’ (Sweden) | Monnet is a social media app that prioritises connections and non-addictive scrolling. The recently announced ‘W’ positions itself as a replacement for ‘X’ and is gaining major traction with a non-advertising model at its heart. |
| Email | Microsoft’s Outlook and Google’s Gmail | Tuta (Germany, mail/calendar), Proton (Switzerland), Mailbox.org (Germany), Mailfence (Belgium) | Replace email and calendar apps with privacy-focused business models. |
| Search engine | Google Search and DuckDuckGo | Qwant (France) and Ecosia (Germany) | Qwant has focused on privacy since its launch in 2013. Ecosia has an eco-friendly business model that helps plant trees when users search. |
| Video conferencing | Microsoft Teams and Slack | Visio (France), Wire (Switzerland), Mattermost (US but self-hosted), Stackfield (Germany), Nextcloud Talk (Germany) and Threema (Switzerland) | These alternatives are end-to-end encrypted. Visio is used by the French government. |
| Writing tools | Microsoft’s Word & Excel, Google Sheets, Notion | Nextcloud (Germany) | Most of these options provide cloud storage, and Nextcloud is a recurring alternative across categories. |
| Finance | Visa and Mastercard | Wero (EU) | Not only will it provide an EU-wide digital wallet option, but it will replace existing national options, providing for fast adoption. |
| LLM | OpenAI’s ChatGPT, Google’s Gemini, DeepSeek | Mistral AI (France) and DeepL (Germany) | DeepL is already widely used, and Mistral is more transparent with its partially open-source models and ease of reuse for developers. |
| Hardware | — | Semiconductors: ASML (Netherlands); data centres: GAIA-X (Belgium) | ASML is a chip powerhouse for the EU, and GAIA-X sets an example for EU-based data centres with its open-source federated data infrastructure. |
A dedicated website called ‘European Alternatives’ provides exactly what its name suggests: a list of European alternatives spanning over 50 categories and more than 100 services.
Conclusion
In recent years, the Union’s policy goals have shifted towards overt digital sovereignty solutions through diversification of materials and increased innovation spending, combined with a restructuring of the legislative framework to create the necessary path towards European digital infrastructure.
Whilst this analysis covers neither every speed bump nor every avenue on the road to EU digital sovereignty, it sheds light on the EU’s most recent major policy developments. Key questions remain regarding data reuse, its impact on the fundamental right to data protection, and whether this reshaping of the framework will yield the intended results.
So, how will the EU tread as it becomes a more coherent, sovereign geopolitical player?
Germany’s coalition government is weighing new restrictions on children’s access to social media as both governing parties draft proposals to tighten online safeguards. The debate comes amid broader economic pressures, with industry reporting significant job losses last year.
The conservative bloc and the centre-left Social Democrats are examining measures that could curb or block social media access for minors. Proposals under discussion include age-based restrictions and stronger platform accountability.
The Social Democrats in Germany have proposed banning access for children under 14 and introducing dedicated youth versions of platforms for users aged 14 to 16. Supporters argue that clearer age thresholds could reduce exposure to harmful content and addictive design features.
The discussions align with a growing European trend toward stricter digital child protection rules. Several governments are exploring tougher age verification and content moderation standards, reflecting mounting concerns over online safety and mental health.
The policy debate unfolded as German industry reported cutting 124,100 jobs in 2025 amid ongoing economic headwinds. Lawmakers face the dual challenge of safeguarding younger users while navigating wider structural pressures affecting Europe’s largest economy.
The UK government has announced new measures to protect children online, giving parents clearer guidance and support. PM Keir Starmer said no platform will get a free pass, with illegal AI chatbot content targeted immediately.
New powers, to be introduced through upcoming legislation, will allow swift action following a consultation on children’s digital well-being.
Proposed measures include enforcing social media age limits, restricting harmful features like infinite scrolling, and strengthening safeguards against sharing non-consensual intimate images.
Ministers are already consulting parents, children, and civil society groups. The Department for Science, Innovation and Technology launched ‘You Won’t Know until You Ask’ to advise on safety settings, talking to children, and handling harmful content.
Charities such as NSPCC and the Molly Rose Foundation welcomed the announcement, emphasising swift action on age limits, addictive design, and AI content regulation. Children’s feedback will help shape the new rules, aiming to make the UK a global leader in online safety.
The European Parliament has disabled AI features on the tablets it provides to lawmakers, citing cybersecurity and data protection concerns. Built-in tools such as writing assistants and virtual assistants have been switched off, while third-party apps remain mostly unaffected.
The decision follows an assessment highlighting that some AI features send data to cloud services rather than processing it locally.
Lawmakers have been advised to take similar precautions on their personal devices. Guidance includes reviewing AI settings, disabling unnecessary features, and limiting app permissions to reduce exposure of work emails and documents.
Officials stressed that these measures are intended to prevent sensitive data from being inadvertently shared with service providers.
The move comes amid broader European scrutiny of reliance on overseas digital platforms, particularly US-based services. Concerns over data sovereignty and laws like the US Cloud Act have amplified fears that personal and sensitive information could be accessed by foreign authorities.
AI tools, which require extensive access to user data, have become a key focus in ongoing debates over digital security in the EU.
Google has published its latest Responsible AI Progress Report, showing how its AI Principles guide research, product development and business decisions. Rising model capabilities and adoption have shifted the focus from experimentation to real-world industry integration.
Governance and risk management form a central theme of the report, with Google describing a multilayered oversight structure spanning the entire AI lifecycle.
Advanced testing methods, including automated adversarial evaluations and expert review, are used to identify and mitigate potential harms as systems become more personalised and multimodal.
Broader access and societal impact remain key priorities. AI tools are increasingly used in science, healthcare, and environmental forecasting, highlighting their growing role in tackling global challenges.
Collaboration with governments, academia, and civil society is presented as essential for maintaining trust and setting industry standards. Sharing research and tools continues to support responsible AI innovation and broaden its benefits.
The European Commission has opened formal proceedings against Shein under the Digital Services Act over addictive design and illegal product risks. The move follows preliminary reviews of company reports and responses to information requests. Officials said the decision does not prejudge the outcome.
Investigators will review safeguards to prevent illegal products being sold in the European Union, including items that could amount to child sexual abuse material, such as child-like sex dolls. Authorities will also assess how the platform detects and removes unlawful goods offered by third-party sellers.
The Commission will examine risks linked to platform design, including engagement-based rewards that may encourage excessive use. Officials will assess whether adequate measures are in place to limit potential harm to users’ well-being and ensure effective consumer protection online.
Transparency obligations under the DSA are another focal point. Platforms must clearly disclose the main parameters of their recommender systems and provide at least one easily accessible option that is not based on profiling. The Commission will assess whether Shein meets these requirements.
Coimisiún na Meán, the Digital Services Coordinator of Ireland, will assist the investigation as Ireland is Shein’s EU base. The Commission may seek more information or adopt interim measures if needed. Proceedings run alongside consumer protection action and product safety enforcement.
Concerns over privacy safeguards have resurfaced as the European Data Protection Supervisor urges legislators to limit indiscriminate chat-scanning in the upcoming extension of temporary EU rules.
The supervisor warns that the current framework risks enabling broad surveillance instead of focusing on targeted action against criminal content.
The EU institutions are considering a short-term renewal of the interim regime governing the detection of online material linked to child protection.
Privacy officials argue that such measures need clearer boundaries and stronger oversight to ensure that automated scanning tools do not intrude on the communications of ordinary users.
The EDPS is also pressing lawmakers to introduce explicit safeguards before any renewal is approved, including tighter definitions of scanning methods, independent verification, and mechanisms that prevent the processing of unrelated personal data.
According to the supervisor, temporary legislation must not create long-term precedents that weaken confidentiality across messaging services.
The debate comes as the EU continues discussions on a wider regulatory package covering child-protection technologies, encryption and platform responsibilities.
Privacy authorities maintain that targeted tools can be more practical than blanket scanning, which they consider a disproportionate response.