
Weekly #247 From bytes to borders: The quest for digital sovereignty


23-30 January 2026


HIGHLIGHT OF THE WEEK

From bytes to borders: The quest for digital sovereignty

Governments have long debated controlling data, infrastructure, and technology within their borders. But there is a renewed sense of urgency, as geopolitical tensions are driving a stronger push to identify dependencies, build domestic capacity, and limit exposure to foreign technologies.

At the European level, France is pushing to make digital sovereignty measurable and actionable. Paris has proposed the creation of an EU Digital Sovereignty Observatory to map member states’ reliance on non-European technologies, from cloud services and AI systems to cybersecurity tools. Paired with a digital resilience index, the initiative aims to give policymakers a clearer picture of strategic dependencies and a stronger basis for coordinated action on procurement, investment, and industrial policy. 

In Burkina Faso, the focus is on reducing reliance on external providers while consolidating national authority over core digital systems. The government has launched a Digital Infrastructure Supervision Centre to centralise oversight of national networks and strengthen cybersecurity monitoring. New mini data centres for public administration are being rolled out to ensure that sensitive state data is stored and managed domestically. 

Sovereignty debates are also translating into decisions to limit, replace, or restructure the use of digital services provided by foreign entities. France has announced plans to phase out US-based collaboration platforms such as Microsoft Teams, Zoom, Google Meet, and Webex from public administration, replacing them with a domestically developed alternative, ‘Visio’. 

The EU has advanced its timeline for the IRIS2 satellite network, according to the EU Commissioner for Defence and Space, Andrius Kubilius. A planned multi-orbit constellation of 290 satellites, IRIS2 aims to begin initial government communication services by 2029, a year earlier than originally planned. The network is designed to provide encrypted communications for citizens, governments and public agencies. It also aims to reduce reliance on external providers, as Europe is ‘quite dependent on American services,’ per Kubilius.

In the USA, the TikTok controversy can also be seen through a sovereignty lens: rather than banning TikTok, authorities have pushed the platform to restructure its operations for the US market. A new entity will manage TikTok’s US operations, with user data and algorithms handled inside the country, and the recommendation algorithm is to be trained only on US user data to meet American regulatory requirements.

In more security-driven contexts, the concept is sharper still. Russia’s Security Council has recently labelled services such as Starlink and Gmail as national security threats, describing them as tools for ‘destructive information and technical influence.’ These assessments are expected to feed into Russia’s information security doctrine, reinforcing the treatment of digital services provided by foreign companies not as neutral infrastructure but as potential vectors of geopolitical risk.


The big picture. The common thread is clear: Digital sovereignty is now a key consideration for governments worldwide. The approaches may differ, but the goal remains largely the same – to ensure that a nation’s digital future is shaped by its own priorities and rules. But true independence is hampered by deeply embedded global supply chains, prohibitive costs of building parallel systems, and the risk of stifling innovation through isolation. While the strategic push for sovereignty is clear, untangling from interdependent tech ecosystems will require years of investment, migration, and adaptation. The current initiatives mark the beginning of a protracted and challenging transition.

IN OTHER NEWS THIS WEEK

This week in AI governance

China. China is planning to launch space-based AI data centres over the next five years. State aerospace contractor CASC has committed to building gigawatt-class orbital computing hubs that integrate cloud, edge and terminal capabilities, enabling in-orbit processing of Earth-generated data. The news comes on the heels of Elon Musk’s announcement at WEF 2026 that SpaceX plans to launch solar-powered AI data centre satellites within the next two to three years.

The UN. The UN has raised the alarm about AI-driven threats to child safety, highlighting how AI systems can accelerate the creation, distribution, and impact of harmful content, including sexual exploitation, abuse, and manipulation of children online. As smart toys, chatbots, and recommendation engines increasingly shape youth digital experiences, the absence of adequate safeguards risks exposing a generation to novel forms of exploitation and harm.  


Child safety online: Bans, trials, and investigations

The momentum behind banning children from social media continues: France’s National Assembly has voted substantially in favour of a bill that would require platforms to block users under 15 and enforce age‑verification measures. The bill now goes to the Senate for approval, with targeted implementation before the next school year.

In India, the state governments of Goa and Andhra Pradesh are exploring similar restrictions, considering proposals to bar social media use for children under 16 amid rising concern about online safety and youth well‑being. Previously, in December, the Madras High Court urged India’s federal government to consider an Australia-style ban.

In a first for social media platforms, a landmark trial in Los Angeles sees Meta (Instagram and Facebook), YouTube (Google/Alphabet), Snapchat, and TikTok accused of intentionally designing their apps to be addictive, with serious consequences for young users’ mental health. By the time the trial began, Snap Inc. and TikTok had already reached confidential settlements, leaving Meta and YouTube as the remaining defendants before a jury.

Separately, in federal court, Meta, Snap, YouTube, and TikTok asked a judge to dismiss school districts’ lawsuits seeking damages for costs tied to student mental health challenges.

In both cases, the companies argue that Section 230 of the US Communications Decency Act shields them from liability, while the plaintiffs counter that their claims focus on allegedly addictive design features rather than user-generated content.

Legal experts and advocates are watching closely, noting that the outcomes could set a precedent for thousands of related lawsuits and ultimately influence corporate design practices.

Roblox is under formal investigation in the Netherlands, where the Autoriteit Consument & Markt (ACM) is assessing whether the platform is taking sufficient measures to protect children and teenagers who use the service. The probe, which could take up to a year, will examine Roblox’s compliance with the European Union’s Digital Services Act (DSA), which obliges online services to implement appropriate and proportionate measures to ensure safety, privacy, and security for underage users.

Regulatory pressure can also bear fruit: Meta, which faced intense scrutiny from regulators and civil society over chatbots that previously permitted provocative or exploitative conversations with minors, is pausing teenagers’ access to its AI characters globally while it redesigns the experience with enhanced safety and parental controls. The company said teens will be blocked from interacting with certain AI personas until a revised platform is ready, guided by principles akin to a PG-13 rating system to limit exposure to inappropriate content.

Bottom line. The pressure on platforms is mounting, and there is no indication that it will let up.


The Grok deepfakes aftershocks

The fallout from Grok’s misuse to produce non-consensual sexualised and deepfake images continues.

The European Commission has opened a formal investigation into X under the bloc’s Digital Services Act (DSA). The probe focuses on whether the company met its legal obligations to mitigate risks from AI-generated sexualised deepfakes and other harmful imagery produced by Grok — especially those that may involve minors or non-consensual content. 

Regulatory authorities in South Korea are examining whether Grok has violated personal data protection and safety standards by enabling the production of explicit deepfakes, and whether the matter falls within their legal remit.

However, Malaysian authorities, who temporarily blocked access to Grok in early January, have restored access after the platform introduced additional safety controls aimed at curbing the generation and editing of problematic content. 

Why does it matter? Grok’s ongoing scrutiny shows that not all regulators are satisfied with the safeguards implemented so far, highlighting that remedies may need to be tailored to different jurisdictions.



LOOKING AHEAD

11th Geneva Engage Awards

Diplo and the Geneva Internet Platform (GIP) are organising the 11th edition of the Geneva Engage Awards, recognising the efforts of International Geneva actors in digital outreach and online engagement. 

This year’s theme, ‘Back to Basics: The Future of Websites in the AI Era,’ highlights how users increasingly rely on AI assistants and AI-generated summaries that may not cite primary or the most relevant sources.

The awards honour organisations across three main categories: international organisations, NGOs, and permanent representations. They assess efforts in social media engagement, web accessibility, and AI leadership, reinforcing Geneva’s role as a trusted source of reliable information as technology changes rapidly.

Tech attaché briefing: The future of the Internet Governance Forum (IGF)

The Geneva Internet Platform (GIP) is organising a briefing for tech attachés, which will look at the role and evolution of the IGF over the past 20 years and discuss ways to implement the requests of the UN General Assembly. The event will begin with a briefing and exchange among diplomats, followed by an open dialogue with the IGF Secretariat. The event is invitation-only.



READING CORNER

As AI content floods the web, how do we know what’s real? Explore the case for a “Human-Certified” label and why authentic human thought is becoming our most valuable digital asset.


Geneva’s AI footprint

Modern AI platforms are trained on vast amounts of online information, including content from websites, blogs, and publications.