Weekly #249 Why cyberspace doesn’t exist


6 – 13 February 2026


HIGHLIGHT OF THE WEEK

Why cyberspace doesn’t exist

Thirty years ago, on 8 February 1996, two developments kicked off a powerful narrative about the internet: that it occupied a realm apart from ordinary law and politics. They were the Declaration of the Independence of Cyberspace and the US Communications Decency Act (CDA).

Declaration of the Independence of Cyberspace. In Davos, John Perry Barlow’s Declaration of the Independence of Cyberspace asserted that the ‘Governments of the Industrial World’ have ‘no sovereignty’ in cyberspace. 

This vision spawned a generation of thought arguing that the internet meant the ‘end of geography.’ Thousands of articles, books, theses, and speeches have argued that we need new governance for the ‘brave new world’ of the digital.

This intellectual and policy house of cards was built on the assumption that there is cyberspace beyond physical space. It was (and is) a wrong assumption. There is no cyberspace. Every email, every post, every AI query is ultimately a physical event: pulses of electrons carrying bits and bytes through cables under the ocean, Wi-Fi, data servers, and internet infrastructure.

The CDA and its Section 230. On the same day as Barlow’s declaration, President Clinton signed into law the US Communications Decency Act (CDA), which had been adopted by the US Congress. Buried within it was Section 230, which granted internet platforms an unprecedented immunity: they could not be treated as publishers or speakers of the content they hosted.

For the first time in history, commercial entities were granted a broad shield from liability for the very business from which they profited. It was a departure from the long tradition of legal liability, for example, of a newspaper for the text it publishes or of broadcasters for their transmissions.

This provision was justified as a way to protect a nascent industry from crippling litigation. At the time, internet companies were small and experimental. The immunity enabled rapid growth and innovation. 

Over time, however, those start-ups became some of the most valuable corporations in history, with global reach and market capitalisations of trillions of dollars. The legal framework, however, largely remained intact, even as internet companies developed sophisticated algorithms that curate, amplify, and monetise user content at scale. This divergence created a central tension in contemporary law and economics: immensely powerful intermediaries operating with limited accountability for systemic effects.


The convergence of the two. The conceptual separation of ‘cyberspace’ made this arrangement easier to defend. If the internet was a new world, exceptional rules seemed justified.

But critics quickly challenged that reasoning. US judge Frank H. Easterbrook argued that we do not need internet law, just as we never needed a ‘law of the horse’ when horses were the dominant mode of transportation. The internet should be regulated by applying existing legal principles. Law regulates relationships among people and institutions, regardless of the technologies they use. The medium may change; the underlying principles endure.

Experience has largely vindicated that view. Digital technologies have not dissolved geography; they have intensified it. States assert jurisdiction over data flows, content moderation, taxation, competition, and security. High-precision geolocation, data localisation requirements, and national regulatory regimes demonstrate that the internet operates squarely within territorial boundaries.

However, the CDA remains in force, and its logic extends into the age of AI. Companies developing large language models and other AI systems often rely on intermediary protections and analogous doctrines to limit liability. As a result, AI tools can be deployed globally with comparatively limited ex ante oversight. Yet their outputs can shape public discourse, influence elections, affect mental health, and generate economic disruption.

The central question is not whether innovation should be constrained, but whether it should be aligned with established principles of responsibility. Technologies do not exist outside society; they are embedded within it. If an entity designs, deploys, and profits from a system, it should bear responsibility for its foreseeable impacts. The age of legal exceptionalism should end. 

IN OTHER NEWS LAST WEEK

This week in AI governance

The UN. The General Assembly approved the creation of a historic global scientific advisory body on AI, the Independent International Scientific Panel on Artificial Intelligence. The first of its kind, the panel’s main task is to ‘issue evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue on AI Governance. The panel will also ‘provide updates on its work up to twice a year to hear views through an interactive dialogue of the plenary of the General Assembly with the Co-Chairs of the Panel’.

AI governance was a key focus at the recent UN Special Dialogue entitled ‘From Principles to Practice: Special Dialogue on Artificial Intelligence and Preventing and Countering Violent Extremism’. Diplomats and experts discussed how AI is reshaping global stability, conflict dynamics and international law. Participants highlighted risks from autonomous systems and misinformation campaigns and stressed the need for multilateral cooperation and shared norms to mitigate emerging threats.

Germany. Germany has unveiled plans for a ‘Sovereign AI Factory’, a government‑backed initiative to develop sovereign AI models and infrastructure tailored to local language, cultural context and industrial needs. The project will support domestic innovation by providing compute resources, datasets and certification frameworks that conform to European safety and privacy standards, with the aim of reducing reliance on non‑EU AI providers. Berlin says the factory will also serve as a collaborative platform for research institutions and industry to co‑design secure, interoperable AI systems for public and private sectors.

Pakistan. Pakistan’s government has pledged major investment in AI by 2030, rolling out a comprehensive national strategy to accelerate digital transformation across the economy. The plan focuses on building AI capacity in key sectors — including agriculture, healthcare and education — through funding for research hubs, public‑private partnerships and targeted upskilling programmes. Officials say the investment is intended to attract foreign direct investment, boost exports and position Pakistan as a regional tech player, while also addressing ethical and governance frameworks to guide responsible AI deployment.

Slovenia. Slovenia has set out an ambitious national AI vision, outlining strategic priorities such as human‑centric AI, robust ethical frameworks, and investment in research and talent. The roadmap emphasises collaboration with European partners and adherence to international standards, positioning Slovenia as a proactive voice in shaping AI governance dialogues at the upcoming summit.

Chile. Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI. The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the USA or Europe. President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development. Latam-GPT is not designed as a conversational tool but rather as a vast dataset that serves as the foundation for future applications. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

India. India has begun enforcing a three-hour removal rule for AI-generated deepfake content, requiring platforms and intermediaries to take down specified material within 180 minutes of notification or face regulatory sanctions. The accelerated timeframe is designed to blunt the rapid spread of deceptive, synthetic media amid heightened concerns about misinformation and social disruption.

Brazil. Brazil’s National Data Protection Agency and National Consumer Rights Bureau have ordered X to stop serving explicit image generation via its Grok AI, citing risks of harmful outputs reaching minors and contravention of local digital safety norms. The directive demands immediate technical measures to block certain prompts and outputs as part of ongoing scrutiny of platform content moderation practices.

Global coalition on child safety. A broad coalition of child rights advocates, digital safety organisations and policymakers has called on governments to ban ‘nudification’ AI tools, urging criminalisation of software that converts clothed images into sexually explicit versions without consent. The group argues that existing content moderation approaches are insufficient to protect minors and stresses that pre-emptive legal prohibitions are needed to prevent widespread exploitation.

The UK. The UK Supreme Court has ruled that AI-assisted inventions can qualify for patents when the human contributor’s inventive role is identifiable and substantial, a decision legal experts say will boost innovation by clarifying intellectual property protections in hybrid human-AI development. The judgment aims to incentivise investment in AI research while maintaining established patentability standards.

South Korea. South Korea has launched a labour‑government body to address the pressures of AI automation on the workforce, creating a cross‑sector council tasked with forecasting trends in job displacement and recommending policy responses. The initiative brings together labour unions, industry leaders and government ministries to coordinate reskilling and upskilling programmes, strengthen social safety nets, and explore income support models for workers affected by automation.


Child safety online: The momentum holds steady

Bans, bans, bans. The ban club just keeps growing, as Portugal’s parliament has approved a law restricting social media access for minors under 16, requiring express and verified parental consent for accessing platforms like Instagram, TikTok, and Facebook. Access will be controlled through the Digital Mobile Key, Portugal’s national digital ID system, ensuring effective age verification and platform compliance. The law strengthens protections amid growing concerns over social media’s impact on young people’s mental health, and detailed implementation and enforcement rules are now set for parliamentary committee review.

Czech Prime Minister Andrej Babiš publicly endorsed a proposal to ban children under 15 from using major social platforms, framing it as a protective measure against damaging effects on mental health and well-being. The government is actively considering legislation this year that could formalise such restrictions.

The EU as a whole is revisiting the idea of an EU-wide social media age restriction. The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday, 10 February. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

The big picture. The membership of the ban club has reached double digits. We’ll continue following the developments.

The addiction trial begins. In the USA, a landmark trial opened in Los Angeles this week against Meta and YouTube, centring on claims that their platforms are deliberately designed to be addictive and have harmed young users’ mental health. 

The plaintiff, Kaley, now 20, alleges that Instagram and YouTube caused her anxiety, body dysmorphia, and suicidal thoughts. Her lawyers likened features like infinite scroll, autoplay, likes, and beauty filters to a ‘digital casino’ for children, citing internal documents showing the platforms targeted young users and even used YouTube as a ‘digital babysitter.’

Meta and YouTube’s defence argued that social media was not responsible for Kaley’s struggles, citing her difficult family background, therapists’ records, and the availability of safety tools. YouTube highlighted that Kaley’s usage has averaged 29 minutes a day since 2020 and compared the platform to other entertainment services, emphasising that she is not addicted. Meta stressed that Instagram offered creative outlets and new tools to manage screen time, and that social media may have provided support during family difficulties.

What’s next? Executives, including Meta CEO Mark Zuckerberg, Instagram CEO Adam Mosseri, and YouTube CEO Neal Mohan, are expected to testify in the coming weeks.

Meanwhile, across the Atlantic, the British government has launched a campaign called ‘You Won’t Know Until You Ask’ to encourage parents to talk with their children about the harmful content they might encounter online. It will include guidance to parents on safety settings, conversation prompts, and age-appropriate advice for tackling misinformation and harmful content.

Zooming out. Government research found that roughly half of parents had never had such conversations. Among those who have, almost half say the conversations are one-offs or rare, underscoring the need to normalise regular conversations about online content.


Russia and the Netherlands make moves for digital sovereignty

In Russia, authorities have intensified efforts to control the country’s digital communication landscape, reflecting a broader push for ‘sovereign’ internet infrastructure. 

The Russian communications regulator Roskomnadzor has tightened restrictions on Telegram, slowing delivery of media and limiting certain features to pressure users toward domestic alternatives. Roskomnadzor stated that Telegram is not taking meaningful measures to combat fraud, is failing to protect users’ personal data, and is violating Russian laws. Telegram’s founder has condemned the measures as authoritarian, warning they may interfere with essential communication services.

This crackdown has escalated with the full blocking of Meta’s WhatsApp, which 100 million Russians use. Authorities justified the ban by pointing to WhatsApp’s refusal to meet Russian legal requirements. Users are being encouraged to adopt government-supported platforms that critics say enable state surveillance, raising concerns about privacy and access to independent communication channels. Meta called the ban harmful to both safety and privacy.

Despite these moves, Russia is pausing aggressive action against Google, citing the country’s dependence on Android devices and warning that a sudden ban could disrupt millions of users. Officials indicated that any transition to domestic alternatives will be gradual, reflecting a cautious approach to reducing reliance on foreign tech.

Meanwhile, in the Netherlands, digital sovereignty has moved to the forefront of parliamentary debate. Lawmakers have renewed calls to shift public and private-sector data away from US-based cloud services, citing risks under US legislation such as the Cloud Act. Concerns have intensified following the proposed acquisition of Solvinity, which hosts parts of the Dutch DigiD digital identity system, by a US firm. MPs emphasised the need for stronger safeguards, the promotion of European or Dutch cloud alternatives, and the updating of procurement rules to protect sensitive data.


EU challenging Meta’s grip on AI access in WhatsApp

The European Commission has formally notified Meta that it has breached EU competition law by blocking third‑party AI assistants from accessing WhatsApp, limiting in‑app AI interactions to Meta’s own Meta AI.

Regulators argue Meta likely holds a dominant position in consumer messaging within the EU and that its restrictions could cause serious and irreparable market harm by foreclosing rivals’ access to WhatsApp’s large user base. 

The Commission is considering interim measures to prevent continued exclusion and protect competitive entry.



LOOKING AHEAD

The UN Institute for Disarmament Research (UNIDIR), in partnership with the Organisation internationale de la Francophonie (OIF), will hold an event to explore the phenomenon of hybrid threats, examining their main types and impacts. The event will take place on Monday, 16 February, in Geneva. Registration for the event is open.

The India AI Impact Summit 2026 will be held on 19–20 February 2026 in New Delhi, India, under the auspices of the Ministry of Electronics and Information Technology (MeitY). The summit brings together stakeholders to explore how AI can be developed and deployed to generate positive societal, economic, and environmental outcomes. Structured around guiding principles of People, Planet, and Progress, the Summit’s programme focuses on thematic areas such as human capital and inclusion, safe and trusted AI, innovation and resilience, democratising AI resources, and AI for economic growth and social good.

The World Intellectual Property Organization (WIPO) will launch the 2026 edition of its World Intellectual Property Report, entitled ‘Technology on the Move’, on 17 February (Tuesday) in Geneva and online. The programme for the launch includes opening remarks by WIPO leadership, a keynote address on the diffusion of generative AI in the global economy, a presentation of the World Intellectual Property Report 2026 by the WIPO Economics and Data Analytics team, and an industry panel discussion exploring perspectives on technology diffusion.



READING CORNER

The AI agent social network Moltbook is fuelling the hype around autonomous ecosystems while raising security and digital reality concerns.

AI and Accessibility

From the RYO bionic hand to AI smart glasses, explore how AI is shifting assistive tech from compensation to empowerment while raising vital governance questions.

Digital Watch newsletter – Issue 106 – Monthly, January 2026

Looking back at December 2025 and January 2026

This month’s newsletter looks back at December 2025 and January 2026 and explores the forces shaping the digital landscape in 2026:

WSIS+20 review: an in-depth look at the outcome document and the high-level review meeting, and the implications for global digital cooperation.

Child safety online: momentum for bans continues, while landmark trials in the USA examine platform addiction and liability.

Digital sovereignty: governments are reassessing their data, infrastructure, and technology policies to limit foreign exposure and strengthen national capabilities.

Grok Shock: X’s AI tool, Grok, comes under regulatory scrutiny following reports of non-consensual sexual content and deepfakes.

Geneva Engage Awards: highlights from the 11th edition, which recognises excellence in digital communication and engagement in International Geneva.

Annual AI and digital predictions: we highlight the 10 trends and events we believe will shape the digital landscape over the coming year.


DIGITAL GOVERNANCE

The USA has withdrawn from a wide range of international organisations, conventions, and treaties it considers contrary to its interests, including dozens of UN bodies and non-UN entities. In the field of technology and digital governance, it has explicitly abandoned two initiatives: the Freedom Online Coalition and the Global Forum on Cyber Expertise. The implications of withdrawal from UNCTAD and the UN Department of Economic and Social Affairs remain unclear, given their links to processes such as WSIS, the follow-up to the 2030 Agenda, the Internet Governance Forum, and broader work on data governance.

TECHNOLOGIES

US President Donald Trump has signed a presidential proclamation imposing a 25% tariff on certain advanced, AI-oriented computer chips, including high-end products such as Nvidia’s H200 and AMD’s MI325X, as part of a national security review. Officials described the measure as a ‘first step’ towards strengthening domestic production and reducing dependence on foreign manufacturers, particularly those in Taiwan, while capturing revenue from imports that do not contribute to US manufacturing capacity. The administration suggested that further measures could follow depending on how negotiations with trading partners and industry evolve.

The USA and Taiwan have announced a landmark trade agreement focused on semiconductors. Under the deal, tariffs on a wide range of Taiwanese exports will be reduced or eliminated, while Taiwanese semiconductor companies, including leading firms such as TSMC, have committed to investing at least 250 billion dollars in chip manufacturing, AI, and energy projects in the USA, supported by an additional 250 billion dollars in government-backed credit.

The long-running legal and political conflict over the Dutch semiconductor manufacturer Nexperia, a Netherlands-based company owned by China’s Wingtech Technology, also continues. The dispute erupted in autumn 2025, when the Dutch authorities briefly took control of Nexperia, citing national security and concerns about possible technology transfers to China. Nexperia’s European management and Wingtech representatives are currently facing off in an Amsterdam court, which must decide whether to open a formal investigation into allegations of mismanagement. The court is expected to issue its decision within four weeks.

Chinese scientists have reportedly built a prototype extreme ultraviolet (EUV) lithography machine, a technology long dominated by ASML. The Dutch company is the sole supplier of EUV systems and a critical link in advanced chipmaking. EUV tools are essential for producing the cutting-edge chips used in AI, high-performance computing, and modern weaponry, as they can etch ultra-fine circuits onto silicon wafers. The prototype is said to already produce EUV light but has not yet yielded working chips. Former ASML engineers are reported to have contributed to the project by reverse-engineering key components.

Canada has launched phase 1 of the Canadian Quantum Champions programme as part of a 334.3 million dollar investment planned in Budget 2025, providing up to 92 million dollars in initial funding, up to 23 million dollars each, to Anyon Systems, Nord Quantique, Photonic, and Xanadu to advance fault-tolerant quantum computers and retain key capabilities in Canada, with progress assessed through a new benchmarking platform led by the National Research Council.

The USA has reportedly suspended implementation of its Tech Prosperity Deal with the UK, a pact concluded during President Trump’s visit to London in September that aimed to deepen cooperation in advanced technologies such as AI and quantum and included planned investment commitments from major US tech companies. According to the Financial Times, the suspension reflects broader US frustration with the UK’s stance on wider trade issues, with Washington seeking UK concessions on non-tariff barriers, particularly regulatory standards for food and industrial products, before moving the tech deal forward.

At the 16th EU–India summit in New Delhi, the EU and India entered a new phase of cooperation by concluding a landmark free trade agreement and launching a security and defence partnership, signalling closer alignment amid global economic and geopolitical pressures. The trade agreement aims to reduce tariff and non-tariff barriers and strengthen supply chains, while the security track expands cooperation in areas such as maritime security, cyber and hybrid threats, counterterrorism, space, and defence industrial collaboration.

South Korea and Italy have agreed to deepen their strategic partnership by expanding cooperation in high-tech fields, particularly AI, semiconductors, and space. Officials framed the move as a way to strengthen long-term competitiveness through closer research collaboration, talent exchanges, and joint development initiatives, although specific programmes have not yet been publicly detailed.

INFRASTRUCTURE

The EU has adopted the Digital Networks Act, which aims to reduce fragmentation through limited spectrum harmonisation and an EU-wide numbering system for cross-border business services, while stopping short of a truly unified telecoms market. The main obstacle remains resistance from member states that want to retain control over spectrum management, particularly for 4G, 5G, and Wi-Fi, making the package an incremental measure rather than a structural overhaul, despite long-standing calls for deeper integration.

The second International Submarine Cable Resilience Summit concluded with the Porto Declaration on Submarine Cable Resilience, which reaffirms the essential role of submarine telecommunications cables in global connectivity, economic development, and digital inclusion. The declaration builds on the 2025 Abuja Declaration and contains further practical guidance and non-binding recommendations to strengthen international cooperation and resilience, including streamlining permitting and repair procedures, improving legal and regulatory frameworks, promoting geographic diversity and redundancy, adopting best practices in risk mitigation, improving cable protection planning, and boosting capacity building and innovation, in support of a more reliable and inclusive global digital infrastructure.

CYBERSECURITY

Roblox is under formal investigation in the Netherlands, where the Autoriteit Consument & Markt (ACM) has opened an inquiry to assess whether Roblox is taking sufficient measures to protect the children and teenagers who use the service. The investigation will examine Roblox’s compliance with the EU’s Digital Services Act (DSA), which requires online services to implement appropriate and proportionate measures to ensure the safety and privacy of minor users, and could last up to a year.

Meta, which has faced intense scrutiny from regulators and civil society over chatbots that previously allowed provocative or abusive conversations with minors, is suspending teenagers’ access to its AI characters globally while it redesigns the experience with stronger safety features and parental controls. The company said teenagers will no longer be able to interact with certain AI characters until a revised platform is ready, guided by principles similar to a PG-13 rating system to limit exposure to inappropriate content.

ETSI has published a new standard, EN 304 223, which sets out cybersecurity requirements for AI systems across their life cycle, addressing AI-specific threats such as data poisoning and prompt injection, with additional guidance on generative AI risks expected in a companion report.

The EU has proposed a new cybersecurity package to strengthen supply chain security, expand and accelerate certification, streamline compliance and reporting under the NIS2 Directive, and give ENISA broader operational powers, such as threat alerts, vulnerability management, and ransomware assistance.

A group of international cybersecurity agencies has published new technical guidance on securing the operational technology (OT) used in industrial environments and critical infrastructure. The guidance, developed under the leadership of the UK’s National Cyber Security Centre (NCSC), provides recommendations for securely connecting the industrial control systems, sensors, and other operational equipment that underpin essential services. According to the co-authoring agencies, industrial environments are targeted by a range of actors, including cybercriminal groups and state-linked actors.

The UK has launched a Software Security Ambassadors Scheme, led by the Department for Science, Innovation and Technology and the National Cyber Security Centre. The scheme invites participating organisations to promote a new software security code of practice across their sectors and to improve the security of development and procurement, with the aim of strengthening supply chain resilience.

UK and Chinese security officials have agreed to create a new cybersecurity dialogue forum to discuss cyberattacks and manage digital threats, with the aim of establishing clearer communication channels, reducing the risk of miscalculation in cyberspace, and promoting responsible state behaviour in digital security.

ECONOMY

EU ministers have called for faster progress towards the Union’s 2030 digital targets, urging stronger digital skills, broader technology adoption, and simpler rules for SMEs and start-ups, while preserving data protection and fundamental rights, as well as stricter and more consistent enforcement of rules on online safety, illegal content, consumer protection, and cyber resilience.

La Corée du Sud a approuvé des modifications législatives visant à reconnaître les titres tokenisés et à établir des règles pour leur émission et leur négociation dans le cadre du système réglementé des marchés de capitaux. La mise en œuvre est prévue pour janvier 2027, après une période de préparation. Ce cadre permet aux émetteurs éligibles de créer des produits de dette et d’actions basés sur la blockchain, tandis que la négociation se ferait par l’intermédiaire d’intermédiaires agréés, conformément aux règles existantes en matière de protection des investisseurs.

La Russie conserve le rouble comme seul moyen de paiement légal et continue de rejeter les cryptomonnaies en tant que monnaie, mais les législateurs s’orientent vers une reconnaissance juridique plus large des cryptomonnaies en tant qu’actifs, avec notamment une proposition visant à les traiter comme des biens matrimoniaux en cas de divorce, parallèlement à une utilisation limitée et réglementée des cryptomonnaies dans le commerce extérieur.

Le Royaume-Uni prévoit d’intégrer pleinement les cryptoactifs dans son périmètre de réglementation financière, les entreprises de cryptomonnaies étant réglementées par la Financial Conduct Authority à partir de 2027 selon des règles similaires à celles applicables aux produits financiers traditionnels, dans le but de renforcer la protection des consommateurs, la transparence et la confiance du marché tout en soutenant l’innovation et en réprimant les activités illicites, parallèlement aux efforts visant à définir des normes internationales grâce à une coopération telle que celle mise en place par un groupe de travail britannique-américain.

Le projet d’extension des licences cryptographiques à Hong Kong suscite l’inquiétude du secteur, qui craint que des seuils plus stricts ne contraignent davantage d’entreprises à obtenir une licence complète, n’augmentent les coûts de mise en conformité et ne prévoient pas de période de transition claire, ce qui pourrait perturber les activités pendant le traitement des demandes.

Poland's efforts to introduce a comprehensive cryptocurrency law have reached an impasse after the Sejm failed to override President Karol Nawrocki's veto of a bill intended to align national rules with the EU's MiCA framework. The government argued that the reform was essential for consumer protection and national security, but the president rejected it as overly burdensome and a threat to economic freedom. Prime Minister Donald Tusk has since pledged to renew efforts to pass cryptocurrency legislation.

In Norway, Norges Bank has concluded that current conditions do not justify launching a central bank digital currency, arguing that the Norwegian payment system remains safe, efficient, and well suited to users. The bank maintains that the Norwegian krone continues to function reliably, supported by robust contingency arrangements and stable operational performance. Governor Ida Wolden Bache said the assessment reflects a question of timing rather than a rejection of CBDCs, noting that the bank could introduce one if conditions change or new risks emerge in the domestic payments landscape.

EU member states will introduce a new customs duty on low-value e-commerce imports from 1 July 2026. Under the agreement, a duty of 3 euros per item will apply to parcels worth less than 150 euros imported directly into the EU from third countries. The temporary duty is intended to bridge the gap until the EU Customs Data Hub, a broader customs reform initiative designed to provide comprehensive import data and strengthen control capabilities, becomes fully operational in 2028.

DEVELOPMENT

UNESCO has voiced mounting concern over governments' increasing use of internet shutdowns to manage political crises, protests, and election periods. Recent data indicate that more than 300 shutdowns occurred in 54 countries over the past two years, with 2024 the worst year since 2016. According to UNESCO, restricting internet access undermines the universal right to freedom of expression and weakens citizens' ability to participate in social, cultural, and political life. Access to information remains essential not only for democratic engagement but also for rights related to education, assembly, and association, particularly in times of instability. Internet disruptions also place considerable strain on journalists, media outlets, and the public information systems that distribute verified news.

The OECD reports that generative AI is spreading rapidly in schools, but with mixed results: general-purpose chatbots can improve the quality of students' work without improving their exam performance, and can weaken deep learning when they replace 'productive struggle'. It argues that education-specific AI tools, designed around the learning sciences and used as tutors or collaborative assistants, are more likely to improve outcomes and should be prioritised and rigorously evaluated.

The UK will trial AI-powered tutoring tools in secondary schools, with the aim of making them available nationwide by the end of 2027. Teachers will be involved in their design and testing, and safety, reliability, and alignment with the national curriculum will be treated as core requirements. The initiative aims to provide personalised support and help narrow attainment gaps. It could benefit nearly 450,000 disadvantaged pupils aged roughly 14 to 17 each year, while positioning the tools as a complement to classroom teaching rather than a substitute.

SOCIOCULTURAL

The EU has designated WhatsApp as a very large online platform under the Digital Services Act (DSA) after it reported more than 51 million monthly users in the Union, triggering stricter obligations to assess and mitigate systemic risks such as disinformation and to strengthen protections for minors and vulnerable users. The European Commission will directly oversee compliance, with fines of up to 6% of global annual turnover, and WhatsApp has until mid-May to bring its policies and risk assessments into line with DSA requirements.

The EU has issued its first DSA non-compliance decision against X, fining the platform 120 million euros for misleading users with its paid 'blue check' verification, for a lack of advertising transparency owing to an incomplete ad repository, and for obstructing researchers' access to public data. X must propose remedies for the verification system within 60 working days and submit a broader plan on data access and advertising transparency within 90 days, or face further enforcement measures.

The EU has accepted binding commitments from TikTok under the DSA to make advertising more transparent, including displaying ads exactly as users see them, adding details on targeting and demographics, updating its ad repository within 24 hours, and expanding tools and access for researchers and the public, with implementation deadlines ranging from two to twelve months.

WhatsApp faces mounting pressure from the Russian authorities, who claim the service fails to comply with national rules on data storage and cooperation with law enforcement, while Meta has no legal presence in Russia and rejects requests for user information. Officials are promoting state-backed alternatives such as the national messaging app Max, and critics warn that targeting WhatsApp would restrict private communications rather than address genuine security threats.


National AI regulation

Vietnam. Vietnam's National Assembly has adopted the country's first comprehensive AI law, establishing a risk-management regime, sandbox testing, a national AI development fund, and voucher programmes for start-ups, balancing strict safeguards with innovation incentives. The 35-article legislation, drawing heavily on the EU and other models, centralises AI oversight under the government and will enter into force in March 2026.

UK. More than a hundred British parliamentarians from across the political spectrum are pressing the government to adopt binding rules on advanced AI systems, arguing that current frameworks lag behind rapid technological progress and pose risks to national and global security. The cross-party campaign, backed by former ministers and technology figures, seeks mandatory testing standards, independent oversight, and stronger international cooperation, challenging the government's preference for the existing, largely voluntary regulation.

USA. US President Donald Trump has signed an executive order against what the administration regards as the most burdensome and excessive state-level AI laws. The White House argues that the proliferation of state rules threatens to stifle innovation, burden developers, and weaken US competitiveness.

To address this, the order creates an AI litigation task force to challenge state laws deemed contrary to the policy set out in the order, with the aim of maintaining and strengthening US global dominance in AI through a light-touch national policy framework. The Department of Commerce is tasked with reviewing all state AI regulations within 90 days to identify those that impose excessive burdens. The order also uses federal funding as leverage, making certain grants conditional on states aligning with national AI policy.

National plans and investments

Russia. Russia is implementing a national plan to expand the use of generative AI in public administration and key sectors, with a central headquarters created to coordinate ministries and agencies. Officials see wider deployment of domestic generative systems as a way to strengthen sovereignty, improve efficiency, and boost regional economic development, prioritising locally developed AI over foreign platforms.

Qatar. Qatar has launched Qai, a new national AI company designed to accelerate the country's digital transformation and strengthen its global AI presence. Qai will provide high-performance computing and scalable AI infrastructure, working with research institutes, policymakers, and partners worldwide to promote the adoption of cutting-edge technologies that support sustainable development and economic diversification.

The EU. The EU has put in place an ambitious gigafactories programme to strengthen its AI leadership by expanding infrastructure and computing capacity across member states. This involves extending a network of AI 'factories' and antennas that provide high-performance computing capacity and technical expertise to start-ups, SMEs, and researchers, integrating innovation support with regulatory frameworks such as the AI Act.

Australia. Australia has struck a 4.6 billion dollar deal to create a new AI hub in western Sydney, in partnership with private sector players, to build an AI campus with extensive GPU infrastructure capable of supporting advanced workloads. The investment is part of broader national efforts to establish domestic AI innovation and computing capacity.

Morocco. Morocco is preparing to unveil 'Maroc IA 2030', a national AI roadmap designed to structure the country's AI ecosystem and strengthen digital transformation. The plan aims to add around 10 billion dollars to GDP by 2030, create tens of thousands of AI-related jobs, and embed AI across industry and government, including by modernising public services and strengthening technological autonomy. At the heart of the strategy is the launch of the JAZARI ROOT Institute, the hub of a planned network of AI centres of excellence linking research, regional innovation, and practical deployment. Other initiatives include sovereign data infrastructure and partnerships with global AI companies. The authorities also emphasise building national skills and trust in AI, with governance structures and legislative proposals expected to accompany implementation.

Capacity-building initiatives

USA. The Trump administration has unveiled a new initiative, the US Tech Force, which aims to rebuild the US government's technical capabilities after significant staff cuts, with a particular focus on AI and digital transformation.

According to the official website TechForce.gov, participants will work on high-impact federal missions, tackling large-scale civic and national challenges. The programme positions itself as a bridge between Silicon Valley and Washington, encouraging experienced technologists to bring industry practices into government settings. It reflects growing concern within the administration that federal agencies lack the in-house expertise to deploy and oversee cutting-edge technologies, especially as AI becomes central to public administration, defence, and service delivery.

Taiwan. The Taiwanese government has set an ambitious target of training 500,000 AI professionals by 2040 as part of its long-term AI development strategy, backed by a NT$100 billion (around US$3.2 billion) venture capital fund and a national computing centre initiative. President Lai Ching-te announced the goal at an AI talent forum in Taipei in 2026, stressing the need for broad AI literacy across disciplines to maintain national competitiveness, support innovation ecosystems, and accelerate digital transformation in small and medium-sized enterprises. The government is rolling out training programmes for students and civil servants and emphasising industry-academia-government cooperation to develop a versatile AI talent pool.

El Salvador. El Salvador has partnered with xAI to launch the world's first national AI-based education programme, deploying the Grok model in more than 5,000 public schools to offer personalised, curriculum-aligned tutoring to over a million pupils over the next two years. The initiative will support teachers with adaptive AI tools while jointly developing methodologies, datasets, and governance frameworks for responsible AI use in classrooms, with the aim of closing learning gaps and modernising the education system. President Nayib Bukele described the initiative as a leap forward in national digital transformation.

UN AI resource hub. The UN's AI resource hub has gone live as a centralised platform bringing together AI activities and expertise across the UN system. Presented by the UN Inter-Agency Working Group on AI, the platform was developed jointly by UNDP, UNESCO, and the ITU. It allows stakeholders to explore initiatives by agency, country, and SDG. The hub supports inter-agency collaboration, UN member state capacity, and greater coherence in AI governance and terminology.

Partnerships

Canada-EU. Canada and the EU have expanded their digital partnership on AI and security, committing to deeper cooperation on trustworthy AI systems, data governance, and shared digital infrastructure. This includes memoranda of understanding to advance interoperability, harmonise standards, and encourage collaboration on trusted digital services.

The International Network for Advanced AI Measurement, Evaluation and Science. This global network has strengthened cooperation on benchmarking progress in AI governance, focusing on indicators that allow national policies to be compared, gaps to be identified, and evidence-based decision-making to be supported in international AI regulation. The network comprises Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK, and the USA. The UK has taken on the role of network coordinator.

BRICS. AI governance discussions within the BRICS bloc have intensified, with member states seeking to harmonise national approaches and share common principles for ethical, inclusive, and cooperative AI deployment. It is still premature, however, to speak of creating a BRICS AI body, said Deputy Foreign Minister Sergey Ryabkov, Russia's BRICS sherpa.

ASEAN-Japan. Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to deepen AI cooperation, formalised in a joint statement at a meeting of digital ministers in Hanoi. The partnership focuses on joint development of AI models, harmonisation of related legislation, and stronger research ties to boost regional technological capabilities and competitiveness amid global competition from the USA and China.

Pax Silica. A diverse group of nations has announced Pax Silica, a new partnership aimed at building secure, resilient, innovation-driven supply chains for the technologies underpinning the AI era, including critical minerals and energy inputs, advanced manufacturing, semiconductors, AI infrastructure, and logistics. Analysts warn that differences of opinion could emerge if Washington pushes for tougher measures against China, increasing political and economic pressure on participating countries. However, the USA, which leads the platform, has said it will focus on strengthening supply chains among members rather than penalising non-members such as China.

Content governance

Italy. Italy's antitrust authority has formally closed its investigation into Chinese AI developer DeepSeek after the company agreed to binding commitments to make the risks of AI hallucinations (false or misleading outputs) clearer and more accessible to users. Regulators said DeepSeek will improve transparency by providing clearer warnings and information tailored to Italian users, aligning its chatbot's deployment with local regulatory requirements. If these conditions are not met, enforcement action could follow under Italian law.

Spain. The Spanish government has approved a bill to combat AI-generated deepfakes and tighten consent rules for the use of images and voices. The bill sets 16 as the minimum age for consenting to the use of one's image and prohibits the reuse of online images or AI-generated likenesses without explicit permission, including for commercial purposes, while allowing clearly labelled satire or creative works involving public figures. The reform strengthens child protection measures and fits within broader EU plans to criminalise non-consensual sexual deepfakes by 2027. Prosecutors are also examining whether certain AI-generated content could qualify as child pornography under Spanish law.

Malta. The Maltese government is preparing stronger legal measures against the abuse of deepfake technology. Current legislation is under review, with proposals to introduce penalties for the misuse of AI in cases of harassment, blackmail, and intimidation, building on existing cyberbullying and cyberstalking laws and extending similar protections to harms arising from AI-generated content. Officials stress that while AI adoption is a national priority, robust safeguards against misuse are essential to protect individuals and digital rights.

China. China's cyberspace regulator has proposed new restrictions on AI 'boyfriend' and 'girlfriend' chatbots. The draft rules require platforms to intervene when users express suicidal or self-harming tendencies in interactions with emotionally interactive AI services, while strengthening protections for minors and limiting harmful content. The regulator defines these services as AI systems that simulate human personality traits and emotional interactions.

Note to readers: we reported separately on the backlash against Grok in January 2026, following allegations that it had been used to generate non-consensual sexualised and deepfake images.

Safety

The UN. The UN has sounded the alarm over the threats AI poses to children's safety, stressing that AI systems can accelerate the creation, distribution, and impact of harmful content, including online child sexual exploitation, abuse, and manipulation. As smart toys, chatbots, and recommendation engines increasingly shape young people's digital experiences, the absence of adequate safeguards risks exposing an entire generation to new forms of exploitation and harm.

International experts. The second International AI Safety Report finds that AI capabilities continue to advance rapidly, with leading systems outperforming human experts in areas such as mathematics, science, and some autonomous software tasks, although performance remains uneven. Adoption is fast but unequal worldwide. Growing harms include deepfakes, misuse in fraud and non-consensual content, and systemic impacts on autonomy and trust. Technical safeguards and voluntary safety frameworks have improved but remain incomplete, and effective multi-layered risk management is still lacking.

The EU and the USA. The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have published ten principles of good AI practice across the medicines lifecycle. The guidelines provide high-level guidance for the use of AI in research, clinical trials, manufacturing, and safety monitoring. The principles are aimed at drug developers, marketing authorisation applicants, and authorisation holders, and will serve as a basis for future AI guidance across jurisdictions.


The WSIS+20 review, conducted 20 years after the World Summit on the Information Society, concluded in December 2025 in New York with the adoption of a high-level outcome document by the UN General Assembly. The review assesses progress in building a people-centred, inclusive, development-oriented information society, highlights areas requiring further effort, and outlines measures to strengthen international cooperation.

A major institutional decision was taken: making the Internet Governance Forum (IGF) a permanent UN body. The outcome also includes measures to strengthen its operation: broadening participation, particularly from developing countries and under-represented communities, improving intersessional work, supporting national and regional initiatives, and adopting innovative and transparent working methods. The IGF secretariat is to be strengthened, sustainable funding secured, and annual progress reports provided to UN bodies, including the Commission on Science and Technology for Development (CSTD).

Negotiations focused on creating a governmental segment within the IGF. While some member states backed the idea as a way to foster dialogue among governments, others feared it would undermine the IGF's multistakeholder nature. The final compromise encourages dialogue among governments with the participation of all stakeholders.

Beyond the IGF, the outcome confirms the continuation of the annual WSIS Forum and invites the UN Group on the Information Society (UNGIS) to increase its efficiency, agility, and membership.

WSIS action line facilitators are tasked with developing targeted roadmaps linking the WSIS action lines to the SDGs and the commitments of the Global Digital Compact (GDC).

The UN Group on the Information Society (UNGIS) is tasked with developing a joint roadmap to strengthen coherence between WSIS and the Global Digital Compact, to be presented to the CSTD in 2026. The Secretary-General will submit biennial reports on WSIS implementation, and the next high-level review is scheduled for 2035.

Le document place la réduction de la fracture numérique au cœur du programme du SMSI+20. Il aborde de multiples aspects de l’exclusion numérique, notamment l’accessibilité, l’abordabilité, la qualité de la connectivité, l’inclusion des groupes vulnérables, le multilinguisme, la diversité culturelle et la connexion de toutes les écoles à l’Internet. Il souligne que la connectivité seule est insuffisante, mettant en avant l’importance du développement des compétences, de la mise en place d’environnements politiques favorables et de la protection des droits de l’Homme.

Le résultat met également l’accent sur un développement numérique ouvert, équitable et non discriminatoire, notamment des politiques prévisibles et transparentes, des cadres juridiques et le transfert de technologies vers les pays en développement. La durabilité environnementale est mise en avant, avec des engagements à tirer parti des technologies numériques tout en abordant les questions de la consommation d’énergie, des déchets électroniques, des minéraux critiques et des normes internationales pour des produits numériques durables.

Les droits de l’Homme et les considérations éthiques sont réaffirmés comme fondamentaux. Le document souligne que les droits en ligne reflètent ceux hors ligne, appelle à la mise en place de garanties contre les effets négatifs des technologies numériques et exhorte le secteur privé à respecter les droits de l’homme tout au long du cycle de vie des technologies. Il aborde les préjudices en ligne tels que la violence, les discours haineux, la désinformation, le cyberharcèlement et l’exploitation sexuelle des enfants, tout en promouvant la liberté des médias, la vie privée et la liberté d’expression.

Le renforcement des capacités et le financement sont reconnus comme essentiels. Le document souligne la nécessité de renforcer les compétences numériques, l’expertise technique et les capacités institutionnelles, y compris dans le domaine de l’IA. Il invite l’Union internationale des télécommunications à créer un groupe de travail interne chargé d’évaluer les lacunes et les défis des mécanismes financiers pour le développement numérique et de présenter ses recommandations à la CSTD d’ici 2027. Il invite également le Groupe de travail interinstitutions des Nations unies sur l’IA à recenser les initiatives existantes en matière de renforcement des capacités, à identifier les lacunes et à élaborer des programmes tels qu’une bourse de renforcement des capacités en matière d’IA pour les fonctionnaires et les programmes de recherche.

Enfin, le document final souligne l’importance du suivi et de la mesure, demandant un examen systématique des indicateurs et méthodologies existants en matière de TIC par le Partenariat sur la mesure des TIC au service du développement, en coopération avec les facilitateurs des lignes d’action et la Commission statistique des Nations unies. Le Partenariat est chargé de faire rapport à la CSTD en 2027. Dans l’ensemble, la CSTD, l’ECOSOC et l’Assemblée générale continuent de jouer un rôle central dans le suivi et l’examen du SMSI.

The final text reflects a broad compromise and was adopted without a vote, although some member states and groups voiced concerns about certain provisions.


Diplo and the Geneva Internet Platform (GIP) provided just-in-time reporting from the two-day WSIS high-level review at the UN General Assembly (UNGA) meeting on 16–17 December 2025. Our dedicated web page features session reports, speaker lists, knowledge graphs, in-depth analyses, and the main takeaways from this landmark event.


The momentum behind social media bans for children

Australia made history in December when it began enforcing its landmark social media restrictions for under-16s, the first national rules of their kind anywhere in the world.

The measure, a new social media minimum age (SMMA) requirement under the Online Safety Act, obliges major platforms to take ‘reasonable steps’ to remove minors’ accounts and block new sign-ups, under threat of fines of up to AU$49.5 million and monthly compliance reporting.

As the measure took effect, eSafety Commissioner Julie Inman Grant urged families, particularly those in rural and remote Australia, to consult a newly published guide explaining how the age limit works, why it was raised from 13 to 16, and how to support young people through the transition.

The new framework should be seen not as a ban but as a delay, Ms Grant stressed: raising the minimum account age from 13 to 16 creates ‘a reprieve from the powerful and persuasive design features that keep them hooked and that often enable harmful content and conduct’.


It has now been nearly two months since the ban (we will keep using the word ‘ban’, as it has already entered everyday usage) came into force. Here is what has happened in the meantime.

Teen reactions. The change hit young Australians hard. On the eve of the deadline, teenagers posted farewell messages mourning the loss of the communities, creative spaces, and peer networks that anchored their daily lives. Youth advocates noted that those who rely on the platforms for education, support networks, LGBTQ+ community spaces, or creative expression would be disproportionately affected.

Workarounds and their limits. Predictably, workarounds emerged immediately. Some teenagers tried (and managed) to fool facial age-estimation tools by contorting their expressions; others turned to VPNs to mask their location. Experts caution, however, that free VPNs often monetise user data or carry spyware, creating new risks. And it may all be in vain: platforms retain a full set of signals they can use to infer a user’s true location and age, including IP addresses, GPS data, device identifiers, time-zone settings, mobile phone numbers, app store information, and behavioural patterns. Age-related markers, such as linguistic analysis, activity patterns during school hours, facial or voice age estimation, youth-oriented interactions, and the age of an account, give companies additional tools for identifying underage users.

Privacy and effectiveness concerns. Critics argue that the policy raises serious privacy concerns, as age verification systems, whether based on uploading government IDs, biometrics, or AI-driven assessments, require people to hand over sensitive data that could be misused, breached, or normalised as part of everyday surveillance. Others point out that facial recognition technology is least reliable for teenagers, the very group it is meant to regulate. Some question whether the fines are meaningful at all, given that Meta earns roughly AU$50 million in under two hours.

The rules’ limited scope has drawn further scrutiny. Dating sites, gaming platforms, and AI chatbots fall outside the ban, even though some chatbots have been linked to harmful interactions with minors. Educators and children’s rights advocates argue that digital literacy and resilience would protect young people better than simply cutting off access. Many teenagers say they will create fake profiles or share joint accounts with their parents, casting doubt on the measure’s long-term effectiveness.

Industry reaction. Most major platforms have publicly criticised both the drafting and the substance of the law. They argue it will be extremely difficult to enforce, even as they prepare to comply to avoid fines. The industry group NetChoice called the measure ‘sweeping censorship’, while Meta and Snap argue that the real enforcement power lies with Apple and Google, through app store age controls, rather than with the platforms.

Reddit has filed a High Court challenge against the ban, naming the Commonwealth of Australia and Communications Minister Anika Wells, and arguing that the law is being misapplied to Reddit. The platform contends that it is a service aimed at adults and lacks the traditional social media features the government takes issue with.

The government’s position. The government, which expects a bumpy rollout, frames the measure as consistent with other age-based restrictions (such as the ban on drinking alcohol before 18) and as a response to persistent public concern about online harms. Officials say Australia is pioneering youth online safety, a stance that has attracted considerable international attention.

International interest. The development has drawn considerable attention abroad, with a growing number of countries seeking to bar minors from major platforms.

All of these jurisdictions are now watching Australia closely, looking for proof of concept… or proof of failure.


The first results are in. In terms of enforcement (platform compliance and account removal), the law is working: social media companies deactivated or restricted around 4.7 million accounts belonging to Australian users under 16 in the first month of enforcement.

When it comes to behavioural outcomes, however (whether under-16s are actually offline, safer, or have swapped harmful habits for healthier ones), the evidence remains inconclusive and evolving. The Australian government itself has said it is too early to declare the ban an unqualified success.

The open question. Young people still have access to group messaging tools, gaming services, and video-conferencing apps while they wait to open full social media accounts. The question remains: if access to so much of the digital ecosystem stays open, what is the practical point of walling off a single segment of the internet?

Platforms in court

In January 2026, a landmark trial opened in Los Angeles, pitting K.G.M., a 19-year-old plaintiff, against major social media companies. The case, originally filed in July 2023, accuses platforms such as Meta (Instagram and Facebook), YouTube (Google/Alphabet), Snapchat, and TikTok of intentionally designing their apps to be addictive, with serious consequences for young users’ mental health.

According to the complaint, features such as infinite scroll, algorithmic recommendations, and constant notifications contributed to compulsive use, exposure to harmful content, depression, anxiety, and even suicidal thoughts. The suit also alleges that the platforms made it difficult for K.G.M. to avoid contact with strangers and adult predators, despite parental restrictions. K.G.M.’s legal team argues that the companies knowingly optimised their platforms to maximise engagement at the expense of users’ well-being.

By the start of the trial, Snap Inc. and TikTok had already reached confidential settlements, leaving Meta and YouTube as the sole defendants. Both deny intentionally causing harm, pointing to existing safety features, parental controls, and content filters.

In both cases, the companies argue that Section 230 of US law shields them from liability, while the plaintiffs counter that their claims target allegedly addictive design features rather than user-generated content.

Legal experts and advocates are watching the case closely, noting that its outcome could set a precedent for thousands of similar lawsuits and, ultimately, influence how companies design their products.


Governments have long debated control over the data, infrastructure, and technology within their borders. But a renewed sense of urgency is palpable, as geopolitical tensions push them to identify dependencies, build national capacity, and limit exposure to foreign technology.

At the European level, France is pushing to make digital sovereignty measurable and enforceable. Paris has proposed creating a European Digital Sovereignty Observatory to map member states’ reliance on non-European technology, from cloud services and AI systems to cybersecurity tools. Paired with a digital resilience index, the initiative aims to give policymakers a clearer picture of strategic dependencies and a firmer basis for coordinated action on procurement, investment, and industrial policy.

The bloc has, however, already begun work on digital sovereignty. In January, the European Parliament adopted a resolution on European technological sovereignty and digital infrastructure. In it, the Parliament calls for robust European digital public infrastructure (DPI) built on open standards, interoperability, privacy and security by design, and pro-competitive governance. Priority areas include semiconductors and AI chips, high-performance and quantum computing, cloud and edge infrastructure, AI gigafactories, data centres, digital identity and payment systems, and public-interest data platforms.

The Digital Networks Act (DNA), which defines sovereignty as the EU’s ability to control, secure, and adapt its critical connectivity infrastructure rather than as isolation from global markets, has also recently been adopted. Secure, high-quality digital networks are presented as a cornerstone of Europe’s digital transformation, competitiveness, and security, with the fragmentation of national markets seen as an obstacle to the Union’s ability to act collectively and reduce dependencies. Satellite connectivity is explicitly identified as a key pillar of the EU’s strategic autonomy, essential for broadband access in remote areas and for security, crisis management, defence, and other critical applications, prompting a shift towards harmonised EU-level authorisation to strengthen resilience and avoid dependence on foreign providers.

The DNA complements the EU’s support for IRIS2, a planned multi-orbit constellation of 290 satellites designed to provide encrypted communications to citizens, governments, and public bodies and to reduce the EU’s dependence on external providers. In mid-January, the EU moved the IRIS2 timeline forward, according to European Commissioner for Defence and Space Andrius Kubilius: the constellation now aims to launch its first government communication services by 2029, a year earlier than originally planned. Reducing reliance on outside suppliers matters, Kubilius noted, because Europe is ‘quite dependent on American services’.

The Commission is also ready to invest towards this goal: it has announced €307.3 million in funding to strengthen capabilities in AI, robotics, photonics, and other emerging technologies. A significant share of the investment is tied to initiatives such as the Open Internet Stack, which aim to bolster European digital autonomy. The funding, open to companies, universities, and public bodies, reflects a broader drive to translate political ambition into concrete technological capability.

More measures are in the pipeline. The Cloud and AI Development Act, a revision of the Chips Act, and a Quantum Act, all expected in 2026, will further reinforce the EU’s digital sovereignty, strengthening strategic autonomy across the digital stack.

In addition, the European Commission is preparing a strategy to commercialise European open source software, alongside the Cloud and AI Development Act, to strengthen developer communities, support adoption across sectors, and safeguard market competitiveness. By providing stable support and fostering collaboration between public authorities and industry, the strategy aims to build an economically sustainable open source ecosystem.

In Burkina Faso, the focus is on reducing dependence on external providers while consolidating national authority over core digital systems. The government has launched a digital infrastructure supervision centre to centralise the monitoring of national networks and strengthen cybersecurity oversight. New mini data centres for public administration are being rolled out to ensure that sensitive state data is stored and managed domestically.


Sovereignty debates are also translating into decisions to limit, replace, or restructure the use of digital services provided by foreign entities. France has announced plans to phase out US-based collaboration platforms such as Microsoft Teams, Zoom, Google Meet, and Webex from public administration, replacing them with a domestically developed alternative, ‘Visio’.

The Dutch data protection authority has urged the government to act quickly to protect the country’s digital sovereignty after DigiD, the national digital identity system, appeared to be on the verge of acquisition by a US company. The watchdog argued that the Netherlands relies heavily on a small group of non-European IT and cloud providers, and stressed that public bodies lack clear exit strategies in the event of a sudden change in foreign ownership.

In the USA, the TikTok controversy can also be read through a sovereignty lens: rather than banning TikTok, the authorities pushed the platform to restructure its operations for the US market. A new entity will run TikTok’s US business, with user data and algorithms handled inside the USA. The recommendation algorithm is expected to be trained solely on US user data to meet American regulatory requirements.

In more security-focused contexts, the concept is sharper still. With Europe remaining heavily dependent on Chinese telecom vendors and on US cloud and satellite providers, the European Commission has proposed binding cybersecurity rules targeting critical ICT supply chains.

Russia’s Security Council recently labelled services such as Starlink and Gmail threats to national security, describing them as tools of ‘destructive informational and technical influence’. These assessments are expected to feed into Russia’s information security doctrine, reinforcing the treatment of digital services provided by foreign companies not as neutral infrastructure but as potential vectors of geopolitical risk.

The big picture. The common thread is clear: digital sovereignty is now a core consideration for governments worldwide. Approaches may differ, but the goal is broadly the same: to ensure that a nation’s digital future is shaped by its own priorities and rules. Yet true independence is hampered by deeply entrenched global supply chains, the prohibitive cost of building parallel systems, and the risk that isolation stifles innovation. While the strategic will for sovereignty is clear, decoupling from interdependent technology ecosystems will take years of investment, migration, and adaptation. Today’s initiatives mark the beginning of a long and difficult transition.


In January 2026, a regulatory storm hit Grok, the AI tool built into Elon Musk’s platform X, when reports revealed that Grok was being used to produce non-consensual sexualised images and deepfakes, including depictions of people nude or in compromising situations without their consent.

Musk suggested that users who enter such prompts should be held responsible, a move criticised as shifting the blame.


The backlash was swift and severe. The UK’s Ofcom opened an investigation under the Online Safety Act into whether X had met its duty to protect UK users from illegal content. UK Prime Minister Keir Starmer condemned the ‘repugnant’ findings. The EU stated that such content, particularly involving children, ‘has no place in Europe’. Southeast Asia acted decisively: Malaysia and Indonesia blocked Grok entirely, citing the generation of obscene images, and the Philippines quickly followed suit on child protection grounds.

Under pressure, X announced tighter controls on Grok’s image-editing capabilities. The platform said it had put technological safeguards in place to block the generation and editing of sexualised images of real people in jurisdictions where such content is illegal.

Regulators made clear, however, that the move, while welcome, would not end the scrutiny.

In the UK, Ofcom stressed that its formal investigation into X’s handling of Grok and the emergence of deepfake images would continue, even as it welcomed the platform’s policy changes. The regulator underlined its intent to understand how the platform enabled this type of content to proliferate, and to ensure that corrective measures are put in place.

The UK Information Commissioner’s Office (ICO) opened a formal investigation into X and xAI to determine whether Grok’s processing of personal data complies with UK data protection law, namely the core data protection principles (lawfulness, fairness, and transparency), and whether its design and deployment included sufficient built-in safeguards to prevent the misuse of personal data to create harmful or manipulated images.

Canada’s Privacy Commissioner expanded an existing investigation into X Corp. and opened a parallel investigation into xAI to assess whether the companies obtained valid consent for the collection, use, and disclosure of personal information to create AI-generated deepfakes, including sexually explicit content.

In France, the Paris public prosecutor’s office confirmed that it would widen its ongoing criminal investigation into X to include complicity in the distribution of pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity, and the manipulation of an automated data processing system. The office’s cybercrime unit raided X’s French offices as part of the expanded investigation. Musk and former CEO Linda Yaccarino were summoned for voluntary interviews. X denied any wrongdoing and called the raid an ‘abusive act of law enforcement’, while Musk described it as a ‘political attack’.

The European Commission opened a formal investigation into X under the EU’s Digital Services Act (DSA). The probe examines whether the company met its legal obligations to mitigate the risks posed by Grok’s AI-generated sexual deepfakes and other harmful imagery, particularly content that may involve minors or be non-consensual.

Brazil’s Federal Prosecution Service, the national data protection authority, and the National Consumer Secretariat issued coordinated recommendations to X to stop Grok from producing and distributing sexual deepfakes, warning that Brazil’s civil liability rules could apply if the harmful content continued to circulate, and that the platform should be disabled until safeguards are in place.

In India, the Ministry of Electronics and Information Technology (MeitY) demanded the removal of obscene and unlawful content generated by the AI tool and requested a report on corrective measures within 72 hours. The ministry also ordered the company to overhaul Grok’s technical and governance framework. The deadline has now passed, and neither the ministry nor Grok has made any information on the matter public.

South Korean regulators are examining whether Grok breached personal data protection and safety standards by enabling the production of explicit deepfakes, and whether the matter falls within their legal jurisdiction.

Indonesia, Malaysia, and the Philippines, however, have since restored access, after the platform introduced additional safety controls designed to limit the generation and editing of problematic content.

The red lines. The reaction was so immediate and widespread precisely because it hit two near-universal nerves: the profound privacy violation of non-consensual sexual imagery, a moral line almost everyone agrees must not be crossed, combined with the distinctive dangers of AI, an acute trigger of government sensitivity.

The big picture. The ongoing scrutiny of Grok shows that not every regulator is satisfied with the safeguards implemented so far, underscoring that solutions may need to be tailored to different jurisdictions.


Diplo and the Geneva Internet Platform (GIP) hosted the 11th edition of the Geneva Engage Awards, which recognise the digital communication and online engagement efforts of International Geneva actors.

This year’s theme, ‘Back to Basics: The Future of Websites in the AI Era’, highlighted emerging practices in which users increasingly rely on AI assistants and AI-generated summaries that may not cite the primary or most relevant sources.

The first part of the event set the scene for a shifting digital environment, exploring the transition from a search-based web to an answer-based web and its implications for public engagement. It also offered a brief, transparent look at the logic behind this year’s award rankings, unveiling the indicators and mathematical models used to assess digital presence and accessibility. This led into the awards ceremony, which honoured Geneva-based actors for their online engagement and influence.

Awards went to organisations in three main categories: international organisations, NGOs, and permanent representations. The awards assessed efforts in social media engagement, web accessibility, and AI leadership, reinforcing Geneva’s role as a trusted source of information amid rapid technological change.

In the international organisations category, the UN Conference on Trade and Development (UNCTAD) took first place. The UN Office at Geneva (UNOG) and the UN Office for the Coordination of Humanitarian Affairs (OCHA) were runners-up for their strong digital presence and outreach.

Among non-governmental organisations, the International AIDS Society came first, followed by the Aga Khan Development Network (AKDN) and the International Union for Conservation of Nature (IUCN), both recognised as runners-up for their effective digital engagement.

In the permanent representations category, the Permanent Mission of the Republic of Indonesia to the UN Office and other international organisations in Geneva took first place. The Permanent Mission of the Republic of Rwanda and the Permanent Mission of France were runners-up.

The Web Accessibility Award went to the Permanent Mission of Canada, while the Geneva AI Leadership Award was presented to the International Telecommunication Union (ITU).

Geneva Engage Awards 2026

After the ceremony, the focus shifted from recognition to exchange, with a networking cocktail and a ‘knowledge bazaar’. Participants moved between interactive stands that turned abstract digital and AI concepts into tangible experiences. These included a guided walkthrough of what technically happens when a question is put to an AI system; an exploration of the data and network analysis behind the Geneva Engage Awards, including a large-scale mapping of the interconnections among Geneva-related websites; and discussions on the role of human-curated and enriched knowledge in feeding AI systems, with practical insights into how organisations can preserve and grow their institutional expertise.

Other workshops focused on practical approaches to AI capacity building, through internships that prioritise learning by building AI agents, and on using AI to draft post-event reports. Together, these sessions demonstrated how AI can turn ephemeral discussions into structured, multilingual, and lasting knowledge.



Looking ahead: our annual AI and digital predictions

As the new year begins, we dedicate the first issue of our newsletter to our annual AI and digital predictions, accompanied by commentary from our executive director. Drawing on our coverage of digital policy over the past year on the Digital Watch Observatory, along with our professional experience and expertise, we highlight the 10 trends and developments we believe will shape the digital landscape in the year ahead.


Technologies. AI is becoming a commodity that touches everyone, from countries competing for AI sovereignty to ordinary citizens. Equally important is the rise of bottom-up AI: in 2026, language models from small to large will be able to run on companies’ and institutions’ own servers. Open source development, a milestone in 2025, is set to become a central issue in future geostrategic competition.

Geostrategy. The good news is that, despite all the geopolitical pressures, we still have an integrated global internet. However, digital fragmentation is accelerating, with growing segmentation driven by the filtering of social media and other services, and by various developments organising around three main poles: the USA, China, and potentially the EU. Geoeconomics is becoming a critical dimension of this evolution, particularly given the global footprint of major tech companies; any form of fragmentation, whether commercial or fiscal, will inevitably affect them. Equally important is the role of 'geo-emotions': the widening gap between public opinion and industry enthusiasm. While companies remain largely optimistic about AI, public scepticism is rising, and this divergence could have significant political implications.

Governance. The central governance dilemma remains whether national representatives (parliamentarians at the national level and diplomats at the international level) can genuinely protect citizens' digital interests in data, knowledge, and cybersecurity. While there are moments of productive discussion and well-organised events, substantive progress remains limited. On a positive note, inclusive governance continues, at least in principle, through multistakeholder participation, even if it raises unresolved questions of its own.

Security. The adoption of the Hanoi Convention on Cybercrime at the end of the year is a positive development, and substantive discussions continue at the UN despite persistent criticism of the institution. While it remains unclear whether these processes make us more secure, they do broaden the governance toolbox. At the same time, attention must extend beyond traditional concerns, such as cyberwarfare, terrorism, and crime, to the emerging risks of interconnecting AI systems via APIs. These integration points create new interdependencies and potential backdoors for cyberattacks.

Human rights. Human rights are increasingly under threat, with recent policy shifts by tech companies and growing transatlantic tensions between the EU and the USA highlighting a changing landscape. While debates continue to focus mainly on bias and ethics, deeper human rights concerns, such as the rights to knowledge, education, dignity, meaningful work, and the freedom to remain human rather than be optimised, receive far less attention. As AI reshapes society, the human rights community urgently needs to rethink its priorities, grounding them in the protection of life, dignity, and human potential.

Economy. The traditional three-pillar framework of security, development, and human rights is tilting toward economic and security concerns, with human rights increasingly sidelined. Technological and economic issues, from access to rare earths to AI models, are now treated as matters of strategic security. This trend is expected to accelerate in 2026, making the digital economy a core element of national security. Greater attention should be paid to taxation, the stability of the global trading system, and the impact that fragmentation or potential disruption of global trade could have on the tech sector.

Standards. The lesson from social media is clear: without interoperable standards, users end up locked into single platforms. The same risk exists for AI. To avoid repeating these mistakes, developing interoperable AI standards is essential. Ideally, individuals and companies would build their own AI; where that is not possible, platforms should at a minimum be interoperable, allowing smooth transitions between providers such as OpenAI, Claude, or DeepSeek. This approach can foster innovation, competition, and user choice in the emerging AI-dominated ecosystem.

Content. The key content issue in 2026 is the tension between governments and US tech companies, particularly over compliance with European laws. Fundamentally, countries have the right to set rules for content on their territory, reflecting their interests, and citizens expect their governments to enforce them. While media debates often focus on abuse or censorship, the fundamental question remains: can a country regulate content on its own territory? The answer is yes, and adapting to these rules will be a major source of tension going forward.

Development. Countries currently lagging in AI are not necessarily losers. Success in AI rests less on owning large models or making massive hardware investments than on preserving and developing local knowledge. Small countries should invest in education, skills, and open-source platforms to retain and grow knowledge locally. Paradoxically, a slower entry into AI could be an advantage, allowing countries to focus on what really matters: people, skills, and effective governance.

Environment. Concerns about AI's impact on the environment and water resources persist. It is worth asking whether massive AI farms are really necessary. Smaller AI systems could serve as extensions of these processes, or support training and education, reducing the need for energy- and water-hungry platforms. At a minimum, AI development should prioritise sustainability and efficiency, mitigating the risk of large-scale digital waste while enabling practical benefits.


Weekly #248 The Porto roadmap for more resilient global submarine cables


30 January – 6 February 2026


HIGHLIGHT OF THE WEEK

The Porto roadmap for more resilient global submarine cables 

In early February 2026, Porto, Portugal hosted the Second International Submarine Cable Resilience Summit, building on last year’s Abuja Summit. Under the high patronage of the President of Portugal and organised by ANACOM in partnership with ITU and the International Cable Protection Committee (ICPC), the event brought together representatives from over 70 countries, including governments, industry leaders, regulators, investors, and technical experts.

The summit concluded with the Porto Declaration on Submarine Cable Resilience, reaffirming the vital role of submarine cables in economic development, social inclusion, and digital transformation. The non-binding guidance calls for closer international cooperation to make submarine cables more resilient by simplifying deployment and repair rules, removing legal and regulatory barriers, and improving coordination among authorities. It emphasises investing in diverse and redundant cable routes—especially for small islands, landlocked countries, and underserved regions—while promoting industry best practices for risk management and protection. The recommendations also stress the development of skills and the use of new technologies to improve monitoring, design, and climate resilience.


Among the key outcomes of the summit were three sets of recommendations from the working groups of the summit's International Advisory Body (IAB).

The recommendations on fostering connectivity and geographic diversity focus on expanding submarine cable connectivity to Small Island Developing States (SIDS), Landlocked Developing Countries (LLDCs), and underserved regions. Key measures include promoting blended-finance and public-private partnerships, de-risking investments through insurance and anchor-tenancy models, and encouraging early engagement among governments, operators, and financiers. Governments are urged to create clear regulatory frameworks, enable non-discriminatory access to landing stations, and incentivise shared infrastructure. Technical measures emphasise integrating branching units, ensuring route diversity and resiliency, conducting hazard assessments, and adopting protocols for seamless failover to backup systems. Capacity-building and the adoption of best practices aim to accelerate deployment while reducing costs and risks.

The recommendations on timely deployment and repair encourage governments to streamline permitting and approval processes through clear, transparent, and predictable frameworks, reduce barriers such as customs and cabotage restrictions—especially for emergency repairs—and designate a Single Point of Contact to coordinate across agencies. A voluntary global directory of these contact points would help industry navigate national requirements, while greater use of regional and intergovernmental forums is encouraged to promote regulatory alignment and cooperation, drawing on existing industry associations and best practices. The recommendations also aim to strengthen the global repair ecosystem and public–private cooperation. They call for expanding and diversifying repair assets, including vessels and spare parts, particularly in high-risk or underserved regions; developing rapid-response capabilities for shallow-water incidents; and promoting shared maintenance models, joint vessel funding, and public–private partnership hubs. Mapping global repair gaps, encouraging long-term maintenance agreements, sharing best practices and data, investing in training and knowledge platforms, and establishing national public–private coordination mechanisms with 24/7 contacts, joint exercises, and practical operational tools are all seen as essential to improving resilience and speeding up repair responses.

The recommendations on risk identification, monitoring and mitigation encourage governments to develop evidence-based national strategies in collaboration with cable owners to improve visibility over cable faults and vulnerabilities, while addressing data security and sharing. Knowledge exchange, coordinated through bodies such as the ICPC and regional cable protection committees, is seen as essential, alongside voluntary, standardised mechanisms for sharing anonymised information on cable delays, faults, and outages. The recommendations also stress the importance of robust legal frameworks, enforcement, and maritime coordination. States are urged to clarify jurisdiction, implement relevant UNCLOS and IHO obligations, and involve law enforcement in investigations, supported by real-time data sharing and clearer liability standards. Greater integration of cable protection into maritime training, vessel inspection, and nautical charting is encouraged. Finally, resilience should be reinforced through regular stress tests and audits, stronger physical and digital security, better planning for decommissioning and redundancy—particularly for SIDS—and higher upfront investment to reduce long-term outage risks.

Why does it matter? With more than 99% of international data traffic carried by submarine cables and over 200 faults reported annually, the summit underscored the shared responsibility of governments and industry to safeguard this critical infrastructure. The outcomes of Porto are expected to guide policy, operational practice, and investment decisions globally, reinforcing a resilient, open, and reliable foundation for the digital economy.

IN OTHER NEWS LAST WEEK

This week in AI governance

The UN. The UN Secretary-General has submitted to the General Assembly a list of 40 distinguished individuals for consideration to serve on the Independent International Scientific Panel on Artificial Intelligence. The Panel’s main task is ‘issuing evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue on AI Governance. The Panel will also provide updates on its work up to twice a year through an interactive dialogue between the plenary of the General Assembly and the Panel’s Co-Chairs.

The UN Children’s Fund (UNICEF) has called on governments to criminalise the creation, possession and distribution of AI-generated child sexual abuse content, warning of a sharp rise in sexually explicit deepfakes involving children and urging stronger safety-by-design practices and robust content moderation. A study cited by the agency found that at least 1.2 million children in 11 countries reported their images being manipulated into explicit AI deepfakes, with ‘nudification’ tools that strip or alter clothing posing heightened risks. UNICEF stressed that sexualised deepfakes of minors should be treated as child sexual abuse material under the law and urged digital platforms to prevent circulation rather than merely remove content after the fact.

China. A court in eastern China has set an early legal precedent by limiting developer liability for AI hallucinations, ruling that developers are not automatically responsible unless users can prove fault and demonstrable harm. Judges characterised AI services as service providers, requiring claimants to show both provider fault and actual injury from erroneous outputs, a framework intended to balance innovation incentives with user protection.

International experts. The second International AI Safety Report 2026 has been published. The report synthesises evidence on AI capabilities — such as improved reasoning and task performance — alongside emerging risks like deepfakes, cyber misuse and emotional reliance on AI companions, while noting uneven reliability and ongoing challenges in managing risks. It aims to equip policymakers with a science-based foundation for regulatory and governance decisions without prescribing specific policies.

The UK. Britain is partnering with Microsoft, academics, and tech experts to develop a deepfake detection system to combat harmful AI-generated content. The government’s framework will standardise how detection tools are evaluated against real-world threats such as impersonation and sexual exploitation, building on recent legislation criminalising the creation of non-consensual intimate synthetic imagery. Officials cited a dramatic increase in deepfakes shared online in recent years as motivation for the initiative.

Grok. The cybercrime unit of the Paris prosecutor’s office has raided the French offices of X as part of its expanded investigation into the platform. Musk and former CEO Linda Yaccarino have been summoned for voluntary interviews. X denied any wrongdoing and called the raid an ‘abusive act of law enforcement theatre’, while Musk described it as a ‘political attack’.

The UK Information Commissioner’s Office (ICO) opened a formal investigation into X and xAI over whether Grok’s processing of personal data complies with UK data protection law, namely core data protection principles—lawfulness, fairness, and transparency—and whether its design and deployment included sufficient built-in protections to stop the misuse of personal data for creating harmful or manipulated images.

Meanwhile, Indonesia has restored access to Grok after banning it in January, having received guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.


Moltbook: Is the AI singularity here? 

The rapid rise of Moltbook, a novel social platform designed specifically for AI agents to interact with one another, has ignited both excitement and scepticism.

Unlike traditional social media, where humans generate most content, Moltbook restricts posting and engagement to autonomous AI agents — human users can observe the activity but generally cannot post or comment themselves.

The platform quickly attracted attention due to its scale and rapid growth. Thousands of AI agents reportedly joined within days of its launch, creating a dynamic environment in which automated systems appeared to converse, debate, and even develop distinct communication patterns. The network relies on autonomous scheduling mechanisms that enable agents to post and interact without continuous human prompting.

The big question. Is Moltbook the AI singularity in action? According to new research by the security firm Wiz, the network is actually mostly humans running fleets of bots: about 17,000 people control 1.5 million registered agents, and the platform has no way to verify whether an account is truly an AI or just a scripted human. Moltbook, then, is a sandbox for automated interaction at scale, not a step toward the singularity.


Child safety online: The bans club grows

The momentum on banning children from accessing social media continues, as Austria, Greece, Poland, Slovenia and Spain weigh legislative moves and enforcement tools.

In Spain, Prime Minister Pedro Sánchez’s government has proposed legislation that would ban social media access for users under 16, framing the measure as a necessary child-protection tool against addiction, exploitation, and harmful content. Under the draft plan, platforms must deploy mandatory age-verification systems designed as enforceable barriers rather than symbolic safeguards—signalling a shift toward stronger regulatory enforcement rather than voluntary compliance by tech companies. Proposals also include legal accountability for technology executives over unlawful or hateful material that remains online.

Poland’s ruling coalition is currently drafting a law that would ban social media use for children under 15. Lawmakers aim to finalise the law by late February 2026 and potentially implement it by Christmas 2027. The government also plans to update its digital ID app, mObywatel, so that users can verify their age. 

Slovenia is preparing draft legislation to ban minors under 15 from accessing social media, a move initiated by the Education Ministry.

In Austria, the government is actively debating a prohibition on social media use for children under 14. State Secretary for Digital Affairs Alexander Pröll confirmed the policy is under discussion with the aim of bringing it into force by the start of the school year in September 2026. 

Greece is reportedly close to announcing a ban on social media use for children under 15. The Ministry of Digital Governance intends to rely on the Kids Wallet application, introduced last year, as a mechanism for enforcing the measure instead of developing a new control framework. 

These individual national efforts unfold against a backdrop of increasing international regulatory coordination. On 3 February 2026, the European Commission convened with Australia’s eSafety Commissioner and the UK’s Ofcom to share insights on age assurance measures—technical and policy approaches for verifying users’ ages and enforcing age‑appropriate restrictions online. The meeting followed a joint communication signed at the end of 2025, where the three regulators pledged ongoing collaboration to strengthen online safety for children, including exploring effective age‑assurance technologies, enforcement strategies, and the role of data and independent research in regulatory action.

Zooming out. These initiatives across multiple nations confirm that Australia’s social media ban was not an isolated policy experiment, but rather the beginning of a global bandwagon effect. This momentum is particularly striking given that Australia’s own ban is not yet widely deemed a success—its effectiveness and broader impacts are still being studied and debated. 

The developments come just as Australia’s eSafety report notes that tech giants—including Apple, Google, Meta, Microsoft, Discord, Snap, Skype and WhatsApp—have made only limited progress in combating online child sexual exploitation and abuse (CSEA) despite being legally required to report measures under Australia’s Online Safety Act.


TikTok’s addictive design violates DSA, preliminary investigation finds

The European Commission has preliminarily concluded that TikTok’s design violates the bloc’s Digital Services Act (DSA) due to features that the Commission considers addictive, such as infinite scroll, autoplay, push notifications, and its highly personalised recommender system.

According to the Commission, existing safeguards on TikTok—such as screen-time management and parental control tools—do not appear sufficient to mitigate the risks associated with these design choices.

At this stage, the Commission indicates that TikTok would need to modify the core design of its service. Possible measures include phasing out or limiting infinite scroll, introducing more effective screen-time breaks, including at night, and adjusting its recommender system to reduce addictive effects.

What’s next? TikTok can now review the Commission’s case file and respond to the preliminary findings while the European Board for Digital Services is consulted. If the Commission’s findings are confirmed, it may issue a non-compliance decision that could result in fines of up to 6% of the company’s global annual turnover.


Governments continue the push for digital sovereignty

Last week saw further developments pointing to digital sovereignty as the prevailing trend, carrying over from December 2025 into January and February 2026.

In Brussels, the European Commission has begun testing the open-source Matrix protocol as a possible alternative to proprietary messaging platforms for internal communication. Matrix’s federated architecture allows communications to be hosted on European infrastructure and governed under EU rules, aligning with broader efforts to build sovereign digital public services and reduce reliance on external platforms.

In France, the government has taken a hard line on control of satellite infrastructure, another cornerstone of digital sovereignty. Paris blocked the sale of ground-station assets owned by Eutelsat to an external investor, arguing that such infrastructure underpins both civilian and military space communications and must remain under domestic authority. French officials described these facilities as critical to strategic autonomy, in part because Eutelsat represents one of Europe’s few genuine competitors to US-led satellite constellations such as Starlink.

The big picture. As governments recalibrate their digital architectures, the balance between interoperability, security, and sovereign control will remain one of the defining tensions of 21st-century technology policy.


France declares a ‘year of resistance’ against Shein and other ultra-cheap online platforms

France is stepping up its pushback against ultra-low-cost online retailers, with Minister for Small and Medium Enterprises, Trade, Crafts, Tourism, and Purchasing Power Serge Papin declaring 2026 a ‘year of resistance’ to platforms such as Shein. The government argues that physical French shops face strict rules and liability, while global online marketplaces operate under looser standards, creating unfair competition.

Paris is now challenging a December court ruling that refused to suspend Shein’s French operations after inappropriate products were found on its marketplace. 

At the same time, the government is preparing legislation that would give authorities the power to suspend online platforms without first seeking judicial approval, a significant expansion of executive oversight in the digital economy.

Fiscal measures are also being brought into play. From 1 March 2026, France plans to impose a €2 tax on small parcels to target the flood of low-value direct-to-consumer imports. This will be followed by a broader EU-level levy of €3 per parcel in the summer, aimed at narrowing the price advantage enjoyed by overseas platforms.

Why does it matter? Taken together, these steps point to a shift from targeting individual companies to tightening the rules for digital marketplaces as a whole, with potential implications beyond France.


China proposes exit bans for cybercriminals and expansion of enforcement powers

The Chinese Ministry of Public Security has drafted a new law that would allow authorities to impose exit bans for up to 3 years on convicted cybercriminals, as well as individuals and entities that facilitate, support, or abet such activities. 

The proposal would also allow authorities to bar entry to anyone convicted of cybercrime, prosecute Chinese nationals abroad, and pursue foreign entities whose actions are seen as harming national interests.

The draft also seeks to curb the spread of fake news and content that disrupts public order or violates social norms, reflecting a broader push to regulate online information.

Why does it matter?  By imposing exit bans and targeting anyone connected to cybercrime—including service providers or foreign entities—the law could affect global businesses, cross-border collaborations, and the movement of tech professionals.



LAST WEEK IN GENEVA

11th Geneva Engage Awards

Diplo and the Geneva Internet Platform (GIP) organised the 11th edition of the Geneva Engage Awards, recognising the efforts of International Geneva actors in digital outreach and online engagement. 

The awards honoured organisations across three main categories: international organisations, NGOs, and permanent representations. Entries were assessed on social media engagement, web accessibility, and AI leadership, reinforcing Geneva’s role as a trusted source of reliable information amid rapid technological change.

In the International Organisations category, the United Nations Conference on Trade and Development (UNCTAD) won first place. Among non-governmental organisations, the International AIDS Society ranked first. In the Permanent Representations category, the Permanent Mission of the Republic of Indonesia to the United Nations Office and other international organisations in Geneva took first place.

The Web Accessibility Award went to the Permanent Mission of Canada, while the Geneva AI Leadership Award was presented to the International Telecommunication Union (ITU).

LOOKING AHEAD

On 10 February, Diplo, the Open Knowledge Foundation, and the Geneva Internet Platform will co-organise an online event, ‘Decoding the UN CSTD Working Group on Data Governance | Part 3’, which will review the progress and prospects of the UN Multi-Stakeholder Working Group on Data Governance. Discussions will cover the status of parallel working tracks, ongoing consultations for input, and expectations for the drafting of the group’s 2026 report.

The 2026 Munich Security Conference (MSC) will be held 13–15 February in Munich, Germany, bringing together officials, experts, and diplomats to discuss international security and foreign policy challenges, among them the security implications of technological advances. Ahead of the main event, the MSC Kick-off on 9 February in Berlin will introduce key topics and present the annual Munich Security Report.

The 39th African Union Summit will bring together Heads of State and Government of the African Union’s 55 member states in Addis Ababa to define continental priorities under Agenda 2063, Africa’s long-term development blueprint. While the official Summit theme for 2026 centres on water security and sustainable infrastructure, discussions will likely feature digital transformation and AI.



READING CORNER

The borderless dream of cyberspace is over. AI has instead intensified the relevance of geography through three dimensions: geopolitics (control of infrastructure and data), geoeconomics (the market power of tech giants), and geoemotions (societal trust and sentiment toward technology). Tech companies now act as unprecedented geopolitical players, centralising power while states push for sovereignty and regulation. Success in the digital age will depend on mastering this new reality, in which technology actively redefines territory, power, and human emotion.


Do we really need frontier AI for everyday work? We’re bombarded with news about the latest frontier AI models and their ever-expanding capabilities. But the real question is whether these advances matter for most of us, most of the time.

Digital Watch newsletter – Issue 106 – January 2026

December 2025 and January 2026 in retrospect

This month’s newsletter looks back on December 2025 and January 2026 and explores the forces shaping the digital landscape in 2026:

WSIS+20 review: A close look at the outcome document and high-level review meeting, and what it means for global digital cooperation.

Child safety online: Momentum on bans continues, while landmark US trials examine platform addiction and responsibility.

Digital sovereignty: Governments are reassessing data, infrastructure, and technology policies to limit foreign exposure and build domestic capacity.

Grok Shock: Regulatory scrutiny hits Grok, X’s AI tool, after reports of non-consensual sexualised and deepfake content.

Geneva Engage Awards: Highlights from the 11th edition, recognising excellence in digital outreach and engagement in International Geneva.

Annual AI and digital forecast: We highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.

Global digital governance

The USA has withdrawn from a wide range of international organisations, conventions and treaties it considers contrary to its interests, including dozens of UN bodies and non-UN entities. In the technology and digital governance space, it explicitly dropped two initiatives: the Freedom Online Coalition and the Global Forum on Cyber Expertise. The implications of withdrawing from UNCTAD and the UN Department of Economic and Social Affairs remain unclear, given their links to processes such as WSIS, follow-up to Agenda 2030, the Internet Governance Forum, and broader data-governance work.

Technologies

US President Trump signed a presidential proclamation imposing a 25% tariff on certain advanced computing and AI‑oriented chips, including high‑end products such as Nvidia’s H200 and AMD’s MI325X, under a national security review. Officials described the measure as a ‘phase one’ step aimed at strengthening domestic production and reducing dependence on foreign manufacturers, particularly those in Taiwan, while also capturing revenue from imports that do not contribute to US manufacturing capacity. The administration suggested that further actions could follow depending on how negotiations with trading partners and the industry evolve.

The USA and Taiwan announced a landmark semiconductor-focused trade agreement. Under the deal, tariffs on a broad range of Taiwanese exports will be reduced or eliminated, while Taiwanese semiconductor companies, including leading firms like TSMC, have committed to invest at least $250 billion in US chip manufacturing, AI, and energy projects, supported by an additional $250 billion in government-backed credit.

The protracted legal and political dispute over Nexperia, a Netherlands-based semiconductor manufacturer owned by China’s Wingtech Technology, also continues. The dispute erupted in autumn 2025, when Dutch authorities briefly seized control of Nexperia, citing national security concerns about potential technology transfers to China. Nexperia’s European management and Wingtech representatives are now squaring off in an Amsterdam court, which is deciding whether to launch a formal investigation into alleged mismanagement. The court is set to make a decision within four weeks.

Reports say Chinese scientists have built a prototype extreme ultraviolet (EUV) lithography machine, a technology long dominated by ASML, the Dutch firm that is the sole supplier of EUV systems and a major chokepoint in advanced chipmaking. EUV tools etch ultra-fine circuits onto silicon wafers and are essential for producing the cutting-edge chips used in AI, high-performance computing and modern weapons. The prototype is reportedly already generating EUV light but has not yet produced working chips, and the effort is said to include former ASML engineers who reverse-engineered key components.

Canada has launched Phase 1 of the Canadian Quantum Champions Program as part of a $334.3 million Budget 2025 investment. The programme provides up to $92 million in initial funding (up to $23 million each to Anyon Systems, Nord Quantique, Photonic and Xanadu) to advance fault-tolerant quantum computers and keep key capabilities in Canada, with progress assessed through a new National Research Council-led benchmarking platform.

The USA has reportedly paused implementation of its Tech Prosperity Deal with the UK, a pact agreed during President Trump’s September visit to London that aimed to deepen cooperation on frontier technologies such as AI and quantum and included planned investment commitments by major US tech firms. According to the Financial Times, the suspension reflects broader US frustration with UK positions on wider trade matters, with Washington seeking UK concessions on non-tariff barriers, especially regulatory standards for food and industrial goods, before moving the technology agreement forward.

At the 16th EU–India Summit in New Delhi, the EU and India moved into a new phase of cooperation by concluding a landmark Free Trade Agreement and launching a Security and Defence Partnership, signalling closer alignment amid global economic and geopolitical pressures. The trade deal aims to cut tariff and non-tariff barriers and strengthen supply chains, while the security track expands cooperation on areas such as maritime security, cyber and hybrid threats, counterterrorism, space and defence industrial collaboration.

South Korea and Italy have agreed to deepen their strategic partnership by expanding cooperation in high-technology fields, especially AI, semiconductors and space, with officials framing the effort as a way to boost long-term competitiveness through closer research collaboration, talent exchanges and joint development initiatives, even though specific programmes have not yet been detailed publicly.

Infrastructure

The EU adopted the Digital Networks Act, which aims to reduce fragmentation with limited spectrum harmonisation and an EU-wide numbering scheme for cross-border business services, while stopping short of a truly unified telecoms market. The main obstacle remains resistance from member states that want to retain control over spectrum management, especially for 4G, 5G and Wi-Fi, leaving the package as an incremental step rather than a structural overhaul despite long-running calls for deeper integration.

The Second International Submarine Cable Resilience Summit concluded with the Porto Declaration on Submarine Cable Resilience, which reaffirms the critical role of submarine telecommunications cables for global connectivity, economic development and digital inclusion. The declaration builds on the 2025 Abuja Declaration with further practical guidance and outlines non-binding recommendations to strengthen international cooperation and resilience — including streamlining permitting and repair, improving legal/regulatory frameworks, promoting geographic diversity and redundancy, adopting best practices for risk mitigation, enhancing cable protection planning, and boosting capacity-building and innovation — to support more reliable, inclusive global digital infrastructure. 

Cybersecurity

Roblox is under formal investigation in the Netherlands, where the Autoriteit Consument & Markt (ACM) will assess whether the platform is taking sufficient measures to protect children and teenagers who use the service. The probe will examine Roblox’s compliance with the European Union’s Digital Services Act (DSA), which obliges online services to implement appropriate and proportionate measures to ensure safety, privacy and security for underage users, and could take up to a year.

Meta, which was under intense scrutiny by regulators and civil society over chatbots that previously permitted provocative or exploitative conversations with minors, is pausing teenagers’ access to its AI characters globally while it redesigns the experience with enhanced safety and parental controls. The company said teens will be blocked from interacting with certain AI personas until a revised platform is ready, guided by principles akin to a PG-13 rating system to limit exposure to inappropriate content. 

ETSI has issued a new standard, EN 304 223, setting cybersecurity requirements for AI systems across their full lifecycle, addressing AI-specific threats like data poisoning and prompt injection, with additional guidance for generative-AI risks expected in a companion report.

The EU has proposed a new cybersecurity package to tighten supply-chain security, expand and speed up certification, streamline NIS2 compliance and reporting, and give ENISA stronger operational powers such as threat alerts, vulnerability management and ransomware support.

A group of international cybersecurity agencies has released new technical guidance addressing the security of operational technology (OT) used in industrial and critical infrastructure environments. The guidance, led by the UK’s National Cyber Security Centre (NCSC), provides recommendations for securely connecting industrial control systems, sensors, and other operational equipment that support essential services. According to the co-authoring agencies, industrial environments are being targeted by a range of actors, including cybercriminal groups and state-linked actors. 

The UK has launched a Software Security Ambassadors Scheme led by the Department for Science, Innovation and Technology and the National Cyber Security Centre, asking participating organisations to promote a new Software Security Code of Practice across their sectors and improve secure development and procurement to strengthen supply-chain resilience.

British and Chinese security officials have agreed to establish a new cyber dialogue forum to discuss cyberattacks and manage digital threats, aiming to create clearer communication channels, reduce the risk of miscalculation in cyberspace, and promote responsible state behaviour in digital security.

Economic 

EU ministers have urged faster progress toward the bloc’s 2030 digital targets, calling for stronger digital skills, wider tech adoption and simpler rules for SMEs and start-ups while keeping data protection and fundamental rights intact, alongside tougher, more consistent enforcement on online safety, illegal content, consumer protection and cyber resilience.

South Korea has approved legal changes to recognise tokenised securities and set rules for issuing and trading them within the regulated capital-market system, with implementation planned for January 2027 after a preparation period. The framework allows eligible issuers to create blockchain-based debt and equity products, while trading would run through licensed intermediaries under existing investor-protection rules.

Russia is keeping the ruble as the only legal payment method and continues to reject cryptocurrencies as money, but lawmakers are moving toward broader legal recognition of crypto as an asset, including a proposal to treat it as marital property in divorce cases, alongside limited, regulated use of crypto in foreign trade.

The UK plans to bring cryptoassets fully under its financial regulatory perimeter, with crypto firms regulated by the Financial Conduct Authority from 2027 under rules similar to those for traditional financial products, aiming to boost consumer protection, transparency and market confidence while supporting innovation and cracking down on illicit activity, alongside efforts to shape international standards through cooperation such as a UK–US taskforce.

Hong Kong’s proposed expansion of crypto licensing is drawing industry concern that stricter thresholds could force more firms into full licensing, raise compliance costs and lack a clear transition period, potentially disrupting businesses while applications are processed.

Poland’s effort to introduce a comprehensive crypto law has reached an impasse after the Sejm failed to overturn President Karol Nawrocki’s veto of a bill meant to align national rules with the EU’s MiCA framework. The government argued the reform was essential for consumer protection and national security, but the president rejected it as overly burdensome and a threat to economic freedom. In the aftermath, Prime Minister Donald Tusk has pledged to renew efforts to pass crypto legislation.

In Norway, Norges Bank has concluded that current conditions do not justify launching a central bank digital currency, arguing that Norway’s payment system remains secure, efficient and well-tailored to users. The bank maintains that the Norwegian krone continues to function reliably, supported by strong contingency arrangements and stable operational performance. Governor Ida Wolden Bache said the assessment reflects timing rather than a rejection of CBDCs, noting the bank could introduce one if conditions change or if new risks emerge in the domestic payments landscape.

The EU member states will introduce a new customs duty on low-value e-commerce imports, starting 1 July 2026. Under the agreement, a customs duty of €3 per item will be applied to parcels valued at less than €150 imported directly into the EU from third countries. The temporary duty is intended to bridge the gap until the EU Customs Data Hub, a broader customs reform initiative designed to provide comprehensive import data and enhance enforcement capacity, becomes fully operational in 2028. 

Development 

UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods. Recent data indicate that more than 300 shutdowns have occurred across 54 countries over the past two years, with 2024 the most severe year since 2016. According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life. Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability. Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news. 

The OECD says generative AI is spreading quickly in schools, but results are mixed: general-purpose chatbots can improve the polish of students’ work without boosting exam performance, and may weaken deep learning when they replace ‘productive struggle.’ It argues that education-specific AI tools designed around learning science, used as tutors or collaborative assistants, are more likely to improve outcomes and should be prioritised and rigorously evaluated. 

The UK will trial AI tutoring tools in secondary schools, aiming for nationwide availability by the end of 2027, with teachers involved in co-design and testing and safety, reliability and National Curriculum alignment treated as core requirements. The initiative is intended to provide personalised support and help narrow attainment gaps, with up to 450,000 disadvantaged pupils in years 9–11 potentially benefiting each year, while positioning the tools as a supplement to, not a replacement for, classroom teaching.

Sociocultural

The EU has designated WhatsApp a Very Large Online Platform under the Digital Services Act (DSA) after it reported more than 51 million monthly users in the bloc, triggering tougher obligations to assess and mitigate systemic risks such as disinformation and to strengthen protections for minors and vulnerable users. The European Commission will directly supervise compliance, with potential fines of up to 6% of global annual turnover, and WhatsApp has until mid-May to align its policies and risk assessments with the DSA requirements.

The EU has issued its first DSA non-compliance decision against X, fining the platform €120 million for misleading paid ‘blue check’ verification, weak ad transparency due to an incomplete advertising repository, and barriers that restrict access to public data for researchers. X must propose fixes for the checkmark system within 60 working days and submit a broader plan on data access and advertising transparency within 90 days, or face further enforcement.

The EU has accepted binding commitments from TikTok under the DSA to make ads more transparent, including showing ads exactly as users see them, adding targeting and demographic details, updating its ad repository within 24 hours, and expanding tools and access for researchers and the public, with implementation deadlines ranging from two to twelve months.

WhatsApp is facing intensifying pressure from Russian authorities, who argue the service does not comply with national rules on data storage and cooperation with law enforcement, while Meta has no legal presence in Russia and rejects requests for user information. Officials are promoting state-backed alternatives, such as the national messaging app Max, and critics warn that targeting WhatsApp would curb private communications rather than address genuine security threats. 

National AI regulation

Vietnam. Vietnam’s National Assembly has passed the country’s first comprehensive AI law, establishing a risk management regime, sandbox testing, a National AI Development Fund and startup voucher schemes to balance strict safeguards with innovation incentives. The 35‑article legislation — largely inspired by EU and other models — centralises AI oversight under the government and will take effect in March 2026.

The UK. More than 100 UK parliamentarians from across parties are pushing the government to adopt binding rules on advanced AI systems, saying current frameworks lag behind rapid technological progress and pose risks to national and global security. The cross‑party campaign, backed by former ministers and figures from the tech community, seeks mandatory testing standards, independent oversight and stronger international cooperation — challenging the government’s preference for existing, largely voluntary regulation.

The USA. US President Donald Trump has signed an executive order targeting what the administration views as the most onerous and excessive state-level AI laws. The White House argues that a growing patchwork of state rules threatens to stymie innovation, burden developers, and weaken US competitiveness.

To address this, the order creates an AI Litigation Task Force to challenge state laws deemed obstructive to the policy set out in the executive order – to sustain and enhance US global AI dominance through a minimally burdensome national policy framework for AI. The Commerce Department is directed to review all state AI regulations within 90 days to identify those that impose undue burdens. The order also uses federal funding as leverage, allowing certain grants to be conditioned on states aligning with national AI policy.

National plans and investments

Russia. Russia is advancing a nationwide plan to expand the use of generative AI across public administration and key sectors, with a proposed central headquarters to coordinate ministries and agencies. Officials see increased deployment of domestic generative systems as a way to strengthen sovereignty, boost efficiency and drive regional economic development, prioritising locally developed AI over foreign platforms.

Qatar. Qatar has launched Qai, a new national AI company designed to accelerate the country’s digital transformation and global AI footprint. Qai will provide high‑performance computing and scalable AI infrastructure, working with research institutions, policymakers and partners worldwide to promote the adoption of advanced technologies that support sustainable development and economic diversification.

The EU. The EU has advanced an ambitious gigafactory programme to strengthen AI leadership by scaling up infrastructure and computational capacity across member states. This involves expanding a network of AI ‘factories’ and antennas that provide high‑performance computing and technical expertise to startups, SMEs and researchers, integrating innovation support alongside regulatory frameworks like the AI Act. 

Australia. Australia has sealed a USD 4.6 billion deal for a new AI hub in western Sydney, partnering with private sector actors to build an AI campus with extensive GPU-based infrastructure capable of supporting advanced workloads. The investment forms part of broader national efforts to establish domestic AI innovation and computational capacity. 

Morocco. Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation. The plan aims to add an estimated $10 billion to GDP by 2030, create tens of thousands of AI-related jobs, and integrate AI across industry and government, including modernising public services and strengthening technological autonomy. Central to the strategy is the launch of the JAZARI ROOT Institute, the core hub of a planned network of AI centres of excellence that will bridge research, regional innovation, and practical deployment; additional initiatives include sovereign data infrastructure and partnerships with global AI firms. Authorities also emphasise building national skills and trust in AI, with governance structures and legislative proposals expected to accompany implementation.

Capacity building initiatives 

The USA. The Trump administration has unveiled the US Tech Force, a new initiative aimed at rebuilding the US government’s technical capacity after deep workforce reductions, with a particular focus on AI and digital transformation.

According to the official TechForce.gov website, participants will work on high-impact federal missions, addressing large-scale civic and national challenges. The programme positions itself as a bridge between Silicon Valley and Washington, encouraging experienced technologists to bring industry practices into government environments. It reflects growing concern within the administration that federal agencies lack the in-house expertise needed to deploy and oversee advanced technologies, especially as AI becomes central to public administration, defence, and service delivery.

Taiwan. Taiwan’s government has set an ambitious goal to train 500,000 AI professionals by 2040 as part of its long-term AI development strategy, backed by a NT$100 billion (approximately US$3.2 billion) venture fund and a national computing centre initiative. President Lai Ching-te announced the target at a 2026 AI Talent Forum in Taipei, highlighting the need for broad AI literacy across disciplines to sustain national competitiveness, support innovation ecosystems, and accelerate digital transformation in small and medium-sized enterprises. The government is introducing training programmes for students and public servants and emphasising cooperation between industry, academia, and government to develop a versatile AI talent pipeline. 

El Salvador. El Salvador has partnered with xAI to launch the world’s first nationwide AI-powered education programme, deploying the Grok model across more than 5,000 public schools to deliver personalised, curriculum-aligned tutoring to over one million students over the next two years. The initiative will support teachers with adaptive AI tools while co-developing methodologies, datasets and governance frameworks for responsible AI use in classrooms, aiming to close learning gaps and modernise the education system. President Nayib Bukele described the move as a leap forward in national digital transformation. 

UN AI Resource Hub. The UN AI Resource Hub has gone live as a centralised platform aggregating AI activities and expertise across the UN system. Presented by the UN Inter-Agency Working Group on AI, the platform was developed jointly by UNDP, UNESCO and ITU. It enables stakeholders to explore initiatives by agency, country and SDG, and supports inter-agency collaboration, capacity development for UN member states, and greater coherence in AI governance and terminology.

Partnerships 

Canada‑EU. Canada and the EU have expanded their digital partnership on AI and security, committing to deepen cooperation on trusted AI systems, data governance and shared digital infrastructure. This includes memoranda aimed at advancing interoperability, harmonising standards and fostering joint work on trustworthy digital services. 

The International Network for Advanced AI Measurement, Evaluation and Science. The global network has strengthened cooperation on benchmarking AI governance progress, focusing on metrics that help compare national policies, identify gaps and support evidence‑based decision‑making in AI regulation internationally. This network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the USA. The UK has assumed the role of Network Coordinator.

BRICS. Talks on AI governance within the BRICS bloc have deepened as member states seek to harmonise national approaches and develop shared principles for ethical, inclusive and cooperative AI deployment. It is, however, still premature to talk about the creation of an AI-BRICS, Russia’s BRICS sherpa, Deputy Foreign Minister Sergey Ryabkov, stated.

ASEAN-Japan. Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to deepen cooperation on AI, formalised in a joint statement at a digital ministers’ meeting in Hanoi. The partnership focuses on joint development of AI models, aligning related legislation, and strengthening research ties to enhance regional technological capabilities and competitiveness amid global competition from the United States and China.

Pax Silica. A diverse group of nations has announced Pax Silica, a new partnership aimed at building secure, resilient, and innovation-driven supply chains for the technologies that underpin the AI era. These include critical minerals and energy inputs, advanced manufacturing, semiconductors, AI infrastructure and logistics. Analysts warn that diverging views may emerge if Washington pushes for tougher measures targeting China, potentially increasing political and economic pressure on participating nations. However, the USA, which leads the platform, clarified that the platform will focus on strengthening supply chains among its members rather than penalising non-members, like China.

Content governance

Italy. Italy’s antitrust authority has formally closed its investigation into the Chinese AI developer DeepSeek after the company agreed to binding commitments to make risks from AI hallucinations — false or misleading outputs — clearer and more accessible to users. Regulators stated that DeepSeek will enhance transparency, providing clearer warnings and disclosures tailored to Italian users, thereby aligning its chatbot deployment with local regulatory requirements. If these conditions aren’t met, enforcement action under Italian law could follow.

Spain. Spain’s cabinet has approved draft legislation aimed at curbing AI-generated deepfakes and tightening consent rules on the use of images and voices. The bill sets 16 as the minimum age for consenting to image use and prohibits the reuse of online images or AI-generated likenesses without explicit permission — including for commercial purposes — while allowing clear, labelled satire or creative works involving public figures. The reform reinforces child protection measures and mirrors broader EU plans to criminalise non-consensual sexual deepfakes by 2027. Prosecutors are also examining whether certain AI-generated content could qualify as child pornography under Spanish law. 

Malta. The Maltese government is preparing tougher legal measures to tackle abuses of deepfake technology. Current legislation is under review with proposals to introduce penalties for the misuse of AI in harassment, blackmail, and bullying cases, building on existing cyberbullying and cyberstalking laws by extending similar protections to harms stemming from AI-generated content. Officials emphasise that while AI adoption is a national priority, robust safeguards against abusive use are essential to protect individuals and digital rights.

China. China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots. Draft rules require platforms to intervene when users of emotionally interactive AI services express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content. The regulator defines such services as AI systems that simulate human personality traits and emotional interaction.

Note to readers: We’ve reported separately on the January 2026 backlash against Grok, following claims it was used to generate non-consensual sexualised and deepfake images.

Security

The UN. The UN has raised the alarm about AI-driven threats to child safety, highlighting how AI systems can accelerate the creation, distribution, and impact of harmful content, including sexual exploitation, abuse, and manipulation of children online. As smart toys, chatbots, and recommendation engines increasingly shape youth digital experiences, the absence of adequate safeguards risks exposing a generation to novel forms of exploitation and harm. 

International experts. The second International AI Safety Report finds that AI capabilities continue to advance rapidly—with leading systems outperforming human experts in areas like mathematics, science and some autonomous software tasks—though performance remains uneven and global adoption, while swift, varies widely. Rising harms include deepfakes, misuse in fraud and non‑consensual content, and systemic impacts on autonomy and trust. Technical safeguards and voluntary safety frameworks have improved but remain incomplete, and effective multi‑layered risk management is still lacking.

The EU and the USA. The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring. The principles are relevant to pharmaceutical developers, marketing authorisation applicants, and holders, and will form the basis for future AI guidance in different jurisdictions.

The WSIS+20 review, conducted 20 years after the World Summit on the Information Society, concluded in December 2025 in New York with the adoption of a high-level outcome document by the UN General Assembly. The review assesses progress toward building a people-centred, inclusive, and development-oriented information society, highlights areas needing further effort, and outlines measures to strengthen international cooperation.


A major institutional decision was to make the Internet Governance Forum (IGF) a permanent UN body. The outcome also includes steps to strengthen its functioning: broadening participation—especially from developing countries and underrepresented communities—enhancing intersessional work, supporting national and regional initiatives, and adopting innovative and transparent collaboration methods. The IGF Secretariat is to be strengthened, sustainable funding ensured, and annual reporting on progress provided to UN bodies, including the Commission on Science and Technology for Development (CSTD).

Negotiations addressed the creation of a governmental segment at the IGF. While some member states supported this as a way to foster more dialogue among governments, others were concerned it could compromise the IGF’s multistakeholder nature. The final compromise encourages dialogue among governments with the participation of all stakeholders.

Beyond the IGF, the outcome confirms the continuation of the annual WSIS Forum and calls for the United Nations Group on the Information Society (UNGIS) to increase efficiency, agility, and membership. 

WSIS action line facilitators are tasked with creating targeted implementation roadmaps linking WSIS action lines to SDGs and Global Digital Compact (GDC) commitments. 

UNGIS is requested to prepare a joint implementation roadmap to strengthen coherence between WSIS and the Global Digital Compact, to be presented to CSTD in 2026. The Secretary-General will submit biennial reports on WSIS implementation, and the next high-level review is scheduled for 2035.

The document places closing digital divides at the core of the WSIS+20 agenda. It addresses multiple aspects of digital exclusion, including accessibility, affordability, quality of connectivity, inclusion of vulnerable groups, multilingualism, cultural diversity, and connecting all schools to the internet. It stresses that connectivity alone is insufficient, highlighting the importance of skills development, enabling policy environments, and human rights protection.

The outcome also emphasises open, fair, and non-discriminatory digital development, including predictable and transparent policies, legal frameworks, and technology transfer to developing countries. Environmental sustainability is highlighted, with commitments to leverage digital technologies while addressing energy use, e-waste, critical minerals, and international standards for sustainable digital products.

Human rights and ethical considerations are reaffirmed as fundamental. The document stresses that rights online mirror those offline, calls for safeguards against adverse impacts of digital technologies, and urges the private sector to respect human rights throughout the technology lifecycle. It addresses online harms such as violence, hate speech, misinformation, cyberbullying, and child sexual exploitation, while promoting media freedom, privacy, and freedom of expression.

Capacity development and financing are recognised as essential. The document highlights the need to strengthen digital skills, technical expertise, and institutional capacities, including in AI. It invites the International Telecommunication Union to establish an internal task force to assess gaps and challenges in financial mechanisms for digital development and to report recommendations to CSTD by 2027. It also calls on the UN Inter-Agency Working Group on AI to map existing capacity-building initiatives, identify gaps, and develop programs such as an AI capacity-building fellowship for government officials and research programmes.

Finally, the outcome underscores the importance of monitoring and measurement, requesting a systematic review of existing ICT indicators and methodologies by the Partnership on Measuring ICT for Development, in cooperation with action line facilitators and the UN Statistical Commission. The Partnership is tasked with reporting to CSTD in 2027. Overall, the CSTD, ECOSOC, and the General Assembly maintain a central role in WSIS follow-up and review.

The final text reflects a broad compromise and was adopted without a vote, though some member states and groups raised concerns about certain provisions.

The momentum of social media bans for children

Australia made history in December as it began enforcing its landmark under-16 social media restrictions — the first nationwide rules of their kind anywhere in the world. 

The measure — a new Social Media Minimum Age (SMMA) requirement under the Online Safety Act — obliges major platforms to take ‘reasonable steps’ to delete underage accounts and block new sign-ups, backed by fines of up to AUD 49.5 million and monthly compliance reporting.

As enforcement began, eSafety Commissioner Julie Inman Grant urged families — particularly those in regional and rural Australia — to consult the newly published guidance, which explains how the age limit works, why it has been raised from 13 to 16, and how to support young people during the transition.

The new framework should be viewed not as a ban but as a delay, Grant emphasised, raising the minimum account age from 13 to 16 to create ‘a reprieve from the powerful and persuasive design features built to keep them hooked and often enabling harmful content and conduct.’


It has been almost two months since the ban—we continue to use the word ‘ban’ in the text, as it has already become part of the vernacular—took effect. Here’s what has happened in the meantime.

Teen reactions. The shift was abrupt for young Australians. Teenagers posted farewell messages on the eve of the deadline, grieving the loss of communities, creative spaces, and peer networks that had anchored their daily lives. Youth advocates noted that those who rely on platforms for education, support networks, LGBTQ+ community spaces, or creative expression would be disproportionately affected.

Workarounds and their limits. Predictably, workarounds emerged immediately. Some teens managed to fool facial-age estimation tools by distorting their expressions; others turned to VPNs to mask their locations. However, experts note that free VPNs frequently monetise user data or contain spyware, raising new risks. And the effort may be in vain: platforms retain an extensive set of signals they can use to infer a user’s true location and age, including IP addresses, GPS data, device identifiers, time-zone settings, mobile numbers, app-store information, and behavioural patterns. Age-related markers, such as linguistic analysis, school-hour activity patterns, face or voice age estimation, youth-focused interactions, and the age of an account, give companies additional tools to identify underage users.

Privacy and effectiveness concerns. Critics argue that the policy raises serious privacy concerns, since age-verification systems, whether based on government ID uploads, biometrics, or AI-based assessments, force people to hand over sensitive data that could be misused, breached, or normalised as part of everyday surveillance. Others point out that facial-age technology is least reliable for teenagers — the very group it is now supposed to regulate. Some question whether the fines are even meaningful, given that Meta earns roughly AUD 50 million in under two hours.

The limited scope of the rules has drawn further scrutiny. Dating sites, gaming platforms, and AI chatbots remain outside the ban, even though some chatbots have been linked to harmful interactions with minors. Educators and child-rights advocates argue that digital literacy and resilience would better safeguard young people than removing access outright. Many teens say they will create fake profiles or share joint accounts with parents, raising doubts about long-term effectiveness.

Industry pushback. Most major platforms have publicly criticised the law’s development and substance. They maintain that the law will be extremely difficult to enforce, even as they prepare to comply to avoid fines. Industry group NetChoice has described the measure as ‘blanket censorship,’ while Meta and Snap argue that real enforcement power lies with Apple and Google through app-store age controls rather than at the platform level.

Reddit has filed a High Court challenge against the ban, naming the Commonwealth of Australia and Communications Minister Anika Wells as defendants and arguing that the law has been wrongly applied to Reddit. The platform maintains that it serves adults and lacks the traditional social media features the government has taken issue with.

Government position. The government, expecting a turbulent rollout, frames the measure as consistent with other age-based restrictions (such as no drinking alcohol under 18) and a response to sustained public concern about online harms. Officials argue that Australia is playing a pioneering role in youth online safety — a stance drawing significant international attention. 

International interest. A growing club of countries is seeking to ban minors from major platforms.

All of these jurisdictions are now looking closely at Australia, watching for proof of concept — or failure.


The early results are in. On the enforcement metric — platform compliance and account takedowns — the law is functioning, with social media companies deactivating or restricting roughly 4.7 million accounts understood to belong to Australian users under 16 within the first month of enforcement. 

However, on the behavioural outcome metric — whether under-16s are actually offline, safer, or replacing harmful patterns with healthier ones — the evidence remains inconclusive and evolving. The Australian government has also said it’s too early to declare the ban an unequivocal success.

The unresolved question. Young people retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts. But the question lingers: if access to large parts of the digital ecosystem remains open, what is the practical value of fencing off only one segment of the internet?

Platforms on trial(s)

In January 2026, a landmark trial opened in Los Angeles involving K.G.M., a 19-year-old plaintiff, and major social media companies. The case, first filed in July 2023, accuses platforms including Meta (Instagram and Facebook), YouTube (Google/Alphabet), Snapchat, and TikTok of intentionally designing their apps to be addictive, with serious consequences for young users’ mental health. 

According to the complaint, features such as infinite scroll, algorithmic recommendations, and constant notifications contributed to compulsive use, exposure to harmful content, depression, anxiety, and even suicidal thoughts. The lawsuit also alleges that the platforms made it difficult for K.G.M. to avoid contact with strangers and predatory adults, despite parental restrictions. K.G.M.’s legal team argues that the companies knowingly optimised their platforms to maximise engagement at the expense of user well-being.

As the trial began, Snap Inc. and TikTok had already reached confidential settlements, leaving Meta and YouTube as the remaining defendants. Meta and YouTube deny intentionally causing harm, highlighting existing safety features, parental controls, and content filters. 

Separately, in federal court, Meta, Snap, YouTube, and TikTok asked a judge to dismiss school districts’ lawsuits that seek damages for costs tied to student mental health challenges.

In both cases, the companies argue that Section 230 of the US Communications Decency Act shields them from liability, while the plaintiffs counter that their claims focus on allegedly addictive design features rather than user-generated content.

Legal experts and advocates are watching closely, noting that the outcomes could set a precedent for thousands of related lawsuits and ultimately influence corporate design practices.

Governments have long debated controlling data, infrastructure, and technology within their borders. But there is a renewed sense of urgency, as geopolitical tensions are driving a stronger push to identify dependencies, build domestic capacity, and limit exposure to foreign technologies.

At the European level, France is pushing to make digital sovereignty measurable and actionable. Paris has proposed the creation of an EU Digital Sovereignty Observatory to map member states’ reliance on non-European technologies, from cloud services and AI systems to cybersecurity tools. Paired with a digital resilience index, the initiative aims to give policymakers a clearer picture of strategic dependencies and a stronger basis for coordinated action on procurement, investment, and industrial policy. 

The bloc has, however, already started working on digital sovereignty. In January alone, the European Parliament adopted a resolution on European technological sovereignty and digital infrastructure. In the text, the Parliament calls for the development of a robust European digital public infrastructure (DPI) base layer grounded in open standards, interoperability, privacy- and security-by-design, and competition-friendly governance. Priority areas include semiconductors and AI chips, high-performance and quantum computing, cloud and edge infrastructure, AI gigafactories, data centres, digital identity and payments systems, and public-interest data platforms.

Also newly adopted is the Digital Networks Act (DNA), which frames sovereignty as the EU’s capacity to control, secure, and scale its critical connectivity infrastructure rather than as isolation from global markets. High-quality, secure digital networks are presented as a foundational enabler of Europe’s digital transformation, competitiveness, and security, with fragmentation of national markets seen as undermining the Union’s ability to act collectively and reduce dependencies. Satellite connectivity is explicitly identified as a core pillar of EU strategic autonomy, essential for broadband access in remote areas and for security, crisis management, defence, and other critical applications, prompting a shift toward harmonised, EU-level authorisation to strengthen resilience and avoid reliance on foreign providers.

The DNA complements the EU’s support for IRIS2, a planned multi-orbit constellation of 290 satellites designed to provide encrypted communications for citizens, governments, and public agencies while reducing EU reliance on external providers. In mid-January, the EU Commissioner for Defence and Space, Andrius Kubilius, announced that the network’s timeline has been advanced: IRIS2 aims to begin initial government communication services by 2029, a year earlier than originally planned, as Europe is ‘quite dependent on American services,’ per Kubilius.

The Commission is also ready to put its money where its goals are: it has announced €307.3 million in funding to boost capabilities in AI, robotics, photonics, and other emerging technologies. A significant portion of this investment is tied to initiatives such as the Open Internet Stack, which seek to deepen European digital autonomy. The funding, open to businesses, academia, and public bodies, reflects a broader push to translate policy ambitions into concrete technological capacity.

There’s more in the pipeline. The Cloud and AI Development Act, a revision of the Chips Act, and the Quantum Act, all due in 2026, will further bolster EU digital sovereignty, enhancing strategic autonomy across the digital stack.

Furthermore, the European Commission is preparing a strategy to commercialise European open-source software, alongside the Cloud and AI Development Act, to strengthen developer communities, support adoption across various sectors, and ensure market competitiveness. By providing stable support and fostering collaboration between government and industry, the strategy seeks to create an economically sustainable open-source ecosystem.

In Burkina Faso, the focus is on reducing reliance on external providers while consolidating national authority over core digital systems. The government has launched a Digital Infrastructure Supervision Centre to centralise oversight of national networks and strengthen cybersecurity monitoring. New mini data centres for public administration are being rolled out to ensure that sensitive state data is stored and managed domestically. 


Sovereignty debates are also translating into decisions to limit, replace, or restructure the use of digital services provided by foreign entities. France has announced plans to phase out US-based collaboration platforms such as Microsoft Teams, Zoom, Google Meet, and Webex from public administration, replacing them with a domestically developed alternative, ‘Visio’. 

The Dutch data protection authority has urged the government to act swiftly to protect the country’s digital sovereignty, after DigiD, the national digital identity system, appeared set for acquisition by a US company. The watchdog argued that the Netherlands relies heavily on a small group of non-European cloud and IT providers and stressed that public bodies lack clear exit strategies if foreign ownership suddenly shifts.

In the USA, the TikTok controversy can also be viewed through a sovereignty lens: rather than banning TikTok, authorities have pushed the platform to restructure its operations for the US market. A new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm is meant to be trained only on US user data to meet American regulatory requirements.

In more security-driven contexts, the concept is sharper still. As Europe remains heavily dependent on both Chinese telecom vendors and US cloud and satellite providers, the European Commission proposed binding cybersecurity rules targeting critical ICT supply chains.

Russia’s Security Council has recently labelled services such as Starlink and Gmail as national security threats, describing them as tools for ‘destructive information and technical influence.’ These assessments are expected to feed into Russia’s information security doctrine, reinforcing the treatment of digital services provided by foreign companies not as neutral infrastructure but as potential vectors of geopolitical risk.

The big picture. The common thread is clear: Digital sovereignty is now a key consideration for governments worldwide. The approaches may differ, but the goal remains largely the same – to ensure that a nation’s digital future is shaped by its own priorities and rules. But true independence is hampered by deeply embedded global supply chains, prohibitive costs of building parallel systems, and the risk of stifling innovation through isolation. While the strategic push for sovereignty is clear, untangling from interdependent tech ecosystems will require years of investment, migration, and adaptation. The current initiatives mark the beginning of a protracted and challenging transition.

In January 2026, a regulatory firestorm engulfed Grok, the AI tool built into Elon Musk’s X platform, as reports surfaced that Grok was being used to produce non-consensual sexualised and deepfake images, including depictions of individuals undressed or in compromising scenarios without their consent. 

Musk has suggested that users who enter such prompts should be held liable, a move criticised as shifting responsibility away from the platform.


The backlash was swift and severe. The UK’s Ofcom launched an investigation under the Online Safety Act to determine whether X has complied with its duties to protect people in the UK from content that is illegal in the country. UK Prime Minister Keir Starmer condemned the ‘disgusting’ outputs. The EU declared that such content, especially involving children, had ‘no place in Europe.’ Southeast Asia acted decisively: Malaysia and Indonesia blocked Grok entirely, citing obscene image generation, and the Philippines swiftly followed suit on child-protection grounds.

Under pressure, X announced tightened controls on Grok’s image-editing capabilities. The platform said it had introduced technological safeguards to block the generation and editing of sexualised images of real people in jurisdictions where such content is illegal. 

However, regulatory authorities signalled that this step, while positive, would not halt oversight. 

In the UK, Ofcom emphasised that its formal investigation into X’s handling of Grok and the emergence of deepfake imagery will continue, even as it welcomes the platform’s policy changes. The regulator stressed its commitment to understanding how the platform facilitated the proliferation of such content and to ensuring that corrective measures are implemented.

The UK Information Commissioner’s Office (ICO) opened a formal investigation into X and xAI over whether Grok’s processing of personal data complies with UK data protection law, namely core data protection principles—lawfulness, fairness, and transparency—and whether its design and deployment included sufficient built-in protections to stop the misuse of personal data for creating harmful or manipulated images.

Canada’s Privacy Commissioner widened an existing investigation into X Corp. and opened a parallel probe into xAI to assess whether the companies obtained valid consent for the collection, use, and disclosure of personal information to create AI-generated deepfakes, including sexually explicit content.

In France, the Paris prosecutor’s office confirmed that it will widen an ongoing criminal investigation into X to include complicity in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity, and manipulation of an automated data processing system. The cybercrime unit of the Paris prosecutor has raided the French office of X as part of this expanded investigation. Musk and former CEO Linda Yaccarino have been summoned for voluntary interviews. X denied any wrongdoing and called the raid an ‘abusive act of law enforcement theatre’, while Musk described it as a ‘political attack.’

The European Commission has opened a formal investigation into X under the bloc’s Digital Services Act (DSA). The probe focuses on whether the company met its legal obligations to mitigate risks from AI-generated sexualised deepfakes and other harmful imagery produced by Grok — especially those that may involve minors or non-consensual content.

Brazil’s Federal Public Prosecutor’s Office, the National Data Protection Authority, and the National Consumer Secretariat have issued coordinated recommendations to X to stop Grok from producing and disseminating sexualised deepfakes, warning that Brazil’s civil liability rules could apply if harmful outputs continue and that the platform should be disabled until safeguards are in place.

In India, the Ministry of Electronics and Information Technology (MeitY) demanded the removal of obscene and unlawful content generated by the AI tool and required a report on corrective actions within 72 hours. The ministry also ordered the company to review Grok’s technical and governance framework. The deadline has since passed, and neither the ministry nor the company has made any updates public.

Regulatory authorities in South Korea are examining whether Grok has violated personal data protection and safety standards by enabling the production of explicit deepfakes, and whether the matter falls within their legal remit.

Indonesia, Malaysia and the Philippines, however, have restored access after the platform introduced additional safety controls aimed at curbing the generation and editing of problematic content. 

The red lines. The reaction was so immediate and widespread precisely because it struck two rather universal nerves: the profound violation of privacy through non-consensual sexual imagery—a moral line nearly everyone agrees cannot be crossed—combined with the unique perils of AI, a trigger for acute governmental sensitivity. 

The big picture. Grok’s ongoing scrutiny shows that not all regulators are satisfied with the safeguards implemented so far, highlighting that remedies may need to be tailored to different jurisdictions. 

Diplo and the Geneva Internet Platform (GIP) organised the 11th edition of the Geneva Engage Awards, recognising the efforts of International Geneva actors in digital outreach and online engagement. 

This year’s theme, ‘Back to Basics: The Future of Websites in the AI Era,’ highlighted how users increasingly rely on AI assistants and AI-generated summaries that may not cite primary or the most relevant sources.

The opening segment of the event set the context for a shifting digital environment, exploring the transition from a search-based web to an answer-driven web and its implications for public engagement. It also offered a brief, transparent look at the logic behind this year’s award rankings, unpacking the metrics and mathematical models used to assess digital presence and accessibility. This led to the awards presentation, which recognised Geneva-based actors for their online engagement and influence.

The awards honoured organisations across three main categories: international organisations, NGOs, and permanent representations. The awards assessed efforts in social media engagement, web accessibility, and AI leadership, reinforcing Geneva’s role as a trusted source of reliable information as technology changes rapidly.

In the International Organisations category, the United Nations Conference on Trade and Development (UNCTAD) won first place. The United Nations Office at Geneva (UNOG) and the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA) were named runners-up for their strong digital presence and outreach.

Among non-governmental organisations, the International AIDS Society ranked first. It was followed by the Aga Khan Development Network (AKDN) and the International Union for Conservation of Nature (IUCN), both recognised as runners-up for their effective digital engagement.

In the Permanent Representations category, the Permanent Mission of the Republic of Indonesia to the United Nations Office and other international organisations in Geneva took first place. The Permanent Mission of the Republic of Rwanda and the Permanent Mission of France were named runners-up.

The Web Accessibility Award went to the Permanent Mission of Canada, while the Geneva AI Leadership Award was presented to the International Telecommunication Union (ITU).


After the ceremony, the focus shifted from recognition to exchange at a networking cocktail and a ‘knowledge bazaar.’ Participants circulated through interactive stations that translated abstract digital and AI concepts into tangible experiences. These included a guided walkthrough of what happens technically when a question is posed to an AI system; an exploration of the data and network analysis underpinning the Geneva Engage Awards, including a large-scale mapping of interconnections between Geneva-related websites; and discussions on the role of curated, human-enriched knowledge in feeding AI systems, with practical insights into how organisations can preserve and scale institutional expertise.

Other stations highlighted hands-on approaches to AI capacity-building through apprenticeships that emphasise learning by building AI agents, as well as the use of AI for post-event reporting. Together, these sessions showed how AI can transform fleeting discussions into structured, multilingual, and lasting knowledge. 

As we enter the new year, we bring you our annual outlook on AI and digital developments, featuring insights from our Executive Director. Drawing on our coverage of digital policy over the past year on the Digital Watch Observatory, as well as our professional experience and expertise, we highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.


Technologies. AI is becoming a commodity, affecting everyone—from countries competing for AI sovereignty to individual citizens. Equally important is the rise of bottom-up AI: in 2026, language models, from small to large, will be able to run on corporate or institutional servers. Open-source development, a major milestone in 2025, is expected to become a central focus of future geostrategic competition.

Geostrategy. The good news is that, despite all geopolitical pressure, we still have an integrated global internet. However, digital fragmentation is accelerating, with continued filtering of social media and other services, and other developments clustering around three major hubs: the United States, China, and potentially the EU. Geoeconomics is becoming a critical dimension of this shift, particularly given the global footprint of major technology companies. And any fragmentation, including trade fragmentation and taxation fragmentation, will inevitably affect them. Equally important is the role of ‘geo-emotions’: the growing disconnect between public sentiment and industry enthusiasm. While companies remain largely optimistic about AI, public scepticism is increasing, and this divergence may carry significant political implications.

Governance. The core governance dilemma remains whether national representatives—parliamentarians domestically and diplomats internationally—are truly able to protect citizens’ digital interests related to data, knowledge, and cybersecurity. While there are moments of productive discussion and well-run events, substantive progress remains limited. One positive note is that inclusive governance, at least in principle, continues through multistakeholder participation, though it raises its own unresolved questions.

Security. The adoption of the Hanoi Cybercrime Convention at the end of the year is a positive development, and substantive discussions at the UN continue despite ongoing criticism of the institution. While it remains unclear whether these processes are making us more secure, they are expanding the governance toolbox. At the same time, attention should extend beyond traditional concerns—such as cyberwarfare, terrorism, and crime—to emerging risks associated with the interconnection of AI systems through APIs. These points of integration create new interdependencies and potential backdoors for cyberattacks.

Human rights. Human rights are increasingly under strain, with recent policy shifts by technology companies and growing transatlantic tensions between the EU and the United States highlighting a changing landscape. While debates continue to focus heavily on bias and ethics, deeper human rights concerns—such as the rights to knowledge, education, dignity, meaningful work, and the freedom to remain human rather than optimised—receive far less attention. As AI reshapes society, the human rights community must urgently revisit its priorities, grounding them in the protection of life, dignity, and human potential.

Economy. The traditional three-pillar framework comprising security, development, and human rights is shifting toward economic and security concerns, with human rights being increasingly sidelined. Technological and economic issues, from access to rare earths to AI models, are now treated as strategic security matters. This trend is expected to accelerate in 2026, making the digital economy a central component of national security. Greater attention should be paid to taxation, the stability of the global trade system, and how potential fragmentation or disruption of global trade could impact the tech sector.

Standards. The lesson from social media is clear: without interoperable standards, users get locked into single platforms. The same risk exists for AI. To avoid repeating these mistakes, developing interoperable AI standards is critical. Ideally, individuals and companies should build their own AI, but where that isn’t feasible, at a minimum, platforms should be interoperable, allowing seamless movement across providers such as OpenAI, Anthropic, or DeepSeek. This approach can foster innovation, competition, and user choice in the emerging AI-dominated ecosystem.

Content. The key issue for content in 2026 is the tension between governments and US tech, particularly regarding compliance with EU laws. At the core, countries have the right to set rules for content within their territories, reflecting their interests, and citizens expect their governments to enforce them. While media debates often focus on misuse or censorship, the fundamental question remains: can a country regulate content on its own soil? The answer is yes, and adapting to these rules will be a major source of tension going forward.

Development. Countries that are currently behind in AI aren’t necessarily losing. Success in AI is less about owning large models or investing heavily in hardware, and more about preserving and cultivating local knowledge. Small countries should invest in education, skills, and open-source platforms to retain and grow knowledge locally. Paradoxically, a slower entry into AI could be an advantage, allowing countries to focus on what truly matters: people, skills, and effective governance.

Environment. Concerns about AI’s impact on the environment and water resources persist. It is worth asking whether massive AI farms are truly necessary. Small AI systems could serve as extensions of these processes or as support for training and education, reducing the need for energy- and water-intensive platforms. At a minimum, AI development should prioritise sustainability and efficiency, mitigating the risk of large-scale digital waste while still enabling practical benefits.

Weekly #247 From bytes to borders: The quest for digital sovereignty


23-30 January 2026


HIGHLIGHT OF THE WEEK

From bytes to borders: The quest for digital sovereignty

Governments have long debated controlling data, infrastructure, and technology within their borders. But there is a renewed sense of urgency, as geopolitical tensions are driving a stronger push to identify dependencies, build domestic capacity, and limit exposure to foreign technologies.

At the European level, France is pushing to make digital sovereignty measurable and actionable. Paris has proposed the creation of an EU Digital Sovereignty Observatory to map member states’ reliance on non-European technologies, from cloud services and AI systems to cybersecurity tools. Paired with a digital resilience index, the initiative aims to give policymakers a clearer picture of strategic dependencies and a stronger basis for coordinated action on procurement, investment, and industrial policy. 

In Burkina Faso, the focus is on reducing reliance on external providers while consolidating national authority over core digital systems. The government has launched a Digital Infrastructure Supervision Centre to centralise oversight of national networks and strengthen cybersecurity monitoring. New mini data centres for public administration are being rolled out to ensure that sensitive state data is stored and managed domestically. 

Sovereignty debates are also translating into decisions to limit, replace, or restructure the use of digital services provided by foreign entities. France has announced plans to phase out US-based collaboration platforms such as Microsoft Teams, Zoom, Google Meet, and Webex from public administration, replacing them with a domestically developed alternative, ‘Visio’. 

The EU has advanced its timeline for the IRIS2 satellite network, according to the EU Commissioner for Defence and Space, Andrius Kubilius. A planned multi-orbit constellation of 290 satellites, IRIS2 aims to begin initial government communication services by 2029, a year earlier than originally planned. The network is designed to provide encrypted communications for citizens, governments and public agencies. It also aims to reduce reliance on external providers, as Europe is ‘quite dependent on American services,’ per Kubilius.

In the USA, the TikTok controversy can also be seen through a sovereignty lens: rather than banning TikTok, authorities have pushed the platform to restructure its operations for the US market. A new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm is meant to be trained only on US user data to meet American regulatory requirements.

In more security-driven contexts, the concept is sharper still. Russia’s Security Council has recently labelled services such as Starlink and Gmail as national security threats, describing them as tools for ‘destructive information and technical influence.’ These assessments are expected to feed into Russia’s information security doctrine, reinforcing the treatment of digital services provided by foreign companies not as neutral infrastructure but as potential vectors of geopolitical risk.


The big picture. The common thread is clear: Digital sovereignty is now a key consideration for governments worldwide. The approaches may differ, but the goal remains largely the same – to ensure that a nation’s digital future is shaped by its own priorities and rules. But true independence is hampered by deeply embedded global supply chains, prohibitive costs of building parallel systems, and the risk of stifling innovation through isolation. While the strategic push for sovereignty is clear, untangling from interdependent tech ecosystems will require years of investment, migration, and adaptation. The current initiatives mark the beginning of a protracted and challenging transition.

IN OTHER NEWS THIS WEEK

This week in AI governance

China. China is planning to launch space-based AI data centres over the next five years. State aerospace contractor CASC has committed to building gigawatt-class orbital computing hubs that integrate cloud, edge and terminal capabilities, enabling in-orbit processing of Earth-generated data. The news comes on the heels of Elon Musk’s announcement at WEF 2026 that SpaceX plans to launch solar-powered AI data centre satellites within the next two to three years.

The UN. The UN has raised the alarm about AI-driven threats to child safety, highlighting how AI systems can accelerate the creation, distribution, and impact of harmful content, including sexual exploitation, abuse, and manipulation of children online. As smart toys, chatbots, and recommendation engines increasingly shape youth digital experiences, the absence of adequate safeguards risks exposing a generation to novel forms of exploitation and harm.  


Child safety online: Bans, trials, and investigations

The momentum behind restricting children’s access to social media continues. France’s National Assembly has voted substantially in favour of a bill that would require platforms to block under‑15s and enforce age‑verification measures. The bill now goes to the Senate for approval, with targeted implementation before the next school year.

In India, the state governments of Goa and Andhra Pradesh are exploring similar restrictions, considering proposals to bar social media use for children under 16 amid rising concern about online safety and youth well‑being. Previously, in December, the Madras High Court urged India’s federal government to consider an Australia-style ban.

In a first for social media platforms, a landmark trial in Los Angeles sees Meta (Instagram and Facebook), YouTube (Google/Alphabet), Snapchat, and TikTok accused of intentionally designing their apps to be addictive, with serious consequences for young users’ mental health. As the trial began, Snap Inc. and TikTok had already reached confidential settlements, leaving Meta and YouTube as the remaining defendants in front of a jury.

Separately, in federal court, Meta, Snap, YouTube and TikTok asked a judge to dismiss school districts’ lawsuits that seek damages for costs tied to student mental health challenges.

In both cases, the companies are arguing that Section 230 of US law shields them from liability, while the plaintiffs counter that their claims focus on allegedly addictive design features rather than user-generated content. 

Legal experts and advocates are watching closely, noting that the outcomes could set a precedent for thousands of related lawsuits and ultimately influence corporate design practices.

Roblox is under formal investigation in the Netherlands, as the Autoriteit Consument & Markt (ACM) has opened a formal investigation to assess whether Roblox is taking sufficient measures to protect children and teenagers who use the service. The probe will examine Roblox’s compliance with the European Union’s Digital Services Act (DSA), which obliges online services to implement appropriate and proportionate measures to ensure safety, privacy and security for underage users, and could take up to a year.

Regulatory scrutiny can also bear fruit: Meta, which was under intense scrutiny by regulators and civil society over chatbots that previously permitted provocative or exploitative conversations with minors, is pausing teenagers’ access to its AI characters globally while it redesigns the experience with enhanced safety and parental controls. The company said teens will be blocked from interacting with certain AI personas until a revised platform is ready, guided by principles akin to a PG-13 rating system to limit exposure to inappropriate content. 

Bottom line. The pressure on platforms is mounting, and there is no indication that it will let up.


The Grok deepfakes aftershocks

The fallout from Grok’s misuse to produce non-consensual sexualised and deepfake images continues.

The European Commission has opened a formal investigation into X under the bloc’s Digital Services Act (DSA). The probe focuses on whether the company met its legal obligations to mitigate risks from AI-generated sexualised deepfakes and other harmful imagery produced by Grok — especially those that may involve minors or non-consensual content. 

Regulatory authorities in South Korea are examining whether Grok has violated personal data protection and safety standards by enabling the production of explicit deepfakes, and whether the matter falls within their legal remit.

However, Malaysian authorities, who temporarily blocked access to Grok in early January, have restored access after the platform introduced additional safety controls aimed at curbing the generation and editing of problematic content. 

Why does it matter? Grok’s ongoing scrutiny shows that not all regulators are satisfied with the safeguards implemented so far, highlighting that remedies may need to be tailored to different jurisdictions.



LOOKING AHEAD

11th Geneva Engage Awards

Diplo and the Geneva Internet Platform (GIP) are organising the 11th edition of the Geneva Engage Awards, recognising the efforts of International Geneva actors in digital outreach and online engagement. 

This year’s theme, ‘Back to Basics: The Future of Websites in the AI Era,’ highlights a shift in which users increasingly rely on AI assistants and AI-generated summaries that may not cite primary or the most relevant sources.

The awards honour organisations across three main categories: international organisations, NGOs, and permanent representations. They assess efforts in social media engagement, web accessibility, and AI leadership, reinforcing Geneva’s role as a trusted source of reliable information as technology changes rapidly.

Tech attaché briefing: The future of the Internet Governance Forum (IGF)

The Geneva Internet Platform (GIP) is organising a briefing for tech attachés, which will look at the role and evolution of the IGF over the past 20 years and discuss ways to implement the requests of the General Assembly. The event will begin with a briefing and exchange among diplomats, followed by an open dialogue with the IGF Secretariat. The event is invitation-only.



READING CORNER

As AI content floods the web, how do we know what’s real? Explore the case for a “Human-Certified” label and why authentic human thought is becoming our most valuable digital asset.


Geneva’s AI footprint. Modern AI platforms are trained on vast amounts of online information, including content from websites, blogs, and publications.

Weekly #246 WEF 2026 in Davos: Digital governance discussions shift from principles to ‘infrastructure politics’


16-23 January 2026


HIGHLIGHT OF THE WEEK

WEF 2026 in Davos: Digital governance discussions shift from principles to ‘infrastructure politics’

One of this week’s biggest highlights was the World Economic Forum’s annual meeting in Davos (19–23 January 2026), held under the banner ‘A Spirit of Dialogue.’ But while the headline was dialogue, the subtext was control: who gets to build, run, and police the digital systems the world now treats as essential infrastructure?

Across AI-heavy sessions, the talk has moved beyond hype and into a more complex question: what legitimises large-scale AI rollouts when they draw on scarce resources and concentrate power? Microsoft CEO Satya Nadella argued that this legitimacy is fragile, warning that the public could withdraw its ‘social licence’ for AI’s energy use unless the benefits are clear and widely felt, delivering tangible gains in areas like health and education.


On the corporate side, many business leaders in Davos made the same point: moving from a small AI trial to a tool that runs safely across an entire company is proving much harder than expected. The biggest barriers are often cleaning and connecting data, finding and training the right people, and changing internal workflows so AI outputs are checked, approved, and acted on in a controlled way.

Meanwhile, ‘sovereignty’ surfaced as an engineering and legal puzzle: where can data and compute physically sit, and under whose rules? In the session ‘Digital Embassies for Sovereign AI’, participants argued for a standardised framework, likened to a ‘Vienna Convention’, that would allow countries to use overseas data centre capacity while still asserting control over sensitive datasets and access conditions.

What is a ‘digital embassy’?

In a recent blog on diplomacy.edu, Jovan Kurbalija, Diplo’s executive director, analyses the term ‘digital embassy’ and argues that it is widely misused. He explains that initiatives often labelled this way, such as state-run, sovereign data-backup facilities hosted abroad (e.g. Estonia’s arrangement in Luxembourg), do not function like embassies, which represent states and conduct diplomacy, but rather as resilience infrastructure designed to preserve critical data, continuity of government, and ‘national memory’ in crises. Read more

The debates in Davos also exposed a widening fault line in AI policy. Some leaders called for lighter, iterative rules that can evolve ‘at the speed of code’, while others defended risk-based guardrails and market-wide harmonisation to prevent fragmentation.

In another session, ‘Is Europe’s Tech Sovereignty Feasible?’, participants argued that a single framework beats ’27 different’ national regimes, even if the compliance debate remains politically charged.

There were also some discussions on the governance of digital finance. Debates on tokenisation and new payment rails underscored a familiar trade-off: efficiency and innovation versus sovereignty, consumer protection, and systemic risk.

Online harms provided a sharp reminder of what’s at stake when governance fails. In a session focused on fraud, panellists described scam ecosystems that blend online crime with coercion and trafficking, summed up in a stark line: cyber fraud is ‘no longer just about stolen money… it’s about stolen lives.’

Taken together, WEF 2026 provided a roadmap of where the pressure is building: from lofty AI principles toward practical control over infrastructure, accountability, and cross-border rules. The prevailing outcome was a recognition that trust in AI will hinge on demonstrating real-world benefits, integrating human responsibility and oversight into business processes, and resolving sovereignty questions about where data and compute reside. At the same time, the meeting underscored a growing risk of regulatory and geopolitical fragmentation, and a parallel push to strengthen cooperative mechanisms, from harmonised frameworks to multistakeholder forums, to keep security, rights, and resilience from falling behind the speed of deployment.

IN OTHER NEWS THIS WEEK

This week in AI governance

EU. Policymakers are calling for faster AI deployment across the bloc, especially among SMEs and scale-ups, backing the European Commission’s ‘Apply AI Strategy’ and an ‘AI-first’ mindset for business and public services. The European Economic and Social Committee argues the EU’s edge should be ‘trustworthy’ and human-centric AI, but warns that slow implementation, fragmented national approaches, and limited private investment are holding the EU back. Proposed fixes include easier access to funding, lighter administrative burdens, stronger regional ecosystems, investment in skills and procurement, and support for frontier AI to reduce dependence on non-EU models.

USA-California. California Attorney General Rob Bonta has sent a cease and desist letter to Elon Musk’s xAI, ordering it to stop creating and sharing non-consensual sexual deepfakes, following a spike in explicit AI-generated images circulating on X. State officials say Grok enabled the manipulation of images of women and children without consent, potentially violating state decency laws and a newer deepfake-pornography ban. Regulators point to research suggesting Grok users were sharing more non-consensual sexual imagery than users elsewhere. xAI has introduced partial restrictions, though authorities say the real-world impact remains uncertain as investigations continue.

South Korea. New US tariffs on advanced AI-oriented chips are prompting South Korea’s semiconductor industry to assess supply-chain risks and potential trade fallout, with the measure widely interpreted as an attempt to constrain the re-export of AI accelerators to China. The tariff is set at 25% for certain advanced chips imported into the US and then re-exported. It could affect high-end processors that rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix. However, officials argue that much of South Korea’s memory shipments to the US are destined for domestic data centres and may be exempt. Seoul has launched consultations with industry and US counterparts to clarify exposure and ensure that Korean firms receive treatment comparable to that of competitors in Taiwan, Japan, and the EU.

EU. The European Commission has signalled it may escalate action over concerns that Grok-related ‘nudification’ content is spreading on X, with EU officials stressing that non-consensual sexualised imagery, especially involving minors, is unacceptable. The EU tech chief, Henna Virkkunen, told MEPs that existing EU digital rules provide tools to respond, with enforcement under the Digital Services Act and child-protection priorities. While a formal investigation has not yet been launched, the Commission is examining potential DSA breaches and has reportedly ordered X to retain internal information related to Grok until the end of 2026.

UK. The UK government has appointed two ‘AI Champions’ from industry, Harriet Rees (Starling Bank) and Dr Rohit Dhawan (Lloyds Banking Group), to support safe and effective AI adoption across financial services. The move reflects how mainstream AI already is in the sector (around three-quarters of UK financial firms reportedly use it), alongside official estimates of large potential productivity gains by 2030. The Champions’ remit includes accelerating ‘trusted’ adoption, removing barriers to scale, protecting consumers, and supporting financial stability, linking innovation goals to the sector’s risk-management and supervisory expectations.


Jeff Bezos to enter satellite broadband race

Blue Origin, founded by Jeff Bezos, has announced plans to launch a global satellite internet network called TeraWave in the US. The project aims to deploy more than 5,400 satellites to deliver high-speed data services.

In the US, TeraWave will target data centres, businesses and government users rather than households. Blue Origin says the system could reach speeds of up to 6 terabits per second, exceeding the speeds of current commercial satellite services.

The announcement positions the US company as a direct rival to Starlink, SpaceX’s satellite internet service. Starlink already operates thousands of satellites and focuses heavily on consumer internet access across the US and beyond.

Blue Origin plans to begin launching TeraWave satellites from the US by the end of 2027. The announcement adds to the intensifying competition in satellite communications as demand for global connectivity continues to grow.

Why it matters: At WEF 2026, ‘infrastructure politics’ was shorthand for the power struggle over who builds and governs essential digital systems, and Blue Origin’s TeraWave plan underscores that satellite internet is increasingly treated as strategic infrastructure rather than just a commercial connectivity service.


Child online safety stays on the global agenda as the UK considers an under-16 social media ban

Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures. The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.

Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.

Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.

Zooming out: Australia’s under-16 social media ban is quickly becoming a reference point in a wider global shift, as more governments weigh age-based restrictions and tougher platform duties, signalling that youth online safety is moving from voluntary safeguards toward hard law.


European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament. The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus. MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the European Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

Why it matters: The EU’s push to require payment for journalistic content used in model training is part of a widening global trend, from licensing deals to proposed ‘training-use’ compensation rules, as governments look to rebalance the economics of AI and protect the sustainability of independent newsrooms.


UNESCO raises alarm over government use of internet shutdowns

UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods. Recent data indicate that more than 300 shutdowns have occurred across 54 countries over the past two years, with 2024 the most severe year since 2016.

According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life. Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability.

Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news. Instead of improving public order, shutdowns fracture information flows and contribute to the spread of unverified or harmful content, increasing confusion and mistrust among affected populations.

UNESCO continues to call on governments to adopt policies that strengthen connectivity and digital access rather than imposing barriers. The organisation argues that maintaining open and reliable internet access during crises remains central to protecting democratic rights and safeguarding the integrity of information ecosystems.

Why it matters: As internet shutdowns spread worldwide, especially around protests and elections, they are becoming a default ‘crisis tool’ for states, with mounting costs for rights, public trust, and access to verified information, and growing calls for stronger international accountability.


LOOKING AHEAD

International Submarine Cable Resilience Summit 2026

The International Submarine Cable Resilience Summit 2026 will take place in Porto, Portugal (2–3 February 2026), bringing together governments, regulators, industry, investors, cable operators/experts, and international organisations to strengthen cooperation on protecting the submarine telecom cables that underpin global connectivity.

More info on our dig.watch EVENTS page



READING CORNER

The term ‘digital embassy’ is a misleading description for initiatives like Estonia’s sovereign data backup located in Luxembourg. True embassies represent and negotiate, while these facilities serve as resilience infrastructure. Read more


Headlines predict mass AI job loss, but the data tells a nuanced story. Discover why research from the AI Index, OECD, and ILO suggests public fear is outpacing observed reality. Read more


Greenland-related tensions could trigger EU retaliation, pushing US tech to lobby for calmer transatlantic relations to protect EU revenue, cloud/AI growth, and data-flow stability. Read more


OpenAI’s ChatGPT Go launch highlights growing pressure to monetise AI without ads, as investor expectations reshape sustainable business models. Read more

Weekly #245 The Grok shock: How AI deepfakes triggered reactions worldwide


9-16 January 2026


HIGHLIGHT OF THE WEEK

The Grok shock: How AI deepfakes triggered reactions worldwide

In the last week, a regulatory firestorm engulfed Grok, the AI tool built into Elon Musk’s X platform, as reports surfaced that Grok was being used to produce non-consensual sexualised and deepfake images, including depictions of individuals undressed or in compromising scenarios without their consent.

The backlash was swift and severe. The UK’s Ofcom launched an investigation under the Online Safety Act to determine whether X has complied with its duties to protect people in the UK from content that is illegal in the country. UK Prime Minister Keir Starmer condemned the ‘disgusting’ outputs. The EU declared that such content, especially involving children, had ‘no place in Europe.’ Southeast Asia acted decisively: Malaysia and Indonesia blocked Grok entirely, citing obscene image generation, and the Philippines swiftly followed suit on child-protection grounds.

Under pressure, X announced tightened controls on Grok’s image-editing capabilities. The platform said it had introduced technological safeguards to block the generation and editing of sexualised images of real people in jurisdictions where such content is illegal. 

However, regulatory authorities signalled that this step, while positive, would not halt oversight. 

In the UK, Ofcom stressed that its formal investigation into X’s handling of Grok and the emergence of deepfake imagery will continue, even as it welcomes the platform’s policy changes. The regulator emphasised its commitment to understanding how the platform facilitated the proliferation of such content and to ensuring that corrective measures are implemented.

Canada’s Privacy Commissioner widened an existing investigation into X Corp. and opened a parallel probe into xAI to assess whether the companies obtained valid consent for the collection, use, and disclosure of personal information to create AI-generated deepfakes, including sexually explicit content.

The red lines. The reaction was so immediate and widespread precisely because it struck two near-universal nerves: the profound violation of privacy through non-consensual sexual imagery, a moral line nearly everyone agrees cannot be crossed, combined with the unique perils of AI, a trigger for acute governmental sensitivity.

IN OTHER NEWS THIS WEEK

This week in AI governance

Spain. Spain’s cabinet has approved draft legislation aimed at curbing AI-generated deepfakes and tightening consent rules on the use of images and voices. The bill sets 16 as the minimum age for consenting to image use and prohibits the reuse of online images or AI-generated likenesses without explicit permission — including for commercial purposes — while allowing clear, labelled satire or creative works involving public figures. The reform reinforces child protection measures and mirrors broader EU plans to criminalise non-consensual sexual deepfakes by 2027. Prosecutors are also examining whether certain AI-generated content could qualify as child pornography under Spanish law. 

Malta. The Maltese government is preparing tougher legal measures to tackle abuses of deepfake technology. Current legislation is under review with proposals to introduce penalties for the misuse of AI in harassment, blackmail, and bullying cases, building on existing cyberbullying and cyberstalking laws by extending similar protections to harms stemming from AI-generated content. Officials emphasise that while AI adoption is a national priority, robust safeguards against abusive use are essential to protect individuals and digital rights.

Morocco. Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation. The plan aims to add an estimated $10 billion to GDP by 2030, create tens of thousands of AI-related jobs, and integrate AI across industry and government, including modernising public services and strengthening technological autonomy. Central to the strategy is the launch of the JAZARI ROOT Institute, the core hub of a planned network of AI centres of excellence that will bridge research, regional innovation, and practical deployment; additional initiatives include sovereign data infrastructure and partnerships with global AI firms. Authorities also emphasise building national skills and trust in AI, with governance structures and legislative proposals expected to accompany implementation.

Taiwan. Taiwan’s government has set an ambitious goal to train 500,000 AI professionals by 2040 as part of its long-term AI development strategy, backed by a NT$100 billion (approximately US$3.2 billion) venture fund and a national computing centre initiative. President Lai Ching-te announced the target at a 2026 AI Talent Forum in Taipei, highlighting the need for broad AI literacy across disciplines to sustain national competitiveness, support innovation ecosystems, and accelerate digital transformation in small and medium-sized enterprises. The government is introducing training programmes for students and public servants and emphasising cooperation between industry, academia, and government to develop a versatile AI talent pipeline. 

The EU and the USA. The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring. The principles are relevant to pharmaceutical developers, marketing authorisation applicants, and holders, and will form the basis for future AI guidance in different jurisdictions. 


Internet access under pressure in Iran and Uganda

As anti-government protests deepened across Iran in early January 2026, nationwide communications were brought to an almost complete standstill when authorities enacted a near-total shutdown of the internet. Amid these conditions, some Iranians attempted to bypass government controls by using Elon Musk’s Starlink satellite internet service, which remained partially accessible despite Tehran’s efforts to ban and disrupt it. Latest reports suggest that security forces in parts of Tehran have started door-to-door operations to remove satellite dishes.

Separately, Ugandan authorities ordered restrictions on internet access ahead of the country’s presidential election on January 15, 2026. The Uganda Communications Commission directed telecom providers to suspend public internet access on the eve of the vote, citing concerns about misinformation, electoral fraud and incitement to violence. Critics, including civil liberties groups and opposition figures, argued that the blackout was part of a broader pattern of repression.

Zooming out. In both contexts — Tehran and Kampala — the suspension of internet access illustrates how control over information flows is a potent instrument in high-stakes political contests.


Worldwide focus on child safety online continues

The momentum behind policies to restrict children’s access to social media has carried from 2025 into early 2026. In Australia, the first country to enact such a ban, social media companies reported having deactivated about 4.7 million accounts believed to belong to users under 16 within the first month of enforcement.

In France, policymakers are debating proposals that would restrict social media access for children under 15. The country’s health watchdog has highlighted research pointing to a range of documented negative effects of social media use on adolescent mental health, noting that online platforms amplify harmful pressures, cyberbullying and unrealistic beauty standards. 

In the UK, the Prime Minister has signalled that he is open to age‑based restrictions similar to Australia’s approach, as well as proposals to limit screen time or the design features of platforms used by children. Support for stricter regulation has emerged across party lines, and the issue is being debated within Parliament. 

The future of bans. The number of countries eyeing a ban is climbing, and the list is far from final. The world is watching Australia: its success or struggle will decide who follows next.


Chips and geopolitics

The global semiconductor industry entered 2026 amid developments that originated in late 2025.

On 14 January 2026, President Trump signed a presidential proclamation imposing a 25% tariff on certain advanced computing and AI‑oriented chips, including high‑end products such as Nvidia’s H200 and AMD’s MI325X, under a national security review.

Officials described the measure as a ‘phase one’ step aimed at strengthening domestic production and reducing dependence on foreign manufacturers, particularly those in Taiwan, while also capturing revenue from imports that do not contribute to US manufacturing capacity. The administration suggested that further actions could follow depending on how negotiations with trading partners and the industry evolve.

Just a day later, the USA and Taiwan announced a landmark semiconductor-focused trade agreement. Under the deal, tariffs on a broad range of Taiwanese exports will be reduced or eliminated, while Taiwanese semiconductor companies, including leading firms like TSMC, have committed to invest at least $250 billion in US chip manufacturing, AI, and energy projects, supported by an additional $250 billion in government-backed credit.

The protracted legal and political dispute over Nexperia, a Netherlands‑based semiconductor manufacturer owned by China’s Wingtech Technology, also continues. The dispute erupted in autumn 2025, when Dutch authorities briefly seized control of Nexperia, citing national security concerns and potential technology transfers to China. Nexperia’s European management and Wingtech representatives are now squaring off in an Amsterdam court, which is deciding whether to launch a formal investigation into alleged mismanagement. The court is set to rule within four weeks.

On the horizon. As countries jockey for control over critical semiconductors, alliances and rivalries are clashing, and 2026 promises even more high-stakes moves.


Western cyber agencies issue guidance on cyber risks to industrial sectors

A group of international cybersecurity agencies has released new technical guidance addressing the security of operational technology (OT) used in industrial and critical infrastructure environments.

The guidance, led by the UK’s National Cyber Security Centre (NCSC), provides recommendations for securely connecting industrial control systems, sensors, and other operational equipment that support essential services.

According to the co-authoring agencies, industrial environments are being targeted by a range of actors, including cybercriminal groups and state-linked actors. The guidance references a joint advisory issued in June 2023 on China-linked cyber activity, as well as a more recent advisory from the US Cybersecurity and Infrastructure Security Agency (CISA) that notes opportunistic activity by pro-Russia hacktivist groups affecting critical infrastructure globally.


LOOKING AHEAD

World Economic Forum Annual Meeting 2026

The World Economic Forum Annual Meeting 2026 will take place 19–23 January in Davos‑Klosters, Switzerland. Bringing together leaders from government, business, civil society, academia, and culture, the meeting provides a platform to discuss global economic, technological, and societal challenges. A central theme will be the technological transformation—from AI and quantum computing to next-generation biotech and energy systems—reshaping economies, work, and growth. 

Our team will be reporting from the event, covering key discussions and insights on developments shaping the global agenda. Be sure to bookmark the dedicated page.



READING CORNER
The USA’s exit from international organisations

On 7 January, the USA withdrew from a slate of international organisations and initiatives. Despite the wider retrenchment, the technology and digital governance ecosystem was largely spared, as most major tech-relevant bodies remained on the ‘white list.’ The bigger uncertainty lies with the US decision to step back from UNCTAD and UN DESA as this could still create knock-on effects for digital initiatives linked to these organisations, Dr Jovan Kurbalija writes.

Advancing the Swiss AI Trinity

In 2026, Switzerland will have to navigate a critical and highly uncertain AI transformation, Dr Jovan Kurbalija argues. With so much at stake and future AI trajectories unclear, the nation must build its resilience on a distinctly Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, all anchored in the enduring values and practices outlined here.


In her new article, Dr Anita Lamprecht examines how sci-fi narratives have been inverted in contemporary AI discourse, increasingly positioning technology beyond regulation and human governance. She introduces the concept of the ‘science fiction native’ (sci-fi native) to describe how immersion in speculative imaginaries over several generations is influencing legal and governance assumptions about control, responsibility, and social contracts.

Weekly #244 Looking ahead: Our annual AI and digital forecast


2-9 January 2026


HIGHLIGHT OF THE WEEK

Looking ahead: Our annual AI and digital forecast

As we enter the new year, we begin this issue of the Weekly newsletter with our annual outlook on AI and digital developments, featuring insights from our Executive Director. Drawing on our coverage of digital policy over the past year on the Digital Watch Observatory, as well as our professional experience and expertise, we highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.

Technologies. AI is becoming a commodity, affecting everyone—from countries competing for AI sovereignty to individual citizens. Equally important is the rise of bottom-up AI: in 2026, language models, from small to large, will be able to run on corporate or institutional servers. Open-source development, a major milestone in 2025, is expected to become a central focus of future geostrategic competition.

Geostrategy. The good news is that, despite all the geopolitical pressure, we still have an integrated global internet. However, digital fragmentation is accelerating, with continued filtering of social media and other services, and with other developments coalescing around three major hubs: the United States, China, and potentially the EU. Geoeconomics is becoming a critical dimension of this shift, particularly given the global footprint of major technology companies, and any fragmentation, including trade and taxation fragmentation, will inevitably affect them. Equally important is the role of ‘geo-emotions’: the growing disconnect between public sentiment and industry enthusiasm. While companies remain largely optimistic about AI, public scepticism is increasing, and this divergence may carry significant political implications.

Governance. The core governance dilemma remains whether national representatives—parliamentarians domestically and diplomats internationally—are truly able to protect citizens’ digital interests related to data, knowledge, and cybersecurity. While there are moments of productive discussion and well-run events, substantive progress remains limited. One positive note is that inclusive governance, at least in principle, continues through multistakeholder participation, though it raises its own unresolved questions.

Security. The adoption of the Hanoi Cybercrime Convention at the end of the year is a positive development, and substantive discussions at the UN continue despite ongoing criticism of the institution. While it remains unclear whether these processes are making us more secure, they are expanding the governance toolbox. At the same time, attention should extend beyond traditional concerns—such as cyberwarfare, terrorism, and crime—to emerging risks associated with interconnecting AI systems through APIs. These points of integration create new interdependencies and potential backdoors for cyberattacks.

Human rights. Human rights are increasingly under strain, with recent policy shifts by technology companies and growing transatlantic tensions between the EU and the United States highlighting a changing landscape. While debates continue to focus heavily on bias and ethics, deeper human rights concerns—such as the rights to knowledge, education, dignity, meaningful work, and the freedom to remain human rather than optimised—receive far less attention. As AI reshapes society, the human rights community must urgently revisit its priorities, grounding them in the protection of life, dignity, and human potential.

Economy. The traditional three-pillar framework comprising security, development, and human rights is shifting toward economic and security concerns, with human rights being increasingly sidelined. Technological and economic issues, from access to rare earths to AI models, are now treated as strategic security matters. This trend is expected to accelerate in 2026, making the digital economy a central component of national security. Greater attention should be paid to taxation, the stability of the global trade system, and how potential fragmentation or disruption of global trade could impact the tech sector.

Standards. The lesson from social media is clear: without interoperable standards, users get locked into single platforms. The same risk exists for AI. To avoid repeating these mistakes, developing interoperable AI standards is critical. Ideally, individuals and companies should build their own AI; where that isn’t feasible, platforms should at a minimum be interoperable, allowing seamless movement across providers such as OpenAI, Claude, or DeepSeek. This approach can foster innovation, competition, and user choice in the emerging AI-dominated ecosystem.

Content. The key issue for content in 2026 is the tension between governments and US tech companies, particularly regarding compliance with EU laws. At its core, countries have the right to set rules for content within their territories, reflecting their interests, and citizens expect their governments to enforce them. While media debates often focus on misuse or censorship, the fundamental question remains: can a country regulate content on its own soil? The answer is yes, and adapting to these rules will be a major source of tension going forward.

Development. Countries that are currently behind in AI aren’t necessarily losing. Success in AI is less about owning large models or investing heavily in hardware, and more about preserving and cultivating local knowledge. Small countries should invest in education, skills, and open-source platforms to retain and grow knowledge locally. Paradoxically, a slower entry into AI could be an advantage, allowing countries to focus on what truly matters: people, skills, and effective governance.

Environment. Concerns about AI’s impact on the environment and water resources persist. It is worth asking whether massive AI farms are truly necessary. Small AI systems could serve as extensions of these processes or as support for training and education, reducing the need for energy- and water-intensive platforms. At a minimum, AI development should prioritise sustainability and efficiency, mitigating the risk of large-scale digital waste while still enabling practical benefits.

IN OTHER NEWS THIS WEEK

This week in AI governance

Italy. Italy’s antitrust authority has formally closed its investigation into the Chinese AI developer DeepSeek after the company agreed to binding commitments to make risks from AI hallucinations — false or misleading outputs — clearer and more accessible to users. Regulators stated that DeepSeek will enhance transparency, providing clearer warnings and disclosures tailored to Italian users, thereby aligning its chatbot deployment with local regulatory requirements. If these conditions aren’t met, enforcement action under Italian law could follow.

UK. Britain has escalated pressure on Elon Musk’s social media platform X and its integrated AI chatbot Grok after reports that the tool was used to generate sexually explicit and non‑consensual deepfake images of women and minors. UK technology officials have publicly demanded that X act swiftly to prevent the spread of such content and ensure compliance with the Online Safety Act, which requires platforms to block unsolicited sexual imagery. Musk, however, has suggested that users who issue such prompts should be held liable, a stance criticised as shifting responsibility. Critics note that the platform should still embed stronger safeguards.


Brussels bets on open-source to boost tech sovereignty

The European Commission is preparing a strategy to commercialise European open-source software to strengthen digital sovereignty and reduce reliance on foreign technology providers. 

The upcoming strategy, expected alongside the Cloud and AI Development Act in early 2026, will prioritise community upscaling, industrial deployment, and market integration. Strengthening developer communities, supporting adoption across various sectors, and ensuring market competitiveness are key objectives. Governance reforms and improved supply chain security are also planned to address vulnerabilities in widely used open-source components, enhancing trust and reliability.

Financial sustainability will be a key focus, with public sector partnerships encouraged to ensure the long-term viability of projects. By providing stable support and fostering collaboration between government and industry, the strategy seeks to create an economically sustainable open-source ecosystem.

The big picture. Although EU funding has fostered innovation, commercial-scale success has often occurred outside the EU. By focusing on open-source solutions developed within the EU, Brussels aims to strengthen Europe’s technological autonomy, retain the benefits of domestic innovation, and foster a resilient and competitive digital landscape.


USA pulls out of several international bodies

In a new move, US President Trump issued a memorandum directing the US withdrawal from numerous international organisations, conventions, and treaties deemed contrary to the interests of the USA.

The list includes 35 non-UN entities (e.g. the GFCE and the Freedom Online Coalition) and 31 UN bodies (e.g. the Department of Economic and Social Affairs, the UN Conference on Trade and Development and the UN Framework Convention on Climate Change (UNFCCC)). 

Why does it matter? The order was not a surprise, following the Trump administration’s 2025 retreat from the Paris Agreement, the WHO, and other international organisations focusing on climate change, sustainable development, and identity issues. Two initiatives in the technology and digital governance ecosystem are explicitly dropped: the Freedom Online Coalition (FOC) and the Global Forum on Cyber Expertise (GFCE). There is also some uncertainty about the meaning and implications of the US ‘withdrawal’ from UNCTAD and UN DESA, given the roles these entities play in relation to initiatives such as the WSIS and Agenda 2030 follow-up processes, the Internet Governance Forum (IGF), and data governance.



LOOKING AHEAD

The year has just begun, and the digital policy calendar is still taking shape. To stay up to date with upcoming events and discussions shaping the digital landscape, we encourage you to follow our calendar of events at dig.watch/events.



READING CORNER

Weekly #243 What the WSIS+20 outcome means for global digital governance


12-19 December 2025


HIGHLIGHT OF THE WEEK

From review to recalibration: What the WSIS+20 outcome means for global digital governance

The WSIS+20 review, conducted 20 years after the World Summit on the Information Society, concluded in New York with the adoption of a high-level outcome document by the UN General Assembly. The review assesses progress toward building a people-centred, inclusive, and development-oriented information society, highlights areas needing further effort, and outlines measures to strengthen international cooperation.

A major institutional decision was to make the Internet Governance Forum (IGF) a permanent UN body. The outcome also includes steps to strengthen its functioning: broadening participation—especially from developing countries and underrepresented communities—enhancing intersessional work, supporting national and regional initiatives, and adopting innovative and transparent collaboration methods. The IGF Secretariat is to be strengthened, sustainable funding ensured, and annual reporting on progress provided to UN bodies, including the Commission on Science and Technology for Development (CSTD).

Negotiations addressed the creation of a governmental segment at the IGF. While some member states supported this as a way to foster more dialogue among governments, others were concerned it could compromise the IGF’s multistakeholder nature. The final compromise encourages dialogue among governments with the participation of all stakeholders.

Beyond the IGF, the outcome confirms the continuation of the annual WSIS Forum and calls for the United Nations Group on the Information Society (UNGIS) to increase efficiency, agility, and membership. 

WSIS action line facilitators are tasked with creating targeted implementation roadmaps linking WSIS action lines to Sustainable Development Goals (SDGs) and Global Digital Compact (GDC) commitments. 

UNGIS is requested to prepare a joint implementation roadmap to strengthen coherence between WSIS and the Global Digital Compact, to be presented to CSTD in 2026. The Secretary-General will submit biennial reports on WSIS implementation, and the next high-level review is scheduled for 2035.

The document places closing digital divides at the core of the WSIS+20 agenda. It addresses multiple aspects of digital exclusion, including accessibility, affordability, quality of connectivity, inclusion of vulnerable groups, multilingualism, cultural diversity, and connecting all schools to the internet. It stresses that connectivity alone is insufficient, highlighting the importance of skills development, enabling policy environments, and human rights protection.

The outcome also emphasises open, fair, and non-discriminatory digital development, including predictable and transparent policies, legal frameworks, and technology transfer to developing countries. Environmental sustainability is highlighted, with commitments to leverage digital technologies while addressing energy use, e-waste, critical minerals, and international standards for sustainable digital products.

Human rights and ethical considerations are reaffirmed as fundamental. The document stresses that rights online mirror those offline, calls for safeguards against adverse impacts of digital technologies, and urges the private sector to respect human rights throughout the technology lifecycle. It addresses online harms such as violence, hate speech, misinformation, cyberbullying, and child sexual exploitation, while promoting media freedom, privacy, and freedom of expression.

Capacity development and financing are recognised as essential. The document highlights the need to strengthen digital skills, technical expertise, and institutional capacities, including in AI. It invites the International Telecommunication Union to establish an internal task force to assess gaps and challenges in financial mechanisms for digital development and to report recommendations to CSTD by 2027. It also calls on the UN Inter-Agency Working Group on AI to map existing capacity-building initiatives, identify gaps, and develop programmes such as an AI capacity-building fellowship for government officials and research programmes.

Finally, the outcome underscores the importance of monitoring and measurement, requesting a systematic review of existing ICT indicators and methodologies by the Partnership on Measuring ICT for Development, in cooperation with action line facilitators and the UN Statistical Commission. The Partnership is tasked with reporting to CSTD in 2027. Overall, the CSTD, ECOSOC, and the General Assembly maintain a central role in WSIS follow-up and review.

The final text reflects a broad compromise and was adopted without a vote, though some member states and groups raised concerns about certain provisions.

IN OTHER NEWS LAST WEEK

This week in AI governance

El Salvador. El Salvador has partnered with xAI to launch the world’s first nationwide AI-powered education programme, deploying the Grok model across more than 5,000 public schools to deliver personalised, curriculum-aligned tutoring to over one million students over the next two years. The initiative will support teachers with adaptive AI tools while co-developing methodologies, datasets and governance frameworks for responsible AI use in classrooms, aiming to close learning gaps and modernise the education system. President Nayib Bukele described the move as a leap forward in national digital transformation. 

BRICS. Talks on AI governance within the BRICS bloc have deepened as member states seek to harmonise national approaches and develop shared principles for ethical, inclusive and cooperative AI deployment. However, it is still premature to talk about the creation of an ‘AI BRICS’, according to Deputy Foreign Minister Sergey Ryabkov, Russia’s BRICS sherpa.

Pax Silica. A diverse group of nations has announced Pax Silica, a new partnership aimed at building secure, resilient, and innovation-driven supply chains for the technologies that underpin the AI era, including critical minerals and energy inputs, advanced manufacturing, semiconductors, AI infrastructure and logistics. Analysts warn that diverging views may emerge if Washington pushes for tougher measures targeting China, potentially increasing political and economic pressure on participating nations. However, the USA, which leads the platform, clarified that it will focus on strengthening supply chains among its members rather than penalising non-members such as China.

UN AI Resource Hub. The UN AI Resource Hub has gone live as a centralised platform aggregating AI activities and expertise across the UN system. Presented by the UN Inter-Agency Working Group on AI, the platform was developed jointly by UNDP, UNESCO and ITU. It enables stakeholders to explore initiatives by agency, country and SDG. The hub supports inter-agency collaboration, capacity development for UN member states, and greater coherence in AI governance and terminology.


ByteDance inks US joint-venture deal to head off a TikTok ban

ByteDance has signed binding agreements to shift control of TikTok’s US operations to a new joint venture majority-owned (80.1%) by American and other non-Chinese investors, including Oracle, Silver Lake and Abu Dhabi-based MGX.

In exchange, ByteDance retains a 19.9% minority stake, in an effort to meet US national security demands and avoid a ban under the 2024 divest-or-ban law. 

The deal is slated to close on 22 January 2026, and US officials previously cited an implied valuation of approximately $14 billion, although the final terms have not been disclosed. 

TikTok CEO Shou Zi Chew told staff the new entity will independently oversee US data protection, algorithm and software security, and content moderation, with Oracle acting as the ‘trusted security partner’ hosting US user data in a US-based cloud and auditing compliance.


China edges closer to semiconductor independence with EUV prototype

Chinese scientists have reportedly built a prototype extreme ultraviolet (EUV) lithography machine, a technology long monopolised by ASML — the Dutch company that is the world’s sole supplier of EUV systems and a central chokepoint in global semiconductor manufacturing. 

EUV machines enable the production of the most advanced chips by etching ultra-fine circuits onto silicon wafers, making them indispensable for AI, advanced computing and modern weapons systems.

The Chinese prototype is already generating EUV light, though it has not yet produced working chips. 

The project reportedly involved former ASML engineers who reverse-engineered key elements of EUV systems, suggesting China may be closer to advanced chip-making capability than Western policymakers and analysts had assumed. 

Officials are targeting chip production by 2028, with insiders pointing to 2030 as a more realistic milestone.


USA launches tech force to boost federal AI and advanced tech skills

The Trump administration has unveiled a new initiative, branded the US Tech Force, aimed at rebuilding the US government’s technical capacity after deep workforce reductions, with a particular focus on AI and digital transformation. 

The programme reflects growing concern within the administration that federal agencies lack the in-house expertise needed to deploy and oversee advanced technologies, especially as AI becomes central to public administration, defence, and service delivery.

According to the official TechForce.gov website, participants will work on high-impact federal missions, addressing large-scale civic and national challenges. The programme positions itself as a bridge between Silicon Valley and Washington, encouraging experienced technologists to bring industry practices into government environments.

Supporters argue that the approach could quickly strengthen federal AI capacity and reduce reliance on external contractors. Critics, however, warn of potential conflicts of interest and question whether short-term deployments can substitute for sustained investment in the public sector workforce.


Brussels targets ultra-cheap imports

The EU member states will introduce a new customs duty on low-value e-commerce imports, starting 1 July 2026. Under the agreement, a customs duty of €3 per item will be applied to parcels valued at less than €150 imported directly into the EU from third countries. 

This marks a significant shift from the previous regime, under which such low-value goods were generally exempt from customs duties.
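In practice, the interim rule reduces to a simple per-parcel check. The sketch below is purely illustrative (not an official calculator, and ignoring VAT and the regular customs regime for parcels at or above the threshold); it computes the flat duty owed on a batch of directly imported parcels:

```python
def customs_duty(parcel_values_eur):
    """Illustrative only: a flat EUR 3 duty per parcel valued under EUR 150,
    imported directly into the EU from a third country. Parcels at or above
    EUR 150 fall under the regular customs regime and are not modelled here."""
    return sum(3.0 for value in parcel_values_eur if value < 150)

# Three parcels under the EUR 150 threshold, one above it
print(customs_duty([12.99, 45.00, 149.99, 200.00]))  # 9.0
```

Because the duty is per item rather than ad valorem, a shipment split into many tiny parcels pays proportionally more, which is precisely the business model the measure targets.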

The temporary duty is intended to bridge the gap until the EU Customs Data Hub, a broader customs reform initiative designed to provide comprehensive import data and enhance enforcement capacity, becomes fully operational in 2028.

The Commission framed the measure as a necessary interim solution to ensure fair competition between EU-based retailers and overseas e-commerce sellers. The measure also lands squarely in the shadow of platforms such as Shein and Temu, whose business models are built on shipping vast volumes of ultra-low-value parcels.


USA reportedly suspends Tech Prosperity Deal with UK

The USA has reportedly suspended the implementation of the Tech Prosperity Deal with the UK, pausing a pact originally agreed during President Trump’s September state visit to London.

The Tech Prosperity Deal was designed to strengthen collaboration in frontier technologies, with a strong emphasis on AI, quantum, and the secure foundations needed for future innovation, and included commitments from major US tech firms to invest in the UK.

According to the Financial Times, Washington’s decision to suspend the deal reflects growing frustration with London’s stance on broader trade issues beyond technology. US officials reportedly wanted the UK to make concessions on non-tariff barriers, particularly regulatory standards affecting food and industrial goods, before advancing the tech agreement.

Neither government has commented yet. 



LOOKING AHEAD

Digital Watch Weekly will take a short break over the next two weeks. Thank you for your continued engagement and support.



READING CORNER

UNGA High-level meeting on WSIS+20 review – Day 2

Dear readers,

Welcome to our overview of statements delivered during Day 2 at UNGA’s high-level meeting on the WSIS+20 review.

Speakers repeatedly underscored that the WSIS vision remains relevant, but that it needs to be matched with concrete action, sustained cooperation, and inclusive governance arrangements. Digital transformation was framed as both an opportunity and a risk: a powerful accelerator of sustainable development, resilience, and service delivery, but also a driver of new inequalities if structural gaps, concentration of power, and governance challenges are left unaddressed. Digital public infrastructure and digital public goods were highlighted as foundations for inclusive development, while persistent digital divides were described as urgent and unresolved. Artificial intelligence (AI) featured prominently as a general-purpose technology with transformative potential, but also with risks related to exclusion, labour, environmental sustainability, and governance capacity.

Particular attention was given to the Internet Governance Forum (IGF), with widespread support for its permanent mandate, alongside calls to strengthen its funding, working modalities, and participation.

Throughout the day, speakers reaffirmed that no single stakeholder can deliver digital development alone, and that WSIS must continue to function as a people-centred, multistakeholder framework aligned with the SDGs and the Global Digital Compact (GDC).

DW team

Information and communication technologies for development

Digital transformation is no longer optional, underpinning early warning systems, disaster preparedness, climate adaptation, education, health services, and economic diversification, especially for Small Island Developing States (Fiji).

ICTs were widely framed as key enablers of sustainable development, innovation, resilience, and inclusive growth, and as major accelerators of the 2030 Agenda, particularly in contexts facing economic, climate, or security challenges (Ethiopia, Eritrea, Ukraine, Fiji, Colombia). It was noted that technologies, AI, and digital transformation must serve humanity through education, culture, science, communication, and information (UNESCO).

Strong emphasis was placed on digital public infrastructure (DPI) and digital public goods (DPGs) as foundations for inclusion, innovation, growth and public value (UNDP, Trinidad and Tobago, Malaysia). Digital public infrastructure was emphasised as needing to be secure, interoperable, and rights-based, grounded in safeguards, open systems, and public-interest governance (UNDP).

Digital commons, open-source solutions, and community-driven knowledge infrastructures were highlighted as central to sustainable development outcomes (IT for Change, Wikimedia, OIF). DPGs, such as open-source platforms, have been developed by stakeholders brought together by the WSIS process. However, member states need to create conditions for DPGs’ continued success within the WSIS framework (Wikimedia). Libraries were identified as global digital public infrastructure and significant public goods, with calls for their systematic integration into digital inclusion strategies and WSIS implementation efforts (International Federation of Library Associations and Institutions).

Persistent inequalities in sharing digitalisation gains were highlighted. While more than 6 billion people are online globally, low-income countries continue to lag significantly, including in digital commerce participation, underscoring the need for short-term policy choices that secure inclusive and sustainable development outcomes in the long term (UNCTAD).

The positive impact of digital technologies is considerably lower in developing countries compared to that in developed countries (Cuba). Concerns were raised that developing countries risk being locked into technological dependence, further deepening global asymmetries if left unaddressed (Colombia).

Environmental impacts

An environmentally sustainable information society was emphasised, with calls to align digital and green transformations to address climate change and resource scarcity, and to harness ICTs to achieve the SDGs (China).

Digital innovation was described as needing to support environmental sustainability and responsible resource use, ensuring positive long-term social and economic outcomes (Thailand).

The enabling environment for digital development

Speakers reaffirmed that enabling environments are central to the WSIS vision of a people-centred, inclusive, and development-oriented information society. Predictable, coherent, and transparent policy frameworks were highlighted as essential for enabling innovation and investment, and for ensuring that all countries can benefit from the digital economy (Microsoft, ICC).

These environments were linked to openness and coherence, including regulatory clarity and predictability, support for the free flow of information across borders, avoidance of unnecessary fragmentation, and the promotion of interoperability and scalable digital solutions (ICC). The importance of developing policies through dialogue with relevant stakeholders was also stressed (ICC).

Several speakers underlined that enabling environments must address persistent development gaps. The uneven distribution of the benefits of the information society, particularly in developing countries, was noted, alongside calls for enhanced international cooperation to facilitate investment, innovation, effective governance, and access to financial and technological resources (Holy See). Partnerships across all sectors were seen as essential to mobilise financing, capacity building, and technology transfer, given that governments cannot deliver alone (Fiji).

Divergent views were expressed on unilateral coercive measures. Some speakers argued that such measures impede economic and social development and hinder digital transformation, calling for international cooperation focused on capacity building, technology transfer, and financing of public digital infrastructure (Eritrea, Cuba). In contrast, a delegation stated that economic sanctions are lawful, legitimate, and effective tools for addressing threats to peace and security (USA).

Governance frameworks were identified as a core component of enabling environments. It was stressed that digital development must be safe, equitable, and rooted in trust, with adequate governance frameworks ensuring transparency, accountability, user protection, and meaningful stakeholder participation in line with the multistakeholder approach (Thailand).

Building confidence and security in the use of ICTs

Building confidence and security in the digital environment was framed as a prerequisite for realising the social and economic benefits of digitalisation, with trust and safety needing to be embedded across the entire digital ecosystem (Malaysia).

Trust was described as requiring regulation, accountability, and sustained public education to ensure that users can engage confidently with digital technologies (Malaysia).

Cybercrime was identified as a persistent and serious concern requiring concerted collective solutions beyond national approaches (Namibia).

Cybersecurity and cybercrime were highlighted as increasingly serious and complex challenges that undermine trust and risk eroding the socio-economic gains of digitalisation if left unaddressed (Thailand).

Investment in capacity building was emphasised as essential to strengthening national and individual resilience against cyber threats, alongside the adoption of security- and privacy-by-design principles (Thailand, International Federation for Information Processing).

Capacity development

Capacity development was consistently framed as a core enabler of inclusive digital transformation, with widespread recognition of persistent constraints in digital skills, institutional capacity, and governance capabilities (UNDP, Malaysia, Trinidad and Tobago).

Capacity development was identified as one of the most frequent requests from countries, particularly in relation to inclusive digital transformation (UNDP).

Effective capacity development was described as requiring institutional anchors, with centres of excellence highlighted as providing infrastructure and expertise that many countries – especially least developed countries, landlocked developing countries, and small island states – cannot afford independently (UNIDO).

Efforts are underway to establish a network of centres of excellence across the Global South, including in China, Ethiopia, the Western Balkans, Belarus, and Latin America (UNIDO).

Sustainable digital education was highlighted as essential, including fostering learner aspiration, addressing diversity and underrepresented communities, embedding computational thinking, and strengthening teacher preparation (International Federation for Information Processing). The emphasis should be on empowering people to understand information, question it, and use it wisely (UNESCO).

Libraries were highlighted as trusted, non-commercial public spaces that provide access to connectivity, devices, skills, and confidence-building support. For many people, particularly the most disenfranchised, libraries were described as the only way to get online and as key sources of diverse content and cultural heritage (International Federation of Library Associations and Institutions).

Financial mechanisms

Financing was described as a critical and non-negotiable component of implementing the WSIS vision, with repeated warnings that without adequate and predictable public and private resources, WSIS commitments risk remaining aspirational (APC).

Effective implementation was described as requiring a shift from fragmented, project-based funding toward systems-level financing approaches capable of delivering impact at scale (UNDP).

Calls were made for adequate, predictable, and accessible funding for digital infrastructure and capacity development, particularly to ensure effective participation of developing countries and the Global South (Colombia).

Support was expressed for the proposed establishment of a working group on future financial mechanisms for digital development, provided it focuses on the concrete needs of developing countries (Eritrea).

Financing challenges were also linked to linguistic and cultural diversity, with calls for decentralisation of computing capacity and ambitious strategies to finance digital development and AI, building on proposals by the UN Secretary-General (OIF).

Calls were made for UNGIS and ITU to ensure inclusive participation in the interagency financing task force and to approach the IGF’s permanent mandate with creativity and ambition (APC).

Existing financing mechanisms were highlighted for their tangible impact, including funds that have mobilised resources for digital infrastructure in more than 100 countries (Kuwait).

Human rights and the ethical dimensions of the information society

Human rights were reaffirmed as a foundational pillar of the WSIS vision, grounded in the UN Charter and the Universal Declaration of Human Rights, with emphasis on ensuring that the same rights people enjoy offline are protected online (International Institute for Democracy and Electoral Assistance, Costa Rica, Austria).

Anchoring WSIS in international human rights law was highlighted as essential to preserving an open, free, interoperable, reliable, and secure internet, particularly amid trends toward fragmentation, surveillance-based governance, and concentration of technological power (International Institute for Democracy and Electoral Assistance, OHCHR).

The centrality of human rights and the multistakeholder character of digital governance were described as practical conditions for legitimacy and effectiveness, particularly as freedom online declines and civic space shrinks (GPD, APC).

Concerns were raised about harms associated with profit-driven algorithmic systems and platform design, including addiction, mental health impacts, polarisation, extremism, and erosion of trustworthy information, with particularly severe effects in developing countries (HitRecord, Brazil).

A rights-based approach to digital governance was described as necessary to ensure accountability, participation, impact assessment, and protection of rights such as privacy, non-discrimination, and freedom of expression (OHCHR, ICC).

Divergent views were expressed on content regulation. Some cautioned against any threats to freedom of speech and expression (USA), while others emphasised the legitimate authority of states to regulate the digital domain to protect citizens and uphold the principle that what is illegal offline must also be illegal online (Brazil).

Ethical frameworks were emphasised to protect privacy, personal data, children, women, and vulnerable groups, and to orient digital development toward human dignity, justice, and the common good, including embedding ethical principles by design and protecting cultural diversity and the rights of artists and creators in AI-driven environments (UNESCO, Holy See, International Federation for Information Processing, Costa Rica, Kuwait, Colombia, Foundation Cibervoluntarios, Eritrea).

Concerns were raised about trends toward a more fragmented and state-centric internet, with warnings that such shifts pose risks to human rights, including privacy and freedom of expression, and could undermine the open and global nature of the internet (International Institute for Democracy and Electoral Assistance).

Data governance

The growing importance of data was linked to the expansion of AI (UNCTAD). Unlocking the value of data in a responsible manner was presented as a common problem and a civilisational challenge (Internet and Jurisdiction Policy Network). Concerns were raised about an innovation economy built on data extractivism, dispossession, and disenfranchisement, with countries and people from the Global South resisting unjust trade arrangements and seeking to reclaim the internet and its promise (IT for Change).

Artificial intelligence

AI was described as a general-purpose technology at the centre of the technological revolution, shaping economic growth, national security, global competitiveness, and development trajectories (Brazil, USA).

Concerns were raised that AI is currently being developed and deployed largely according to market-driven and engagement-maximising business models, similar to those that shaped social media. Without practical guardrails, AI risks reproducing harmful effects, and so governments need to move beyond historically hands-off approaches and play a more active role in governance (HitRecord).

Specific AI-related harms were identified, including deepfakes, rising environmental impacts from AI infrastructure (IT for Change), and labour impacts (Brazil). Concerns were expressed that AI adoption is contributing to job displacement and the weakening of labour rights, despite the centrality of decent work to the information society agenda (Brazil).

Noting uneven global capacities in AI development, deployment, and use, concerns were expressed that the speed of AI development may exceed the adaptive capacities of developing countries, including small island developing states, risking new forms of exclusion (Eritrea, Trinidad and Tobago). It was also highlighted that cultural and linguistic diversity is critically under-represented in AI systems (OIF).

Calls were made for AI governance frameworks to address AI-related risks and ensure that the technology is placed at the service of humanity (Kuwait, Namibia). Divergent views were expressed on governance approaches, with some cautioning against additional bureaucracy, while others stressed that relying on market forces alone will not ensure AI benefits all people (USA, HitRecord). It was also said that the UN should not shy away from looking into AI governance matters (Brazil). 

From an industrial perspective, it was noted that regulation often lags behind AI developments, with support expressed for evidence-based policymaking and regulatory testbeds to de-risk innovation and translate AI strategies into practice (UNIDO).

Ethical safeguards were emphasised as essential, with AI described as opening new horizons for creativity while also raising serious concerns about its impact on humanity’s relationship to truth, beauty, and contemplation (Holy See).

Internet governance

Widespread support was expressed for the Internet Governance Forum (IGF), described as a central pillar of the WSIS architecture and a cornerstone of global digital cooperation (International Institute for Democracy and Electoral Assistance, GPD, APC, ICANN, ICC, UNESCO, Austria, Africa ICT Alliance, Meta, Italy, Colombia). Making the IGF permanent was seen as an affirmation of confidence in the multistakeholder model and its continued relevance for addressing governance issues (APC, ICC, OHCHR).

The IGF was also described as a unique and inclusive multistakeholder space, bringing together governments, the private sector, civil society, the technical community, academia, and international organisations on equal footing. This model was credited with helping the internet remain global, interoperable, resilient, and stable through periods of rapid technological and geopolitical change (Microsoft, ICANN, IGF Leadership Panel, Meta).

Several speakers highlighted that the IGF has evolved into a self-organised global network, with more than 170 national, regional, sub-regional, and youth IGFs, enabling voices from remote, marginalised, and under-represented communities to feed into global discussions and bridge the gap between high-level diplomacy and ground-level implementation (Internet and Jurisdiction Policy Network, IGF Leadership Panel, Africa ICT Alliance, Internet Society). At the same time, it was stressed that while the IGF represents a remarkable institutional innovation, it has not yet fulfilled its full potential. Calls were made to continue improving its working modalities, clarify its institutional evolution, and ensure sustainable and predictable funding (Internet and Jurisdiction Policy Network, Brazil, ICANN).

Protecting and reaffirming the multistakeholder model of internet governance was repeatedly identified as important to the success of WSIS implementation. This model – anchored in dialogue, transparency, inclusivity, and accountability – was presented as a practical governance tool rather than a symbolic principle, ensuring that those who build, use, and regulate the internet can jointly shape its future (International Institute for Democracy and Electoral Assistance, Wikimedia, Microsoft, ICANN, ICC).

At the same time, several speakers stressed the need for stronger and more effective government participation in governance processes. It was noted that governments have legitimate roles and responsibilities in shaping digital policy, and that intergovernmental spaces must be strengthened so that all governments – particularly those from developing countries – can effectively perform their roles in global digital governance (APC, Brazil, Cuba). In this context, there was also a concern that calls for greater government engagement in the IGF have been framed primarily toward developing countries, with emphasis placed instead on the need for equal-footing participation of governments from all regions to ensure the forum’s long-term sustainability (APC).

Monitoring and measurement

It was noted that WSIS+20 must deliver measurable commitments with verifiable indicators (Costa Rica). A streamlined and inclusive monitoring and review framework was also seen as essential moving forward (Cuba).

WSIS framework, follow-up and implementation

There was broad recognition that the WSIS framework remains a central reference for a people-centred, inclusive, and development-oriented information society, while requiring reinforcement to respond to growing complexity, concentration of digital power, and risks posed by advanced AI systems (Costa Rica, Malaysia, Cuba).

The multistakeholder model was repeatedly reaffirmed as a cornerstone of the WSIS vision, anchored in dialogue, transparency, inclusivity, and accountability, and seen as essential to maintaining a resilient and open digital ecosystem (International Institute for Democracy and Electoral Assistance, GPD, USA, Meta, ICC, Italy, Thailand). The inclusive nature of the WSIS+20 review process itself was highlighted, with the Informal Multi-Stakeholder Sounding Board described as enabling substantive contributions from diverse stakeholder groups that helped identify both achievements and gaps in WSIS implementation over the past 20 years (WSIS+20 Co-Facilitators Informal Multi-Stakeholder Sounding Board).

On inclusivity, many speakers stressed that no single stakeholder can deliver digital development alone, and called for collaboration among governments, the private sector, civil society, academia, technical communities, and international organisations to mobilise resources, share knowledge, transfer technology, and support nationally driven digital strategies (ICC, Namibia, Italy, Thailand). There were also calls to include knowledge actors such as universities, libraries, archives, cultural figures, and public media, reflecting that digital governance now concerns the status of knowledge itself (OIF). Youth representatives called for funded programmes, institutionalised youth seats in WSIS action line implementation, and recognition of young people as co-designers of digital policy (AI for Good Young Leaders).

On matters related to WSIS action lines, human rights expertise was highlighted as requiring a stronger and more systematic role within the WSIS architecture (GPD, OHCHR). Gender equality was also welcomed as an explicit implementation priority within WSIS action lines (APC).

Strengthening UN system-wide coherence was highlighted as a priority, including clearer action line roadmaps and improved coordination across the UN system (GPD, UNDP). Alignment among WSIS, the Global Digital Compact (GDC), the Pact for the Future, and the SDGs was seen as necessary to maximise impact and avoid duplication (International Institute for Democracy and Electoral Assistance, Meta, Brazil, Colombia, Austria, Cuba). At the same time, one delegation expressed reservations about references to the GDC in the final outcome document, noting also concerns about what they considered to be international organisations setting a standard that legitimises international governance of the internet (USA).

Looking ahead, the task was framed not as preserving WSIS but as reinforcing it, so that it remains future-proof, capable of anticipating rapid technological change while staying anchored in people-centred values, human rights, and inclusive governance (UNESCO, GPD). It was also stressed that for many in the Global South, the WSIS vision remains aspirational, and that the next phase must ensure the information society becomes an effective right rather than an empty promise (Cuba).

Comments regarding the outcome document

In the last segment of the meeting, several delegations made statements regarding the WSIS+20 outcome document.

Some expressed concern about the limited transparency, inclusiveness, and predictability in the final phase of negotiations, stating that the process did not fully reflect multilateral dialogue and affected trust and collective ownership of the document (India, Israel, Iraq on behalf of Group of 77 and China, Iran).

Reservations were placed on language perceived as going beyond the WSIS mandate or national policy space, with reaffirmation of national sovereignty and the right of states to determine their own regulatory, social, and cultural frameworks. Concerns were raised regarding references to gender-related terminology, sexual and reproductive health, sexual and gender-based violence, misinformation, disinformation, and hate speech (Saudi Arabia, Argentina, Iran, Nigeria). Concerns were also noted regarding references to international instruments to which some states are not parties, citing concerns related to national legislation, culture, and sovereignty (Saudi Arabia). Dissociations were recorded from paragraphs related to human rights, information integrity, and the role of the Office of the High Commissioner for Human Rights in the digital sphere (Russian Federation). Concerns were further expressed that the outcome document advances what were described as divisive social themes, including climate change, gender, diversity, equity and inclusion, and the right to development (the USA).

Several delegations expressed concern that references to unilateral coercive measures were weakened and did not reflect their negative impact on access to technology, capacity building, and digital infrastructure in developing countries (Iraq on behalf of Group of 77 and China, Russian Federation, Iran). Others noted that such measures adopted in accordance with international law are legitimate foreign policy tools for addressing threats to peace and security (USA, Ukraine).

Some delegations noted that the outcome document does not sufficiently reflect the development dimension, particularly with regard to concrete commitments on financing, technology transfer, and capacity building, and that the absence of references to common but differentiated responsibilities weakens the development pillar (India, Iraq on behalf of Group of 77 and China, Iran). It was also said that the document does not adequately address the impacts of automation and artificial intelligence on labour and employment, despite requests from developing countries (Iraq on behalf of the Group of 77 and China).

While support for the multistakeholder nature of internet governance and the permanent nature of the IGF was noted, concerns were expressed that the outcome treats the IGF as a substitute rather than a complement to enhanced intergovernmental cooperation, and that the language regarding the intergovernmental segment for dialogue among governments has been weakened. It was said that intergovernmental spaces need to be strengthened so that all governments, particularly those from developing countries, can perform their roles in global governance (Iran, Iraq on behalf of Group of 77 and China). 

Serious reservations were placed on language viewed as legitimising international governance of the internet, with opposition expressed to references to the Global Digital Compact, the Summit for the Future, and the Independent International Scientific Panel on AI, alongside reaffirmed support for a multistakeholder model of internet governance (USA).

Despite these reservations, several delegations stated that they joined the consensus in the interest of multilateralism and unity, while placing their positions and dissociations on record (India, Iraq on behalf of the Group of 77 and China, Iran, Nigeria, USA).

For a detailed summary of the discussions, including session transcripts and statistics from the WSIS+20 High-Level Meeting, visit our dedicated web page, where we are following the event. To explore the WSIS+20 review process in more depth, including its objectives and ongoing developments, see the dedicated WSIS+20 web page.

Twenty years after the WSIS, the WSIS+20 review assesses progress, identifies ICT gaps, and highlights challenges such as bridging the digital divide and leveraging ICTs for development. The review will conclude with a two-day UNGA high-level meeting on 16–17 December 2025, featuring plenary sessions and the adoption of the draft outcome document.
This page keeps track of the process leading to the UNGA meeting in December 2025. It also provides background information about WSIS and related activities and processes since 1998.