Digital Watch newsletter – Issue 106 – January 2026

December 2025 and January 2026 in retrospect

This month’s newsletter looks back on December 2025 and January 2026 and explores the forces shaping the digital landscape in 2026:

WSIS+20 review: A close look at the outcome document and high-level review meeting, and what it means for global digital cooperation.

Child safety online: Momentum on bans continues, while landmark US trials examine platform addiction and responsibility.

Digital sovereignty: Governments are reassessing data, infrastructure, and technology policies to limit foreign exposure and build domestic capacity.

Grok Shock: Regulatory scrutiny hits Grok, X’s AI tool, after reports of non-consensual sexualised and deepfake content.

Geneva Engage Awards: Highlights from the 11th edition, recognising excellence in digital outreach and engagement in International Geneva.

Annual AI and digital forecast: We highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.

Global digital governance

The USA has withdrawn from a wide range of international organisations, conventions and treaties it considers contrary to its interests, including dozens of UN bodies and non-UN entities. In the technology and digital governance space, it explicitly dropped two initiatives: the Freedom Online Coalition and the Global Forum on Cyber Expertise. The implications of withdrawing from UNCTAD and the UN Department of Economic and Social Affairs remain unclear, given their links to processes such as WSIS, follow-up to Agenda 2030, the Internet Governance Forum, and broader data-governance work.

Technologies

US President Trump signed a presidential proclamation imposing a 25% tariff on certain advanced computing and AI‑oriented chips, including high‑end products such as Nvidia’s H200 and AMD’s MI325X, under a national security review. Officials described the measure as a ‘phase one’ step aimed at strengthening domestic production and reducing dependence on foreign manufacturers, particularly those in Taiwan, while also capturing revenue from imports that do not contribute to US manufacturing capacity. The administration suggested that further actions could follow depending on how negotiations with trading partners and the industry evolve.

The USA and Taiwan announced a landmark semiconductor-focused trade agreement. Under the deal, tariffs on a broad range of Taiwanese exports will be reduced or eliminated, while Taiwanese semiconductor companies, including leading firms like TSMC, have committed to invest at least $250 billion in US chip manufacturing, AI, and energy projects, supported by an additional $250 billion in government-backed credit.

The protracted legal and political dispute over Nexperia, a Netherlands-based semiconductor manufacturer owned by China’s Wingtech Technology, also continues. The dispute erupted in autumn 2025, when Dutch authorities briefly seized control of Nexperia, citing national security and concerns about potential technology transfers to China. Nexperia’s European management and Wingtech representatives are now squaring off in an Amsterdam court, which is deciding whether to launch a formal investigation into alleged mismanagement. The court is set to make a decision within four weeks.

Reports say Chinese scientists have built a prototype extreme ultraviolet lithography machine, a technology long dominated by ASML. This Dutch firm is the sole supplier of EUV systems and a major chokepoint in advanced chipmaking. EUV tools are essential for producing cutting-edge chips used in AI, high-performance computing and modern weapons by etching ultra-fine circuits onto silicon wafers. The prototype is reportedly already generating EUV light but has not yet produced working chips, and the effort is said to include former ASML engineers who reverse-engineered key components.

Canada has launched Phase 1 of the Canadian Quantum Champions Program as part of a $334.3 million Budget 2025 investment. The programme provides up to $92 million in initial funding (up to $23 million each to Anyon Systems, Nord Quantique, Photonic and Xanadu) to advance fault-tolerant quantum computers and keep key capabilities in Canada, with progress assessed through a new National Research Council-led benchmarking platform.

The USA has reportedly paused implementation of its Tech Prosperity Deal with the UK, a pact agreed during President Trump’s September visit to London that aimed to deepen cooperation on frontier technologies such as AI and quantum and included planned investment commitments by major US tech firms. According to the Financial Times, the suspension reflects broader US frustration with UK positions on wider trade matters, with Washington seeking UK concessions on non-tariff barriers, especially regulatory standards for food and industrial goods, before moving the technology agreement forward.

At the 16th EU–India Summit in New Delhi, the EU and India moved into a new phase of cooperation by concluding a landmark Free Trade Agreement and launching a Security and Defence Partnership, signalling closer alignment amid global economic and geopolitical pressures. The trade deal aims to cut tariff and non-tariff barriers and strengthen supply chains, while the security track expands cooperation on areas such as maritime security, cyber and hybrid threats, counterterrorism, space and defence industrial collaboration.

South Korea and Italy have agreed to deepen their strategic partnership by expanding cooperation in high-technology fields, especially AI, semiconductors and space, with officials framing the effort as a way to boost long-term competitiveness through closer research collaboration, talent exchanges and joint development initiatives, even though specific programmes have not yet been detailed publicly.

Infrastructure

The EU adopted the Digital Networks Act, which aims to reduce fragmentation with limited spectrum harmonisation and an EU-wide numbering scheme for cross-border business services, while stopping short of a truly unified telecoms market. The main obstacle remains resistance from member states that want to retain control over spectrum management, especially for 4G, 5G and Wi-Fi, leaving the package as an incremental step rather than a structural overhaul despite long-running calls for deeper integration.

The Second International Submarine Cable Resilience Summit concluded with the Porto Declaration on Submarine Cable Resilience, which reaffirms the critical role of submarine telecommunications cables for global connectivity, economic development and digital inclusion. The declaration builds on the 2025 Abuja Declaration with further practical guidance and outlines non-binding recommendations to strengthen international cooperation and resilience — including streamlining permitting and repair, improving legal/regulatory frameworks, promoting geographic diversity and redundancy, adopting best practices for risk mitigation, enhancing cable protection planning, and boosting capacity-building and innovation — to support more reliable, inclusive global digital infrastructure. 

Cybersecurity

Roblox is under formal investigation in the Netherlands, where the Autoriteit Consument & Markt (ACM) is assessing whether the platform takes sufficient measures to protect children and teenagers who use the service. The probe will examine Roblox’s compliance with the European Union’s Digital Services Act (DSA), which obliges online services to implement appropriate and proportionate measures to ensure safety, privacy and security for underage users, and could take up to a year.

Meta, which was under intense scrutiny by regulators and civil society over chatbots that previously permitted provocative or exploitative conversations with minors, is pausing teenagers’ access to its AI characters globally while it redesigns the experience with enhanced safety and parental controls. The company said teens will be blocked from interacting with certain AI personas until a revised platform is ready, guided by principles akin to a PG-13 rating system to limit exposure to inappropriate content. 

ETSI has issued a new standard, EN 304 223, setting cybersecurity requirements for AI systems across their full lifecycle, addressing AI-specific threats like data poisoning and prompt injection, with additional guidance for generative-AI risks expected in a companion report.

The EU has proposed a new cybersecurity package to tighten supply-chain security, expand and speed up certification, streamline NIS2 compliance and reporting, and give ENISA stronger operational powers such as threat alerts, vulnerability management and ransomware support.

A group of international cybersecurity agencies has released new technical guidance addressing the security of operational technology (OT) used in industrial and critical infrastructure environments. The guidance, led by the UK’s National Cyber Security Centre (NCSC), provides recommendations for securely connecting industrial control systems, sensors, and other operational equipment that support essential services. According to the co-authoring agencies, industrial environments are being targeted by a range of actors, including cybercriminal groups and state-linked actors. 

The UK has launched a Software Security Ambassadors Scheme led by the Department for Science, Innovation and Technology and the National Cyber Security Centre, asking participating organisations to promote a new Software Security Code of Practice across their sectors and improve secure development and procurement to strengthen supply-chain resilience.

British and Chinese security officials have agreed to establish a new cyber dialogue forum to discuss cyberattacks and manage digital threats, aiming to create clearer communication channels, reduce the risk of miscalculation in cyberspace, and promote responsible state behaviour in digital security.

Economic 

EU ministers have urged faster progress toward the bloc’s 2030 digital targets, calling for stronger digital skills, wider tech adoption and simpler rules for SMEs and start-ups while keeping data protection and fundamental rights intact, alongside tougher, more consistent enforcement on online safety, illegal content, consumer protection and cyber resilience.

South Korea has approved legal changes to recognise tokenised securities and set rules for issuing and trading them within the regulated capital-market system, with implementation planned for January 2027 after a preparation period. The framework allows eligible issuers to create blockchain-based debt and equity products, while trading would run through licensed intermediaries under existing investor-protection rules.

Russia is keeping the ruble as the only legal payment method and continues to reject cryptocurrencies as money, but lawmakers are moving toward broader legal recognition of crypto as an asset, including a proposal to treat it as marital property in divorce cases, alongside limited, regulated use of crypto in foreign trade.

The UK plans to bring cryptoassets fully under its financial regulatory perimeter, with crypto firms regulated by the Financial Conduct Authority from 2027 under rules similar to those for traditional financial products, aiming to boost consumer protection, transparency and market confidence while supporting innovation and cracking down on illicit activity, alongside efforts to shape international standards through cooperation such as a UK–US taskforce.

Hong Kong’s proposed expansion of crypto licensing is drawing industry concern that stricter thresholds could force more firms into full licensing, raise compliance costs and lack a clear transition period, potentially disrupting businesses while applications are processed.

Poland’s effort to introduce a comprehensive crypto law has reached an impasse after the Sejm failed to overturn President Karol Nawrocki’s veto of a bill meant to align national rules with the EU’s MiCA framework. The government argued the reform was essential for consumer protection and national security, but the president rejected it as overly burdensome and a threat to economic freedom. In the aftermath, Prime Minister Donald Tusk has pledged to renew efforts to pass crypto legislation.

In Norway, Norges Bank has concluded that current conditions do not justify launching a central bank digital currency, arguing that Norway’s payment system remains secure, efficient and well-tailored to users. The bank maintains that the Norwegian krone continues to function reliably, supported by strong contingency arrangements and stable operational performance. Governor Ida Wolden Bache said the assessment reflects timing rather than a rejection of CBDCs, noting the bank could introduce one if conditions change or if new risks emerge in the domestic payments landscape.

EU member states will introduce a new customs duty on low-value e-commerce imports, starting 1 July 2026. Under the agreement, a customs duty of €3 per item will be applied to parcels valued at less than €150 imported directly into the EU from third countries. The temporary duty is intended to bridge the gap until the EU Customs Data Hub, a broader customs reform initiative designed to provide comprehensive import data and enhance enforcement capacity, becomes fully operational in 2028.

Development 

UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods. Recent data indicate that more than 300 shutdowns have occurred across 54 countries over the past two years, with 2024 the most severe year since 2016. According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life. Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability. Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news. 

The OECD says generative AI is spreading quickly in schools, but results are mixed: general-purpose chatbots can improve the polish of students’ work without boosting exam performance, and may weaken deep learning when they replace ‘productive struggle.’ It argues that education-specific AI tools designed around learning science, used as tutors or collaborative assistants, are more likely to improve outcomes and should be prioritised and rigorously evaluated. 

The UK will trial AI tutoring tools in secondary schools, aiming for nationwide availability by the end of 2027, with teachers involved in co-design and testing and safety, reliability and National Curriculum alignment treated as core requirements. The initiative is intended to provide personalised support and help narrow attainment gaps, with up to 450,000 disadvantaged pupils in years 9–11 potentially benefiting each year, while positioning the tools as a supplement to, not a replacement for, classroom teaching.

Sociocultural

The EU has designated WhatsApp a Very Large Online Platform under the Digital Services Act (DSA) after it reported more than 51 million monthly users in the bloc, triggering tougher obligations to assess and mitigate systemic risks such as disinformation and to strengthen protections for minors and vulnerable users. The European Commission will directly supervise compliance, with potential fines of up to 6% of global annual turnover, and WhatsApp has until mid-May to align its policies and risk assessments with the DSA requirements.

The EU has issued its first DSA non-compliance decision against X, fining the platform €120 million for misleading paid ‘blue check’ verification, weak ad transparency due to an incomplete advertising repository, and barriers that restrict access to public data for researchers. X must propose fixes for the checkmark system within 60 working days and submit a broader plan on data access and advertising transparency within 90 days, or face further enforcement.

The EU has accepted binding commitments from TikTok under the DSA to make ads more transparent, including showing ads exactly as users see them, adding targeting and demographic details, updating its ad repository within 24 hours, and expanding tools and access for researchers and the public, with implementation deadlines ranging from two to twelve months.

WhatsApp is facing intensifying pressure from Russian authorities, who argue the service does not comply with national rules on data storage and cooperation with law enforcement, while Meta has no legal presence in Russia and rejects requests for user information. Officials are promoting state-backed alternatives, such as the national messaging app Max, and critics warn that targeting WhatsApp would curb private communications rather than address genuine security threats. 

National AI regulation

Vietnam. Vietnam’s National Assembly has passed the country’s first comprehensive AI law, establishing a risk management regime, sandbox testing, a National AI Development Fund and startup voucher schemes to balance strict safeguards with innovation incentives. The 35‑article legislation — largely inspired by EU and other models — centralises AI oversight under the government and will take effect in March 2026.

The UK. More than 100 UK parliamentarians from across parties are pushing the government to adopt binding rules on advanced AI systems, saying current frameworks lag behind rapid technological progress and pose risks to national and global security. The cross‑party campaign, backed by former ministers and figures from the tech community, seeks mandatory testing standards, independent oversight and stronger international cooperation — challenging the government’s preference for existing, largely voluntary regulation.

The USA. US President Donald Trump has signed an executive order targeting what the administration views as the most onerous and excessive state-level AI laws. The White House argues that a growing patchwork of state rules threatens to stymie innovation, burden developers, and weaken US competitiveness.

To address this, the order creates an AI Litigation Task Force to challenge state laws deemed obstructive to the order’s stated policy: sustaining and enhancing US global AI dominance through a minimally burdensome national policy framework. The Commerce Department is directed to review all state AI regulations within 90 days to identify those that impose undue burdens. The order also uses federal funding as leverage, allowing certain grants to be conditioned on states aligning with national AI policy.

National plans and investments

Russia. Russia is advancing a nationwide plan to expand the use of generative AI across public administration and key sectors, with a proposed central headquarters to coordinate ministries and agencies. Officials see increased deployment of domestic generative systems as a way to strengthen sovereignty, boost efficiency and drive regional economic development, prioritising locally developed AI over foreign platforms.

Qatar. Qatar has launched Qai, a new national AI company designed to accelerate the country’s digital transformation and global AI footprint. Qai will provide high‑performance computing and scalable AI infrastructure, working with research institutions, policymakers and partners worldwide to promote the adoption of advanced technologies that support sustainable development and economic diversification.

The EU. The EU has advanced an ambitious gigafactory programme to strengthen AI leadership by scaling up infrastructure and computational capacity across member states. This involves expanding a network of AI ‘factories’ and antennas that provide high‑performance computing and technical expertise to startups, SMEs and researchers, integrating innovation support alongside regulatory frameworks like the AI Act. 

Australia. Australia has sealed a USD 4.6 billion deal for a new AI hub in western Sydney, partnering with private sector actors to build an AI campus with extensive GPU-based infrastructure capable of supporting advanced workloads. The investment forms part of broader national efforts to establish domestic AI innovation and computational capacity. 

Morocco. Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation. The plan aims to add an estimated $10 billion to GDP by 2030, create tens of thousands of AI-related jobs, and integrate AI across industry and government, including modernising public services and strengthening technological autonomy. Central to the strategy is the launch of the JAZARI ROOT Institute, the core hub of a planned network of AI centres of excellence that will bridge research, regional innovation, and practical deployment; additional initiatives include sovereign data infrastructure and partnerships with global AI firms. Authorities also emphasise building national skills and trust in AI, with governance structures and legislative proposals expected to accompany implementation.

Capacity building initiatives 

The USA. The Trump administration has unveiled the US Tech Force, an initiative aimed at rebuilding the US government’s technical capacity after deep workforce reductions, with a particular focus on AI and digital transformation.

According to the official TechForce.gov website, participants will work on high-impact federal missions, addressing large-scale civic and national challenges. The programme positions itself as a bridge between Silicon Valley and Washington, encouraging experienced technologists to bring industry practices into government environments. It reflects growing concern within the administration that federal agencies lack the in-house expertise needed to deploy and oversee advanced technologies, especially as AI becomes central to public administration, defence, and service delivery.

Taiwan. Taiwan’s government has set an ambitious goal to train 500,000 AI professionals by 2040 as part of its long-term AI development strategy, backed by a NT$100 billion (approximately US$3.2 billion) venture fund and a national computing centre initiative. President Lai Ching-te announced the target at a 2026 AI Talent Forum in Taipei, highlighting the need for broad AI literacy across disciplines to sustain national competitiveness, support innovation ecosystems, and accelerate digital transformation in small and medium-sized enterprises. The government is introducing training programmes for students and public servants and emphasising cooperation between industry, academia, and government to develop a versatile AI talent pipeline. 

El Salvador. El Salvador has partnered with xAI to launch the world’s first nationwide AI-powered education programme, deploying the Grok model across more than 5,000 public schools to deliver personalised, curriculum-aligned tutoring to over one million students over the next two years. The initiative will support teachers with adaptive AI tools while co-developing methodologies, datasets and governance frameworks for responsible AI use in classrooms, aiming to close learning gaps and modernise the education system. President Nayib Bukele described the move as a leap forward in national digital transformation. 

UN AI Resource Hub. The UN AI Resource Hub has gone live as a centralised platform aggregating AI activities and expertise across the UN system. Presented by the UN Inter-Agency Working Group on AI, the platform has been developed through the joint collaboration of UNDP, UNESCO and ITU. It enables stakeholders to explore initiatives by agency, country and SDGs. The hub supports inter-agency collaboration, capacity building for UN member states, and enhanced coherence in AI governance and terminology.

Partnerships 

Canada‑EU. Canada and the EU have expanded their digital partnership on AI and security, committing to deepen cooperation on trusted AI systems, data governance and shared digital infrastructure. This includes memoranda aimed at advancing interoperability, harmonising standards and fostering joint work on trustworthy digital services. 

The International Network for Advanced AI Measurement, Evaluation and Science. The global network has strengthened cooperation on benchmarking AI governance progress, focusing on metrics that help compare national policies, identify gaps and support evidence‑based decision‑making in AI regulation internationally. This network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the USA. The UK has assumed the role of Network Coordinator.

BRICS. Talks on AI governance within the BRICS bloc have deepened as member states seek to harmonise national approaches and develop shared principles for ethical, inclusive and cooperative AI deployment. It is, however, still premature to talk about creating an ‘AI BRICS’, said Deputy Foreign Minister Sergey Ryabkov, Russia’s BRICS sherpa.

ASEAN-Japan. Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to deepen cooperation on AI, formalised in a joint statement at a digital ministers’ meeting in Hanoi. The partnership focuses on joint development of AI models, aligning related legislation, and strengthening research ties to enhance regional technological capabilities and competitiveness amid global competition from the United States and China.

Pax Silica. A diverse group of nations has announced Pax Silica, a new partnership aimed at building secure, resilient, and innovation-driven supply chains for the technologies that underpin the AI era, including critical minerals and energy inputs, advanced manufacturing, semiconductors, AI infrastructure and logistics. Analysts warn that diverging views may emerge if Washington pushes for tougher measures targeting China, potentially increasing political and economic pressure on participating nations. However, the USA, which leads the initiative, clarified that the platform will focus on strengthening supply chains among its members rather than penalising non-members such as China.

Content governance

Italy. Italy’s antitrust authority has formally closed its investigation into the Chinese AI developer DeepSeek after the company agreed to binding commitments to make risks from AI hallucinations — false or misleading outputs — clearer and more accessible to users. Regulators stated that DeepSeek will enhance transparency, providing clearer warnings and disclosures tailored to Italian users, thereby aligning its chatbot deployment with local regulatory requirements. If these conditions aren’t met, enforcement action under Italian law could follow.

Spain. Spain’s cabinet has approved draft legislation aimed at curbing AI-generated deepfakes and tightening consent rules on the use of images and voices. The bill sets 16 as the minimum age for consenting to image use and prohibits the reuse of online images or AI-generated likenesses without explicit permission — including for commercial purposes — while allowing clear, labelled satire or creative works involving public figures. The reform reinforces child protection measures and mirrors broader EU plans to criminalise non-consensual sexual deepfakes by 2027. Prosecutors are also examining whether certain AI-generated content could qualify as child pornography under Spanish law. 

Malta. The Maltese government is preparing tougher legal measures to tackle abuses of deepfake technology. Current legislation is under review with proposals to introduce penalties for the misuse of AI in harassment, blackmail, and bullying cases, building on existing cyberbullying and cyberstalking laws by extending similar protections to harms stemming from AI-generated content. Officials emphasise that while AI adoption is a national priority, robust safeguards against abusive use are essential to protect individuals and digital rights.

China. China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots. Draft rules require platforms to intervene when users express suicidal or self-harm tendencies when interacting with emotionally interactive AI services, while strengthening protections for minors and restricting harmful content. The regulator defines the services as AI systems that simulate human personality traits and emotional interaction.

Note to readers: We’ve reported separately on the January 2026 backlash against Grok, following claims it was used to generate non-consensual sexualised and deepfake images.

Security

The UN. The UN has raised the alarm about AI-driven threats to child safety, highlighting how AI systems can accelerate the creation, distribution, and impact of harmful content, including sexual exploitation, abuse, and manipulation of children online. As smart toys, chatbots, and recommendation engines increasingly shape youth digital experiences, the absence of adequate safeguards risks exposing a generation to novel forms of exploitation and harm. 

International experts. The second International AI Safety Report finds that AI capabilities continue to advance rapidly — with leading systems outperforming human experts in areas like mathematics, science and some autonomous software tasks — while performance remains uneven, and adoption varies widely across regions. Rising harms include deepfakes, misuse in fraud and non‑consensual content, and systemic impacts on autonomy and trust. Technical safeguards and voluntary safety frameworks have improved but remain incomplete, and effective multi‑layered risk management is still lacking.

The EU and the USA. The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring. The principles are relevant to pharmaceutical developers, marketing authorisation applicants, and holders, and will form the basis for future AI guidance in different jurisdictions.

The WSIS+20 review, conducted 20 years after the World Summit on the Information Society, concluded in December 2025 in New York with the adoption of a high-level outcome document by the UN General Assembly. The review assesses progress toward building a people-centred, inclusive, and development-oriented information society, highlights areas needing further effort, and outlines measures to strengthen international cooperation.
A major institutional decision was to make the Internet Governance Forum (IGF) a permanent UN body. The outcome also includes steps to strengthen its functioning: broadening participation—especially from developing countries and underrepresented communities—enhancing intersessional work, supporting national and regional initiatives, and adopting innovative and transparent collaboration methods. The IGF Secretariat is to be strengthened, sustainable funding ensured, and annual reporting on progress provided to UN bodies, including the Commission on Science and Technology for Development (CSTD).

Negotiations addressed the creation of a governmental segment at the IGF. While some member states supported this as a way to foster more dialogue among governments, others were concerned it could compromise the IGF’s multistakeholder nature. The final compromise encourages dialogue among governments with the participation of all stakeholders.

Beyond the IGF, the outcome confirms the continuation of the annual WSIS Forum and calls for the United Nations Group on the Information Society (UNGIS) to increase efficiency, agility, and membership. 

WSIS action line facilitators are tasked with creating targeted implementation roadmaps linking WSIS action lines to SDGs and Global Digital Compact (GDC) commitments. 

UNGIS is requested to prepare a joint implementation roadmap to strengthen coherence between WSIS and the Global Digital Compact, to be presented to CSTD in 2026. The Secretary-General will submit biennial reports on WSIS implementation, and the next high-level review is scheduled for 2035.

The document places closing digital divides at the core of the WSIS+20 agenda. It addresses multiple aspects of digital exclusion, including accessibility, affordability, quality of connectivity, inclusion of vulnerable groups, multilingualism, cultural diversity, and connecting all schools to the internet. It stresses that connectivity alone is insufficient, highlighting the importance of skills development, enabling policy environments, and human rights protection.

The outcome also emphasises open, fair, and non-discriminatory digital development, including predictable and transparent policies, legal frameworks, and technology transfer to developing countries. Environmental sustainability is highlighted, with commitments to leverage digital technologies while addressing energy use, e-waste, critical minerals, and international standards for sustainable digital products.

Human rights and ethical considerations are reaffirmed as fundamental. The document stresses that rights online mirror those offline, calls for safeguards against adverse impacts of digital technologies, and urges the private sector to respect human rights throughout the technology lifecycle. It addresses online harms such as violence, hate speech, misinformation, cyberbullying, and child sexual exploitation, while promoting media freedom, privacy, and freedom of expression.

Capacity development and financing are recognised as essential. The document highlights the need to strengthen digital skills, technical expertise, and institutional capacities, including in AI. It invites the International Telecommunication Union to establish an internal task force to assess gaps and challenges in financial mechanisms for digital development and to report recommendations to CSTD by 2027. It also calls on the UN Inter-Agency Working Group on AI to map existing capacity-building initiatives, identify gaps, and develop programmes such as an AI capacity-building fellowship for government officials and research programmes.

Finally, the outcome underscores the importance of monitoring and measurement, requesting a systematic review of existing ICT indicators and methodologies by the Partnership on Measuring ICT for Development, in cooperation with action line facilitators and the UN Statistical Commission. The Partnership is tasked with reporting to CSTD in 2027. Overall, the CSTD, ECOSOC, and the General Assembly maintain a central role in WSIS follow-up and review.

The final text reflects a broad compromise and was adopted without a vote, though some member states and groups raised concerns about certain provisions.

The momentum of social media bans for children

Australia made history in December as it began enforcing its landmark under-16 social media restrictions — the first nationwide rules of their kind anywhere in the world. 

The measure — a new Social Media Minimum Age (SMMA) requirement under the Online Safety Act — obliges major platforms to take ‘reasonable steps’ to delete underage accounts and block new sign-ups, backed by fines of up to AUD 49.5 million and monthly compliance reporting.

As enforcement began, eSafety Commissioner Julie Inman Grant urged families — particularly those in regional and rural Australia — to consult the newly published guidance, which explains how the age limit works, why it has been raised from 13 to 16, and how to support young people during the transition.

The new framework should be viewed not as a ban but as a delay, Grant emphasised, raising the minimum account age from 13 to 16 to create ‘a reprieve from the powerful and persuasive design features built to keep them hooked and often enabling harmful content and conduct.’

It has been almost two months since the ban—we continue to use the word ‘ban’ in the text, as it has already become part of the vernacular—took effect. Here’s what has happened in the meantime.

Teen reactions. The shift was abrupt for young Australians. Teenagers posted farewell messages on the eve of the deadline, grieving the loss of communities, creative spaces, and peer networks that had anchored their daily lives. Youth advocates noted that those who rely on platforms for education, support networks, LGBTQ+ community spaces, or creative expression would be disproportionately affected.

Workarounds and their limits. Predictably, workarounds emerged immediately. Some teens tried (and succeeded) to fool facial-age estimation tools by distorting their expressions; others turned to VPNs to mask their locations. However, experts note that free VPNs frequently monetise user data or contain spyware, raising new risks. And it might be in vain: platforms retain an extensive set of signals they can use to infer a user’s true location and age, including IP addresses, GPS data, device identifiers, time-zone settings, mobile numbers, app-store information, and behavioural patterns. Age-related markers, such as linguistic analysis, school-hour activity patterns, face or voice age estimation, youth-focused interactions, and account age, give companies additional tools to identify underage users.

Privacy and effectiveness concerns. Critics argue that the policy raises serious privacy concerns, since age-verification systems, whether based on government ID uploads, biometrics, or AI-based assessments, force people to hand over sensitive data that could be misused, breached, or normalised as part of everyday surveillance. Others point out that facial-age technology is least reliable for teenagers — the very group it is now supposed to regulate. Some question whether the fines are even meaningful, given that Meta earns roughly AUD 50 million in under two hours.

The limited scope of the rules has drawn further scrutiny. Dating sites, gaming platforms, and AI chatbots remain outside the ban, even though some chatbots have been linked to harmful interactions with minors. Educators and child-rights advocates argue that digital literacy and resilience would better safeguard young people than removing access outright. Many teens say they will create fake profiles or share joint accounts with parents, raising doubts about long-term effectiveness.

Industry pushback. Most major platforms have publicly criticised the law’s development and substance. They maintain that the law will be extremely difficult to enforce, even as they prepare to comply to avoid fines. Industry group NetChoice has described the measure as ‘blanket censorship,’ while Meta and Snap argue that real enforcement power lies with Apple and Google through app-store age controls rather than at the platform level.

Reddit has filed a High Court challenge to the ban, naming the Commonwealth of Australia and Communications Minister Anika Wells as defendants and arguing that the law has been wrongly applied to it. Reddit maintains that it is a platform for adults and lacks the traditional social media features the government has taken issue with.

Government position. The government, expecting a turbulent rollout, frames the measure as consistent with other age-based restrictions (such as the ban on alcohol consumption under 18) and a response to sustained public concern about online harms. Officials argue that Australia is playing a pioneering role in youth online safety — a stance drawing significant international attention. 

International interest. Australia’s move has garnered considerable attention abroad, where a growing club of countries is seeking to ban minors from major platforms. 

All of these jurisdictions are now looking closely at Australia, watching for proof of concept — or failure.

The early results are in. On the enforcement metric — platform compliance and account takedowns — the law is functioning, with social media companies deactivating or restricting roughly 4.7 million accounts understood to belong to Australian users under 16 within the first month of enforcement. 

However, on the behavioural outcome metric — whether under-16s are actually offline, safer, or replacing harmful patterns with healthier ones — the evidence remains inconclusive and evolving. The Australian government has also said it’s too early to declare the ban an unequivocal success.

The unresolved question. Young people retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts. But the question lingers: if access to large parts of the digital ecosystem remains open, what is the practical value of fencing off only one segment of the internet?

Platforms on trial(s)

In January 2026, a landmark trial opened in Los Angeles involving K.G.M., a 19-year-old plaintiff, and major social media companies. The case, first filed in July 2023, accuses platforms including Meta (Instagram and Facebook), YouTube (Google/Alphabet), Snapchat, and TikTok of intentionally designing their apps to be addictive, with serious consequences for young users’ mental health. 

According to the complaint, features such as infinite scroll, algorithmic recommendations, and constant notifications contributed to compulsive use, exposure to harmful content, depression, anxiety, and even suicidal thoughts. The lawsuit also alleges that the platforms made it difficult for K.G.M. to avoid contact with strangers and predatory adults, despite parental restrictions. K.G.M.’s legal team argues that the companies knowingly optimised their platforms to maximise engagement at the expense of user well-being.

As the trial began, Snap Inc. and TikTok had already reached confidential settlements, leaving Meta and YouTube as the remaining defendants. Meta and YouTube deny intentionally causing harm, highlighting existing safety features, parental controls, and content filters. 

Separately, in federal court, Meta, Snap, YouTube, and TikTok asked a judge to dismiss school districts’ lawsuits that seek damages for costs tied to student mental health challenges.

In both cases, the companies are arguing that Section 230 of US law shields them from liability, while the plaintiffs counter that their claims focus on allegedly addictive design features rather than user-generated content. 

Legal experts and advocates are watching closely, noting that the outcomes could set a precedent for thousands of related lawsuits and ultimately influence corporate design practices.

Governments have long debated controlling data, infrastructure, and technology within their borders. But there is a renewed sense of urgency, as geopolitical tensions are driving a stronger push to identify dependencies, build domestic capacity, and limit exposure to foreign technologies.

At the European level, France is pushing to make digital sovereignty measurable and actionable. Paris has proposed the creation of an EU Digital Sovereignty Observatory to map member states’ reliance on non-European technologies, from cloud services and AI systems to cybersecurity tools. Paired with a digital resilience index, the initiative aims to give policymakers a clearer picture of strategic dependencies and a stronger basis for coordinated action on procurement, investment, and industrial policy. 

The bloc has, however, already started working on digital sovereignty. Just in January, the European Parliament adopted a resolution on European technological sovereignty and digital infrastructure. In the text, the Parliament calls for the development of a robust European digital public infrastructure (DPI) base layer grounded in open standards, interoperability, privacy- and security-by-design, and competition-friendly governance. Priority areas include semiconductors and AI chips, high-performance and quantum computing, cloud and edge infrastructure, AI gigafactories, data centres, digital identity and payments systems, and public-interest data platforms.

Also newly adopted is the Digital Networks Act, which frames sovereignty as the EU’s capacity to control, secure, and scale its critical connectivity infrastructure rather than as isolation from global markets. High-quality, secure digital networks are presented as a foundational enabler of Europe’s digital transformation, competitiveness, and security, with fragmentation of national markets seen as undermining the Union’s ability to act collectively and reduce dependencies. Satellite connectivity is explicitly identified as a core pillar of EU strategic autonomy, essential for broadband access in remote areas and for security, crisis management, defence, and other critical applications, prompting a shift toward harmonised, EU-level authorisation to strengthen resilience and avoid reliance on foreign providers.

The Digital Networks Act also complements the EU’s support for IRIS2, a planned multi-orbit constellation of 290 satellites designed to provide encrypted communications for citizens, governments, and public agencies, and to reduce reliance on external providers, as Europe is ‘quite dependent on American services,’ per EU Commissioner for Defence and Space Andrius Kubilius. In mid-January, Kubilius announced that the network’s timeline has been advanced: IRIS2 aims to begin initial government communication services by 2029, a year earlier than originally planned.

The Commission is also ready to put money behind the goal, announcing €307.3 million in funding to boost capabilities in AI, robotics, photonics, and other emerging technologies. A significant portion of this investment is tied to initiatives such as the Open Internet Stack, which seek to deepen European digital autonomy. The funding, open to businesses, academia, and public bodies, reflects a broader push to translate policy ambitions into concrete technological capacity.

There’s more in the pipeline. The Cloud and AI Development Act, a revision of the Chips Act and the Quantum Act, all due in 2026, will also bolster EU digital sovereignty, enhancing strategic autonomy across the digital stack.

Furthermore, the European Commission is preparing a strategy to commercialise European open-source software, alongside the Cloud and AI Development Act, to strengthen developer communities, support adoption across various sectors, and ensure market competitiveness. By providing stable support and fostering collaboration between government and industry, the strategy seeks to create an economically sustainable open-source ecosystem.

In Burkina Faso, the focus is on reducing reliance on external providers while consolidating national authority over core digital systems. The government has launched a Digital Infrastructure Supervision Centre to centralise oversight of national networks and strengthen cybersecurity monitoring. New mini data centres for public administration are being rolled out to ensure that sensitive state data is stored and managed domestically. 

Sovereignty debates are also translating into decisions to limit, replace, or restructure the use of digital services provided by foreign entities. France has announced plans to phase out US-based collaboration platforms such as Microsoft Teams, Zoom, Google Meet, and Webex from public administration, replacing them with a domestically developed alternative, ‘Visio’. 

The Dutch data protection authority has urged the government to act swiftly to protect the country’s digital sovereignty, after DigiD, the national digital identity system, appeared set for acquisition by a US company. The watchdog argued that the Netherlands relies heavily on a small group of non-European cloud and IT providers and stressed that public bodies lack clear exit strategies if foreign ownership suddenly shifts.

In the USA, the TikTok controversy can also be viewed through a sovereignty lens: rather than banning TikTok, authorities have pushed the platform to restructure its operations for the US market. A new entity will manage TikTok’s US operations, with user data and algorithms handled inside the country. The recommendation algorithm is meant to be trained only on US user data to meet American regulatory requirements.

In more security-driven contexts, the concept is sharper still. As Europe remains heavily dependent on both Chinese telecom vendors and US cloud and satellite providers, the European Commission proposed binding cybersecurity rules targeting critical ICT supply chains.

Russia’s Security Council has recently labelled services such as Starlink and Gmail as national security threats, describing them as tools for ‘destructive information and technical influence.’ These assessments are expected to feed into Russia’s information security doctrine, reinforcing the treatment of digital services provided by foreign companies not as neutral infrastructure but as potential vectors of geopolitical risk.

The big picture. The common thread is clear: Digital sovereignty is now a key consideration for governments worldwide. The approaches may differ, but the goal remains largely the same – to ensure that a nation’s digital future is shaped by its own priorities and rules. But true independence is hampered by deeply embedded global supply chains, prohibitive costs of building parallel systems, and the risk of stifling innovation through isolation. While the strategic push for sovereignty is clear, untangling from interdependent tech ecosystems will require years of investment, migration, and adaptation. The current initiatives mark the beginning of a protracted and challenging transition.

In January 2026, a regulatory firestorm engulfed Grok, the AI tool built into Elon Musk’s X platform, as reports surfaced that Grok was being used to produce non-consensual sexualised and deepfake images, including depictions of individuals undressed or in compromising scenarios without their consent. 

Musk has suggested that users who submit such prompts be held liable, a move critics described as shifting responsibility away from the platform.

The backlash was swift and severe. The UK’s Ofcom launched an investigation under the Online Safety Act, to determine whether X has complied with its duties to protect people in the UK from content that is illegal in the country. UK Prime Minister Keir Starmer condemned the ‘disgusting’ outputs. The EU declared the content, especially involving children, had ‘no place in Europe.’ Southeast Asia acted decisively: Malaysia and Indonesia blocked Grok entirely, citing obscene image generation, and the Philippines swiftly followed suit on child-protection grounds.

Under pressure, X announced tightened controls on Grok’s image-editing capabilities. The platform said it had introduced technological safeguards to block the generation and editing of sexualised images of real people in jurisdictions where such content is illegal. 

However, regulatory authorities signalled that this step, while positive, would not halt oversight. 

In the UK, Ofcom emphasised that its formal investigation into X’s handling of Grok and the emergence of deepfake imagery will continue, even as it welcomes the platform’s policy changes. The regulator stressed its commitment to understanding how the platform facilitated the proliferation of such content and to ensuring that corrective measures are implemented. 

The UK Information Commissioner’s Office (ICO) opened a formal investigation into X and xAI over whether Grok’s processing of personal data complies with UK data protection law, namely core data protection principles—lawfulness, fairness, and transparency—and whether its design and deployment included sufficient built-in protections to stop the misuse of personal data for creating harmful or manipulated images.

Canada’s Privacy Commissioner widened an existing investigation into X Corp. and opened a parallel probe into xAI to assess whether the companies obtained valid consent for the collection, use, and disclosure of personal information to create AI-generated deepfakes, including sexually explicit content.

In France, the Paris prosecutor’s office confirmed that it will widen an ongoing criminal investigation into X to include complicity in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity, and manipulation of an automated data processing system. The cybercrime unit of the Paris prosecutor has raided the French office of X as part of this expanded investigation. Musk and former CEO Linda Yaccarino have been summoned for voluntary interviews. X denied any wrongdoing and called the raid an ‘abusive act of law enforcement theatre’, while Musk described it as a ‘political attack.’

The European Commission has opened a formal investigation into X under the bloc’s Digital Services Act (DSA). The probe focuses on whether the company met its legal obligations to mitigate risks from AI-generated sexualised deepfakes and other harmful imagery produced by Grok — especially those that may involve minors or non-consensual content.

Brazil’s Federal Public Prosecutor’s Office, the National Data Protection Authority, and the National Consumer Secretariat have issued coordinated recommendations to X to stop Grok from producing and disseminating sexualised deepfakes, warning that Brazil’s civil liability rules could apply if harmful outputs continue and that the platform should be disabled until safeguards are in place.

In India, the Ministry of Electronics and Information Technology (Meity) demanded the removal of obscene and unlawful content generated by the AI tool and required a report on corrective actions within 72 hours. The ministry also ordered the company to review Grok’s technical and governance framework. The deadline has since passed, and neither the ministry nor the company has made any updates public.

Regulatory authorities in South Korea are examining whether Grok has violated personal data protection and safety standards by enabling the production of explicit deepfakes, and whether the matter falls within their legal remit.

Indonesia, Malaysia and the Philippines, however, have restored access after the platform introduced additional safety controls aimed at curbing the generation and editing of problematic content. 

The red lines. The reaction was so immediate and widespread precisely because it struck two rather universal nerves: the profound violation of privacy through non-consensual sexual imagery—a moral line nearly everyone agrees cannot be crossed—combined with the unique perils of AI, a trigger for acute governmental sensitivity. 

The big picture. Grok’s ongoing scrutiny shows that not all regulators are satisfied with the safeguards implemented so far, highlighting that remedies may need to be tailored to different jurisdictions. 

Diplo and the Geneva Internet Platform (GIP) organised the 11th edition of the Geneva Engage Awards, recognising the efforts of International Geneva actors in digital outreach and online engagement. 

This year’s theme, ‘Back to Basics: The Future of Websites in the AI Era,’ highlighted the shift toward users increasingly relying on AI assistants and AI-generated summaries that may not cite primary or the most relevant sources.

The opening segment of the event set the context for a shifting digital environment, exploring the transition from a search-based web to an answer-driven web and its implications for public engagement. It also offered a brief, transparent look at the logic behind this year’s award rankings, unpacking the metrics and mathematical models used to assess digital presence and accessibility. This led to the awards presentation, which recognised Geneva-based actors for their online engagement and influence.

The awards honoured organisations across three main categories: international organisations, NGOs, and permanent representations. They assessed efforts in social media engagement, web accessibility, and AI leadership, reinforcing Geneva’s role as a trusted source of reliable information as technology changes rapidly.

In the International Organisations category, the United Nations Conference on Trade and Development (UNCTAD) won first place. The United Nations Office at Geneva (UNOG) and the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA) were named runners-up for their strong digital presence and outreach.

Among non-governmental organisations, the International AIDS Society ranked first. It was followed by the Aga Khan Development Network (AKDN) and the International Union for Conservation of Nature (IUCN), both recognised as runners-up for their effective digital engagement.

In the Permanent Representations category, the Permanent Mission of the Republic of Indonesia to the United Nations Office and other international organisations in Geneva took first place. The Permanent Mission of the Republic of Rwanda and the Permanent Mission of France were named runners-up.

The Web Accessibility Award went to the Permanent Mission of Canada, while the Geneva AI Leadership Award was presented to the International Telecommunication Union (ITU).

After the ceremony, the focus shifted from recognition to exchange at a networking cocktail and a ‘knowledge bazaar.’ Participants circulated through interactive stations that translated abstract digital and AI concepts into tangible experiences. These included a guided walkthrough of what happens technically when a question is posed to an AI system; an exploration of the data and network analysis underpinning the Geneva Engage Awards, including a large-scale mapping of interconnections between Geneva-related websites; and discussions on the role of curated, human-enriched knowledge in feeding AI systems, with practical insights into how organisations can preserve and scale institutional expertise.

Other stations highlighted hands-on approaches to AI capacity-building through apprenticeships that emphasise learning by building AI agents, as well as the use of AI for post-event reporting. Together, these sessions showed how AI can transform fleeting discussions into structured, multilingual, and lasting knowledge. 

As we enter the new year, we bring you our annual outlook on AI and digital developments, featuring insights from our Executive Director. Drawing on our coverage of digital policy over the past year on the Digital Watch Observatory, as well as our professional experience and expertise, we highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.

Technologies. AI is becoming a commodity, affecting everyone—from countries competing for AI sovereignty to individual citizens. Equally important is the rise of bottom-up AI: in 2026, small to large language models will be able to run on corporate or institutional servers. Open-source development, a major milestone in 2025, is expected to become a central focus of future geostrategic competition.

Geostrategy. The good news is that, despite all geopolitical pressure, we still have an integrated global internet. However, digital fragmentation is accelerating, with continued filtering of social media and other services, and related developments clustering around three major hubs: the United States, China, and potentially the EU. Geoeconomics is becoming a critical dimension of this shift, particularly given the global footprint of major technology companies, and any fragmentation, including trade and taxation fragmentation, will inevitably affect them. Equally important is the role of ‘geo-emotions’: the growing disconnect between public sentiment and industry enthusiasm. While companies remain largely optimistic about AI, public scepticism is increasing, and this divergence may carry significant political implications.

Governance. The core governance dilemma remains whether national representatives—parliamentarians domestically and diplomats internationally—are truly able to protect citizens’ digital interests related to data, knowledge, and cybersecurity. While there are moments of productive discussion and well-run events, substantive progress remains limited. One positive note is that inclusive governance, at least in principle, continues through multistakeholder participation, though it raises its own unresolved questions.

Security. The adoption of the Hanoi Cybercrime Convention at the end of the year is a positive development, and substantive discussions at the UN continue despite ongoing criticism of the institution. While it remains unclear whether these processes are making us more secure, they are expanding the governance toolbox. At the same time, attention should extend beyond traditional concerns—such as cyberwarfare, terrorism, and crime—to emerging risks associated with the interconnection of AI systems through APIs. These points of integration create new interdependencies and potential backdoors for cyberattacks.

Human rights. Human rights are increasingly under strain, with recent policy shifts by technology companies and growing transatlantic tensions between the EU and the United States highlighting a changing landscape. While debates continue to focus heavily on bias and ethics, deeper human rights concerns—such as the rights to knowledge, education, dignity, meaningful work, and the freedom to remain human rather than optimised—receive far less attention. As AI reshapes society, the human rights community must urgently revisit its priorities, grounding them in the protection of life, dignity, and human potential.

Economy. The traditional three-pillar framework comprising security, development, and human rights is shifting toward economic and security concerns, with human rights being increasingly sidelined. Technological and economic issues, from access to rare earths to AI models, are now treated as strategic security matters. This trend is expected to accelerate in 2026, making the digital economy a central component of national security. Greater attention should be paid to taxation, the stability of the global trade system, and how potential fragmentation or disruption of global trade could impact the tech sector.

Standards. The lesson from social media is clear: without interoperable standards, users get locked into single platforms. The same risk exists for AI. To avoid repeating these mistakes, developing interoperable AI standards is critical. Ideally, individuals and companies should build their own AI, but where that isn’t feasible, platforms should at a minimum be interoperable, allowing seamless movement across providers such as OpenAI, Claude, or DeepSeek. This approach can foster innovation, competition, and user choice in the emerging AI-dominated ecosystem.

Content. The key issue for content in 2026 is the tension between governments and US tech, particularly regarding compliance with EU laws. At the core, countries have the right to set rules for content within their territories, reflecting their interests, and citizens expect their governments to enforce them. While media debates often focus on misuse or censorship, the fundamental question remains: can a country regulate content on its own soil? The answer is yes, and adapting to these rules will be a major source of tension going forward.

Development. Countries that are currently behind in AI aren’t necessarily losing. Success in AI is less about owning large models or investing heavily in hardware, and more about preserving and cultivating local knowledge. Small countries should invest in education, skills, and open-source platforms to retain and grow knowledge locally. Paradoxically, a slower entry into AI could be an advantage, allowing countries to focus on what truly matters: people, skills, and effective governance.

Environment. Concerns about AI’s impact on the environment and water resources persist. It is worth asking whether massive AI farms are truly necessary. Smaller AI systems could serve many of the same functions, or support training and education, reducing the need for energy- and water-intensive platforms. At a minimum, AI development should prioritise sustainability and efficiency, mitigating the risk of large-scale digital waste while still enabling practical benefits.

Weekly #247 From bytes to borders: The quest for digital sovereignty


23-30 January 2026


HIGHLIGHT OF THE WEEK

From bytes to borders: The quest for digital sovereignty

Governments have long debated controlling data, infrastructure, and technology within their borders. But there is a renewed sense of urgency, as geopolitical tensions are driving a stronger push to identify dependencies, build domestic capacity, and limit exposure to foreign technologies.

At the European level, France is pushing to make digital sovereignty measurable and actionable. Paris has proposed the creation of an EU Digital Sovereignty Observatory to map member states’ reliance on non-European technologies, from cloud services and AI systems to cybersecurity tools. Paired with a digital resilience index, the initiative aims to give policymakers a clearer picture of strategic dependencies and a stronger basis for coordinated action on procurement, investment, and industrial policy. 

In Burkina Faso, the focus is on reducing reliance on external providers while consolidating national authority over core digital systems. The government has launched a Digital Infrastructure Supervision Centre to centralise oversight of national networks and strengthen cybersecurity monitoring. New mini data centres for public administration are being rolled out to ensure that sensitive state data is stored and managed domestically. 

Sovereignty debates are also translating into decisions to limit, replace, or restructure the use of digital services provided by foreign entities. France has announced plans to phase out US-based collaboration platforms such as Microsoft Teams, Zoom, Google Meet, and Webex from public administration, replacing them with a domestically developed alternative, ‘Visio’. 

The EU has advanced its timeline for the IRIS2 satellite network, according to the EU Commissioner for Defence and Space, Andrius Kubilius. A planned multi-orbit constellation of 290 satellites, IRIS2 aims to begin initial government communication services by 2029, a year earlier than originally planned. The network is designed to provide encrypted communications for citizens, governments and public agencies. It also aims to reduce reliance on external providers, as Europe is ‘quite dependent on American services,’ per Kubilius.

In the USA, the TikTok controversy can also be seen through a sovereignty lens: rather than banning TikTok, authorities have pushed the platform to restructure its operations for the US market. A new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm is meant to be trained only on US user data to meet American regulatory requirements.

In more security-driven contexts, the concept is sharper still. Russia’s Security Council has recently labelled services such as Starlink and Gmail as national security threats, describing them as tools for ‘destructive information and technical influence.’ These assessments are expected to feed into Russia’s information security doctrine, reinforcing the treatment of digital services provided by foreign companies not as neutral infrastructure but as potential vectors of geopolitical risk.


The big picture. The common thread is clear: Digital sovereignty is now a key consideration for governments worldwide. The approaches may differ, but the goal remains largely the same – to ensure that a nation’s digital future is shaped by its own priorities and rules. But true independence is hampered by deeply embedded global supply chains, prohibitive costs of building parallel systems, and the risk of stifling innovation through isolation. While the strategic push for sovereignty is clear, untangling from interdependent tech ecosystems will require years of investment, migration, and adaptation. The current initiatives mark the beginning of a protracted and challenging transition.

IN OTHER NEWS THIS WEEK

This week in AI governance

China. China is planning to launch space-based AI data centres over the next five years. State aerospace contractor CASC has committed to building gigawatt-class orbital computing hubs that integrate cloud, edge and terminal capabilities, enabling in-orbit processing of Earth-generated data. The news comes on the heels of Elon Musk’s announcement at WEF 2026 that SpaceX plans to launch solar-powered AI data centre satellites within the next two to three years.

The UN. The UN has raised the alarm about AI-driven threats to child safety, highlighting how AI systems can accelerate the creation, distribution, and impact of harmful content, including sexual exploitation, abuse, and manipulation of children online. As smart toys, chatbots, and recommendation engines increasingly shape youth digital experiences, the absence of adequate safeguards risks exposing a generation to novel forms of exploitation and harm.  


Child safety online: Bans, trials, and investigations

The momentum on banning children from accessing social media continues, as France’s National Assembly has advanced legislation to ban children under 15 from accessing social media, voting substantially in favour of a bill that would require platforms to block under‑15s and enforce age‑verification measures. The bill now goes to the Senate for approval, with targeted implementation before the next school year.

In India, the state governments of Goa and Andhra Pradesh are exploring similar restrictions, considering proposals to bar social media use for children under 16 amid rising concern about online safety and youth well‑being. Previously, in December, the Madras High Court urged India’s federal government to consider an Australia-style ban.

In a first for social media platforms, a landmark trial in Los Angeles sees Meta (Instagram and Facebook), YouTube (Google/Alphabet), Snapchat, and TikTok accused of intentionally designing their apps to be addictive, with serious consequences for young users’ mental health. As the trial began, Snap Inc. and TikTok had already reached confidential settlements, leaving Meta and YouTube as the remaining defendants in front of a jury.

Separately, in federal court, Meta, Snap, YouTube and TikTok asked a judge to dismiss school districts’ lawsuits that seek damages for costs tied to student mental health challenges.

In both cases, the companies are arguing that Section 230 of US law shields them from liability, while the plaintiffs counter that their claims focus on allegedly addictive design features rather than user-generated content. 

Legal experts and advocates are watching closely, noting that the outcomes could set a precedent for thousands of related lawsuits and ultimately influence corporate design practices.

Roblox is under formal investigation in the Netherlands, as the Autoriteit Consument & Markt (ACM) has opened a formal investigation to assess whether Roblox is taking sufficient measures to protect children and teenagers who use the service. The probe will examine Roblox’s compliance with the European Union’s Digital Services Act (DSA), which obliges online services to implement appropriate and proportionate measures to ensure safety, privacy and security for underage users, and could take up to a year.

Regulatory scrutiny can also bear fruit: Meta, which was under intense scrutiny by regulators and civil society over chatbots that previously permitted provocative or exploitative conversations with minors, is pausing teenagers’ access to its AI characters globally while it redesigns the experience with enhanced safety and parental controls. The company said teens will be blocked from interacting with certain AI personas until a revised platform is ready, guided by principles akin to a PG-13 rating system to limit exposure to inappropriate content. 

Bottom line. The pressure on platforms is mounting, and there is no indication that it will let up.


The Grok deepfakes aftershocks

The fallout from Grok’s misuse to produce non-consensual sexualised and deepfake images continues.

The European Commission has opened a formal investigation into X under the bloc’s Digital Services Act (DSA). The probe focuses on whether the company met its legal obligations to mitigate risks from AI-generated sexualised deepfakes and other harmful imagery produced by Grok — especially those that may involve minors or non-consensual content. 

Regulatory authorities in South Korea are examining whether Grok has violated personal data protection and safety standards by enabling the production of explicit deepfakes, and whether the matter falls within their legal remit.

However, Malaysian authorities, who temporarily blocked access to Grok in early January, have restored access after the platform introduced additional safety controls aimed at curbing the generation and editing of problematic content. 

Why does it matter? Grok’s ongoing scrutiny shows that not all regulators are satisfied with the safeguards implemented so far, highlighting that remedies may need to be tailored to different jurisdictions.



LOOKING AHEAD

11th Geneva Engage Awards

Diplo and the Geneva Internet Platform (GIP) are organising the 11th edition of the Geneva Engage Awards, recognising the efforts of International Geneva actors in digital outreach and online engagement. 

This year’s theme, ‘Back to Basics: The Future of Websites in the AI Era,’ responds to a shift in which users increasingly rely on AI assistants and AI-generated summaries that may not cite primary or the most relevant sources.

The awards honour organisations across three main categories: international organisations, NGOs, and permanent representations. They assess efforts in social media engagement, web accessibility, and AI leadership, reinforcing Geneva’s role as a trusted source of reliable information as technology changes rapidly.

Tech attaché briefing: The future of the Internet Governance Forum (IGF)

The Geneva Internet Platform (GIP) is organising a briefing for tech attachés, which will look at the role and evolution of the IGF over the past 20 years and discuss ways to implement the requests of the General Assembly. The event will begin with a briefing and exchange among diplomats, followed by an open dialogue with the IGF Secretariat. Attendance is by invitation only.



READING CORNER

As AI content floods the web, how do we know what’s real? Explore the case for a “Human-Certified” label and why authentic human thought is becoming our most valuable digital asset.


Geneva’s AI footprint. Modern AI platforms are trained on vast amounts of online information, including content from websites, blogs, and publications.

Weekly #246 WEF 2026 in Davos: Digital governance discussions shift from principles to ‘infrastructure politics’


16-23 January 2026


HIGHLIGHT OF THE WEEK

WEF 2026 in Davos: Digital governance discussions shift from principles to ‘infrastructure politics’

One of this week’s biggest highlights was the World Economic Forum’s annual meeting in Davos (19–23 January 2026), held under the banner ‘A Spirit of Dialogue.’ But while the headline was dialogue, the subtext was more related to control: who gets to build, run, and police the digital systems the world now treats as essential infrastructure?

Across AI-heavy sessions, the talk has moved beyond hype and into a more complex question: what legitimises large-scale AI rollouts when they draw on scarce resources and concentrate power? Microsoft CEO Satya Nadella argued that this legitimacy is fragile, warning that the public could withdraw its ‘social licence’ for AI’s energy use unless the benefits are clear and widely felt, delivering tangible gains in areas like health and education.


On the corporate side, many business leaders in Davos made the same point: moving from a small AI trial to a tool that runs safely across an entire company is proving much harder than expected. The biggest barriers are often cleaning and connecting data, finding and training the right people, and changing internal workflows so AI outputs are checked, approved, and acted on in a controlled way.

Meanwhile, ‘sovereignty’ surfaced as an engineering and legal puzzle: where can data and compute physically sit, and under whose rules? In the session ‘Digital Embassies for Sovereign AI’, participants argued for a standardised framework, likened to a ‘Vienna Convention’, that would allow countries to use overseas data centre capacity while still asserting control over sensitive datasets and access conditions.

What is a ‘digital embassy’?

In a recent blog on diplomacy.edu, Jovan Kurbalija, Diplo’s executive director, analyses the term ‘digital embassy’ and argues that it is widely misused. He explains that initiatives often labelled this way, such as state-run, sovereign data-backup facilities hosted abroad (e.g. Estonia’s arrangement in Luxembourg), do not function like embassies, which represent states and conduct diplomacy, but rather as resilience infrastructure designed to preserve critical data, continuity of government, and ‘national memory’ in crises. Read more

The debates in Davos also exposed a widening fault line in AI policy. Some leaders called for lighter, iterative rules that can evolve ‘at the speed of code’, while others defended risk-based guardrails and market-wide harmonisation to prevent fragmentation.

In another session, ‘Is Europe’s Tech Sovereignty Feasible?’, it was argued that a single framework beats ’27 different’ national regimes, even if the compliance debate remains politically charged.

There were also some discussions on the governance of digital finance. Debates on tokenisation and new payment rails underscored a familiar trade-off: efficiency and innovation versus sovereignty, consumer protection, and systemic risk.

Online harms provided a sharp reminder of what’s at stake when governance fails. In a session focused on fraud, panellists described scam ecosystems that blend online crime with coercion and trafficking, summed up in a stark line: cyber fraud is ‘no longer just about stolen money… it’s about stolen lives.’

Taken together, WEF 2026 provided a roadmap of where the pressure is building: from lofty AI principles toward practical control over infrastructure, accountability, and cross-border rules. The prevailing outcome was a recognition that trust in AI will hinge on demonstrating real-world benefits, integrating human responsibility and oversight into business processes, and resolving sovereignty questions about where data and compute reside. At the same time, the meeting underscored a growing risk of regulatory and geopolitical fragmentation, and a parallel push to strengthen cooperative mechanisms, from harmonised frameworks to multistakeholder forums, to keep security, rights, and resilience from falling behind the speed of deployment.

IN OTHER NEWS THIS WEEK

This week in AI governance

EU. EU policymakers are calling for faster AI deployment across the bloc, especially among SMEs and scale-ups, backing the European Commission’s ‘Apply AI Strategy’ and an ‘AI-first’ mindset for business and public services. The European Economic and Social Committee argues the EU’s edge should be ‘trustworthy’ and human-centric AI, but warns that slow implementation, fragmented national approaches, and limited private investment are holding the EU back. Proposed fixes include easier access to funding, lighter administrative burdens, stronger regional ecosystems, investment in skills and procurement, and support for frontier AI to reduce dependence on non-EU models.

USA-California. California Attorney General Rob Bonta has sent a cease and desist letter to Elon Musk’s xAI, ordering it to stop creating and sharing non-consensual sexual deepfakes, following a spike in explicit AI-generated images circulating on X. State officials say Grok enabled the manipulation of images of women and children without consent, potentially violating state decency laws and a newer deepfake-pornography ban. Regulators point to research suggesting Grok users were sharing more non-consensual sexual imagery than users elsewhere. xAI has introduced partial restrictions, though authorities say the real-world impact remains uncertain as investigations continue.

South Korea. New US tariffs on advanced AI-oriented chips are prompting South Korea’s semiconductor industry to assess supply-chain risks and potential trade fallout, with the measure widely interpreted as an attempt to constrain the re-export of AI accelerators to China. The tariff is set at 25% for certain advanced chips imported into the US and then re-exported. It could affect high-end processors that rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix. However, officials argue that much of South Korea’s memory shipments to the US are destined for domestic data centres and may be exempt. Seoul has launched consultations with industry and US counterparts to clarify exposure and ensure that Korean firms receive treatment comparable to that of competitors in Taiwan, Japan, and the EU.

EU. The European Commission has signalled it may escalate action over concerns that Grok-related ‘nudification’ content is spreading on X, with EU officials stressing that non-consensual sexualised imagery, especially involving minors, is unacceptable. The EU tech chief, Henna Virkkunen, told MEPs that existing EU digital rules provide tools to respond, with enforcement under the Digital Services Act and child-protection priorities. While a formal investigation has not yet been launched, the Commission is examining potential DSA breaches and has reportedly ordered X to retain internal information related to Grok until the end of 2026.

UK. The UK government has appointed two ‘AI Champions’ from industry, Harriet Rees (Starling Bank) and Dr Rohit Dhawan (Lloyds Banking Group), to support safe and effective AI adoption across financial services. The move reflects how mainstream AI already is in the sector (around three-quarters of UK financial firms reportedly use it), alongside official estimates of large potential productivity gains by 2030. The Champions’ remit includes accelerating ‘trusted’ adoption, removing barriers to scale, protecting consumers, and supporting financial stability, linking innovation goals to the sector’s risk-management and supervisory expectations.


Jeff Bezos to enter satellite broadband race

Blue Origin, founded by Jeff Bezos, has announced plans to launch a global satellite internet network called TeraWave, beginning in the US. The project aims to deploy more than 5,400 satellites to deliver high-speed data services.

In the US, TeraWave will target data centres, businesses and government users rather than households. Blue Origin says the system could reach speeds of up to 6 terabits per second, exceeding the speeds of current commercial satellite services.

The announcement positions the US company as a direct rival to Starlink, SpaceX’s satellite internet service. Starlink already operates thousands of satellites and focuses heavily on consumer internet access across the US and beyond.

Blue Origin plans to begin launching TeraWave satellites from the US by the end of 2027. The announcement adds to the intensifying competition in satellite communications as demand for global connectivity continues to grow.

Why it matters: At WEF 2026, ‘infrastructure politics’ was shorthand for the power struggle over who builds and governs essential digital systems, and Blue Origin’s TeraWave plan underscores that satellite internet is increasingly treated as strategic infrastructure rather than just a commercial connectivity service.


Child online safety stays on the global agenda as the UK considers an under-16 social media ban

Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures. The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.

Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.

Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.

Zooming out: Australia’s under-16 social media ban is quickly becoming a reference point in a wider global shift, as more governments weigh age-based restrictions and tougher platform duties, signalling that youth online safety is moving from voluntary safeguards toward hard law.


European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament. The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus. MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the European Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

Why it matters: The EU’s push to require payment for journalistic content used in model training is part of a widening global trend, from licensing deals to proposed ‘training-use’ compensation rules, as governments look to rebalance the economics of AI and protect the sustainability of independent newsrooms.


UNESCO raises alarm over government use of internet shutdowns

UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods. Recent data indicate that more than 300 shutdowns have occurred across 54 countries over the past two years, with 2024 the most severe year since 2016.

According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life. Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability.

Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news. Instead of improving public order, shutdowns fracture information flows and contribute to the spread of unverified or harmful content, increasing confusion and mistrust among affected populations.

UNESCO continues to call on governments to adopt policies that strengthen connectivity and digital access rather than imposing barriers. The organisation argues that maintaining open and reliable internet access during crises remains central to protecting democratic rights and safeguarding the integrity of information ecosystems.

Why it matters: As internet shutdowns spread worldwide, especially around protests and elections, they are becoming a default ‘crisis tool’ for states, with mounting costs for rights, public trust, and access to verified information, and growing calls for stronger international accountability.


LOOKING AHEAD

International Submarine Cable Resilience Summit 2026

The International Submarine Cable Resilience Summit 2026 will take place in Porto, Portugal (2–3 February 2026), bringing together governments, regulators, industry, investors, cable operators/experts, and international organisations to strengthen cooperation on protecting the submarine telecom cables that underpin global connectivity.

More info on our dig.watch EVENTS page



READING CORNER

The term ‘digital embassy’ is a misleading description for initiatives like Estonia’s sovereign data backup located in Luxembourg. True embassies represent states and conduct diplomacy, while these facilities serve as resilience infrastructure for preserving critical data in crises. Read more


Headlines predict mass AI job loss, but the data tells a nuanced story. Discover why research from the AI Index, OECD, and ILO suggests public fear is outpacing observed reality. Read more


Greenland-related tensions could trigger EU retaliation, pushing US tech to lobby for calmer transatlantic relations to protect EU revenue, cloud/AI growth, and data-flow stability. Read more


OpenAI’s ChatGPT Go launch highlights growing pressure to monetise AI without ads, as investor expectations reshape sustainable business models. Read more

Weekly #245 The Grok shock: How AI deepfakes triggered reactions worldwide


9-16 January 2026


HIGHLIGHT OF THE WEEK

The Grok shock: How AI deepfakes triggered reactions worldwide

In the last week, a regulatory firestorm engulfed Grok, the AI tool built into Elon Musk’s X platform, as reports surfaced that Grok was being used to produce non-consensual sexualised and deepfake images, including depictions of individuals undressed or in compromising scenarios without their consent.

The backlash was swift and severe. The UK’s Ofcom launched an investigation under the Online Safety Act, to determine whether X has complied with its duties to protect people in the UK from content that is illegal in the country. UK Prime Minister Keir Starmer condemned the ‘disgusting’ outputs. The EU declared the content, especially involving children, had ‘no place in Europe.’ Southeast Asia acted decisively: Malaysia and Indonesia blocked Grok entirely, citing obscene image generation, and the Philippines swiftly followed suit on child-protection grounds.

Under pressure, X announced tightened controls on Grok’s image-editing capabilities. The platform said it had introduced technological safeguards to block the generation and editing of sexualised images of real people in jurisdictions where such content is illegal. 

However, regulatory authorities signalled that this step, while positive, would not halt oversight. 

In the UK, Ofcom emphasised that its formal investigation into X’s handling of Grok and the emergence of deepfake imagery will continue, even as it welcomes the platform’s policy changes. The regulator stressed its commitment to understanding how the platform facilitated the proliferation of such content and to ensuring that corrective measures are implemented.

Canada’s Privacy Commissioner widened an existing investigation into X Corp. and opened a parallel probe into xAI to assess whether the companies obtained valid consent for the collection, use, and disclosure of personal information to create AI-generated deepfakes, including sexually explicit content.

The red lines. The reaction was so immediate and widespread precisely because it struck two rather universal nerves: the profound violation of privacy through non-consensual sexual imagery—a moral line nearly everyone agrees cannot be crossed—combined with the unique perils of AI, a trigger for acute governmental sensitivity. 

IN OTHER NEWS THIS WEEK

This week in AI governance

Spain. Spain’s cabinet has approved draft legislation aimed at curbing AI-generated deepfakes and tightening consent rules on the use of images and voices. The bill sets 16 as the minimum age for consenting to image use and prohibits the reuse of online images or AI-generated likenesses without explicit permission — including for commercial purposes — while allowing clear, labelled satire or creative works involving public figures. The reform reinforces child protection measures and mirrors broader EU plans to criminalise non-consensual sexual deepfakes by 2027. Prosecutors are also examining whether certain AI-generated content could qualify as child pornography under Spanish law. 

Malta. The Maltese government is preparing tougher legal measures to tackle abuses of deepfake technology. Current legislation is under review with proposals to introduce penalties for the misuse of AI in harassment, blackmail, and bullying cases, building on existing cyberbullying and cyberstalking laws by extending similar protections to harms stemming from AI-generated content. Officials emphasise that while AI adoption is a national priority, robust safeguards against abusive use are essential to protect individuals and digital rights.

Morocco. Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation. The plan aims to add an estimated $10 billion to GDP by 2030, create tens of thousands of AI-related jobs, and integrate AI across industry and government, including modernising public services and strengthening technological autonomy. Central to the strategy is the launch of the JAZARI ROOT Institute, the core hub of a planned network of AI centres of excellence that will bridge research, regional innovation, and practical deployment; additional initiatives include sovereign data infrastructure and partnerships with global AI firms. Authorities also emphasise building national skills and trust in AI, with governance structures and legislative proposals expected to accompany implementation.

Taiwan. Taiwan’s government has set an ambitious goal to train 500,000 AI professionals by 2040 as part of its long-term AI development strategy, backed by a NT$100 billion (approximately US$3.2 billion) venture fund and a national computing centre initiative. President Lai Ching-te announced the target at a 2026 AI Talent Forum in Taipei, highlighting the need for broad AI literacy across disciplines to sustain national competitiveness, support innovation ecosystems, and accelerate digital transformation in small and medium-sized enterprises. The government is introducing training programmes for students and public servants and emphasising cooperation between industry, academia, and government to develop a versatile AI talent pipeline. 

The EU and the USA. The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring. The principles are relevant to pharmaceutical developers, marketing authorisation applicants, and holders, and will form the basis for future AI guidance in different jurisdictions. 


Internet access under pressure in Iran and Uganda

As anti-government protests deepened across Iran in early January 2026, nationwide communications were brought to an almost complete standstill when authorities enacted a near-total shutdown of the internet. Amid these conditions, some Iranians attempted to bypass government controls by using Elon Musk’s Starlink satellite internet service, which remained partially accessible despite Tehran’s efforts to ban and disrupt it. Latest reports suggest that security forces in parts of Tehran have started door-to-door operations to remove satellite dishes.

Separately, Ugandan authorities ordered restrictions on internet access ahead of the country’s presidential election on January 15, 2026. The Uganda Communications Commission directed telecom providers to suspend public internet access on the eve of the vote, citing concerns about misinformation, electoral fraud and incitement to violence. Critics, including civil liberties groups and opposition figures, argued that the blackout was part of a broader pattern of repression.

Zooming out. In both contexts — Tehran and Kampala — the suspension of internet access illustrates how control over information flows is a potent instrument in high-stakes political contests.


Worldwide focus on child safety online continues

The momentum behind policies to restrict children’s access to social media has carried from 2025 into early 2026. In Australia, the first country to enact such a ban, social media companies reported having deactivated about 4.7 million accounts believed to belong to users under 16 within the first month of enforcement.

In France, policymakers are debating proposals that would restrict social media access for children under 15. The country’s health watchdog has highlighted research pointing to a range of documented negative effects of social media use on adolescent mental health, noting that online platforms amplify harmful pressures, cyberbullying and unrealistic beauty standards. 

In the UK, the Prime Minister has signalled that he is open to age‑based restrictions similar to Australia’s approach, as well as proposals to limit screen time or the design features of platforms used by children. Support for stricter regulation has emerged across party lines, and the issue is being debated within Parliament. 

The future of bans. The number of countries eyeing a ban is climbing, and the list is far from final. The world is watching Australia: its success or struggle will shape who follows next.


Chips and geopolitics

The global semiconductor industry entered 2026 amid developments that originated in late 2025.

On January 14, 2026, President Trump signed a presidential proclamation imposing a 25% tariff on certain advanced computing and AI‑oriented chips, including high‑end products such as Nvidia’s H200 and AMD’s MI325X, under a national security review. 

Officials described the measure as a ‘phase one’ step aimed at strengthening domestic production and reducing dependence on foreign manufacturers, particularly those in Taiwan, while also capturing revenue from imports that do not contribute to US manufacturing capacity. The administration suggested that further actions could follow depending on how negotiations with trading partners and the industry evolve.

Just a day later, the USA and Taiwan announced a landmark semiconductor-focused trade agreement. Under the deal, tariffs on a broad range of Taiwanese exports will be reduced or eliminated, while Taiwanese semiconductor companies, including leading firms like TSMC, have committed to invest at least $250 billion in US chip manufacturing, AI, and energy projects, supported by an additional $250 billion in government-backed credit.

The protracted legal and political dispute over Nexperia, a Netherlands‑based semiconductor manufacturer owned by China’s Wingtech Technology, also continues. The dispute erupted in autumn 2025, when Dutch authorities briefly seized control of Nexperia, citing national security concerns about potential technology transfers to China. Nexperia’s European management and Wingtech representatives are now squaring off in an Amsterdam court, which is deciding whether to launch a formal investigation into alleged mismanagement. The court is expected to rule within four weeks.

On the horizon. As countries jockey for control over critical semiconductors, alliances and rivalries are clashing, and 2026 promises even more high-stakes moves.


Western cyber agencies issue guidance on cyber risks to industrial sectors

A group of international cybersecurity agencies has released new technical guidance addressing the security of operational technology (OT) used in industrial and critical infrastructure environments.

The guidance, led by the UK’s National Cyber Security Centre (NCSC), provides recommendations for securely connecting industrial control systems, sensors, and other operational equipment that support essential services.

According to the co-authoring agencies, industrial environments are being targeted by a range of actors, including cybercriminal groups and state-linked actors. The guidance references a joint advisory issued in June 2023 on China-linked cyber activity, as well as a more recent advisory from the US Cybersecurity and Infrastructure Security Agency (CISA) that notes opportunistic activity by pro-Russia hacktivist groups affecting critical infrastructure globally.


LOOKING AHEAD

World Economic Forum Annual Meeting 2026

The World Economic Forum Annual Meeting 2026 will take place 19–23 January in Davos‑Klosters, Switzerland. Bringing together leaders from government, business, civil society, academia, and culture, the meeting provides a platform to discuss global economic, technological, and societal challenges. A central theme will be the technological transformation—from AI and quantum computing to next-generation biotech and energy systems—reshaping economies, work, and growth. 

Our team will be reporting from the event, covering key discussions and insights on developments shaping the global agenda. Be sure to bookmark the dedicated page.



READING CORNER

On 7 January, the USA withdrew from a slate of international organisations and initiatives. Despite the wider retrenchment, the technology and digital governance ecosystem was largely spared, as most major tech-relevant bodies remained on the ‘white list.’ The bigger uncertainty lies with the US decision to step back from UNCTAD and UN DESA as this could still create knock-on effects for digital initiatives linked to these organisations, Dr Jovan Kurbalija writes.


In 2026, Switzerland will have to navigate a critical and highly uncertain AI transformation, Dr Jovan Kurbalija argues. With so much at stake and future AI trajectories unclear, the nation must build its resilience on a distinctly Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, all anchored in the enduring values and practices outlined here.


In her new article, Dr Anita Lamprecht examines how sci-fi narratives have been inverted in contemporary AI discourse, increasingly positioning technology beyond regulation and human governance. She introduces the concept of the ‘science fiction native’ (sci-fi native) to describe how immersion in speculative imaginaries over several generations is influencing legal and governance assumptions about control, responsibility, and social contracts.

Weekly #244 Looking ahead: Our annual AI and digital forecast


2-9 January 2026


HIGHLIGHT OF THE WEEK

Looking ahead: Our annual AI and digital forecast

As we enter the new year, we begin this issue of the Weekly newsletter with our annual outlook on AI and digital developments, featuring insights from our Executive Director. Drawing on our coverage of digital policy over the past year on the Digital Watch Observatory, as well as our professional experience and expertise, we highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.

Technologies. AI is becoming a commodity, affecting everyone—from countries competing for AI sovereignty to individual citizens. Equally important is the rise of bottom-up AI: in 2026, language models from small to large will be able to run on corporate or institutional servers. Open-source development, which reached a major milestone in 2025, is expected to become a central focus of future geostrategic competition.

Geostrategy. The good news is that, despite all the geopolitical pressure, we still have an integrated global internet. However, digital fragmentation is accelerating, with continued filtering of social media and other services, and related developments coalescing around three major hubs: the United States, China, and potentially the EU. Geoeconomics is becoming a critical dimension of this shift, particularly given the global footprint of major technology companies; any fragmentation, including of trade and taxation, will inevitably affect them. Equally important is the role of ‘geo-emotions’: the growing disconnect between public sentiment and industry enthusiasm. While companies remain largely optimistic about AI, public scepticism is increasing, and this divergence may carry significant political implications.

Governance. The core governance dilemma remains whether national representatives—parliamentarians domestically and diplomats internationally—are truly able to protect citizens’ digital interests related to data, knowledge, and cybersecurity. While there are moments of productive discussion and well-run events, substantive progress remains limited. One positive note is that inclusive governance, at least in principle, continues through multistakeholder participation, though it raises its own unresolved questions.

Security. The adoption of the Hanoi Cybercrime Convention at the end of the year is a positive development, and substantive discussions at the UN continue despite ongoing criticism of the institution. While it remains unclear whether these processes are making us more secure, they are expanding the governance toolbox. At the same time, attention should extend beyond traditional concerns—such as cyberwarfare, terrorism, and crime—to emerging risks associated with interconnecting AI systems through APIs. These points of integration create new interdependencies and potential backdoors for cyberattacks.

Human rights. Human rights are increasingly under strain, with recent policy shifts by technology companies and growing transatlantic tensions between the EU and the United States highlighting a changing landscape. While debates continue to focus heavily on bias and ethics, deeper human rights concerns—such as the rights to knowledge, education, dignity, meaningful work, and the freedom to remain human rather than optimised—receive far less attention. As AI reshapes society, the human rights community must urgently revisit its priorities, grounding them in the protection of life, dignity, and human potential.

Economy. The traditional three-pillar framework comprising security, development, and human rights is shifting toward economic and security concerns, with human rights being increasingly sidelined. Technological and economic issues, from access to rare earths to AI models, are now treated as strategic security matters. This trend is expected to accelerate in 2026, making the digital economy a central component of national security. Greater attention should be paid to taxation, the stability of the global trade system, and how potential fragmentation or disruption of global trade could impact the tech sector.

Standards. The lesson from social media is clear: without interoperable standards, users get locked into single platforms. The same risk exists for AI. To avoid repeating these mistakes, developing interoperable AI standards is critical. Ideally, individuals and companies should build their own AI, but where that isn’t feasible, platforms should at a minimum be interoperable, allowing seamless movement across providers such as OpenAI, Claude, or DeepSeek. This approach can foster innovation, competition, and user choice in the emerging AI-dominated ecosystem.

Content. The key issue for content in 2026 is the tension between governments and US tech, particularly regarding compliance with EU laws. At the core, countries have the right to set rules for content within their territories, reflecting their interests, and citizens expect their governments to enforce them. While media debates often focus on misuse or censorship, the fundamental question remains: can a country regulate content on its own soil? The answer is yes, and adapting to these rules will be a major source of tension going forward.

Development. Countries that are currently behind in AI aren’t necessarily losing. Success in AI is less about owning large models or investing heavily in hardware, and more about preserving and cultivating local knowledge. Small countries should invest in education, skills, and open-source platforms to retain and grow knowledge locally. Paradoxically, a slower entry into AI could be an advantage, allowing countries to focus on what truly matters: people, skills, and effective governance.

Environment. Concerns about AI’s impact on the environment and water resources persist. It is worth asking whether massive AI farms are truly necessary. Small AI systems could serve as extensions of these processes or as support for training and education, reducing the need for energy- and water-intensive platforms. At a minimum, AI development should prioritise sustainability and efficiency, mitigating the risk of large-scale digital waste while still enabling practical benefits.

IN OTHER NEWS THIS WEEK

This week in AI governance

Italy. Italy’s antitrust authority has formally closed its investigation into the Chinese AI developer DeepSeek after the company agreed to binding commitments to make risks from AI hallucinations — false or misleading outputs — clearer and more accessible to users. Regulators stated that DeepSeek will enhance transparency, providing clearer warnings and disclosures tailored to Italian users, thereby aligning its chatbot deployment with local regulatory requirements. If these conditions aren’t met, enforcement action under Italian law could follow.

UK. Britain has escalated pressure on Elon Musk’s social media platform X and its integrated AI chatbot Grok after reports that the tool was used to generate sexually explicit and non‑consensual deepfake images of women and minors. UK technology officials have publicly demanded that X act swiftly to prevent the spread of such content and ensure compliance with the Online Safety Act, which requires platforms to block unsolicited sexual imagery. Musk, however, has suggested that users who enter such prompts should be held liable, a stance criticised as shifting responsibility. Critics note that the platform should nonetheless be required to embed stronger safeguards.


Brussels bets on open-source to boost tech sovereignty

The European Commission is preparing a strategy to commercialise European open-source software to strengthen digital sovereignty and reduce reliance on foreign technology providers. 

The upcoming strategy, expected alongside the Cloud and AI Development Act in early 2026, will prioritise community upscaling, industrial deployment, and market integration. Strengthening developer communities, supporting adoption across various sectors, and ensuring market competitiveness are key objectives. Governance reforms and improved supply chain security are also planned to address vulnerabilities in widely used open-source components, enhancing trust and reliability.

Financial sustainability will be a key focus, with public sector partnerships encouraged to ensure the long-term viability of projects. By providing stable support and fostering collaboration between government and industry, the strategy seeks to create an economically sustainable open-source ecosystem.

The big picture. Although EU funding has fostered innovation, commercial-scale success has often occurred outside the EU. By focusing on open-source solutions developed within the EU, Brussels aims to strengthen Europe’s technological autonomy, retain the benefits of domestic innovation, and foster a resilient and competitive digital landscape.


USA pulls out of several international bodies

In a new move, US President Trump issued a memorandum directing the US withdrawal from numerous international organisations, conventions, and treaties deemed contrary to the interests of the USA.

The list includes 35 non-UN entities (e.g. the GFCE and the Freedom Online Coalition) and 31 UN bodies (e.g. the Department of Economic and Social Affairs, the UN Conference on Trade and Development and the UN Framework Convention on Climate Change (UNFCCC)). 

Why does it matter? The order was not a surprise, following the Trump administration’s 2025 retreat from the Paris Agreement, the WHO and other international organisations focusing on climate change, sustainable development, and identity issues. Two initiatives in the technology and digital governance ecosystem were explicitly dropped: the Freedom Online Coalition (FOC) and the Global Forum on Cyber Expertise (GFCE). There is also some uncertainty about the meaning and implications of the US ‘withdrawal’ from UNCTAD and UN DESA, given the roles these entities play in initiatives such as WSIS and Agenda 2030 follow-up processes, the Internet Governance Forum (IGF), and data governance.



LOOKING AHEAD

The year has just begun, and the digital policy calendar is still taking shape. To stay up to date with upcoming events and discussions shaping the digital landscape, we encourage you to follow our calendar of events at dig.watch/events.



READING CORNER

Weekly #243 What the WSIS+20 outcome means for global digital governance


12-19 December 2025


HIGHLIGHT OF THE WEEK

From review to recalibration: What the WSIS+20 outcome means for global digital governance

The WSIS+20 review, conducted 20 years after the World Summit on the Information Society, concluded in New York with the adoption of a high-level outcome document by the UN General Assembly. The review assesses progress toward building a people-centred, inclusive, and development-oriented information society, highlights areas needing further effort, and outlines measures to strengthen international cooperation.

A major institutional decision was to make the Internet Governance Forum (IGF) a permanent UN body. The outcome also includes steps to strengthen its functioning: broadening participation—especially from developing countries and underrepresented communities—enhancing intersessional work, supporting national and regional initiatives, and adopting innovative and transparent collaboration methods. The IGF Secretariat is to be strengthened, sustainable funding ensured, and annual reporting on progress provided to UN bodies, including the Commission on Science and Technology for Development (CSTD).

Negotiations addressed the creation of a governmental segment at the IGF. While some member states supported this as a way to foster more dialogue among governments, others were concerned it could compromise the IGF’s multistakeholder nature. The final compromise encourages dialogue among governments with the participation of all stakeholders.

Beyond the IGF, the outcome confirms the continuation of the annual WSIS Forum and calls for the United Nations Group on the Information Society (UNGIS) to increase efficiency, agility, and membership. 

WSIS action line facilitators are tasked with creating targeted implementation roadmaps linking WSIS action lines to Sustainable Development Goals (SDGs) and Global Digital Compact (GDC) commitments. 

UNGIS is requested to prepare a joint implementation roadmap to strengthen coherence between WSIS and the Global Digital Compact, to be presented to CSTD in 2026. The Secretary-General will submit biennial reports on WSIS implementation, and the next high-level review is scheduled for 2035.

The document places closing digital divides at the core of the WSIS+20 agenda. It addresses multiple aspects of digital exclusion, including accessibility, affordability, quality of connectivity, inclusion of vulnerable groups, multilingualism, cultural diversity, and connecting all schools to the internet. It stresses that connectivity alone is insufficient, highlighting the importance of skills development, enabling policy environments, and human rights protection.

The outcome also emphasises open, fair, and non-discriminatory digital development, including predictable and transparent policies, legal frameworks, and technology transfer to developing countries. Environmental sustainability is highlighted, with commitments to leverage digital technologies while addressing energy use, e-waste, critical minerals, and international standards for sustainable digital products.

Human rights and ethical considerations are reaffirmed as fundamental. The document stresses that rights online mirror those offline, calls for safeguards against adverse impacts of digital technologies, and urges the private sector to respect human rights throughout the technology lifecycle. It addresses online harms such as violence, hate speech, misinformation, cyberbullying, and child sexual exploitation, while promoting media freedom, privacy, and freedom of expression.

Capacity development and financing are recognised as essential. The document highlights the need to strengthen digital skills, technical expertise, and institutional capacities, including in AI. It invites the International Telecommunication Union to establish an internal task force to assess gaps and challenges in financial mechanisms for digital development and to report recommendations to CSTD by 2027. It also calls on the UN Inter-Agency Working Group on AI to map existing capacity-building initiatives, identify gaps, and develop programs such as an AI capacity-building fellowship for government officials and research programmes.

Finally, the outcome underscores the importance of monitoring and measurement, requesting a systematic review of existing ICT indicators and methodologies by the Partnership on Measuring ICT for Development, in cooperation with action line facilitators and the UN Statistical Commission. The Partnership is tasked with reporting to CSTD in 2027. Overall, the CSTD, ECOSOC, and the General Assembly maintain a central role in WSIS follow-up and review.

The final text reflects a broad compromise and was adopted without a vote, though some member states and groups raised concerns about certain provisions.

IN OTHER NEWS LAST WEEK

This week in AI governance

El Salvador. El Salvador has partnered with xAI to launch the world’s first nationwide AI-powered education programme, deploying the Grok model across more than 5,000 public schools to deliver personalised, curriculum-aligned tutoring to over one million students over the next two years. The initiative will support teachers with adaptive AI tools while co-developing methodologies, datasets and governance frameworks for responsible AI use in classrooms, aiming to close learning gaps and modernise the education system. President Nayib Bukele described the move as a leap forward in national digital transformation. 

BRICS. Talks on AI governance within the BRICS bloc have deepened as member states seek to harmonise national approaches and develop shared principles for ethical, inclusive and cooperative AI deployment. It is still premature, however, to speak of creating an ‘AI BRICS’, according to Deputy Foreign Minister Sergey Ryabkov, Russia’s BRICS sherpa.

Pax Silica. A diverse group of nations has announced Pax Silica, a new partnership aimed at building secure, resilient, and innovation-driven supply chains for the technologies that underpin the AI era. These include critical minerals and energy inputs, advanced manufacturing, semiconductors, AI infrastructure and logistics. Analysts warn that diverging views may emerge if Washington pushes for tougher measures targeting China, potentially increasing political and economic pressure on participating nations. However, the USA, which leads the platform, clarified that the platform will focus on strengthening supply chains among its members rather than penalising non-members, like China.

UN AI Resource Hub. The UN AI Resource Hub has gone live as a centralised platform aggregating AI activities and expertise across the UN system. Presented by the UN Inter-Agency Working Group on AI, the platform was developed jointly by UNDP, UNESCO and ITU. It enables stakeholders to explore initiatives by agency, country and SDG, and supports inter-agency collaboration, capacity development for UN member states, and enhanced coherence in AI governance and terminology.


ByteDance inks US joint-venture deal to head off a TikTok ban

ByteDance has signed binding agreements to shift control of TikTok’s US operations to a new joint venture majority-owned (80.1%) by American and other non-Chinese investors, including Oracle, Silver Lake and Abu Dhabi-based MGX.

ByteDance retains a 19.9% minority stake; the arrangement is intended to meet US national security demands and avoid a ban under the 2024 divest-or-ban law.

The deal is slated to close on 22 January 2026, and US officials previously cited an implied valuation of approximately $14 billion, although the final terms have not been disclosed. 

TikTok CEO Shou Zi Chew told staff the new entity will independently oversee US data protection, algorithm and software security, and content moderation, with Oracle acting as the ‘trusted security partner’ hosting US user data in a US-based cloud and auditing compliance.


China edges closer to semiconductor independence with EUV prototype

Chinese scientists have reportedly built a prototype extreme ultraviolet (EUV) lithography machine, a technology long monopolised by ASML — the Dutch company that is the world’s sole supplier of EUV systems and a central chokepoint in global semiconductor manufacturing. 

EUV machines enable the production of the most advanced chips by etching ultra-fine circuits onto silicon wafers, making them indispensable for AI, advanced computing and modern weapons systems.

The Chinese prototype is already generating EUV light, though it has not yet produced working chips. 

The project reportedly involved former ASML engineers who reverse-engineered key elements of EUV systems, suggesting China may be closer to advanced chip-making capability than Western policymakers and analysts had assumed. 

Officials are targeting chip production by 2028, with insiders pointing to 2030 as a more realistic milestone.


USA launches tech force to boost federal AI and advanced tech skills

The Trump administration has unveiled a new initiative, branded the US Tech Force, aimed at rebuilding the US government’s technical capacity after deep workforce reductions, with a particular focus on AI and digital transformation. 

The programme reflects growing concern within the administration that federal agencies lack the in-house expertise needed to deploy and oversee advanced technologies, especially as AI becomes central to public administration, defence, and service delivery.

According to the official TechForce.gov website, participants will work on high-impact federal missions, addressing large-scale civic and national challenges. The programme positions itself as a bridge between Silicon Valley and Washington, encouraging experienced technologists to bring industry practices into government environments.

Supporters argue that the approach could quickly strengthen federal AI capacity and reduce reliance on external contractors. Critics, however, warn of potential conflicts of interest and question whether short-term deployments can substitute for sustained investment in the public sector workforce.


Brussels targets ultra-cheap imports

EU member states will introduce a new customs duty on low-value e-commerce imports, starting 1 July 2026. Under the agreement, a customs duty of €3 per item will apply to parcels valued at less than €150 imported directly into the EU from third countries.

This marks a significant shift from the previous regime, under which such low-value goods were generally exempt from customs duties.

The temporary duty is intended to bridge the gap until the EU Customs Data Hub, a broader customs reform initiative designed to provide comprehensive import data and enhance enforcement capacity, becomes fully operational in 2028.

The Commission framed the measure as a necessary interim solution to ensure fair competition between EU-based retailers and overseas e-commerce sellers. The measure also lands squarely in the shadow of platforms such as Shein and Temu, whose business models are built on shipping vast volumes of ultra-low-value parcels.


USA reportedly suspends Tech Prosperity Deal with UK

The USA has reportedly suspended the implementation of the Tech Prosperity Deal with the UK, pausing a pact originally agreed during President Trump’s September state visit to London.

The Tech Prosperity Deal was designed to strengthen collaboration in frontier technologies, with a strong emphasis on AI, quantum, and the secure foundations needed for future innovation, and included commitments from major US tech firms to invest in the UK.

According to the Financial Times, Washington’s decision to suspend the deal reflects growing frustration with London’s stance on broader trade issues beyond technology. US officials reportedly wanted the UK to make concessions on non-tariff barriers, particularly regulatory standards affecting food and industrial goods, before advancing the tech agreement.

Neither government has commented yet. 



LOOKING AHEAD

Digital Watch Weekly will take a short break over the next two weeks. Thank you for your continued engagement and support.



READING CORNER

UNGA High-level meeting on WSIS+20 review – Day 2

Dear readers,

Welcome to our overview of statements delivered during Day 2 at UNGA’s high-level meeting on the WSIS+20 review.

Speakers repeatedly underscored that the WSIS vision remains relevant, but that it needs to be matched with concrete action, sustained cooperation, and inclusive governance arrangements. Digital transformation was framed as both an opportunity and a risk: a powerful accelerator of sustainable development, resilience, and service delivery, but also a driver of new inequalities if structural gaps, concentration of power, and governance challenges are left unaddressed. Digital public infrastructure and digital public goods were highlighted as foundations for inclusive development, while persistent digital divides were described as urgent and unresolved. Artificial intelligence (AI) featured prominently as a general-purpose technology with transformative potential, but also with risks related to exclusion, labour, environmental sustainability, and governance capacity.

Particular attention was given to the Internet Governance Forum (IGF), with widespread support for its permanent mandate, alongside calls to strengthen its funding, working modalities, and participation.

Throughout the day, speakers reaffirmed that no single stakeholder can deliver digital development alone, and that WSIS must continue to function as a people-centred, multistakeholder framework aligned with the SDGs and the Global Digital Compact (GDC).

DW team

Information and communication technologies for development

Digital transformation is no longer optional, underpinning early warning systems, disaster preparedness, climate adaptation, education, health services, and economic diversification, especially for Small Island Developing States (Fiji).

ICTs were widely framed as key enablers of sustainable development, innovation, resilience, and inclusive growth, and as major accelerators of the 2030 Agenda, particularly in contexts facing economic, climate, or security challenges (Ethiopia, Eritrea, Ukraine, Fiji, Colombia). It was noted that technologies, AI, and digital transformation must serve humanity through education, culture, science, communication, and information (UNESCO).

Strong emphasis was placed on digital public infrastructure (DPI) and digital public goods (DPGs) as foundations for inclusion, innovation, growth and public value (UNDP, Trinidad and Tobago, Malaysia). Digital public infrastructure was emphasised as needing to be secure, interoperable, and rights-based, grounded in safeguards, open systems, and public-interest governance (UNDP).

Digital commons, open-source solutions, and community-driven knowledge infrastructures were highlighted as central to sustainable development outcomes (IT for Change, Wikimedia, OIF). DPGs, such as open-source platforms, have been developed by stakeholders brought together by the WSIS process. However, member states need to create conditions for DPGs’ continued success within the WSIS framework (Wikimedia). Libraries were identified as global digital public infrastructure and significant public goods, with calls for their systematic integration into digital inclusion strategies and WSIS implementation efforts (International Federation of Library Associations and Institutions).

Persistent inequalities in sharing digitalisation gains were highlighted. While more than 6 billion people are online globally, low-income countries continue to lag significantly, including in digital commerce participation, underscoring the need for short-term policy choices that secure inclusive and sustainable development outcomes in the long term (UNCTAD).

The positive impact of digital technologies is considerably lower in developing countries compared to that in developed countries (Cuba). Concerns were raised that developing countries risk being locked into technological dependence, further deepening global asymmetries if left unaddressed (Colombia).

Environmental impacts

An environmentally sustainable information society was emphasised, with calls to align digital and green transformations to address climate change and resource scarcity, and to harness ICTs to achieve the SDGs (China).

Digital innovation was described as needing to support environmental sustainability and responsible resource use, ensuring positive long-term social and economic outcomes (Thailand).

The enabling environment for digital development

Speakers reaffirmed that enabling environments are central to the WSIS vision of a people-centred, inclusive, and development-oriented information society. Predictable, coherent, and transparent policy frameworks were highlighted as essential for enabling innovation and investment, and for ensuring that all countries can benefit from the digital economy (Microsoft, ICC).

These environments were linked to openness and coherence, including regulatory clarity and predictability, support for the free flow of information across borders, avoidance of unnecessary fragmentation, and the promotion of interoperability and scalable digital solutions (ICC). The importance of developing policies through dialogue with relevant stakeholders was also stressed (ICC).

Several speakers underlined that enabling environments must address persistent development gaps. The uneven distribution of the benefits of the information society, particularly in developing countries, was noted, alongside calls for enhanced international cooperation to facilitate investment, innovation, effective governance, and access to financial and technological resources (Holy See). Partnerships across all sectors were seen as essential to mobilise financing, capacity building, and technology transfer, given that governments cannot deliver alone (Fiji).

Divergent views were expressed on unilateral coercive measures. Some speakers argued that such measures impede economic and social development and hinder digital transformation, calling for international cooperation focused on capacity building, technology transfer, and financing of public digital infrastructure (Eritrea, Cuba). In contrast, a delegation stated that economic sanctions are lawful, legitimate, and effective tools for addressing threats to peace and security (USA).

Governance frameworks were identified as a core component of enabling environments. It was stressed that digital development must be safe, equitable, and rooted in trust, with adequate governance frameworks ensuring transparency, accountability, user protection, and meaningful stakeholder participation in line with the multistakeholder approach (Thailand).

Building confidence and security in the use of ICTs

Building confidence and security in the digital environment was framed as a prerequisite for realising the social and economic benefits of digitalisation, with trust and safety needing to be embedded across the entire digital ecosystem (Malaysia).

Trust was described as requiring regulation, accountability, and sustained public education to ensure that users can engage confidently with digital technologies (Malaysia).

Cybercrime was identified as a persistent and serious concern requiring concerted collective solutions beyond national approaches (Namibia).

Cybersecurity and cybercrime were highlighted as increasingly serious and complex challenges that undermine trust and risk eroding the socio-economic gains of digitalisation if left unaddressed (Thailand).

Investment in capacity building was emphasised as essential to strengthening national and individual resilience against cyber threats, alongside the adoption of security- and privacy-by-design principles (Thailand, International Federation for Information Processing).

Capacity development

Capacity development was consistently framed as a core enabler of inclusive digital transformation, with widespread recognition of persistent constraints in digital skills, institutional capacity, and governance capabilities (UNDP, Malaysia, Trinidad and Tobago).

Capacity development was identified as one of the most frequent requests from countries, particularly in relation to inclusive digital transformation (UNDP).

Effective capacity development was described as requiring institutional anchors, with centres of excellence highlighted as providing infrastructure and expertise that many countries—especially least developed countries, landlocked developing countries, and small island states—cannot afford independently (UNIDO).

Efforts are underway to establish a network of centres of excellence across the Global South, including in China, Ethiopia, the Western Balkans, Belarus, and Latin America (UNIDO).

Sustainable digital education was highlighted as essential, including fostering learner aspiration, addressing diversity and underrepresented communities, embedding computational thinking, and strengthening teacher preparation (International Federation for Information Processing). The emphasis should be on empowering people to understand information, question it, and use it wisely (UNESCO).

Libraries were highlighted as trusted, non-commercial public spaces that provide access to connectivity, devices, skills, and confidence-building support. For many people, particularly the most disenfranchised, libraries were described as the only way to get online and as key sources of diverse content and cultural heritage (International Federation of Library Associations and Institutions).

Financial mechanisms

Financing was described as a critical and non-negotiable component of implementing the WSIS vision, with repeated warnings that without adequate and predictable public and private resources, WSIS commitments risk remaining aspirational (APC).

Effective implementation was described as requiring a shift from fragmented, project-based funding toward systems-level financing approaches capable of delivering impact at scale (UNDP).

Calls were made for adequate, predictable, and accessible funding for digital infrastructure and capacity development, particularly to ensure effective participation of developing countries and the Global South (Colombia).

Support was expressed for the proposed establishment of a working group on future financial mechanisms for digital development, provided it focuses on the concrete needs of developing countries (Eritrea).

Financing challenges were also linked to linguistic and cultural diversity, with calls for decentralisation of computing capacity and ambitious strategies to finance digital development and AI, building on proposals by the UN Secretary-General (OIF).

Calls were made for UNGIS and ITU to ensure inclusive participation in the interagency financing task force and to approach the IGF’s permanent mandate with creativity and ambition (APC).

Existing financing mechanisms were highlighted for their tangible impact, including funds that have mobilised resources for digital infrastructure in more than 100 countries (Kuwait).

Human rights and the ethical dimensions of the information society

Human rights were reaffirmed as a foundational pillar of the WSIS vision, grounded in the UN Charter and the Universal Declaration of Human Rights, with emphasis on ensuring that the same rights people enjoy offline are protected online (International Institute for Democracy and Electoral Assistance, Costa Rica, Austria).

Anchoring WSIS in international human rights law was highlighted as essential to preserving an open, free, interoperable, reliable, and secure internet, particularly amid trends toward fragmentation, surveillance-based governance, and concentration of technological power (International Institute for Democracy and Electoral Assistance, OHCHR).

The centrality of human rights and the multistakeholder character of digital governance were described as practical conditions for legitimacy and effectiveness, particularly as freedom online declines and civic space shrinks (GPD, APC).

Concerns were raised about harms associated with profit-driven algorithmic systems and platform design, including addiction, mental health impacts, polarisation, extremism, and erosion of trustworthy information, with particularly severe effects in developing countries (HitRecord, Brazil).

A rights-based approach to digital governance was described as necessary to ensure accountability, participation, impact assessment, and protection of rights such as privacy, non-discrimination, and freedom of expression (OHCHR, ICC).

Divergent views were expressed on content regulation. Some cautioned against any threats to freedom of speech and expression (USA), while others emphasised the legitimate authority of states to regulate the digital domain to protect citizens and uphold the principle that what is illegal offline must also be illegal online (Brazil).

Ethical frameworks were emphasised to protect privacy, personal data, children, women, and vulnerable groups, and to orient digital development toward human dignity, justice, and the common good, including embedding ethical principles by design and protecting cultural diversity and the rights of artists and creators in AI-driven environments (UNESCO, Holy See, International Federation for Information Processing, Costa Rica, Kuwait, Colombia, Foundation Cibervoluntarios, Eritrea).

Concerns were raised about trends toward a more fragmented and state-centric internet, with warnings that such shifts pose risks to human rights, including privacy and freedom of expression, and could undermine the open and global nature of the internet (International Institute for Democracy and Electoral Assistance).

Data governance

The growing importance of data was linked to the expansion of AI (UNCTAD). Unlocking the value of data in a responsible manner was presented as a common problem and a civilisational challenge (Internet and Jurisdiction Policy Network). Concerns were raised about an innovation economy built on data extractivism, dispossession, and disenfranchisement, with countries and people from the Global South resisting unjust trade arrangements and seeking to reclaim the internet and its promise (IT for Change).

Artificial intelligence

AI was described as a general-purpose technology at the centre of the technological revolution, shaping economic growth, national security, global competitiveness, and development trajectories (Brazil, USA).

Concerns were raised that AI is currently being developed and deployed largely according to market-driven and engagement-maximising business models, similar to those that shaped social media. Without practical guardrails, AI risks reproducing harmful effects, and so governments need to move beyond historically hands-off approaches and play a more active role in governance (HitRecord).

Specific AI-related harms were identified, including deepfakes, rising environmental impacts from AI infrastructure (IT for Change), and labour impacts (Brazil). Concerns were expressed that AI adoption is contributing to job displacement and the weakening of labour rights, despite the centrality of decent work to the information society agenda (Brazil).

Noting uneven global capacities in AI development, deployment, and use, concerns were expressed that the speed of AI development may exceed the adaptive capacities of developing countries, including small island developing states, risking new forms of exclusion (Eritrea, Trinidad and Tobago). And it was highlighted that cultural and linguistic diversity is critically under-represented in AI systems (OIF).

Calls were made for AI governance frameworks to address AI-related risks and ensure that the technology is placed at the service of humanity (Kuwait, Namibia). Divergent views were expressed on governance approaches, with some cautioning against additional bureaucracy, while others stressed that relying on market forces alone will not ensure AI benefits all people (USA, HitRecord). It was also said that the UN should not shy away from looking into AI governance matters (Brazil). 

From an industrial perspective, it was noted that regulation often lags behind AI developments, with support expressed for evidence-based policymaking and regulatory testbeds to de-risk innovation and translate AI strategies into practice (UNIDO).

Ethical safeguards were emphasised as essential, with AI described as opening new horizons for creativity while also raising serious concerns about its impact on humanity’s relationship to truth, beauty, and contemplation (Holy See).

Internet governance

Widespread support was expressed for the Internet Governance Forum (IGF), described as a central pillar of the WSIS architecture and a cornerstone of global digital cooperation (International Institute for Democracy and Electoral Assistance, GPD, APC, ICANN, ICC, UNESCO, Austria, Africa ICT Alliance, Meta, Italy, Colombia). Making the IGF permanent was seen as an affirmation of confidence in the multistakeholder model and its continued relevance for addressing governance issues (APC, ICC, OHCHR).

The IGF was also described as a unique and inclusive multistakeholder space, bringing together governments, the private sector, civil society, the technical community, academia, and international organisations on equal footing. This model was credited with helping the internet remain global, interoperable, resilient, and stable through periods of rapid technological and geopolitical change (Microsoft, ICANN, IGF Leadership Panel, Meta).

Several speakers highlighted that the IGF has evolved into a self-organised global network, with more than 170 national, regional, sub-regional, and youth IGFs, enabling voices from remote, marginalised, and under-represented communities to feed into global discussions and bridge the gap between high-level diplomacy and ground-level implementation (Internet and Jurisdiction Policy Network, IGF Leadership Panel, Africa ICT Alliance, Internet Society). At the same time, it was stressed that while the IGF represents a remarkable institutional innovation, it has not yet realised its full potential. Calls were made to continue improving its working modalities, clarify its institutional evolution, and ensure sustainable and predictable funding (Internet and Jurisdiction Policy Network, Brazil, ICANN).

Protecting and reaffirming the multistakeholder model of internet governance was repeatedly identified as important to the success of WSIS implementation. This model – anchored in dialogue, transparency, inclusivity, and accountability – was presented as a practical governance tool rather than a symbolic principle, ensuring that those who build, use, and regulate the internet can jointly shape its future (International Institute for Democracy and Electoral Assistance, Wikimedia, Microsoft, ICANN, ICC).

At the same time, several speakers stressed the need for stronger and more effective government participation in governance processes. It was noted that governments have legitimate roles and responsibilities in shaping digital policy, and that intergovernmental spaces must be strengthened so that all governments – particularly those from developing countries – can effectively perform their roles in global digital governance (APC, Brazil, Cuba). In this context, there was also a concern that calls for greater government engagement in the IGF have been framed primarily toward developing countries, with emphasis placed instead on the need for equal-footing participation of governments from all regions to ensure the forum’s long-term sustainability (APC).

Monitoring and measurement

It was noted that WSIS+20 must deliver measurable commitments with verifiable indicators (Costa Rica). And a streamlined and inclusive monitoring and review framework was seen as essential moving forward (Cuba).

WSIS framework, follow-up and implementation

There was broad recognition that the WSIS framework remains a central reference for a people-centred, inclusive, and development-oriented information society, while requiring reinforcement to respond to growing complexity, concentration of digital power, and risks posed by advanced AI systems (Costa Rica, Malaysia, Cuba).

The multistakeholder model was repeatedly reaffirmed as a cornerstone of the WSIS vision, anchored in dialogue, transparency, inclusivity, and accountability, and seen as essential to maintaining a resilient and open digital ecosystem (International Institute for Democracy and Electoral Assistance, GPD, USA, Meta, ICC, Italy, Thailand). The inclusive nature of the WSIS+20 review process itself was highlighted, with the Informal Multi-Stakeholder Sounding Board described as enabling substantive contributions from diverse stakeholder groups that helped identify both achievements and gaps in WSIS implementation over the past 20 years (WSIS+20 Co-Facilitators Informal Multi-Stakeholder Sounding Board).

Speaking of inclusivity, many speakers stressed that no single stakeholder can deliver digital development alone, and called for collaboration among governments, private sector, civil society, academia, technical communities, and international organisations to mobilise resources, share knowledge, transfer technology, and support nationally driven digital strategies (ICC, Namibia, Italy, Thailand). There were also calls to include knowledge actors such as universities, libraries, archives, cultural figures, and public media, reflecting that digital governance now concerns the status of knowledge itself (OIF). Youth representatives called for funded programmes, institutionalised youth seats in WSIS action line implementation, and recognition of young people as co-designers of digital policy (AI for Good Young Leaders).

On matters related to WSIS action lines, human rights expertise was highlighted as requiring a stronger and more systematic role within the WSIS architecture (GPD, OHCHR). And gender equality was welcomed as an explicit implementation priority within WSIS action lines (APC).

Strengthening UN system-wide coherence was highlighted as a priority, including clearer action line roadmaps and improved coordination across the UN system (GPD, UNDP). Alignment among WSIS, the Global Digital Compact (GDC), the Pact for the Future, and the SDGs was seen as necessary to maximise impact and avoid duplication (International Institute for Democracy and Electoral Assistance, Meta, Brazil, Colombia, Austria, Cuba). At the same time, one delegation expressed reservations about references to the GDC in the final outcome document, noting also concerns about what they considered to be international organisations setting a standard that legitimises international governance of the internet (USA).

Looking ahead, the task was framed not as preserving WSIS but as reinforcing it so that it remains future-proof, capable of anticipating rapid technological change while staying anchored in people-centred values, human rights, and inclusive governance (UNESCO, GPD). It was also stressed that for many in the Global South, the WSIS vision remains aspirational, and that the next phase must ensure the information society becomes an effective right rather than an empty promise (Cuba).

Comments regarding the outcome document

In the last segment of the meeting, several delegations made statements regarding the WSIS+20 outcome document.

Some expressed concern about the limited transparency, inclusiveness, and predictability in the final phase of negotiations, stating that the process did not fully reflect multilateral dialogue and affected trust and collective ownership of the document (India, Israel, Iraq on behalf of Group of 77 and China, Iran).

Reservations were placed on language perceived as going beyond the WSIS mandate or national policy space, with reaffirmation of national sovereignty and the right of states to determine their own regulatory, social, and cultural frameworks. Concerns were raised regarding references to gender-related terminology, sexual and reproductive health, sexual and gender-based violence, misinformation, disinformation, and hate speech (Saudi Arabia, Argentina, Iran, Nigeria). Concerns were also noted regarding references to international instruments to which some states are not parties, citing concerns related to national legislation, culture, and sovereignty (Saudi Arabia). Dissociations were recorded from paragraphs related to human rights, information integrity, and the role of the Office of the High Commissioner for Human Rights in the digital sphere (Russian Federation). Concerns were further expressed that the outcome document advances what were described as divisive social themes, including climate change, gender, diversity, equity and inclusion, and the right to development (USA).

Several delegations expressed concern that references to unilateral coercive measures were weakened and did not reflect their negative impact on access to technology, capacity building, and digital infrastructure in developing countries (Iraq on behalf of Group of 77 and China, Russian Federation, Iran). Others noted that such measures adopted in accordance with international law are legitimate foreign policy tools for addressing threats to peace and security (USA, Ukraine).

Some delegations noted that the outcome document does not sufficiently reflect the development dimension, particularly with regard to concrete commitments on financing, technology transfer, and capacity building, and that the absence of references to common but differentiated responsibilities weakens the development pillar (India, Iraq on behalf of Group of 77 and China, Iran). It was also said that the document does not adequately address the impacts of automation and artificial intelligence on labour and employment, despite requests from developing countries (Iraq on behalf of the Group of 77 and China).

While support for the multistakeholder nature of internet governance and the permanent nature of the IGF was noted, concerns were expressed that the outcome treats the IGF as a substitute rather than a complement to enhanced intergovernmental cooperation, and that the language regarding the intergovernmental segment for dialogue among governments has been weakened. It was said that intergovernmental spaces need to be strengthened so that all governments, particularly those from developing countries, can perform their roles in global governance (Iran, Iraq on behalf of Group of 77 and China). 

Serious reservations were placed on language viewed as legitimising international governance of the internet, with opposition expressed to references to the Global Digital Compact, the Summit for the Future, and the Independent International Scientific Panel on AI, alongside reaffirmed support for a multistakeholder model of internet governance (USA).

Despite these reservations, several delegations stated that they joined the consensus in the interest of multilateralism and unity, while placing their positions and dissociations on record (India, Iraq on behalf of the Group of 77 and China, Iran, Nigeria, USA).

For a detailed summary of the discussions, including session transcripts and data statistics from the WSIS+20 High-Level meeting, visit our dedicated web page, where we are following the event. To explore the WSIS+20 review process in more depth, including its objectives and ongoing developments, see the dedicated WSIS+20 web page.

Twenty years after the WSIS, the WSIS+20 review assesses progress, identifies ICT gaps, and highlights challenges such as bridging the digital divide and leveraging ICTs for development. The review will conclude with a two-day UNGA high-level meeting on 16–17 December 2025, featuring plenary sessions and the adoption of the draft outcome document.
This page keeps track of the process leading to the UNGA meeting in December 2025. It also provides background information about WSIS and related activities and processes since 1998.

UNGA High-level meeting on WSIS+20 review – Day 1

Dear readers,

Welcome to our overview of statements delivered during Day 1 at UNGA’s high-level meeting on the WSIS+20 review. 

Throughout the day, ICTs were framed as indispensable enablers of sustainable development and as core elements of economic participation and social inclusion. Speakers highlighted the transformative role of digital technologies across sectors such as education, health, agriculture, public administration, and disaster risk reduction, while underscoring the growing importance of digital public infrastructure and digital public goods as shared foundations for inclusive and resilient development. At the same time, advanced technologies, including artificial intelligence (AI), were described as reshaping economies and societies, offering new development opportunities while also introducing governance, capacity, and equity challenges that require coordinated international responses.

Discussions also returned repeatedly to the persistence of deep and multidimensional digital divides, spanning connectivity, affordability, skills, gender, geography, and access to emerging technologies. Speakers stressed that access alone is insufficient without trust, safety, institutional capacity, and respect for human rights. 

Internet governance featured prominently, with support for an open, free, global, interoperable, and secure internet grounded in human rights and multistakeholder cooperation. The Internet Governance Forum was widely recognised as a central platform for inclusive dialogue, with many calling for its strengthening through a permanent mandate, sustainable funding, and broader participation, particularly from developing countries and underrepresented groups. 

Across interventions, a shared message emerged that effective digital governance, strengthened international cooperation, and coherent implementation of WSIS commitments remain essential to ensuring that digital transformation leaves no one behind.

Our summary is structured around the thematic areas of the draft outcome document, which is expected to be adopted at the end of the high-level meeting, later today. 

DW team

Information and communication technologies for development

ICTs were consistently framed as indispensable and critical enablers of sustainable development and no longer peripheral but at the heart of development strategies (Slovakia, Azerbaijan, Timor-Leste). They increasingly shape how societies govern, learn, innovate, and connect, and are essential tools to advance economic growth, social inclusion, and quality of life (Azerbaijan, Chile). 

ICTs, including AI, were also described as tools to bring people closer together and collectively address sustainable development challenges, while boosting education and health, supporting climate adaptation and mitigation, and contributing to economic growth (Senegal, Israel). They were further framed as essential for transforming key sectors such as agriculture, health, education, and public administration (Uganda). The role of ICTs in disaster risk reduction and early warning systems was also highlighted, with emphasis on international cooperation through existing UN mechanisms (Japan).

Digital public infrastructure and digital public goods were highlighted as foundational backbones for inclusive and resilient development (India, Indonesia, Uganda, Kenya, Ghana). Shared digital foundations such as digital identity, payment systems, and data systems were described as transforming service delivery, expanding opportunities, and strengthening citizen engagement when built in ways that respect human rights and promote inclusion (Under-Secretary-General). 

Emerging technologies, including AI, big data, and cloud computing, were described as reshaping economies, transforming modes of production, and creating new opportunities for innovation. For developing countries, these technologies were seen as holding significant potential to accelerate structural transformation, expand access to services, enhance productivity, and support the achievement of the SDGs (Tunisia). Emerging technologies were also framed as creating opportunities for development and innovation and helping to address major global challenges (Norway). 

Several speakers stressed that digital transformation cannot be limited to the rollout of technology alone and must remain people-centred (Peru), while others emphasised its role in improving quality of life (Chile). However, it was emphasised that those without connectivity remain excluded from the opportunities that ICTs can give (President of the General Assembly).

Closing all digital divides

As highlighted during the entire WSIS+20 review process, persistent and multidimensional digital divides remain a central challenge that must be addressed if the WSIS vision of a truly inclusive information society is to be fully achieved. The divide was characterised as a ‘digital canyon’, reflecting stark disparities in access between and within countries, as well as a continuing gender gap in internet use (President of the General Assembly). 

Digital divides were widely described as multidimensional, spanning connectivity, affordability, skills, institutions, data, and emerging technologies, including AI (Kenya, Pakistan). Particular concern was expressed that gaps are deepening both between and within countries, and increasingly between those who shape technology and those who are shaped by it (Türkiye, Norway). The persistence of divides along gender, age, rural–urban, and disability lines was repeatedly highlighted, with warnings that uneven access to digital public services, skills, and meaningful connectivity risks reinforcing existing inequalities (Slovenia, Luxembourg, Mongolia).

More than a quarter of the world’s population remains offline, and affordability is a significant barrier (Secretary-General). However, a recurring message was that digital inclusion requires more than connectivity. Skills, affordability, trust, safety, institutional capacity, and respect for human rights and fundamental freedoms online were repeatedly highlighted as essential components of meaningful access (Albania, Slovakia, Finland).

The inclusion of women and girls was identified as a critical priority for closing digital divides, with calls for targeted digital literacy, skills development, empowerment initiatives, and protection from online harms (President of the General Assembly, Israel, Finland, Belgium, Saudi Arabia). 

Attention was also drawn to intersecting forms of exclusion, including those affecting rural communities, persons with disabilities, older persons, and marginalised groups, with warnings that digital transformation risks reinforcing existing inequalities if these dimensions are not addressed systematically (Belgium, Luxembourg, Uganda, CANZ, Mongolia).

The emergence of an AI divide, linked to the concentration of infrastructure, data, and computing power, was also highlighted as a growing risk with far-reaching implications (Pakistan, Saudi Arabia). Concerns were raised that as global attention increasingly shifts toward AI and advanced technologies, many countries risk falling into perpetual catch-up without foundational investments in affordable and resilient broadband and in digital skills (Timor-Leste, Saudi Arabia, Philippines).

Developing countries highlighted structural constraints. The digital divide was described as a daily barrier to education, health care, and governance, with warnings that inequalities could deepen as global attention shifts toward AI and advanced technologies (Timor-Leste). 

Strong calls were made for enhanced international cooperation, financing, and technology transfer to close all dimensions of the digital divide. Adequate, predictable, and affordable financing was described as indispensable for extending digital infrastructure, promoting universal and meaningful connectivity, and strengthening skills and capacities, particularly in developing countries (Bangladesh, Azerbaijan, Cambodia, Egypt, Senegal, Algeria, Tunisia). Speakers emphasised that no country can address digital divides alone and stressed the importance of coordinated global action, inclusive partnerships, and knowledge sharing (Singapore, Mongolia, Latvia).

More broadly, speakers emphasised that the WSIS process remains of vital importance for developing countries and must prioritise the closure of all digital divides through concrete, actionable measures and inclusive, multistakeholder cooperation (Iraq on behalf of G77 and China, CANZ, Tonga).

The digital economy

Speakers repeatedly linked digitalisation to economic participation, productivity, and inclusion, while cautioning that unequal access risks excluding many countries and communities from emerging digital economic opportunities. Digital technologies were framed as enablers of entrepreneurship, micro, small and medium-sized enterprises, and access to markets, particularly when supported by digital public infrastructure and digital public services (Indonesia, Zimbabwe, Ghana).

Several delegations stressed that participation in the digital economy depends not only on connectivity but also on access to digital identity, digital payments, and interoperable platforms that enable transactions between governments, businesses, and citizens. Digital public infrastructure was described as a foundation for economic activity, transparency, and efficiency, helping to integrate citizens and businesses into formal economic systems (India, Ghana, Indonesia).

Developing countries highlighted that structural digital divides constrain their ability to benefit from the digital economy. These constraints were described as affecting access to education, finance, employment opportunities, and innovation ecosystems, with warnings that attention to advanced technologies, such as AI, could widen economic gaps if foundational issues remain unaddressed (Timor-Leste, Bangladesh, Cambodia, Egypt, Senegal).

Several speakers explicitly connected digital economy participation to global inequities. It was argued that without enhanced international cooperation, financing, and technology transfer, developing countries risk remaining marginalised in global digital value chains and digital governance processes (Bangladesh, Egypt, Algeria, CANZ).

At the same time, some interventions emphasised national strategies to modernise legal and regulatory frameworks governing the digital economy, including updates to legislation related to digital services, AI, and electronic transactions, as part of broader economic transformation agendas (Ghana, Kyrgyzstan).

Social and economic development

Several interventions described digitalisation as enabling more inclusive economic participation, particularly through support for micro, small and medium-sized enterprises and by widening access to markets and services in developing country contexts (Indonesia, Zimbabwe). In this sense, digital technologies were presented as tools for integrating more people and businesses into economic activity, rather than simply increasing efficiency.

Digitalisation was also linked to the functioning of the state and public institutions, with references to digital government and digital public services as ways to improve access, responsiveness, and service delivery for citizens (Belgium, Senegal, Timor-Leste, Morocco). 

Beyond economic participation and public administration, digital technologies were associated with human development outcomes, including education, health, and social services. Several speakers referred to digital tools as supporting learning, healthcare delivery, and social inclusion, particularly where physical access to services remains limited (Egypt, Indonesia, Ghana). Digitalisation was also connected to livelihoods and rural development, including in agriculture, highlighting its relevance for poverty reduction and local economic resilience (Zimbabwe, Senegal).

Environmental impacts

Environmental dimensions of digitalisation were highlighted as a growing concern. It was stressed that the environmental footprint of digitalisation, including energy use, critical minerals, and e-waste, must be addressed, with calls for global standards and greener infrastructure (Secretary-General). Concerns were raised about the risk of e-waste and the importance of climate-resilient and sustainable digital infrastructure (Timor-Leste). The environmental impact of data centres and AI, and the need for circular economy approaches and responsible management of critical minerals, were also emphasised (Morocco). The role of governments and the private sector in ensuring sustainable and durable digital infrastructure, including opportunities to advance clean energy, was underlined (France).

The enabling environment for digital development

The importance of predictable policies, investment, and international cooperation featured prominently. Financing, technology transfer, and capacity-building were identified as prerequisites for inclusive digital development, particularly for developing countries (Algeria, Egypt, Cambodia, Bangladesh, Kenya). The need for a coherent UN digital governance architecture that builds on existing processes and avoids fragmentation was emphasised (Switzerland, Germany).

Concerns were raised that unilateral economic measures and unilateral coercive measures undermine the enabling environment for digital development by restricting access to technologies, digital infrastructure, financing, and capacity-building opportunities. Such measures were described as distorting global supply chains and market order (Venezuela), exacerbating digital divides and disproportionately affecting developing countries, limiting their ability to participate meaningfully in the global digital economy and to implement WSIS commitments (Iraq on behalf of the Group of 77 and China, Venezuela on behalf of the Group of Friends in Defence of the UN Charter, Nicaragua).

Financial mechanisms

Financing was raised as a condition for implementation of the WSIS vision, with calls for adequate, predictable, and affordable financing to expand digital infrastructure and close persistent digital divides, particularly in developing countries (Iraq on behalf of the G77 and China, Algeria). It was stressed that political ambition cannot be realised without financing, alongside calls for sustained investment in digital public infrastructure and targeted financing for last-mile connectivity to reach underserved populations (Kenya, Timor-Leste).

Several interventions called for concessional and innovative financing to support digital development in developing countries. References were made to the Task Force for Financial Mechanisms as a platform for sharing best practices and strengthening financing approaches for digital development and universal connectivity, alongside calls to expand concessional financing to enable investment in digital infrastructure and services (Bangladesh, United Kingdom, Côte d’Ivoire).

Some delegations also described national financing efforts and instruments, including large-scale investments in fibre infrastructure, digital public services, and cybersecurity, as well as the use of universal service mechanisms and dedicated digital investment tools. A proposal was also made to create a working group to examine financial mechanisms and present recommendations in 2027, prioritising financing on favourable terms and North–South, South–South, and triangular partnerships (Senegal, Côte d’Ivoire, Morocco).

Building confidence and security in the use of ICTs

Building confidence and security in the use of ICTs was discussed through concrete governance and security measures at both national and international levels. National cybersecurity frameworks, legislation, and institutional arrangements were highlighted as essential for protecting digital infrastructure, data, and citizens, and for fostering trust in digital systems (Senegal, Morocco, Ghana). Capacity gaps in cybersecurity and technical expertise were identified as a major challenge, particularly for developing countries seeking to expand digital services while managing growing cyber risks (Uganda). The protection of critical infrastructure and citizens from cyber threats was emphasised as digitalisation deepens across public services and essential sectors (Timor-Leste, Zimbabwe).

At the international level, references were made to the UN Convention against Cybercrime (Uruguay, Venezuela, Russian Federation) and to the establishment of a permanent intergovernmental mechanism under UN auspices in the context of international information security and cooperation (Russian Federation).

Capacity development

Capacity development was presented as a prerequisite for inclusive digital transformation and for closing persistent digital divides. Several speakers emphasised that meaningful participation in the information society requires digital literacy, technical skills, institutional capacity, and policy expertise, particularly in developing countries and least developed countries (Albania, Egypt, Bangladesh, Cambodia, Uganda, Timor-Leste, Lesotho).

A recurring message was that access alone is insufficient without the skills and capabilities needed to use digital technologies safely, productively, and effectively. Digital skills development was linked to education, employability, participation in the digital economy, and confidence in digital public services (Albania, Egypt, Israel, South Africa).

Capacity gaps were highlighted in specific technical and governance areas, notably cybersecurity and emerging technologies, with warnings that skills shortages expose developing countries to heightened risks as they expand digital services and digitise public institutions (Timor-Leste, Uganda, Senegal).

International cooperation was framed as essential for capacity development, with references to the need for technology transfer, technical assistance, and sustained capacity-building support, particularly for developing countries and least developed countries. Strengthened North–South, South–South, and triangular cooperation was highlighted as a means to support skills development, knowledge sharing, hands-on training, and institutional and cybersecurity capacities aligned with national priorities and vulnerabilities (Cambodia, Bangladesh, Tunisia, Timor-Leste). Capacity building in emerging technologies, including artificial intelligence and cybersecurity, was also linked to international support, financing, and technology transfer (Nepal, Algeria).

Human rights and the ethical dimensions of the information society

Human rights were consistently framed as foundational to the information society and as a central reference point for digital governance. Numerous delegations reaffirmed that the same rights apply online and offline, with explicit references to international human rights law, including the rights to privacy, freedom of expression, access to information, and non-discrimination (Estonia, Spain, Belgium, Poland, Lithuania, Luxembourg, Finland, France). 

Several interventions stressed that digital technologies must respect and promote human dignity, with human dignity presented as a guiding value of the information society and a core ethical reference for digital transformation. Technological development, including artificial intelligence, was framed as needing to advance development and inclusion while enhancing dignity, autonomy, accountability, and respect for the individual, rather than treating people merely as data points or objects of automation (Estonia, Belgium, Lithuania, India, Türkiye, Slovenia).

Concerns were repeatedly raised about the misuse of digital technologies in ways that undermine fundamental rights. These included references to censorship, digital repression, surveillance practices that infringe on privacy, and restrictions on freedom of expression and civic space online (Belgium, Spain, Poland, Finland, France). Particular attention was drawn to risks faced by vulnerable groups, underscoring the need for safeguards, oversight, and accountability in the design and deployment of digital technologies (Belgium, Finland).

Artificial intelligence was explicitly cited as amplifying existing human rights challenges. Several interventions warned that AI systems, if not governed in line with human rights principles, could facilitate surveillance, enable censorship, or reinforce discrimination and exclusion, reinforcing calls to integrate human rights considerations throughout the lifecycle of emerging technologies (Belgium, Spain, France, Lithuania).

Data governance

Data governance was mentioned as an emerging governance concern, with broader implications for trust, ethics, and development. References were made to the establishment of national data governance frameworks, including efforts to build secure and interoperable data systems as part of digital transformation and public sector modernisation strategies (Morocco). Data governance was also identified as an outstanding challenge alongside data protection and digital capacity-building, particularly in relation to the deployment of AI (Chile). Several interventions framed data governance in terms of responsible and ethical data use, highlighting concerns about data concentration, data gaps, and the societal implications of data-driven technologies, while also linking data protection frameworks to trust in digital ecosystems and the effective functioning of digital government (Senegal, Saudi Arabia, Ghana). More broadly, data governance was framed through the lens of digital sovereignty and national authority over data, particularly from developing-country perspectives (Iraq on behalf of the G77 and China).

Artificial intelligence

Artificial intelligence featured prominently as both a development accelerator and a source of new risks. Ethical, human-centred, and rights-based approaches to AI governance were repeatedly emphasised, with references to human dignity, accountability, transparency, and the application of existing human rights obligations in AI-enabled systems (Estonia, Belgium, Spain, Albania, Lithuania, Indonesia, Israel, Senegal, Zimbabwe). Several speakers stressed that the rapid deployment of AI, particularly in public services, requires governance approaches that safeguard trust, inclusion, and democratic values (Albania, Lithuania, Türkiye).

Attention was drawn to structural AI divides. Unequal access to computing capacity, algorithms, data, and linguistic resources was identified as a growing concern, with the risk that lack of access to AI capabilities translates into exclusion from future employment, education, and economic opportunities (Saudi Arabia). Concerns were also expressed that disparities in AI infrastructure, skills, and institutional capacity could reinforce existing inequalities, particularly for the least developed and small developing countries. Without targeted international support, AI was seen as likely to widen development gaps rather than close them (Timor-Leste, Bangladesh, Lesotho).

The need to strengthen capacity within public institutions was underlined, extending beyond technical expertise to include policymakers, regulators, and civil servants responsible for oversight and implementation. National AI strategies were presented as tools to anchor AI use in public value and ethical governance rather than purely market-driven deployment (Kenya, Ghana).

The international governance of AI was discussed primarily in terms of coherence, coordination, and institutional continuity. Several interventions stressed the importance of building on existing international processes and initiatives, particularly within the UN system, and warned against fragmentation or duplication in global AI governance efforts (Japan, Estonia). AI governance was also situated within broader international challenges related to information manipulation, disinformation, and democratic resilience, reinforcing calls for approaches that strengthen trust and information integrity as part of global digital cooperation frameworks (France, Lithuania). More generally, AI governance was framed as needing to serve humanity and to be embedded within a strengthened global digital governance architecture grounded in human rights and multistakeholder cooperation, without reference to specific institutional mechanisms (Switzerland, European Union). 

Internet governance

Many speakers reaffirmed the multistakeholder model as a core principle of internet governance. They emphasised the importance of inclusive participation by governments, the private sector, civil society, the technical community, academia, and users, and stressed that no single actor or group of actors should control the internet or global internet governance processes. The multistakeholder approach was framed as essential for transparency, trust, legitimacy, and effective governance of the internet (President of the General Assembly, Estonia, Germany, Poland, Lithuania, Luxembourg, Ireland, Israel, Nigeria, Finland).

Several statements highlighted support for an internet that is open, free, global, interoperable, secure, and inclusive, and rooted in respect for human rights. This vision was linked to economic development, democratic participation, access to knowledge, and the protection of fundamental freedoms (European Union, President of the General Assembly, Germany, Estonia, Spain, Poland, Lithuania, Luxembourg, Finland, Norway). Some speakers warned that fragmentation, excessive centralisation, or restrictive approaches to internet governance could undermine this vision and weaken the global nature of the Internet (Germany, Estonia, Poland, Lithuania, Norway). There were also references to an ongoing process of fragmentation of the digital space and what was described as the lack of practical action to preserve a unified global network (Russian Federation). 

The Internet Governance Forum (IGF) was widely referenced as a central space for multistakeholder dialogue on internet-related public policy issues, with several speakers also pointing to its role as an inclusive platform for broader digital governance discussions, including emerging technologies and cross-cutting digital policy challenges (Under-Secretary-General, Japan, Estonia). Many expressed support for strengthening the IGF, including through elements such as a permanent mandate, predictable and sustainable funding, a strengthened Secretariat, enhanced intersessional work, and broader participation, particularly from developing countries and underrepresented groups. Concrete expressions of support included financial contributions to reinforce the IGF’s work and sustainability (Germany).

At the same time, some speakers questioned whether the IGF’s non-decision-making nature enables governments to participate on an equal footing in addressing international public policy issues related to the internet, as envisaged in the Tunis Agenda (Iran, Venezuela). Others argued that current internet governance arrangements remain unjust or incomplete, calling for stronger intergovernmental cooperation, including legally binding frameworks and a more central role for the United Nations and its specialised bodies in addressing international internet public policy issues (Russian Federation, Venezuela). The mandate for enhanced cooperation, as set out in the Tunis Agenda, was described as unfinished in a few statements, which pointed out that progress in operationalising this mandate has been limited or blocked, and that existing arrangements do not allow governments to carry out their roles and responsibilities on an equal footing in international internet public policy discussions (Venezuela, Iran, Nicaragua).

Monitoring and measurement

References to monitoring and measurement were limited. While some statements noted a need for WSIS action lines to be applied in more measurable and dynamic ways (South Africa, Switzerland), there were no substantive discussions on indicators, metrics, data collection, or monitoring frameworks for assessing WSIS implementation.

WSIS framework & Follow-up and review

There was strong and consistent support for the WSIS framework and its continued relevance. Several speakers reaffirmed the original WSIS outcome documents – in particular the Geneva Declaration and the Tunis Agenda – as enduring foundations of a people-centred, inclusive, and development-oriented information society. The WSIS+20 outcome document – yet to be adopted – was welcomed as reaffirming the WSIS vision, while recognising the need for the framework to adapt to changes in the digital landscape. Such adaptation should preserve the foundations of WSIS and its multistakeholder character (South Africa, Switzerland, Lesotho).

The relevance of the WSIS action lines was also reaffirmed, alongside calls to apply them in more agile, measurable, and context-responsive ways. Some delegations argued that the action lines should be operationalised more dynamically, to reflect emerging technologies such as AI while maintaining consistency with the Geneva Declaration and Tunis Agenda and with broader sustainable development objectives (South Africa, Poland, Switzerland).

Speakers also referred to institutional arrangements supporting WSIS implementation and follow-up. In addition to the repeated support for the IGF, several interventions noted the WSIS Forum, for instance in the context of its preparatory contributions to the WSIS+20 review and its continued annual convening (South Africa, Bangladesh, Russian Federation, UAE, ITU, Switzerland). References were also made to the United Nations Group on the Information Society as a coordination mechanism within the UN system, with speakers highlighting its role in facilitating coordination and increased efficiency across UN digital processes, including through the joint WSIS-GDC implementation roadmap that the draft outcome document tasks it with producing (Morocco, Republic of Korea).

Speakers repeatedly referred to the relationship between WSIS, the 2030 Agenda for Sustainable Development, and the Global Digital Compact. Several emphasised the importance of ensuring coherence and alignment among these processes, noting that WSIS remains closely linked to the implementation of the SDGs. The GDC was referenced as a related and complementary process that should reinforce and build upon existing WSIS frameworks rather than duplicate them. Calls were made for coordinated implementation, clear guidance, and avoidance of fragmentation across UN digital processes in order to ensure consistency and convergence in advancing sustainable development objectives (Albania, Spain, Switzerland, Luxembourg, Ireland, France, Under-Secretary-General).

For a detailed summary of the discussions, including session transcripts and data statistics from the WSIS+20 High-Level meeting, visit our dedicated web page, where we are following the event. To explore the WSIS+20 review process in more depth, including its objectives and ongoing developments, see the dedicated WSIS+20 web page.


Twenty years after the WSIS, the WSIS+20 review assesses progress, identifies ICT gaps, and highlights challenges such as bridging the digital divide and leveraging ICTs for development. The review will conclude with a two-day UNGA high-level meeting on 16–17 December 2025, featuring plenary sessions and the adoption of the draft outcome document.


This page keeps track of the process leading to the UNGA meeting in December 2025. It also provides background information about WSIS and related activities and processes since 1998.

Weekly #242 Under-16 social media use in Australia: A delay or a ban?


5-12 December 2025


HIGHLIGHT OF THE WEEK

Under-16 social media use in Australia: A delay or a ban?

Australia made history on Wednesday as it began enforcing its landmark under-16 social media restrictions — the first nationwide rules of their kind anywhere in the world. 

The measure — a new Social Media Minimum Age (SMMA) requirement under the Online Safety Act — obliges major platforms to take ‘reasonable steps’ to delete underage accounts and block new sign-ups, backed by fines of up to AUD 49.5 million and monthly compliance reporting.

As enforcement began, eSafety Commissioner Julie Inman Grant urged families — particularly those in regional and rural Australia — to consult the newly published guidance, which explains how the age limit works, why it has been raised from 13 to 16, and how to support young people during the transition.

The new framework should be viewed not as a ban but as a delay, Grant emphasised, raising the minimum account age from 13 to 16 to create ‘a reprieve from the powerful and persuasive design features built to keep them hooked and often enabling harmful content and conduct.’

A few days have passed since the ban took effect (we continue to use the word ‘ban’, as it has already become part of the vernacular). Here’s what has happened since.

Teen reactions. The shift was abrupt for young Australians. Teenagers posted farewell messages on the eve of the deadline, grieving the loss of communities, creative spaces, and peer networks that had anchored their daily lives. Youth advocates noted that those who rely on platforms for education, support networks, LGBTQ+ community spaces, or creative expression would be disproportionately affected.

Workarounds and their limits. Predictably, workarounds emerged immediately. Some teens managed to fool facial-age estimation tools by distorting their expressions; others turned to VPNs to mask their locations. However, experts note that free VPNs frequently monetise user data or contain spyware, raising new risks. And the effort might be in vain – platforms retain an extensive set of signals they can use to infer a user’s true location and age, including IP addresses, GPS data, device identifiers, time-zone settings, mobile numbers, app-store information, and behavioural patterns. Age-related markers — such as linguistic analysis, school-hour activity patterns, face or voice age estimation, youth-focused interactions, and the age of an account — give companies additional tools to identify underage users.


Privacy and effectiveness concerns. Critics argue that the policy raises serious privacy concerns, since age-verification systems, whether based on government ID uploads, biometrics, or AI-based assessments, force people to hand over sensitive data that could be misused, breached, or normalised as part of everyday surveillance. Others point out that facial-age technology is least reliable for teenagers — the very group it is now supposed to regulate. Some question whether the fines are even meaningful, given that Meta earns roughly AUD 50 million in under two hours.

The limited scope of the rules has drawn further scrutiny. Dating sites, gaming platforms, and AI chatbots remain outside the ban, even though some chatbots have been linked to harmful interactions with minors. Educators and child-rights advocates argue that digital literacy and resilience would better safeguard young people than removing access outright. Many teens say they will create fake profiles or share joint accounts with parents, raising doubts about long-term effectiveness.

Industry pushback. Most major platforms have publicly criticised the law’s development and substance. They maintain that the law will be extremely difficult to enforce, even as they prepare to comply to avoid fines. Industry group NetChoice has described the measure as ‘blanket censorship,’ while Meta and Snap argue that real enforcement power lies with Apple and Google through app-store age controls rather than at the platform level.

Reddit has filed a High Court challenge to the ban, naming the Commonwealth of Australia and Communications Minister Anika Wells as defendants and claiming that the law has been misapplied to Reddit. The platform maintains that it serves an adult audience and lacks the traditional social media features the government has taken issue with.

Government position. The government, expecting a turbulent rollout, frames the measure as consistent with other age-based restrictions (such as the minimum drinking age of 18) and as a response to sustained public concern about online harms. Officials argue that Australia is playing a pioneering role in youth online safety — a stance drawing significant international attention.

International interest. As we previously reported, a small but growing club of countries is seeking to ban minors from major platforms.

  • The EU Parliament has proposed a minimum social media age of 16, allowing parental consent for users aged 13–15, and is exploring limits on addictive features such as autoplay and infinite scrolling. 
  • In France, lawmakers have suggested banning under-15s from social media and introducing a ‘curfew’ for older teens.
  • Spain is considering parental authorisation for under-16s. 
  • Malaysia plans to introduce a ban on social media accounts for people under 16 starting in 2026.
  • Denmark and Norway are considering raising the minimum social media age to 15, with Denmark potentially banning under-15s outright and Norway proposing stricter age verification and data protections. 
  • In New Zealand, political debate has considered restrictions for minors, but no formal policy has been enacted. 
  • According to Australia’s Communications Minister, Anika Wells, officials from the EU, Fiji, Greece, and Malta have approached Australia for guidance, viewing the SMMA rollout as a potential model. 

All of these jurisdictions are now looking closely at Australia, watching for proof of concept — or failure.

The unresolved question. Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts. But the question lingers: if access to large parts of the digital ecosystem remains open, what is the practical value of fencing off only one segment of the internet?

IN OTHER NEWS LAST WEEK

This week in AI governance

National regulations

Vietnam. Vietnam’s National Assembly has passed the country’s first comprehensive AI law, establishing a risk management regime, sandbox testing, a National AI Development Fund and startup voucher schemes to balance strict safeguards with innovation incentives. The 35‑article legislation — largely inspired by EU and other models — centralises AI oversight under the government and will take effect in March 2026.

The USA. US President Donald Trump has signed an executive order targeting what the administration views as the most onerous and excessive state-level AI laws. The White House argues that a growing patchwork of state rules threatens to stymie innovation, burden developers, and weaken US competitiveness.

To address this, the order creates an AI Litigation Task Force to challenge state laws deemed obstructive to the policy set out in the executive order – to sustain and enhance US global AI dominance through a minimally burdensome national policy framework for AI. The Commerce Department is directed to review all state AI regulations within 90 days and identify those that impose undue burdens. The order also uses federal funding as leverage, allowing certain grants to be conditioned on states aligning with national AI policy.

The UK. More than 100 UK parliamentarians from across parties are pushing the government to adopt binding rules on advanced AI systems, saying current frameworks lag behind rapid technological progress and pose risks to national and global security. The cross‑party campaign, backed by former ministers and figures from the tech community, seeks mandatory testing standards, independent oversight and stronger international cooperation — challenging the government’s preference for existing, largely voluntary regulation.

National plans and investments

Russia. Russia is advancing a nationwide plan to expand the use of generative AI across public administration and key sectors, with a proposed central headquarters to coordinate ministries and agencies. Officials see increased deployment of domestic generative systems as a way to strengthen sovereignty, boost efficiency and drive regional economic development, prioritising locally developed AI over foreign platforms.

Qatar. Qatar has launched Qai, a new national AI company designed to accelerate the country’s digital transformation and global AI footprint. Qai will provide high‑performance computing and scalable AI infrastructure, working with research institutions, policymakers and partners worldwide to promote the adoption of advanced technologies that support sustainable development and economic diversification.

The EU. The EU has advanced an ambitious gigafactory programme to strengthen AI leadership by scaling up infrastructure and computational capacity across member states. This involves expanding a network of AI ‘factories’ and antennas that provide high‑performance computing and technical expertise to startups, SMEs and researchers, integrating innovation support alongside regulatory frameworks like the AI Act. 

Australia. Australia has sealed a USD 4.6 billion deal for a new AI hub in western Sydney, partnering with private sector actors to build an AI campus with extensive GPU-based infrastructure capable of supporting advanced workloads. The investment forms part of broader national efforts to establish domestic AI innovation and computational capacity. 

Partnerships 

Canada‑EU. Canada and the EU have expanded their digital partnership on AI and security, committing to deepen cooperation on trusted AI systems, data governance and shared digital infrastructure. This includes memoranda aimed at advancing interoperability, harmonising standards and fostering joint work on trustworthy digital services. 

The International Network for Advanced AI Measurement, Evaluation and Science. The global network has strengthened cooperation on benchmarking AI governance progress, focusing on metrics that help compare national policies, identify gaps and support evidence‑based decision‑making in AI regulation internationally. This network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the USA. The UK has assumed the role of Network Coordinator.


Trump allows Nvidia to sell chips to approved Chinese customers

The USA has decided to allow the sale of H200 chips to approved customers in China, a decision that marks a notable shift in export controls.

Under the new framework, sales of H200 chips will proceed, subject to conditions including licensing oversight by the US Department of Commerce and a revenue-sharing mechanism that directs 25% of the proceeds back to the US government. 
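The reported revenue-sharing split is straightforward arithmetic; a minimal illustration (the sale figure and function name are hypothetical, only the 25% share comes from the reported framework):

```python
def split_proceeds(sale_amount: float, share: float = 0.25):
    """Split chip-sale proceeds between the US government share and the seller.

    share: fraction of proceeds remitted to the US government
    (25% under the reported framework).
    """
    government_cut = sale_amount * share
    seller_keeps = sale_amount - government_cut
    return government_cut, seller_keeps

# Hypothetical sale of USD 1,000,000 worth of H200 chips:
gov, seller = split_proceeds(1_000_000)
# gov -> 250000.0, seller -> 750000.0
```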

The road ahead. The policy is drawing scrutiny from some US lawmakers and national security experts who caution that increased hardware access could strengthen China’s technological capabilities in sensitive domains.


Poland halts crypto reform as Norway pauses CBDC plans

Poland’s effort to introduce a comprehensive crypto law has reached an impasse after the Sejm failed to overturn President Karol Nawrocki’s veto of a bill meant to align national rules with the EU’s MiCA framework. 

The government argued the reform was essential for consumer protection and national security, but the president rejected it as overly burdensome and a threat to economic freedom, citing expansive supervisory powers and website-blocking provisions. With the veto upheld, Poland remains without a clear domestic regulatory framework for digital assets. In the aftermath, Prime Minister Donald Tusk has pledged to renew efforts to pass crypto legislation.

In Norway, Norges Bank has concluded that current conditions do not justify launching a central bank digital currency, arguing that Norway’s payment system remains secure, efficient and well-tailored to users.

The bank maintains that the Norwegian krone continues to function reliably, supported by strong contingency arrangements and stable operational performance.  Governor Ida Wolden Bache said the assessment reflects timing rather than a rejection of CBDCs, noting the bank could introduce one if conditions change or if new risks emerge in the domestic payments landscape.

Zooming out. Both cases highlight a cautious approach to digital finance in Europe: while Poland grapples with how much oversight is too much, Norway is weighing whether innovation should wait until the timing is right.



LAST WEEK IN GENEVA

On Wednesday (3 December), Diplo, UNEP, and Giga co-organised an event at the Giga Connectivity Centre in Geneva, titled ‘Digital inclusion by design: Leveraging existing infrastructure to leave no one behind’. The event looked at on-the-ground realities of connectivity and digital inclusion, and at concrete examples of how community anchor institutions such as posts, schools, and libraries can contribute significantly to advancing meaningful inclusion. There was also a call for policymakers at national and international levels to keep these community anchor institutions in mind when designing inclusion strategies or discussing frameworks such as the GDC and WSIS+20.

Organisations and institutions are invited to submit event proposals for the second edition of Geneva Security Week. Submissions are open until 6 January 2026. Co-organised once again by the UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA), Geneva Security Week 2026 will take place from 4 to 8 May 2026 under the theme ‘Advancing Global Cooperation in Cyberspace’.

LOOKING AHEAD

UN General Assembly High-level meeting on WSIS+20 review

Twenty years after the conclusion of the World Summit on the Information Society (WSIS), the WSIS+20 review process will take stock of the progress made in implementing WSIS outcomes and address potential ICT gaps, areas for continued focus, and challenges, including bridging the digital divide and harnessing ICTs for development.

The overall review will be concluded by a two-day high-level meeting of the UN General Assembly (UNGA), scheduled for 16–17 December 2025. The meeting will consist of plenary sessions, including statements in accordance with General Assembly resolution 79/277 and the adoption of the draft outcome document.

Diplo and the Geneva Internet Platform (GIP) will provide just-in-time reporting from the meeting. Bookmark our dedicated web page; more details will be available soon.



READING CORNER

Human rights are no longer abstract ideals but living principles shaping how AI, data, and digital governance influence everyday life, power structures, and the future of human dignity in an increasingly technological world.

Digital Watch newsletter – Issue 105 – Monthly, November 2025

November 2025 in retrospect

This month’s issue takes you from Washington to Geneva, from COP 30 to the WSIS+20 negotiations, tracing the major developments reshaping AI policy, online safety, and the resilience of the digital infrastructure we depend on daily.

Here is what we cover in this edition.

Is the AI bubble about to burst? Is AI now ‘too big to fail’? Will the US government bail out the AI giants, and what would the consequences be for the global economy?

The global race to regulate AI – Governments are rushing to set rules, from national AI strategies to new global frameworks. We present the latest initiatives.

Highlights of WSIS+20 Rev 1 – An overview of the document currently guiding negotiations among UN member states ahead of the General Assembly’s high-level meeting on 16–17 December 2025.

When digital meets climate – What UN member states discussed on AI and digital issues at COP 30.

Child safety online – From Australia to the EU, governments are introducing new protections to shield children from the dangers of the internet. We examine their approaches.

Digital outage – The Cloudflare outage exposed the fragility of dependencies within the global internet. We analyse its causes and what the incident reveals about digital resilience.

Last month in Geneva – Catch up on the discussions, events, and conclusions that shaped international digital governance.
DIGITAL GOVERNANCE

France and Germany held a summit on European digital sovereignty in Berlin to accelerate Europe’s digital independence. They presented a roadmap with seven priorities: simplifying regulation (including postponing certain AI Act rules), ensuring fair cloud and digital markets, strengthening data sovereignty, advancing digital commons, developing open-source digital public infrastructure, creating a digital sovereignty task force, and boosting cutting-edge AI innovation. More than EUR 12 billion in private investment was pledged. A major development accompanying the summit was the launch of the European Technological Resilience and Sovereignty Network (ETRS), which aims to reduce dependence on foreign technology (currently above 80%) through expert collaboration, technology-dependence mapping, and support for evidence-based policymaking.

TECHNOLOGIES

The Dutch government has suspended its plan to take over Nexperia, a Netherlands-based chipmaker owned by China’s Wingtech, following positive negotiations with the Chinese authorities. China has also begun releasing its chip stocks to ease the shortage.

Baidu unveiled two in-house AI chips: the M100 for efficient inference on mixture-of-experts models (expected in early 2026) and the M300 for training trillion-parameter multimodal models (2027). The company also presented cluster architectures (Tianchi256 in the first half of 2026; Tianchi512 in the second half of 2026) to scale inference through large interconnects.

IBM unveiled two quantum chips: Nighthawk (120 qubits, 218 tunable couplers), enabling circuits roughly 30% more complex, and Loon, a fault-tolerance testbed with six-way connectivity and long-range couplers.

INFRASTRUCTURE

Six EU member states – Austria, France, Germany, Hungary, Italy, and Slovenia – have jointly called for the Digital Networks Act (DNA) to be reconsidered, arguing that core elements of the proposal – notably harmonised telecom-style regulation, network-fee dispute-settlement mechanisms, and broader merger rules – should instead remain under national control.

CYBERSECURITY

Roblox will introduce mandatory age estimation (starting in December in some countries, then globally in January) and segment users into strict age bands to block chats with unknown adults. Under-13s will remain excluded from private messages unless their parents opt in.

Eurofiber confirmed a breach of its French ATE customer platform and its ticketing system via third-party software, stating that services remained operational and that banking data was secure.

The FCC is set to vote on repealing the January rules under Section 105 of CALEA, which required major carriers to harden their networks against unauthorised access and interception – measures adopted after the Salt Typhoon cyberespionage campaign exposed telecom vulnerabilities.

The UK is planning a Cyber Security and Resilience Bill to strengthen critical national infrastructure and the wider digital economy against growing cyber threats. Around 1,000 providers of essential services (health, energy, IT) would be subject to tougher standards, with potential extension to more than 200 data centres.

ECONOMY

The UAE completed its first government transaction using the digital dirham, a central bank digital currency (CBDC) pilot under its financial infrastructure transformation programme. In addition, the UAE central bank approved Zand AED, the first regulated, multi-chain AED-backed stablecoin, issued by the licensed bank Zand.

The Czech National Bank created a USD 1 million test portfolio of digital assets – comprising bitcoin, a US dollar stablecoin, and a tokenised deposit – to gain hands-on experience with operations, security, and anti-money-laundering processes, with no intention of active investment.

Romania completed its first real-money pilot with the EU Digital Identity Wallet (EUDIW), in collaboration with Banca Transilvania and BPC, allowing a cardholder to authenticate a purchase via the wallet rather than an SMS OTP or card reader.

The European Commission opened a DMA investigation into whether Google Search unfairly penalises news publishers through its ‘site reputation abuse’ policy, which can demote outlets hosting partner content.

On the digital strategy front, the European Commission’s Consumer Agenda 2030 sets out a plan to strengthen protection, trust, and competitiveness while simplifying regulation for businesses.

Turkmenistan adopted its first comprehensive law on virtual assets, entering into force on 1 January 2026, legalising cryptocurrency mining and allowing exchanges subject to strict state registration.

HUMAN RIGHTS

The EU Council adopted new measures to speed up the handling of cross-border data protection complaints, with harmonised admissibility criteria and stronger procedural rights for citizens and businesses. A simplified cooperation process for straightforward cases will also reduce administrative burdens and accelerate resolutions.

India began implementing its 2023 Digital Personal Data Protection Act through newly approved rules that establish the initial governance structures, including a Data Protection Board, while giving organisations additional time to comply fully with their obligations.

LEGAL

OpenAI is contesting a narrowed legal demand from the New York Times for 20 million ChatGPT conversations, part of the Times’ lawsuit over alleged misuse of its content. OpenAI warns that sharing the data could expose sensitive information and set significant precedents for how AI platforms handle user privacy, data retention, and legal liability.

A US judge allowed the Authors Guild’s lawsuit against OpenAI to proceed, denying dismissal and admitting allegations that ChatGPT’s summaries unlawfully reproduce authors’ tone, plots, and characters.

Ireland’s media regulator opened its first DSA investigation into X, examining whether users have accessible avenues of appeal and clear outcomes when content-removal requests are refused.

In a setback for the FTC, a US judge ruled that Meta does not currently hold monopoly power in social networking, rejecting a proposal that could have forced the divestiture of Instagram and WhatsApp.

SOCIOCULTURAL

The European Commission launched the Culture Compass for Europe, a framework to place culture at the heart of EU policy, promote identity and diversity, and support the creative sectors.

China’s cyberspace regulators launched a crackdown on deepfakes that use AI to impersonate public figures in livestream sales, ordering platform clean-ups and accountability for marketers.

DEVELOPMENT

Ministers from West and Central Africa adopted the Cotonou Declaration to accelerate digital transformation by 2030, targeting a single African digital market, widespread broadband, interoperable digital infrastructure, and harmonised rules on cybersecurity, data governance, and AI. The initiative emphasises human capital and innovation, aiming to equip 20 million people with digital skills, create two million digital jobs, and boost African-led development of AI and regional digital infrastructure.

The ITU report ‘Measuring digital development: Facts and Figures 2025’ finds that while global connectivity keeps expanding (with nearly 6 billion people online in 2025), 2.2 billion people remain offline, mostly in low- and middle-income countries. Significant gaps persist in connection quality, data use, affordability, and digital skills, preventing many from fully benefiting from the digital world.

Switzerland formally associated with Horizon Europe, Digital Europe, and Euratom R&T, granting Swiss researchers status equivalent to their EU counterparts to lead projects and obtain funding in all areas from 1 January 2025.

Uzbekistan now grants full legal validity to personal data on the my.gov.uz public services portal, equating it with paper documents (as of 1 November). Citizens can access, share, and manage their records entirely online.


Australia. Australia has unveiled a new national AI plan aimed at harnessing AI for economic growth, social inclusion, and public sector efficiency, while emphasising safety, trust, and fairness in its use. The plan mobilises substantial investment: hundreds of millions of Australian dollars for research, infrastructure, skills development, and programmes helping small and medium-sized enterprises adopt AI. The government also plans to extend access to the technology across the country.

Concrete measures include creating a national AI centre, supporting AI adoption by businesses and non-profits, improving digital skills through training in schools and communities, and embedding AI in public service delivery.

To ensure responsible use, the government will establish the AI Safety Institute (AISI), a national centre tasked with consolidating AI safety research, coordinating standards development, and advising government and industry on best practice. The institute will assess the safety of advanced AI models, build resilience against misuse or accidents, and serve as a hub for international cooperation on AI governance and research.

Bangladesh. The report highlights Bangladesh’s relative strengths: a rapidly expanding e-government infrastructure and generally high public trust in digital services. It also paints a frank picture of structural challenges: uneven connectivity and unreliable electricity supply outside major urban areas, a persistent digital divide (notably along gender and urban-rural lines), limited high-end computing capacity, and insufficient data protection, cybersecurity, and AI-related skills across many parts of society.

As part of its roadmap, the country plans to prioritise governance frameworks, capacity building, and inclusive deployment, in particular ensuring that AI supports public services in health, education, justice, and social protection.

Belgium. Belgium joins a growing number of countries and public sector organisations that have restricted or blocked access to DeepSeek over security concerns. All Belgian federal government officials must stop using DeepSeek as of 1 December, and all instances of DeepSeek must be removed from official devices.

The decision follows a warning from the Centre for Cybersecurity Belgium, which identified serious data protection risks associated with the tool and flagged its use as problematic for handling sensitive government information.

Russia. At Russia’s flagship AI conference (AI Journey), President Vladimir Putin announced the creation of a national AI task force, presenting it as essential to reducing dependence on foreign AI. The plan includes building data centres (powered even by small-scale nuclear plants) and using them to host generative AI models that protect national interests. Putin also argued that only domestically developed models should be used in sensitive sectors, such as national security, to prevent data leaks.

Singapore. Singapore has created a global testing ground for securing AI. Companies from any country can now run real-world experiments there to ensure their AI systems work as intended.

The scheme is governed by 11 governance principles aligned with international standards, including the NIST AI Risk Management Framework and ISO/IEC 42001. Singapore hopes to bridge the gap between fragmented national AI regulations and establish common benchmarks for safety and trust.

The EU. A major political storm is brewing in the EU. The European Commission has presented what it calls the Digital Omnibus, a set of proposals to simplify its digital legislation. The initiative is welcomed by some as necessary to improve the competitiveness of EU digital players, but criticised by others for its potentially negative implications in areas such as digital rights. The package comprises the proposed Digital Omnibus Regulation and the proposed Digital Omnibus on AI Regulation.

In a related development, the European Commission launched an AI Act whistleblower tool, giving EU citizens a secure, confidential channel to report suspected violations of the AI Act, including dangerous or high-risk AI deployments. With the tool’s launch, the EU aims to close enforcement gaps in the AI Act, strengthen accountability for developers and deployers, and promote a culture of responsible AI use across member states.

The tool also aims to foster transparency, allowing regulators to respond more quickly to potential violations without relying solely on audits or inspections. Among the notable developments: the proposed Digital Omnibus on AI Regulation postpones the implementation of the AI Act’s ‘high-risk’ rules until 2027, giving big tech companies more time before stricter oversight takes effect. The entry into force of the high-risk AI rules will now be aligned with the availability of support tools, giving companies up to 16 months to comply. SMEs and small mid-caps will benefit from simplified documentation, broader access to regulatory sandboxes, and centralised oversight of general-purpose AI systems through the AI Office.

Cybersecurity reporting obligations are also being streamlined through a single interface for incidents covered by multiple laws, while privacy rules are clarified to support innovation without weakening GDPR protections. Cookie rules will be modernised to reduce repetitive consent requests and let users manage their preferences more efficiently.

Data access will be improved by consolidating EU data legislation under the Data Union Strategy, through targeted exemptions for small businesses, and with new guidance on contractual compliance. These measures aim to unlock high-quality datasets for AI and strengthen Europe’s innovation potential, while saving companies billions and improving regulatory clarity.

The proposed Digital Omnibus Regulation has implications for data protection in the EU. The proposed amendments to the General Data Protection Regulation (GDPR) would redefine the notion of personal data, weakening safeguards around how companies use such data, in particular for AI training. Meanwhile, cookie consent is simplified into a ‘one-click’ model that lasts up to six months.
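A ‘one-click’ consent choice valid for up to six months could, in principle, be recorded in a single long-lived cookie. The sketch below is purely illustrative (the cookie name, function, and 182-day approximation of six months are our assumptions, not part of the proposal):

```python
from datetime import timedelta

# Hypothetical: six months approximated as 182 days, expressed as a
# Max-Age value in seconds for the Set-Cookie header.
CONSENT_MAX_AGE = int(timedelta(days=182).total_seconds())

def build_consent_cookie(accepted: bool) -> str:
    """Build a Set-Cookie header recording the user's one-click choice,
    so the consent banner is not shown again until the cookie expires."""
    value = "accepted" if accepted else "rejected"
    return (
        f"consent={value}; Max-Age={CONSENT_MAX_AGE}; "
        "Path=/; Secure; HttpOnly; SameSite=Lax"
    )

header = build_consent_cookie(True)
```

The key design point is the long `Max-Age`: instead of re-prompting on every visit, the stored choice is honoured until it expires, after which consent must be sought again.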

Privacy and civil rights groups have voiced concern that the proposed GDPR changes disproportionately benefit big tech. A coalition of 127 organisations issued a public warning that the package could amount to ‘the biggest rollback of digital fundamental rights in the history of the EU’.

These proposals must now go through the EU’s co-legislative process: Parliament and the Council will examine, amend, and negotiate them. Given the controversy (industry support, civil society opposition), the final outcome could look very different from the Commission’s initial proposal.

Le Royaume-Uni. Le gouvernement britannique a lancé une initiative majeure en matière d’intelligence artificielle afin de stimuler la croissance nationale dans ce domaine, combinant investissements dans les infrastructures, soutien aux entreprises et financement de la recherche. Le déploiement immédiat de 150 millions de livres sterling dans le Northamptonshire marque le coup d’envoi d’un programme de 18 milliards de livres sterling sur cinq ans visant à renforcer les capacités nationales en matière d’IA. Grâce à un engagement de marché avancé de 100 millions de livres sterling, l’État agira en tant que premier client des start-ups nationales spécialisées dans le matériel informatique dédié à l’IA, contribuant ainsi à réduire les risques liés à l’innovation et à stimuler la compétitivité.

The plan includes AI growth zones, with a flagship site in South Wales expected to create more than 5,000 jobs, and expanded access to high-performance computing for universities, start-ups, and research bodies. A dedicated £137 million 'AI for Science' strand will accelerate breakthroughs in drug discovery, clean energy, and advanced materials, ensuring that AI drives both economic growth and public value.

The USA. A push to restrict AI regulation looms over the USA. Trump-aligned Republicans have once again pressed for a moratorium on state-level AI regulation. The idea is to prevent states from adopting their own AI laws, arguing that a fragmented regulatory landscape would hamper innovation. One version of the proposal would tie federal broadband funding to states' willingness to forgo AI rules, effectively penalising any state that tried to legislate. This drive is not unopposed, however: more than 260 state legislators from across the USA, Republicans and Democrats alike, have denounced the moratorium.

The president formally established the Genesis Mission by executive order on 24 November 2025, tasking the US Department of Energy (DOE) with leading a national AI-driven scientific research effort. The mission will create a unified 'American platform for science and security', combining the supercomputers of the DOE's 17 national laboratories, decades of accumulated federal scientific datasets, and secure high-performance computing capacity, creating what the administration describes as 'the world's most complex and powerful scientific instrument ever built'.

Under the plan, AI will generate 'foundational scientific models' and AI agents capable of automating experimental design, running simulations, testing hypotheses, and accelerating discoveries in key strategic fields: biotechnology, advanced materials, critical minerals, quantum information science, nuclear fission and fusion, space exploration, semiconductors, and microelectronics.

The initiative is framed as essential for energy security, technological leadership, and national competitiveness. The administration argues that, despite decades of rising research funding, scientific output per dollar invested has stagnated, and that AI can radically boost research productivity within a decade.

To deliver on these ambitions, the executive order establishes a governance structure: the Secretary of Energy oversees implementation; the Assistant to the President for Science and Technology coordinates across agencies; and the DOE may partner with private-sector companies, universities, and other stakeholders to integrate data, computing, and infrastructure.

The UAE and Africa. The 'AI for Development' initiative was announced to promote digital infrastructure across Africa, backed by a USD 1 billion commitment from the United Arab Emirates. According to official statements, the initiative plans to allocate resources to sectors such as education, agriculture, climate adaptation, infrastructure, and governance, helping African governments adopt AI-based solutions even where national AI capacity remains limited.

Although many details remain to be specified (for example, the selection of partner countries and the mechanisms for governance and oversight), the scale and ambition of the initiative signal the UAE's intent to act not only as an AI adoption hub but also as a regional and global catalyst for AI-driven development.

Uzbekistan. Uzbekistan has announced the launch of the '5 million AI leaders' project to build national capacity in the field. Under the plan, the government will integrate AI-focused curricula into schools, vocational training, and universities; train 4.75 million students, 150,000 teachers, and 100,000 civil servants; and launch large-scale competitions for AI start-ups and talent.

The programme also provides for high-performance computing infrastructure (in partnership with a major technology company), a national office for AI technology transfer from abroad, and cutting-edge laboratories in educational institutions, all aimed at accelerating AI adoption across every sector.

The government sees this as essential for modernising public administration and positioning Uzbekistan among the world's top 50 AI-ready countries.

Is the artificial intelligence bubble about to burst?

The AI bubble is inflating to the point of bursting. Five causes explain the current situation, and five future scenarios suggest how a possible 'burst' could be prevented or managed.

The AI investment frenzy did not happen in a vacuum. Several factors have fed the tendency toward overvaluation and unrealistic expectations.

First cause: hype. AI has been presented as humanity's inevitable future. This narrative created a strong fear of missing out (FOMO), prompting companies and governments to invest massively in AI, often without a realistic assessment.

Second cause: diminishing returns on computing power and data. The simple formula that dominated recent years was: more computing power (i.e. more Nvidia GPUs) + more data = better AI. This belief led to the creation of enormous AI factories: hyperscale data centres with an alarming electricity and water footprint. Today, simply stacking more GPUs yields only marginal improvements.

Third cause: the logical and conceptual limits of large language models (LLMs). LLMs face structural limits that cannot be overcome simply by adding more data and computing power. Despite the dominant narrative of imminent superintelligence, many leading researchers doubt that today's LLMs can simply 'scale up' into human-level artificial general intelligence (AGI).

Fourth cause: the slow pace of AI transformation. Most AI investment is still based on potential rather than on measurable, realised value. The technology is advancing faster than society's capacity to absorb it. Previous AI winters, in the 1970s and late 1980s, followed periods of over-promising and under-delivering, leading to drastic funding cuts and industry collapse.

Fifth cause: vast cost gaps. The latest wave of open-source models has shown that models costing a few million dollars can match or outperform models costing hundreds of millions. This raises questions about the efficiency and necessity of current spending on proprietary AI.

Five scenarios describe how the current boom could unfold.

First scenario: the rational pivot (the textbook solution). A market correction could steer AI development away from the assumption that more computing power automatically produces better models. Instead, the field would shift toward smarter architectures, deeper integration with human knowledge and institutions, and smaller, specialised, often open-source systems. Public policy is already heading in this direction: the US AI Action Plan treats open models as strategic assets. This pivot, however, faces resistance from entrenched proprietary models, dependence on closed data, and unresolved debates over how creators of human knowledge should be compensated.

Second scenario: 'too big to fail' (the 2008 bailout scenario). Another outcome would treat the major AI companies as essential economic infrastructure. Industry leaders are already warning of the 'irrationality' of current investment levels, suggesting that one weak quarter from a key company could shake global markets. In this scenario, governments provide implicit or explicit safety nets (cheap credit, favourable regulation, or public-private infrastructure deals) on the grounds that AI giants are systemically important.

Third scenario: geopolitical justification (China at the gates). Competition with China could become the main justification for sustained public investment. China's rapid progress, notably with low-cost open models such as DeepSeek R1, is already drawing comparisons with the 'Sputnik shock'. Support for national champions is then framed as a matter of technological sovereignty, shifting risk from investors to taxpayers.

Fourth scenario: AI monopolisation (Wall Street's bet). If smaller companies fail to monetise, AI capabilities could concentrate in the hands of a few tech giants, echoing past monopolisation in search, social media, and the cloud. Nvidia's dominance in AI hardware reinforces this dynamic. Open-source models slow consolidation but do not prevent it.

Fifth scenario: an AI winter and new digital toys. Finally, a mild AI winter could set in as investment cools and attention shifts to new frontiers: quantum computing, digital twins, immersive reality. AI would remain vital infrastructure but would no longer be the focus of speculative excitement.

The coming years will show whether AI becomes another overhyped digital toy or a more measured, open, and sustainable part of our economic and political infrastructure.
This text is adapted from Dr Jovan Kurbalija's article 'Is the AI bubble about to burst? Five causes and five scenarios', published on www.diplomacy.edu.

Recalibrating the digital agenda: key points from the WSIS+20 Rev 1 document

A revised version of the WSIS+20 outcome document – Revision 1 – was published on 7 November by the co-facilitators of the intergovernmental process. The document will serve as the basis for negotiations among UN member states ahead of the General Assembly's high-level meeting on 16–17 December 2025.

While retaining the overall structure of the zero draft published in August, Rev 1 introduces several changes and new elements.

The new text includes revised and strengthened language in places, stressing the need to bridge rather than merely narrow digital divides, which are framed as multidimensional challenges that must be addressed to realise the WSIS vision.

At the same time, some issues have been deprioritised: for instance, references to e-waste and the call to adopt global standards for reporting environmental impacts have been removed from the environment section.

Several new elements also appear. In the section on enabling environments, states are urged to refrain from unilateral measures contrary to international law.

The importance of inclusive participation in standards-setting is also reaffirmed.

The section on financial mechanisms invites the Secretary-General to consider establishing a working group on future financial mechanisms for digital development, whose findings would be presented to the UN General Assembly at its 81st session.

The internet governance section now references the NETmundial+10 guidelines.

The language on the Internet Governance Forum (IGF) remains largely faithful to the zero draft, confirming the intention to make the forum permanent and to task the Secretary-General with submitting proposals on its future funding. New passages also invite the IGF to strengthen the participation of governments and stakeholders from developing countries in debates on internet governance and emerging technologies.

Several areas have shifted in tone. The language in the human rights section has been softened in places (for example, references to surveillance safeguards and to threats against journalists have been removed).

The framing of the interaction between WSIS and the Global Digital Compact (GDC) has also changed: the emphasis is now on alignment between the WSIS and GDC processes rather than their integration. For example, while the joint GDC–WSIS roadmap was originally meant to 'integrate GDC commitments into the WSIS architecture', it should now 'aim to strengthen coherence between the WSIS and GDC processes'. Corresponding adjustments are reflected in the roles of the Economic and Social Council and the Commission on Science and Technology for Development.

What's next? Registration is now open for the next WSIS+20 virtual stakeholder consultation, scheduled for Monday 8 December 2025, to gather feedback on the revised draft outcome document (Rev 2). Participants must register by Sunday 7 December at 11.59 p.m. (Eastern time).

A first reaction to Rev 2 and a framing document will guide the session; both will be published as soon as they are available.

The consultation is part of preparations for the General Assembly's high-level meeting on the overall review of the implementation of the outcomes of the World Summit on the Information Society (WSIS+20), to be held on 16–17 December 2025.

Programming and climate: AI and digital issues at COP 30

COP 30, the 30th annual UN climate conference, formally concluded last Friday, 21 November. As calm returns to Belém, we take a closer look at its outcomes and their implications for digital technologies and AI.

In agriculture, momentum is clearly building. Brazil and the UAE unveiled AgriLLM, the first open-source large language model designed specifically for agriculture, developed with the support of international research and innovation partners. The goal is to give governments and local organisations a shared digital foundation for building tools that deliver locally relevant, context-aware advice to farmers. In parallel, the AIM for Scale initiative aims to deliver digital advisory services, including climate forecasts and crop information, to 100 million farmers.

Cities and infrastructure are also engaging more deeply in digital transformation. Through the Infrastructure Resilience Development Fund, insurers, development banks, and private investors are pooling capital to finance climate-resilient infrastructure in emerging economies, from clean energy and water systems to the digital networks needed to keep communities connected and protected during climate shocks.

The most explicitly digital agenda emerged under 'enablers and accelerators'. Brazil and its partners launched the world's first digital public infrastructure for climate action, a global initiative to help countries adopt open digital public goods in areas such as disaster response, water management, and climate-resilient agriculture. An accompanying innovation challenge is already supporting new solutions designed for large-scale deployment.

The Green Digital Action Hub was also launched; it will help countries measure and reduce technology's environmental footprint while expanding access to tools that put technology to work for sustainability.

Training and capacity building received particular attention through the new AI Climate Institute, which will help countries of the Global South develop and deploy AI applications tailored to local needs, especially lightweight, energy-efficient models.

The Nature's Intelligence Studio, based in the Amazon, will support nature-inspired innovation and introduce open AI tools to help tackle real-world sustainability challenges through bio-based solutions.

Finally, COP 30 achieved a first by firmly placing information integrity on the climate action agenda.

With disinformation and misinformation recognised as a major global risk, governments and partners launched a declaration and a new multistakeholder process to strengthen transparency, shared responsibility, and public trust in climate information, including the digital platforms that shape it.

The big picture. Across all these areas, COP 30 sent a clear message: the digital dimension of climate action is not optional; it is integral to climate implementation.

From Australia to the European Union: new measures protect children from online harms

Under-16 bans are spreading globally, and Australia is going furthest. Australian regulators have now widened the scope of the ban to include platforms such as Twitch, deemed age-restricted because of its social interaction features. Meta has begun notifying Australian users believed to be under 16 that their Facebook and Instagram accounts will be deactivated from 4 December, a week before the law formally takes effect on 10 December.

To support families through the transition, the government has set up a parents' advisory group, bringing together organisations representing diverse household types, to help parents guide their children on online safety, communication, and secure digital connection.

The ban has already drawn opposition. The major social media platforms have criticised it but indicated they will comply, with YouTube the latest to fall in line. The ban is now being challenged before the High Court by two 15-year-olds, backed by the advocacy group Digital Freedom Project. They argue that the law unfairly limits the ability of under-16s to take part in public debate and political expression, silencing young people on issues that directly affect them.

Malaysia also plans to ban social media accounts for under-16s from 2026. The government approved the measure to protect children from online harms such as cyberbullying, scams, and sexual exploitation. Authorities are considering approaches such as electronic age verification using identity cards or passports, although an exact implementation date has not yet been set.

European lawmakers have proposed similar protections. The European Parliament adopted a non-legislative report calling for a harmonised EU-wide minimum age of 16 for social media, video-sharing platforms, and AI assistants. Access for 13- to 16-year-olds would be allowed only with parental consent. MEPs support the development of a European age-verification app and the European digital identity wallet (eID), while insisting that such tools do not exempt platforms from designing services that are safe by default.

Beyond age restrictions, the EU is strengthening protections more broadly. Member states approved a Council position on a regulation to prevent and combat online child sexual abuse that sets concrete, enforceable obligations for online service providers. Platforms will have to conduct formal risk assessments to identify how their services could be used to spread child sexual abuse material (CSAM) or to solicit children, and then put mitigation measures in place, ranging from safer default privacy settings for children and user reporting tools to technical safeguards. Member states will designate coordinating and competent national authorities empowered to review these risk assessments, require providers to implement mitigation measures and, where necessary, impose financial penalties for non-compliance.

Notably, the Council introduces a three-tier risk classification for online services (high, medium, low). Services deemed high-risk, based on concrete criteria such as the type of service, may be required not only to apply stricter mitigation measures but also to contribute to the development of technologies that reduce those risks. Search engines may be compelled to delist results; competent authorities may order the removal of, or the blocking of access to, CSAM. The position maintains, and seeks to make permanent, an existing temporary exemption allowing providers (e.g. messaging services) to voluntarily scan content for CSAM, an exemption due to expire on 3 April 2026.

To implement and coordinate enforcement, the regulation provides for a new regulatory body, the EU Centre on Child Sexual Abuse. The Centre will process and assess information and reports submitted by platforms; manage a database of provider reports and a database of child sexual abuse indicators, which companies can use for voluntary detection; support victims in obtaining the removal of, or blocking of access to, content depicting them; and share relevant information with Europol and national law enforcement. The Centre's seat has not yet been decided; it will be negotiated with the European Parliament. The Council agreement marks a decisive step: formal 'trilogue' negotiations (talks between the Council, Parliament, and Commission) can now begin, the Parliament having adopted its own position in November 2023.

The European Parliament's report also tackles everyday digital risks. MEPs call for a ban on the most harmful addictive practices, including infinite scroll, autoplay, reward loops, and pull-to-refresh. Other addictive features should be disabled by default for minors. The Parliament urges a ban on engagement-based recommendation algorithms for young users, while demanding that the clear rules of the Digital Services Act (DSA) be extended to video-sharing platforms. The report also targets game mechanics that mimic gambling: loot boxes, randomised in-app rewards, and pay-to-progress mechanics should be prohibited to protect the youngest users from financial and psychological entrapment. Finally, the text addresses commercial exploitation, urging a ban on platforms offering financial incentives for 'kidfluencing', i.e. the use of children as influencers.

MEPs also flagged the risks of generative AI (deepfakes, companion chatbots, AI agents, and AI nudification apps that create non-consensual manipulated images), calling for urgent legal and ethical action. Rapporteur Christel Schaldemose framed the measures as drawing a clear red line: platforms are 'not designed for children', and the experiment of letting addictive, manipulative design target minors must end.

A new multilateral initiative is also underway: Australia's eSafety Commissioner, the UK's Ofcom, and the European Commission's DG CNECT will cooperate to protect children's rights, safety, and privacy online.

The regulators will enforce online safety laws, require platforms to assess and mitigate risks to children, promote privacy-preserving technologies such as age verification, and partner with civil society and academia to keep regulatory approaches grounded in reality.

A new trilateral technical group will be created to examine how age-verification systems can work reliably and interoperably, strengthening the evidence base for future regulatory measures.

The overarching goal is to help children and families use the internet more safely and confidently, by fostering digital literacy and critical thinking and by making online platforms more accountable.

Cloud down: the great digital desert

On 18 November, Cloudflare, the invisible infrastructure underpinning millions of websites, suffered an outage the company describes as its worst since 2019. Users worldwide saw internal server error messages as services such as X and ChatGPT went temporarily offline.

The cause was an internal configuration error. A routine permissions change in a ClickHouse database produced a malformed 'feature file' used by Cloudflare's bot management tool. The file unexpectedly doubled in size and, once propagated across Cloudflare's global network, exceeded built-in limits, triggering cascading failures.

As engineers raced to isolate the faulty file, traffic gradually recovered. By mid-afternoon, Cloudflare had halted the propagation, replaced the corrupted file, and restarted key systems; the network was fully restored a few hours later.

The bigger picture. The incident is not isolated. Last month, Microsoft Azure suffered a multi-hour outage that disrupted business customers in Europe and the USA, while Amazon Web Services (AWS) experienced intermittent interruptions affecting streaming platforms and e-commerce sites. These events, combined with the Cloudflare outage, underline the fragility of global cloud infrastructure.

The outage comes at a politically sensitive moment in the debate over European cloud policy. Brussels regulators are already investigating AWS and Microsoft Azure to determine whether they should be designated as 'gatekeepers' under the EU's Digital Markets Act (DMA). The investigations assess whether their dominance in cloud infrastructure gives them disproportionate control, even though they do not technically meet the law's usual size thresholds.

This recurring pattern highlights a major vulnerability of the modern internet, born of over-reliance on a handful of critical providers. When one of these central pillars falters, whether through misconfiguration, a software bug, or a regional issue, the effects ripple across every layer. The very concentration of services that enables efficiency and scalability also creates single points of failure with cascading consequences.

Last month in Geneva

The digital governance world was very active in Geneva in November. Here is what we have been following.


CERN unveils its AI strategy to advance research and operations

CERN has approved a comprehensive AI strategy to guide the use of AI across research, operations, and administration. The strategy brings together various initiatives under a coherent framework aimed at promoting responsible and effective AI in the service of scientific and operational excellence.

It is built around four main objectives: accelerating scientific discovery, improving productivity and reliability, attracting and developing talent, and enabling large-scale AI deployment through strategic partnerships with industry and member states.

Common tools and shared experience across sectors will strengthen the CERN community and ensure effective deployment.

Implementation will involve prioritised plans and collaboration with EU programmes, industry, and member states to build capacity, secure funding, and develop infrastructure. AI applications will support high-energy physics experiments, future accelerators, detectors, and data-driven decision-making.

The 2025–2026 intersessional panel of the UN Commission on Science and Technology for Development (CSTD)

The UN Commission on Science and Technology for Development (CSTD) held its 2025–2026 intersessional panel on 17 November at the Palais des Nations in Geneva. The agenda focused on science, technology, and innovation in the age of AI, with contributions from experts across academia, international organisations, and the private sector. Delegations also reviewed progress in WSIS implementation ahead of the WSIS+20 process and received updates on the implementation of the Global Digital Compact (GDC) and the ongoing work on data governance within the CSTD’s dedicated working group. The panel’s findings and recommendations will be considered at the Commission’s twenty-ninth session in 2026.

Fourth meeting of the UN CSTD multistakeholder working group on data governance at all levels

The CSTD multistakeholder working group on data governance at all levels met for the fourth time on 18–19 November. The programme opened with welcoming remarks and the formal adoption of the agenda. The UNCTAD secretariat then presented an overview of the submissions received since the last session, highlighting emerging areas of convergence and divergence among stakeholders. The meeting continued with substantive deliberations organised around four tracks covering the key dimensions of data governance: principles applicable at all levels; interoperability between systems; sharing the benefits of data; and establishing safe, secure, and trusted data flows, including across borders. These discussions aim to explore practical approaches, existing challenges, and possible pathways towards consensus.

After the lunch break, delegates reconvened for a plenary session lasting the entire afternoon to continue the thematic exchanges, with opportunities for interaction among member states, the private sector, civil society, academia, the technical community, and international organisations.

The second day was devoted to the working group’s milestones. Delegations reviewed the outline, timeline, and expectations for the progress report to be presented to the General Assembly, as well as the process for selecting the working group’s next chair. The session concluded with agreement on the schedule of upcoming meetings and on any additional matters raised by participants.

2025 Innovations Dialogue: Neurotechnologies and their implications for international peace and security

On 24 November, UNIDIR hosted its Innovations Dialogue on neurotechnologies and their implications for international peace and security in Geneva and online. Experts from neuroscience, law, ethics, and security policy discussed developments such as brain-computer interfaces and cognitive enhancement tools, exploring both their potential applications and the challenges they present, including ethical and security considerations. The event included a poster exhibition on responsible use and governance approaches.

14th UN Forum on Business and Human Rights

The 14th UN Forum on Business and Human Rights was held from 24 to 26 November in Geneva and online, under the theme ‘Accelerating action on business and human rights amid crises and transformations’. The forum addressed key issues, including protecting human rights in the age of AI and exploring human and labour rights on platforms in the Asia-Pacific region amid the ongoing digital transformation. On the sidelines of the event, a session also took a close look at ‘the shadow work’ behind artificial intelligence.