Weekly #244 Looking ahead: Our annual AI and digital forecast


2-9 January 2026


HIGHLIGHT OF THE WEEK

Looking ahead: Our annual AI and digital forecast

As we enter the new year, we begin this issue of the Weekly newsletter with our annual outlook on AI and digital developments, featuring insights from our Executive Director. Drawing on our coverage of digital policy over the past year on the Digital Watch Observatory, as well as our professional experience and expertise, we highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.

Technologies. AI is becoming a commodity, affecting everyone—from countries competing for AI sovereignty to individual citizens. Equally important is the rise of bottom-up AI: in 2026, language models from small to large will be able to run on corporate or institutional servers. Open-source development, which marked major milestones in 2025, is expected to become a central focus of future geostrategic competition.

Geostrategy. The good news is that, despite all geopolitical pressure, we still have an integrated global internet. However, digital fragmentation is accelerating, with continued filtering of social media and other services, and with developments increasingly clustering around three major hubs: the United States, China, and potentially the EU. Geoeconomics is becoming a critical dimension of this shift, particularly given the global footprint of major technology companies. And any fragmentation, including trade and taxation fragmentation, will inevitably affect them. Equally important is the role of “geo-emotions”: the growing disconnect between public sentiment and industry enthusiasm. While companies remain largely optimistic about AI, public scepticism is increasing, and this divergence may carry significant political implications.

Governance. The core governance dilemma remains whether national representatives—parliamentarians domestically and diplomats internationally—are truly able to protect citizens’ digital interests related to data, knowledge, and cybersecurity. While there are moments of productive discussion and well-run events, substantive progress remains limited. One positive note is that inclusive governance, at least in principle, continues through multistakeholder participation, though it raises its own unresolved questions.

Security. The adoption of the Hanoi Cybercrime Convention at the end of the year is a positive development, and substantive discussions at the UN continue despite ongoing criticism of the institution. While it remains unclear whether these processes are making us more secure, they are expanding the governance toolbox. At the same time, attention should extend beyond traditional concerns—such as cyberwarfare, terrorism, and crime—to emerging risks associated with interconnecting AI systems via APIs. These points of integration create new interdependencies and potential backdoors for cyberattacks.

Human rights. Human rights are increasingly under strain, with recent policy shifts by technology companies and growing transatlantic tensions between the EU and the United States highlighting a changing landscape. While debates continue to focus heavily on bias and ethics, deeper human rights concerns—such as the rights to knowledge, education, dignity, meaningful work, and the freedom to remain human rather than optimised—receive far less attention. As AI reshapes society, the human rights community must urgently revisit its priorities, grounding them in the protection of life, dignity, and human potential.

Economy. The traditional three-pillar framework comprising security, development, and human rights is shifting toward economic and security concerns, with human rights being increasingly sidelined. Technological and economic issues, from access to rare earths to AI models, are now treated as strategic security matters. This trend is expected to accelerate in 2026, making the digital economy a central component of national security. Greater attention should be paid to taxation, the stability of the global trade system, and how potential fragmentation or disruption of global trade could impact the tech sector.

Standards. The lesson from social media is clear: without interoperable standards, users get locked into single platforms. The same risk exists for AI. To avoid repeating these mistakes, developing interoperable AI standards is critical. Ideally, individuals and companies should build their own AI, but where that isn’t feasible, at a minimum, platforms should be interoperable, allowing seamless movement across providers such as OpenAI, Claude, or DeepSeek. This approach can foster innovation, competition, and user choice in the emerging AI-dominated ecosystem.

Content. The key issue for content in 2026 is the tension between governments and US tech, particularly regarding compliance with EU laws. At the core, countries have the right to set rules for content within their territories, reflecting their interests, and citizens expect their governments to enforce them. While media debates often focus on misuse or censorship, the fundamental question remains: can a country regulate content on its own soil? The answer is yes, and adapting to these rules will be a major source of tension going forward.

Development. Countries that are currently behind in AI aren’t necessarily losing. Success in AI is less about owning large models or investing heavily in hardware, and more about preserving and cultivating local knowledge. Small countries should invest in education, skills, and open-source platforms to retain and grow knowledge locally. Paradoxically, a slower entry into AI could be an advantage, allowing countries to focus on what truly matters: people, skills, and effective governance.

Environment. Concerns about AI’s impact on the environment and water resources persist. It is worth asking whether massive AI farms are truly necessary. Smaller AI systems could handle many of the same workloads, or support training and education, reducing the need for energy- and water-intensive platforms. At a minimum, AI development should prioritise sustainability and efficiency, mitigating the risk of large-scale digital waste while still enabling practical benefits.

IN OTHER NEWS THIS WEEK

This week in AI governance

Italy. Italy’s antitrust authority has formally closed its investigation into the Chinese AI developer DeepSeek after the company agreed to binding commitments to make risks from AI hallucinations — false or misleading outputs — clearer and more accessible to users. Regulators stated that DeepSeek will enhance transparency, providing clearer warnings and disclosures tailored to Italian users, thereby aligning its chatbot deployment with local regulatory requirements. If these conditions aren’t met, enforcement action under Italian law could follow.

UK. Britain has escalated pressure on Elon Musk’s social media platform X and its integrated AI chatbot Grok after reports that the tool was used to generate sexually explicit and non‑consensual deepfake images of women and minors. UK technology officials have publicly demanded that X act swiftly to prevent the spread of such content and ensure compliance with the Online Safety Act, which requires platforms to block unsolicited sexual imagery. Musk, however, has suggested that users who submit such prompts should be held liable, a move criticised as shifting responsibility. Critics note that the platform must nonetheless embed stronger safeguards.


Brussels bets on open-source to boost tech sovereignty

The European Commission is preparing a strategy to commercialise European open-source software to strengthen digital sovereignty and reduce reliance on foreign technology providers. 

The upcoming strategy, expected alongside the Cloud and AI Development Act in early 2026, will prioritise community upscaling, industrial deployment, and market integration. Strengthening developer communities, supporting adoption across various sectors, and ensuring market competitiveness are key objectives. Governance reforms and improved supply chain security are also planned to address vulnerabilities in widely used open-source components, enhancing trust and reliability.

Financial sustainability will be a key focus, with public sector partnerships encouraged to ensure the long-term viability of projects. By providing stable support and fostering collaboration between government and industry, the strategy seeks to create an economically sustainable open-source ecosystem.

The big picture. Although EU funding has fostered innovation, commercial-scale success has often materialised outside the EU. By focusing on open-source solutions developed within the EU, Brussels aims to strengthen Europe’s technological autonomy, retain the benefits of domestic innovation, and foster a resilient and competitive digital landscape.


USA pulls out of several international bodies

US President Trump has issued a memorandum directing US withdrawal from numerous international organisations, conventions, and treaties deemed contrary to US interests.

The list includes 35 non-UN entities (e.g. the GFCE and the Freedom Online Coalition) and 31 UN bodies (e.g. the Department of Economic and Social Affairs, the UN Conference on Trade and Development and the UN Framework Convention on Climate Change (UNFCCC)). 

Why does it matter? The order was not a surprise, following the Trump administration’s 2025 retreat from the Paris Agreement, WHO and other international organisations focusing on climate change, sustainable development, and identity issues. Two initiatives in the technology and digital governance ecosystem are explicitly dropped: the Freedom Online Coalition (FOC) and the Global Forum on Cyber Expertise (GFCE). And there is also some uncertainty regarding the meaning and the implications of the US ‘withdrawal’ from UNCTAD and UN DESA, given the roles these entities play in relation to initiatives such as WSIS and Agenda 2030 follow-up processes, the Internet Governance Forum (IGF), and data governance. 



LOOKING AHEAD

The year has just begun, and the digital policy calendar is still taking shape. To stay up to date with upcoming events and discussions shaping the digital landscape, we encourage you to follow our calendar of events at dig.watch/events.



READING CORNER

Weekly #243 What the WSIS+20 outcome means for global digital governance


12-19 December 2025


HIGHLIGHT OF THE WEEK

From review to recalibration: What the WSIS+20 outcome means for global digital governance

The WSIS+20 review, conducted 20 years after the World Summit on the Information Society, concluded in New York with the adoption of a high-level outcome document by the UN General Assembly. The review assesses progress toward building a people-centred, inclusive, and development-oriented information society, highlights areas needing further effort, and outlines measures to strengthen international cooperation.

A major institutional decision was to make the Internet Governance Forum (IGF) a permanent UN body. The outcome also includes steps to strengthen its functioning: broadening participation—especially from developing countries and underrepresented communities—enhancing intersessional work, supporting national and regional initiatives, and adopting innovative and transparent collaboration methods. The IGF Secretariat is to be strengthened, sustainable funding ensured, and annual reporting on progress provided to UN bodies, including the Commission on Science and Technology for Development (CSTD).

Negotiations addressed the creation of a governmental segment at the IGF. While some member states supported this as a way to foster more dialogue among governments, others were concerned it could compromise the IGF’s multistakeholder nature. The final compromise encourages dialogue among governments with the participation of all stakeholders.

Beyond the IGF, the outcome confirms the continuation of the annual WSIS Forum and calls for the United Nations Group on the Information Society (UNGIS) to increase efficiency, agility, and membership. 

WSIS action line facilitators are tasked with creating targeted implementation roadmaps linking WSIS action lines to Sustainable Development Goals (SDGs) and Global Digital Compact (GDC) commitments. 

UNGIS is requested to prepare a joint implementation roadmap to strengthen coherence between WSIS and the Global Digital Compact, to be presented to CSTD in 2026. The Secretary-General will submit biennial reports on WSIS implementation, and the next high-level review is scheduled for 2035.

The document places closing digital divides at the core of the WSIS+20 agenda. It addresses multiple aspects of digital exclusion, including accessibility, affordability, quality of connectivity, inclusion of vulnerable groups, multilingualism, cultural diversity, and connecting all schools to the internet. It stresses that connectivity alone is insufficient, highlighting the importance of skills development, enabling policy environments, and human rights protection.

The outcome also emphasises open, fair, and non-discriminatory digital development, including predictable and transparent policies, legal frameworks, and technology transfer to developing countries. Environmental sustainability is highlighted, with commitments to leverage digital technologies while addressing energy use, e-waste, critical minerals, and international standards for sustainable digital products.

Human rights and ethical considerations are reaffirmed as fundamental. The document stresses that rights online mirror those offline, calls for safeguards against adverse impacts of digital technologies, and urges the private sector to respect human rights throughout the technology lifecycle. It addresses online harms such as violence, hate speech, misinformation, cyberbullying, and child sexual exploitation, while promoting media freedom, privacy, and freedom of expression.

Capacity development and financing are recognised as essential. The document highlights the need to strengthen digital skills, technical expertise, and institutional capacities, including in AI. It invites the International Telecommunication Union to establish an internal task force to assess gaps and challenges in financial mechanisms for digital development and to report recommendations to CSTD by 2027. It also calls on the UN Inter-Agency Working Group on AI to map existing capacity-building initiatives, identify gaps, and develop programmes such as an AI capacity-building fellowship for government officials and research programmes.

Finally, the outcome underscores the importance of monitoring and measurement, requesting a systematic review of existing ICT indicators and methodologies by the Partnership on Measuring ICT for Development, in cooperation with action line facilitators and the UN Statistical Commission. The Partnership is tasked with reporting to CSTD in 2027. Overall, the CSTD, ECOSOC, and the General Assembly maintain a central role in WSIS follow-up and review.

The final text reflects a broad compromise and was adopted without a vote, though some member states and groups raised concerns about certain provisions.

IN OTHER NEWS LAST WEEK

This week in AI governance

El Salvador. El Salvador has partnered with xAI to launch the world’s first nationwide AI-powered education programme, deploying the Grok model across more than 5,000 public schools to deliver personalised, curriculum-aligned tutoring to over one million students over the next two years. The initiative will support teachers with adaptive AI tools while co-developing methodologies, datasets and governance frameworks for responsible AI use in classrooms, aiming to close learning gaps and modernise the education system. President Nayib Bukele described the move as a leap forward in national digital transformation. 

BRICS. Talks on AI governance within the BRICS bloc have deepened as member states seek to harmonise national approaches and shared principles for ethical, inclusive and cooperative AI deployment. It is still premature, however, to speak of creating an AI-BRICS grouping, according to Deputy Foreign Minister Sergey Ryabkov, Russia’s BRICS sherpa.

Pax Silica. A diverse group of nations has announced Pax Silica, a new partnership aimed at building secure, resilient, and innovation-driven supply chains for the technologies that underpin the AI era. These include critical minerals and energy inputs, advanced manufacturing, semiconductors, AI infrastructure and logistics. Analysts warn that diverging views may emerge if Washington pushes for tougher measures targeting China, potentially increasing political and economic pressure on participating nations. However, the USA, which leads the initiative, clarified that the platform will focus on strengthening supply chains among its members rather than penalising non-members such as China.

UN AI Resource Hub. The UN AI Resource Hub has gone live as a centralised platform aggregating AI activities and expertise across the UN system. Presented by the UN Inter-Agency Working Group on AI, the platform was developed jointly by UNDP, UNESCO and ITU. It enables stakeholders to explore initiatives by agency, country and SDG. The hub supports inter-agency collaboration, capacity development for UN member states, and enhanced coherence in AI governance and terminology.


ByteDance inks US joint-venture deal to head off a TikTok ban

ByteDance has signed binding agreements to shift control of TikTok’s US operations to a new joint venture majority-owned (80.1%) by American and other non-Chinese investors, including Oracle, Silver Lake and Abu Dhabi-based MGX.

In exchange, ByteDance retains a 19.9% minority stake, in an effort to meet US national security demands and avoid a ban under the 2024 divest-or-ban law. 

The deal is slated to close on 22 January 2026, and US officials previously cited an implied valuation of approximately $14 billion, although the final terms have not been disclosed. 

TikTok CEO Shou Zi Chew told staff the new entity will independently oversee US data protection, algorithm and software security, and content moderation, with Oracle acting as the ‘trusted security partner’ hosting US user data in a US-based cloud and auditing compliance.


China edges closer to semiconductor independence with EUV prototype

Chinese scientists have reportedly built a prototype extreme ultraviolet (EUV) lithography machine, a technology long monopolised by ASML — the Dutch company that is the world’s sole supplier of EUV systems and a central chokepoint in global semiconductor manufacturing. 

EUV machines enable the production of the most advanced chips by etching ultra-fine circuits onto silicon wafers, making them indispensable for AI, advanced computing and modern weapons systems.

The Chinese prototype is already generating EUV light, though it has not yet produced working chips. 

The project reportedly involved former ASML engineers who reverse-engineered key elements of EUV systems, suggesting China may be closer to advanced chip-making capability than Western policymakers and analysts had assumed. 

Officials are targeting chip production by 2028, with insiders pointing to 2030 as a more realistic milestone.


USA launches tech force to boost federal AI and advanced tech skills

The Trump administration has unveiled a new initiative, branded the US Tech Force, aimed at rebuilding the US government’s technical capacity after deep workforce reductions, with a particular focus on AI and digital transformation. 

The programme reflects growing concern within the administration that federal agencies lack the in-house expertise needed to deploy and oversee advanced technologies, especially as AI becomes central to public administration, defence, and service delivery.

According to the official TechForce.gov website, participants will work on high-impact federal missions, addressing large-scale civic and national challenges. The programme positions itself as a bridge between Silicon Valley and Washington, encouraging experienced technologists to bring industry practices into government environments.

Supporters argue that the approach could quickly strengthen federal AI capacity and reduce reliance on external contractors. Critics, however, warn of potential conflicts of interest and question whether short-term deployments can substitute for sustained investment in the public sector workforce.


Brussels targets ultra-cheap imports

EU member states have agreed to introduce a new customs duty on low-value e-commerce imports, starting 1 July 2026. Under the agreement, a customs duty of €3 per item will be applied to parcels valued at less than €150 imported directly into the EU from third countries.

This marks a significant shift from the previous regime, under which such low-value goods were generally exempt from customs duties.

The temporary duty is intended to bridge the gap until the EU Customs Data Hub, a broader customs reform initiative designed to provide comprehensive import data and enhance enforcement capacity, becomes fully operational in 2028.

The Commission framed the measure as a necessary interim solution to ensure fair competition between EU-based retailers and overseas e-commerce sellers. The measure also lands squarely in the shadow of platforms such as Shein and Temu, whose business models are built on shipping vast volumes of ultra-low-value parcels.


USA reportedly suspends Tech Prosperity Deal with UK

The USA has reportedly suspended the implementation of the Tech Prosperity Deal with the UK, pausing a pact originally agreed during President Trump’s September state visit to London.

The Tech Prosperity Deal was designed to strengthen collaboration in frontier technologies, with a strong emphasis on AI, quantum, and the secure foundations needed for future innovation, and included commitments from major US tech firms to invest in the UK.

According to the Financial Times, Washington’s decision to suspend the deal reflects growing frustration with London’s stance on broader trade issues beyond technology. US officials reportedly wanted the UK to make concessions on non-tariff barriers, particularly regulatory standards affecting food and industrial goods, before advancing the tech agreement.

Neither government has commented yet. 



LOOKING AHEAD

Digital Watch Weekly will take a short break over the next two weeks. Thank you for your continued engagement and support.



READING CORNER

UNGA High-level meeting on WSIS+20 review – Day 2

Dear readers,

Welcome to our overview of statements delivered during Day 2 at UNGA’s high-level meeting on the WSIS+20 review.

Speakers repeatedly underscored that the WSIS vision remains relevant, but that it needs to be matched with concrete action, sustained cooperation, and inclusive governance arrangements. Digital transformation was framed as both an opportunity and a risk: a powerful accelerator of sustainable development, resilience, and service delivery, but also a driver of new inequalities if structural gaps, concentration of power, and governance challenges are left unaddressed. Digital public infrastructure and digital public goods were highlighted as foundations for inclusive development, while persistent digital divides were described as urgent and unresolved. Artificial intelligence (AI) featured prominently as a general-purpose technology with transformative potential, but also with risks related to exclusion, labour, environmental sustainability, and governance capacity.

Particular attention was given to the Internet Governance Forum (IGF), with widespread support for its permanent mandate, alongside calls to strengthen its funding, working modalities, and participation.

Throughout the day, speakers reaffirmed that no single stakeholder can deliver digital development alone, and that WSIS must continue to function as a people-centred, multistakeholder framework aligned with the SDGs and the Global Digital Compact (GDC).

DW team

Information and communication technologies for development

Digital transformation is no longer optional, underpinning early warning systems, disaster preparedness, climate adaptation, education, health services, and economic diversification, especially for Small Island Developing States (Fiji).

ICTs were widely framed as key enablers of sustainable development, innovation, resilience, and inclusive growth, and as major accelerators of the 2030 Agenda, particularly in contexts facing economic, climate, or security challenges (Ethiopia, Eritrea, Ukraine, Fiji, Colombia). It was noted that technologies, AI, and digital transformation must serve humanity through education, culture, science, communication, and information (UNESCO).

Strong emphasis was placed on digital public infrastructure (DPI) and digital public goods (DPGs) as foundations for inclusion, innovation, growth and public value (UNDP, Trinidad and Tobago, Malaysia). Digital public infrastructure was emphasised as needing to be secure, interoperable, and rights-based, grounded in safeguards, open systems, and public-interest governance (UNDP).

Digital commons, open-source solutions, and community-driven knowledge infrastructures were highlighted as central to sustainable development outcomes (IT for Change, Wikimedia, OIF). DPGs, such as open-source platforms, have been developed by stakeholders brought together by the WSIS process. However, member states need to create conditions for DPGs’ continued success within the WSIS framework (Wikimedia). Libraries were identified as global digital public infrastructure and significant public goods, with calls for their systematic integration into digital inclusion strategies and WSIS implementation efforts (International Federation of Library Associations and Institutions).

Persistent inequalities in sharing digitalisation gains were highlighted. While more than 6 billion people are online globally, low-income countries continue to lag significantly, including in digital commerce participation, underscoring the need for short-term policy choices that secure inclusive and sustainable development outcomes in the long term (UNCTAD).

The positive impact of digital technologies is considerably lower in developing countries compared to that in developed countries (Cuba). Concerns were raised that developing countries risk being locked into technological dependence, further deepening global asymmetries if left unaddressed (Colombia).

Environmental impacts

An environmentally sustainable information society was emphasised, with calls to align digital and green transformations to address climate change and resource scarcity, and to harness ICTs to achieve the SDGs (China).

Digital innovation was described as needing to support environmental sustainability and responsible resource use, ensuring positive long-term social and economic outcomes (Thailand).

The enabling environment for digital development

Speakers reaffirmed that enabling environments are central to the WSIS vision of a people-centred, inclusive, and development-oriented information society. Predictable, coherent, and transparent policy frameworks were highlighted as essential for enabling innovation and investment, and for ensuring that all countries can benefit from the digital economy (Microsoft, ICC).

These environments were linked to openness and coherence, including regulatory clarity and predictability, support for the free flow of information across borders, avoidance of unnecessary fragmentation, and the promotion of interoperability and scalable digital solutions (ICC). The importance of developing policies through dialogue with relevant stakeholders was also stressed (ICC).

Several speakers underlined that enabling environments must address persistent development gaps. The uneven distribution of the benefits of the information society, particularly in developing countries, was noted, alongside calls for enhanced international cooperation to facilitate investment, innovation, effective governance, and access to financial and technological resources (Holy See). Partnerships across all sectors were seen as essential to mobilise financing, capacity building, and technology transfer, given that governments cannot deliver alone (Fiji).

Divergent views were expressed on unilateral coercive measures. Some speakers argued that such measures impede economic and social development and hinder digital transformation, calling for international cooperation focused on capacity building, technology transfer, and financing of public digital infrastructure (Eritrea, Cuba). In contrast, a delegation stated that economic sanctions are lawful, legitimate, and effective tools for addressing threats to peace and security (USA).

Governance frameworks were identified as a core component of enabling environments. It was stressed that digital development must be safe, equitable, and rooted in trust, with adequate governance frameworks ensuring transparency, accountability, user protection, and meaningful stakeholder participation in line with the multistakeholder approach (Thailand).

Building confidence and security in the use of ICTs

Building confidence and security in the digital environment was framed as a prerequisite for realising the social and economic benefits of digitalisation, with trust and safety needing to be embedded across the entire digital ecosystem (Malaysia).

Trust was described as requiring regulation, accountability, and sustained public education to ensure that users can engage confidently with digital technologies (Malaysia).

Cybercrime was identified as a persistent and serious concern requiring concerted collective solutions beyond national approaches (Namibia).

Cybersecurity and cybercrime were highlighted as increasingly serious and complex challenges that undermine trust and risk eroding the socio-economic gains of digitalisation if left unaddressed (Thailand).

Investment in capacity building was emphasised as essential to strengthening national and individual resilience against cyber threats, alongside the adoption of security- and privacy-by-design principles (Thailand, International Federation for Information Processing).

Capacity development

Capacity development was consistently framed as a core enabler of inclusive digital transformation, with widespread recognition of persistent constraints in digital skills, institutional capacity, and governance capabilities (UNDP, Malaysia, Trinidad and Tobago).

Capacity development was identified as one of the most frequent requests from countries, particularly in relation to inclusive digital transformation (UNDP).

Effective capacity development was described as requiring institutional anchors, with centres of excellence highlighted as providing infrastructure and expertise that many countries—especially least developed countries, landlocked developing countries, and small island states—cannot afford independently (UNIDO).

Efforts are underway to establish a network of centres of excellence across the Global South, including in China, Ethiopia, the Western Balkans, Belarus, and Latin America (UNIDO).

Sustainable digital education was highlighted as essential, including fostering learner aspiration, addressing diversity and underrepresented communities, embedding computational thinking, and strengthening teacher preparation (International Federation for Information Processing). The emphasis should be on empowering people to understand information, question it, and use it wisely (UNESCO).

Libraries were highlighted as trusted, non-commercial public spaces that provide access to connectivity, devices, skills, and confidence-building support. For many people, particularly the most disenfranchised, libraries were described as the only way to get online and as key sources of diverse content and cultural heritage (International Federation of Library Associations and Institutions).

Financial mechanisms

Financing was described as a critical and non-negotiable component of implementing the WSIS vision, with repeated warnings that without adequate and predictable public and private resources, WSIS commitments risk remaining aspirational (APC).

Effective implementation was described as requiring a shift from fragmented, project-based funding toward systems-level financing approaches capable of delivering impact at scale (UNDP).

Calls were made for adequate, predictable, and accessible funding for digital infrastructure and capacity development, particularly to ensure effective participation of developing countries and the Global South (Colombia).

Support was expressed for the proposed establishment of a working group on future financial mechanisms for digital development, provided it focuses on the concrete needs of developing countries (Eritrea).

Financing challenges were also linked to linguistic and cultural diversity, with calls for decentralisation of computing capacity and ambitious strategies to finance digital development and AI, building on proposals by the UN Secretary-General (OIF).

Calls were made for UNGIS and ITU to ensure inclusive participation in the interagency financing task force and to approach the IGF’s permanent mandate with creativity and ambition (APC).

Existing financing mechanisms were highlighted for their tangible impact, including funds that have mobilised resources for digital infrastructure in more than 100 countries (Kuwait).

Human rights and the ethical dimensions of the information society

Human rights were reaffirmed as a foundational pillar of the WSIS vision, grounded in the UN Charter and the Universal Declaration of Human Rights, with emphasis on ensuring that the same rights people enjoy offline are protected online (International Institute for Democracy and Electoral Assistance, Costa Rica, Austria).

Anchoring WSIS in international human rights law was highlighted as essential to preserving an open, free, interoperable, reliable, and secure internet, particularly amid trends toward fragmentation, surveillance-based governance, and concentration of technological power (International Institute for Democracy and Electoral Assistance, OHCHR).

The centrality of human rights and the multistakeholder character of digital governance were described as practical conditions for legitimacy and effectiveness, particularly as freedom online declines and civic space shrinks (GPD, APC).

Concerns were raised about harms associated with profit-driven algorithmic systems and platform design, including addiction, mental health impacts, polarisation, extremism, and erosion of trustworthy information, with particularly severe effects in developing countries (HitRecord, Brazil).

A rights-based approach to digital governance was described as necessary to ensure accountability, participation, impact assessment, and protection of rights such as privacy, non-discrimination, and freedom of expression (OHCHR, ICC).

Divergent views were expressed on content regulation. Some cautioned against any threats to freedom of speech and expression (USA), while others emphasised the legitimate authority of states to regulate the digital domain to protect citizens and uphold the principle that what is illegal offline must also be illegal online (Brazil).

Ethical frameworks were emphasised to protect privacy, personal data, children, women, and vulnerable groups, and to orient digital development toward human dignity, justice, and the common good, including embedding ethical principles by design and protecting cultural diversity and the rights of artists and creators in AI-driven environments (UNESCO, Holy See, International Federation for Information Processing, Costa Rica, Kuwait, Colombia, Foundation Cibervoluntarios, Eritrea).

Concerns were raised about trends toward a more fragmented and state-centric internet, with warnings that such shifts pose risks to human rights, including privacy and freedom of expression, and could undermine the open and global nature of the internet (International Institute for Democracy and Electoral Assistance).

Data governance

The growing importance of data was linked to the expansion of AI (UNCTAD). Unlocking the value of data in a responsible manner was presented as a common problem and a civilisational challenge (Internet and Jurisdiction Policy Network). Concerns were raised about an innovation economy built on data extractivism, dispossession, and disenfranchisement, with countries and people from the Global South resisting unjust trade arrangements and seeking to reclaim the internet and its promise (IT for Change).

Artificial intelligence

AI was described as a general-purpose technology at the centre of the technological revolution, shaping economic growth, national security, global competitiveness, and development trajectories (Brazil, USA).

Concerns were raised that AI is currently being developed and deployed largely according to market-driven and engagement-maximising business models, similar to those that shaped social media. Without practical guardrails, AI risks reproducing harmful effects, and so governments need to move beyond historically hands-off approaches and play a more active role in governance (HitRecord).

Specific AI-related harms were identified, including deepfakes, rising environmental impacts from AI infrastructure (IT for Change), and labour impacts (Brazil). Concerns were expressed that AI adoption is contributing to job displacement and the weakening of labour rights, despite the centrality of decent work to the information society agenda (Brazil).

Noting uneven global capacities in AI development, deployment, and use, concerns were expressed that the speed of AI development may exceed the adaptive capacities of developing countries, including small island developing states, risking new forms of exclusion (Eritrea, Trinidad and Tobago). And it was highlighted that cultural and linguistic diversity is critically under-represented in AI systems (OIF).

Calls were made for AI governance frameworks to address AI-related risks and ensure that the technology is placed at the service of humanity (Kuwait, Namibia). Divergent views were expressed on governance approaches, with some cautioning against additional bureaucracy, while others stressed that relying on market forces alone will not ensure AI benefits all people (USA, HitRecord). It was also said that the UN should not shy away from looking into AI governance matters (Brazil). 

From an industrial perspective, it was noted that regulation often lags behind AI developments, with support expressed for evidence-based policymaking and regulatory testbeds to de-risk innovation and translate AI strategies into practice (UNIDO).

Ethical safeguards were emphasised as essential, with AI described as opening new horizons for creativity while also raising serious concerns about its impact on humanity’s relationship to truth, beauty, and contemplation (Holy See).

Internet governance

Widespread support was expressed for the Internet Governance Forum (IGF), described as a central pillar of the WSIS architecture and a cornerstone of global digital cooperation (International Institute for Democracy and Electoral Assistance, GPD, APC, ICANN, ICC, UNESCO, Austria, Africa ICT Alliance, Meta, Italy, Colombia). Making the IGF permanent was seen as an affirmation of confidence in the multistakeholder model and its continued relevance for addressing governance issues (APC, ICC, OHCHR).

The IGF was also described as a unique and inclusive multistakeholder space, bringing together governments, the private sector, civil society, the technical community, academia, and international organisations on equal footing. This model was credited with helping the internet remain global, interoperable, resilient, and stable through periods of rapid technological and geopolitical change (Microsoft, ICANN, IGF Leadership Panel, Meta).

Several speakers highlighted that the IGF has evolved into a self-organised global network, with more than 170 national, regional, sub-regional, and youth IGFs, enabling voices from remote, marginalised, and under-represented communities to feed into global discussions and bridge the gap between high-level diplomacy and ground-level implementation (Internet and Jurisdiction Policy Network, IGF Leadership Panel, Africa ICT Alliance, Internet Society). At the same time, it was stressed that while the IGF represents a remarkable institutional innovation, it has not yet fulfilled its full potential. Calls were made to continue improving its working modalities, clarify its institutional evolution, and ensure sustainable and predictable funding (Internet and Jurisdiction Policy Network, Brazil, ICANN).

Protecting and reaffirming the multistakeholder model of internet governance was repeatedly identified as important to the success of WSIS implementation. This model – anchored in dialogue, transparency, inclusivity, and accountability – was presented as a practical governance tool rather than a symbolic principle, ensuring that those who build, use, and regulate the internet can jointly shape its future (International Institute for Democracy and Electoral Assistance, Wikimedia, Microsoft, ICANN, ICC).

At the same time, several speakers stressed the need for stronger and more effective government participation in governance processes. It was noted that governments have legitimate roles and responsibilities in shaping digital policy, and that intergovernmental spaces must be strengthened so that all governments – particularly those from developing countries – can effectively perform their roles in global digital governance (APC, Brazil, Cuba). In this context, there was also a concern that calls for greater government engagement in the IGF have been framed primarily toward developing countries, with emphasis placed instead on the need for equal-footing participation of governments from all regions to ensure the forum’s long-term sustainability (APC).

Monitoring and measurement

It was noted that WSIS+20 must deliver measurable commitments with verifiable indicators (Costa Rica). And a streamlined and inclusive monitoring and review framework was seen as essential moving forward (Cuba).

WSIS framework, follow-up and implementation

There was broad recognition that the WSIS framework remains a central reference for a people-centred, inclusive, and development-oriented information society, while requiring reinforcement to respond to growing complexity, concentration of digital power, and risks posed by advanced AI systems (Costa Rica, Malaysia, Cuba).

The multistakeholder model was repeatedly reaffirmed as a cornerstone of the WSIS vision, anchored in dialogue, transparency, inclusivity, and accountability, and seen as essential to maintaining a resilient and open digital ecosystem (International Institute for Democracy and Electoral Assistance, GPD, USA, Meta, ICC, Italy, Thailand). The inclusive nature of the WSIS+20 review process itself was highlighted, with the Informal Multi-Stakeholder Sounding Board described as enabling substantive contributions from diverse stakeholder groups that helped identify both achievements and gaps in WSIS implementation over the past 20 years (WSIS+20 Co-Facilitators Informal Multi-Stakeholder Sounding Board).

Speaking of inclusivity, many speakers stressed that no single stakeholder can deliver digital development alone, and called for collaboration among governments, private sector, civil society, academia, technical communities, and international organisations to mobilise resources, share knowledge, transfer technology, and support nationally driven digital strategies (ICC, Namibia, Italy, Thailand). There were also calls to include knowledge actors such as universities, libraries, archives, cultural figures, and public media, reflecting that digital governance now concerns the status of knowledge itself (OIF). Youth representatives called for funded programmes, institutionalised youth seats in WSIS action line implementation, and recognition of young people as co-designers of digital policy (AI for Good Young Leaders).

On matters related to WSIS action lines, human rights expertise was highlighted as requiring a stronger and more systematic role within the WSIS architecture (GPD, OHCHR). And gender equality was welcomed as an explicit implementation priority within WSIS action lines (APC).

Strengthening UN system-wide coherence was highlighted as a priority, including clearer action line roadmaps and improved coordination across the UN system (GPD, UNDP). Alignment among WSIS, the Global Digital Compact (GDC), the Pact for the Future, and the SDGs was seen as necessary to maximise impact and avoid duplication (International Institute for Democracy and Electoral Assistance, Meta, Brazil, Colombia, Austria, Cuba). At the same time, one delegation expressed reservations about references to the GDC in the final outcome document, noting also concerns about what they considered to be international organisations setting a standard that legitimises international governance of the internet (USA).

Looking ahead, the task was framed not as preserving WSIS, but reinforcing it so that it remains future-proof, capable of anticipating rapid technological change while staying anchored in people-centred values, human rights, and inclusive governance (UNESCO, GPD). It was also stressed that for many in the Global South, the WSIS vision remains aspirational, and that the next phase must ensure the information society becomes an effective right rather than an empty promise (Cuba). 

Comments regarding the outcome document

In the last segment of the meeting, several delegations made statements regarding the WSIS+20 outcome document.

Some expressed concern about the limited transparency, inclusiveness, and predictability in the final phase of negotiations, stating that the process did not fully reflect multilateral dialogue and affected trust and collective ownership of the document (India, Israel, Iraq on behalf of Group of 77 and China, Iran).

Reservations were placed on language perceived as going beyond the WSIS mandate or national policy space, with reaffirmation of national sovereignty and the right of states to determine their own regulatory, social, and cultural frameworks. Concerns were raised regarding references to gender-related terminology, sexual and reproductive health, sexual and gender-based violence, misinformation, disinformation, and hate speech (Saudi Arabia, Argentina, Iran, Nigeria). Concerns were also noted regarding references to international instruments to which some states are not parties, citing concerns related to national legislation, culture, and sovereignty (Saudi Arabia). Dissociations were recorded from paragraphs related to human rights, information integrity, and the role of the Office of the High Commissioner for Human Rights in the digital sphere (Russian Federation). Concerns were further expressed that the outcome document advances what were described as divisive social themes, including climate change, gender, diversity, equity and inclusion, and the right to development (the USA).

Several delegations expressed concern that references to unilateral coercive measures were weakened and did not reflect their negative impact on access to technology, capacity building, and digital infrastructure in developing countries (Iraq on behalf of Group of 77 and China, Russian Federation, Iran). Others noted that such measures adopted in accordance with international law are legitimate foreign policy tools for addressing threats to peace and security (USA, Ukraine).

Some delegations noted that the outcome document does not sufficiently reflect the development dimension, particularly with regard to concrete commitments on financing, technology transfer, and capacity building, and that the absence of references to common but differentiated responsibilities weakens the development pillar (India, Iraq on behalf of Group of 77 and China, Iran). It was also said that the document does not adequately address the impacts of automation and artificial intelligence on labour and employment, despite requests from developing countries (Iraq on behalf of the Group of 77 and China).

While support for the multistakeholder nature of internet governance and the permanent nature of the IGF was noted, concerns were expressed that the outcome treats the IGF as a substitute rather than a complement to enhanced intergovernmental cooperation, and that the language regarding the intergovernmental segment for dialogue among governments has been weakened. It was said that intergovernmental spaces need to be strengthened so that all governments, particularly those from developing countries, can perform their roles in global governance (Iran, Iraq on behalf of Group of 77 and China). 

Serious reservations were placed on language viewed as legitimising international governance of the internet, with opposition expressed to references to the Global Digital Compact, the Summit for the Future, and the Independent International Scientific Panel on AI, alongside reaffirmed support for a multistakeholder model of internet governance (USA).

Despite these reservations, several delegations stated that they joined the consensus in the interest of multilateralism and unity, while placing their positions and dissociations on record (India, Iraq on behalf of the Group of 77 and China, Iran, Nigeria, USA).

For a detailed summary of the discussions, including session transcripts and data statistics from the WSIS+20 High-Level meeting, visit our dedicated web page, where we are following the event. To explore the WSIS+20 review process in more depth, including its objectives and ongoing developments, see the dedicated WSIS+20 web page.

Twenty years after the WSIS, the WSIS+20 review assesses progress, identifies ICT gaps, and highlights challenges such as bridging the digital divide and leveraging ICTs for development. The review will conclude with a two-day UNGA high-level meeting on 16–17 December 2025, featuring plenary sessions and the adoption of the draft outcome document.
This page keeps track of the process leading to the UNGA meeting in December 2025. It also provides background information about WSIS and related activities and processes since 1998.

UNGA High-level meeting on WSIS+20 review – Day 1

Dear readers,

Welcome to our overview of statements delivered on Day 1 of the UNGA high-level meeting on the WSIS+20 review.

Throughout the day, ICTs were framed as indispensable enablers of sustainable development and as core elements of economic participation and social inclusion. Speakers highlighted the transformative role of digital technologies across sectors such as education, health, agriculture, public administration, and disaster risk reduction, while underscoring the growing importance of digital public infrastructure and digital public goods as shared foundations for inclusive and resilient development. At the same time, advanced technologies, including artificial intelligence (AI), were described as reshaping economies and societies, offering new development opportunities while also introducing governance, capacity, and equity challenges that require coordinated international responses.

Discussions also returned repeatedly to the persistence of deep and multidimensional digital divides, spanning connectivity, affordability, skills, gender, geography, and access to emerging technologies. Speakers stressed that access alone is insufficient without trust, safety, institutional capacity, and respect for human rights. 

Internet governance featured prominently, with support for an open, free, global, interoperable, and secure internet grounded in human rights and multistakeholder cooperation. The Internet Governance Forum was widely recognised as a central platform for inclusive dialogue, with many calling for its strengthening through a permanent mandate, sustainable funding, and broader participation, particularly from developing countries and underrepresented groups. 

Across interventions, a shared message emerged that effective digital governance, strengthened international cooperation, and coherent implementation of WSIS commitments remain essential to ensuring that digital transformation leaves no one behind.

Our summary is structured around the thematic areas of the draft outcome document, which is expected to be adopted at the end of the high-level meeting, later today. 

DW team

Information and communication technologies for development

ICTs were consistently framed as indispensable and critical enablers of sustainable development and no longer peripheral but at the heart of development strategies (Slovakia, Azerbaijan, Timor-Leste). They increasingly shape how societies govern, learn, innovate, and connect, and are essential tools to advance economic growth, social inclusion, and quality of life (Azerbaijan, Chile). 

ICTs, including AI, were also described as tools to bring people closer together and collectively address sustainable development challenges, while boosting education and health, supporting climate adaptation and mitigation, and contributing to economic growth (Senegal, Israel). They were further framed as essential for transforming key sectors such as agriculture, health, education, and public administration (Uganda). The role of ICTs in disaster risk reduction and early warning systems was also highlighted, with emphasis on international cooperation through existing UN mechanisms (Japan).

Digital public infrastructure and digital public goods were highlighted as foundational backbones for inclusive and resilient development (India, Indonesia, Uganda, Kenya, Ghana). Shared digital foundations such as digital identity, payment systems, and data systems were described as transforming service delivery, expanding opportunities, and strengthening citizen engagement when built in ways that respect human rights and promote inclusion (Under-Secretary-General). 

Emerging technologies, including AI, big data, and cloud computing, were described as reshaping economies, transforming modes of production, and creating new opportunities for innovation. For developing countries, these technologies were seen as holding significant potential to accelerate structural transformation, expand access to services, enhance productivity, and support the achievement of the SDGs (Tunisia). Emerging technologies were also framed as creating opportunities for development and innovation and helping to address major global challenges (Norway). 

Several speakers stressed that digital transformation cannot be limited to the rollout of technology alone and must remain people-centred (Peru), while others emphasised its role in improving quality of life (Chile). However, it was emphasised that those without connectivity remain excluded from the opportunities that ICTs can offer (President of the General Assembly).

Closing all digital divides

As highlighted during the entire WSIS+20 review process, persistent and multidimensional digital divides remain a central challenge that must be addressed if the WSIS vision of a truly inclusive information society is to be fully achieved. The divide was characterised as a ‘digital canyon’, reflecting stark disparities in access between and within countries, as well as a continuing gender gap in internet use (President of the General Assembly). 

Digital divides were widely described as multidimensional, spanning connectivity, affordability, skills, institutions, data, and emerging technologies, including AI (Kenya, Pakistan). Particular concern was expressed that gaps are deepening both between and within countries, and increasingly between those who shape technology and those who are shaped by it (Türkiye, Norway). The persistence of divides along gender, age, rural–urban, and disability lines was repeatedly highlighted, with warnings that uneven access to digital public services, skills, and meaningful connectivity risks reinforcing existing inequalities (Slovenia, Luxembourg, Mongolia).

More than a quarter of the world’s population remains offline, and affordability remains a significant barrier (Secretary-General). However, a recurring message was that digital inclusion requires more than connectivity. Skills, affordability, trust, safety, institutional capacity, and respect for human rights and fundamental freedoms online were repeatedly highlighted as essential components of meaningful access (Albania, Slovakia, Finland).

The inclusion of women and girls was identified as a critical priority for closing digital divides, with calls for targeted digital literacy, skills development, empowerment initiatives, and protection from online harms (President of the General Assembly, Israel, Finland, Belgium, Saudi Arabia). 

Attention was also drawn to intersecting forms of exclusion, including those affecting rural communities, persons with disabilities, older persons, and marginalised groups, with warnings that digital transformation risks reinforcing existing inequalities if these dimensions are not addressed systematically (Belgium, Luxembourg, Uganda, CANZ, Mongolia).

The emergence of an AI divide, linked to the concentration of infrastructure, data, and computing power, was also highlighted as a growing risk with far-reaching implications (Pakistan, Saudi Arabia). Concerns were raised that as global attention increasingly shifts toward AI and advanced technologies, many countries risk falling into perpetual catch-up without foundational investments in affordable and resilient broadband and in digital skills (Timor-Leste, Saudi Arabia, Philippines).

Developing countries highlighted structural constraints. The digital divide was described as a daily barrier to education, health care, and governance, with warnings that inequalities could deepen as global attention shifts toward AI and advanced technologies (Timor-Leste). 

Strong calls were made for enhanced international cooperation, financing, and technology transfer to close all dimensions of the digital divide. Adequate, predictable, and affordable financing was described as indispensable for extending digital infrastructure, promoting universal and meaningful connectivity, and strengthening skills and capacities, particularly in developing countries (Bangladesh, Azerbaijan, Cambodia, Egypt, Senegal, Algeria, Tunisia). Speakers emphasised that no country can address digital divides alone and stressed the importance of coordinated global action, inclusive partnerships, and knowledge sharing (Singapore, Mongolia, Latvia).

More broadly, speakers emphasised that the WSIS process remains of vital importance for developing countries and must prioritise the closure of all digital divides through concrete, actionable measures and inclusive, multistakeholder cooperation (Iraq on behalf of G77 and China, CANZ, Tonga).

The digital economy

Speakers repeatedly linked digitalisation to economic participation, productivity, and inclusion, while cautioning that unequal access risks excluding many countries and communities from emerging digital economic opportunities. Digital technologies were framed as enablers of entrepreneurship, micro, small and medium-sized enterprises, and access to markets, particularly when supported by digital public infrastructure and digital public services (Indonesia, Zimbabwe, Ghana).

Several delegations stressed that participation in the digital economy depends not only on connectivity but also on access to digital identity, digital payments, and interoperable platforms that enable transactions between governments, businesses, and citizens. Digital public infrastructure was described as a foundation for economic activity, transparency, and efficiency, helping to integrate citizens and businesses into formal economic systems (India, Ghana, Indonesia).

Developing countries highlighted that structural digital divides constrain their ability to benefit from the digital economy. These constraints were described as affecting access to education, finance, employment opportunities, and innovation ecosystems, with warnings that attention to advanced technologies, such as AI, could widen economic gaps if foundational issues remain unaddressed (Timor-Leste, Bangladesh, Cambodia, Egypt, Senegal).

Several speakers explicitly connected digital economy participation to global inequities. It was argued that without enhanced international cooperation, financing, and technology transfer, developing countries risk remaining marginalised in global digital value chains and digital governance processes (Bangladesh, Egypt, Algeria, CANZ).

At the same time, some interventions emphasised national strategies to modernise legal and regulatory frameworks governing the digital economy, including updates to legislation related to digital services, AI, and electronic transactions, as part of broader economic transformation agendas (Ghana, Kyrgyzstan).

Social and economic development

Several interventions described digitalisation as enabling more inclusive economic participation, particularly through support for micro, small and medium-sized enterprises and by widening access to markets and services in developing country contexts (Indonesia, Zimbabwe). In this sense, digital technologies were presented as tools for integrating more people and businesses into economic activity, rather than simply increasing efficiency.

Digitalisation was also linked to the functioning of the state and public institutions, with references to digital government and digital public services as ways to improve access, responsiveness, and service delivery for citizens (Belgium, Senegal, Timor-Leste, Morocco). 

Beyond economic participation and public administration, digital technologies were associated with human development outcomes, including education, health, and social services. Several speakers referred to digital tools as supporting learning, healthcare delivery, and social inclusion, particularly where physical access to services remains limited (Egypt, Indonesia, Ghana). Digitalisation was also connected to livelihoods and rural development, including in agriculture, highlighting its relevance for poverty reduction and local economic resilience (Zimbabwe, Senegal).

Environmental impacts

Environmental dimensions of digitalisation were highlighted as a growing concern. It was stressed that the environmental footprint of digitalisation must be addressed, including energy use, critical minerals, and e-waste, with calls for global standards and greener infrastructure (Secretary-General). Concerns were raised about the risk of e-waste and the importance of climate-resilient and sustainable digital infrastructure (Timor-Leste). The environmental impact of data centres and AI, and the need for circular economy approaches and responsible management of critical minerals, were also emphasised (Morocco). The role of governments and the private sector in ensuring sustainable and durable digital infrastructure, including opportunities to advance clean energy, was underlined (France).

The enabling environment for digital development

The importance of predictable policies, investment, and international cooperation featured prominently. Financing, technology transfer, and capacity-building were identified as prerequisites for inclusive digital development, particularly for developing countries (Algeria, Egypt, Cambodia, Bangladesh, Kenya). The need for a coherent UN digital governance architecture that builds on existing processes and avoids fragmentation was emphasised (Switzerland, Germany).

Concerns were raised that unilateral economic measures and unilateral coercive measures undermine the enabling environment for digital development by restricting access to technologies, digital infrastructure, financing, and capacity-building opportunities. Such measures were described as distorting global supply chains and market order (Venezuela), exacerbating digital divides and disproportionately affecting developing countries, limiting their ability to participate meaningfully in the global digital economy and to implement WSIS commitments (Iraq on behalf of the Group of 77 and China, Venezuela on behalf of the Group of Friends in Defence of the UN Charter, Nicaragua).

Financial mechanisms

Financing was raised as a condition for implementation of the WSIS vision, with calls for adequate, predictable, and affordable financing to expand digital infrastructure and close persistent digital divides, particularly in developing countries (Iraq on behalf of the G77 and China, Algeria). It was stressed that political ambition cannot be realised without financing, alongside calls for sustained investment in digital public infrastructure and targeted financing for last-mile connectivity to reach underserved populations (Kenya, Timor-Leste).

Several interventions called for concessional and innovative financing to support digital development in developing countries. References were made to the Task Force for Financial Mechanisms as a platform for sharing best practices and strengthening financing approaches for digital development and universal connectivity, alongside calls to expand concessional financing to enable investment in digital infrastructure and services (Bangladesh, United Kingdom, Côte d’Ivoire).

Some delegations also described national financing efforts and instruments, including large-scale investments in fibre infrastructure, digital public services, and cybersecurity, as well as the use of universal service mechanisms and dedicated digital investment tools. A proposal was also made to create a working group to examine financial mechanisms and present recommendations in 2027, prioritising financing on favourable terms and North–South, South–South, and triangular partnerships (Senegal, Côte d’Ivoire, Morocco).

Building confidence and security in the use of ICTs

Building confidence and security in the use of ICTs was discussed through concrete governance and security measures at both national and international levels. National cybersecurity frameworks, legislation, and institutional arrangements were highlighted as essential for protecting digital infrastructure, data, and citizens, and for fostering trust in digital systems (Senegal, Morocco, Ghana). Capacity gaps in cybersecurity and technical expertise were identified as a major challenge, particularly for developing countries seeking to expand digital services while managing growing cyber risks (Uganda). The protection of critical infrastructure and citizens from cyber threats was emphasised as digitalisation deepens across public services and essential sectors (Timor-Leste, Zimbabwe).

At the international level, references were made to the UN Convention against Cybercrime (Uruguay, Venezuela, Russian Federation) and to the establishment of a permanent intergovernmental mechanism under UN auspices in the context of international information security and cooperation (Russian Federation).

Capacity development

Capacity development was presented as a prerequisite for inclusive digital transformation and for closing persistent digital divides. Several speakers emphasised that meaningful participation in the information society requires digital literacy, technical skills, institutional capacity, and policy expertise, particularly in developing countries and least developed countries (Albania, Egypt, Bangladesh, Cambodia, Uganda, Timor-Leste, Lesotho).

A recurring message was that access alone is insufficient without the skills and capabilities needed to use digital technologies safely, productively, and effectively. Digital skills development was linked to education, employability, participation in the digital economy, and confidence in digital public services (Albania, Egypt, Israel, South Africa).

Capacity gaps were highlighted in specific technical and governance areas, notably cybersecurity and emerging technologies, with warnings that skills shortages expose developing countries to heightened risks as they expand digital services and digitise public institutions (Timor-Leste, Uganda, Senegal).

International cooperation was framed as essential for capacity development, with references to the need for technology transfer, technical assistance, and sustained capacity-building support, particularly for developing countries and least developed countries. Strengthened North–South, South–South, and triangular cooperation was highlighted as a means to support skills development, knowledge sharing, hands-on training, and institutional and cybersecurity capacities aligned with national priorities and vulnerabilities (Cambodia, Bangladesh, Tunisia, Timor-Leste). Capacity building in emerging technologies, including artificial intelligence and cybersecurity, was also linked to international support, financing, and technology transfer (Nepal, Algeria).

Human rights and the ethical dimensions of the information society

Human rights were consistently framed as foundational to the information society and as a central reference point for digital governance. Numerous delegations reaffirmed that the same rights apply online and offline, with explicit references to international human rights law, including the rights to privacy, freedom of expression, access to information, and non-discrimination (Estonia, Spain, Belgium, Poland, Lithuania, Luxembourg, Finland, France). 

Several interventions stressed that digital technologies must respect and promote human dignity, with human dignity presented as a guiding value of the information society and a core ethical reference for digital transformation. Technological development, including artificial intelligence, was framed as needing to advance development and inclusion while enhancing dignity, autonomy, accountability, and respect for the individual, rather than treating people merely as data points or objects of automation (Estonia, Belgium, Lithuania, India, Türkiye, Slovenia).

Concerns were repeatedly raised about the misuse of digital technologies in ways that undermine fundamental rights. These included references to censorship, digital repression, surveillance practices that infringe on privacy, and restrictions on freedom of expression and civic space online (Belgium, Spain, Poland, Finland, France). Particular attention was drawn to risks faced by vulnerable groups, underscoring the need for safeguards, oversight, and accountability in the design and deployment of digital technologies (Belgium, Finland).

Artificial intelligence was explicitly cited as amplifying existing human rights challenges. Several interventions warned that AI systems, if not governed in line with human rights principles, could facilitate surveillance, enable censorship, or reinforce discrimination and exclusion, reinforcing calls to integrate human rights considerations throughout the lifecycle of emerging technologies (Belgium, Spain, France, Lithuania).

Data governance

Data governance was raised as an emerging concern, with broader implications for trust, ethics, and development. References were made to the establishment of national data governance frameworks, including efforts to build secure and interoperable data systems as part of digital transformation and public sector modernisation strategies (Morocco). Data governance was also identified as an outstanding challenge alongside data protection and digital capacity-building, particularly in relation to the deployment of AI (Chile). Several interventions framed data governance in terms of responsible and ethical data use, highlighting concerns about data concentration, data gaps, and the societal implications of data-driven technologies, while also linking data protection frameworks to trust in digital ecosystems and the effective functioning of digital government (Senegal, Saudi Arabia, Ghana). More broadly, data governance was framed through the lens of digital sovereignty and national authority over data, particularly from developing-country perspectives (Iraq on behalf of the G77 and China).

Artificial intelligence

Artificial intelligence featured prominently as both a development accelerator and a source of new risks. Ethical, human-centred, and rights-based approaches to AI governance were repeatedly emphasised, with references to human dignity, accountability, transparency, and the application of existing human rights obligations in AI-enabled systems (Estonia, Belgium, Spain, Albania, Lithuania, Indonesia, Israel, Senegal, Zimbabwe). Several speakers stressed that the rapid deployment of AI, particularly in public services, requires governance approaches that safeguard trust, inclusion, and democratic values (Albania, Lithuania, Türkiye).

Attention was drawn to structural AI divides. Unequal access to computing capacity, algorithms, data, and linguistic resources was identified as a growing concern, with the risk that lack of access to AI capabilities translates into exclusion from future employment, education, and economic opportunities (Saudi Arabia). Concerns were also expressed that disparities in AI infrastructure, skills, and institutional capacity could reinforce existing inequalities, particularly for the least developed and small developing countries. Without targeted international support, AI was seen as likely to widen development gaps rather than close them (Timor-Leste, Bangladesh, Lesotho).

The need to strengthen capacity within public institutions was underlined, extending beyond technical expertise to include policymakers, regulators, and civil servants responsible for oversight and implementation. National AI strategies were presented as tools to anchor AI use in public value and ethical governance rather than purely market-driven deployment (Kenya, Ghana).

The international governance of AI was discussed primarily in terms of coherence, coordination, and institutional continuity. Several interventions stressed the importance of building on existing international processes and initiatives, particularly within the UN system, and warned against fragmentation or duplication in global AI governance efforts (Japan, Estonia). AI governance was also situated within broader international challenges related to information manipulation, disinformation, and democratic resilience, reinforcing calls for approaches that strengthen trust and information integrity as part of global digital cooperation frameworks (France, Lithuania). More generally, AI governance was framed as needing to serve humanity and to be embedded within a strengthened global digital governance architecture grounded in human rights and multistakeholder cooperation, without reference to specific institutional mechanisms (Switzerland, European Union). 

Internet governance

Many speakers reaffirmed the multistakeholder model as a core principle of internet governance. They emphasised the importance of inclusive participation by governments, the private sector, civil society, the technical community, academia, and users, and stressed that no single actor or group of actors should control the internet or global internet governance processes. The multistakeholder approach was framed as essential for transparency, trust, legitimacy, and effective governance of the internet (President of the General Assembly, Estonia, Germany, Poland, Lithuania, Luxembourg, Ireland, Israel, Nigeria, Finland).

Several statements highlighted support for an internet that is open, free, global, interoperable, secure, and inclusive, and rooted in respect for human rights. This vision was linked to economic development, democratic participation, access to knowledge, and the protection of fundamental freedoms (European Union, President of the General Assembly, Germany, Estonia, Spain, Poland, Lithuania, Luxembourg, Finland, Norway). Some speakers warned that fragmentation, excessive centralisation, or restrictive approaches to internet governance could undermine this vision and weaken the global nature of the internet (Germany, Estonia, Poland, Lithuania, Norway). There were also references to an ongoing process of fragmentation of the digital space and what was described as the lack of practical action to preserve a unified global network (Russian Federation).

The Internet Governance Forum (IGF) was widely referenced as a central space for multistakeholder dialogue on internet-related public policy issues, with several speakers also pointing to its role as an inclusive platform for broader digital governance discussions, including emerging technologies and cross-cutting digital policy challenges (Under-Secretary-General, Japan, Estonia). Many expressed support for strengthening the IGF, including through elements such as a permanent mandate, predictable and sustainable funding, a strengthened Secretariat, enhanced intersessional work, and broader participation, particularly from developing countries and underrepresented groups. Concrete expressions of support included financial contributions to reinforce the IGF’s work and sustainability (Germany).

At the same time, some speakers questioned whether the IGF’s non-decision-making nature enables governments to participate on an equal footing in addressing international public policy issues related to the internet, as envisaged in the Tunis Agenda (Iran, Venezuela). Some also argued that current internet governance arrangements remain unjust or incomplete, and called for stronger intergovernmental cooperation, including legally binding frameworks and a more central role for the United Nations and its specialised bodies in addressing international internet public policy issues (Russian Federation, Venezuela). The mandate for enhanced cooperation, as set out in the Tunis Agenda, was described as unfinished in a few statements, which argued that progress in operationalising this mandate has been limited or blocked, and that existing arrangements do not allow governments to carry out their roles and responsibilities on an equal footing in international internet public policy discussions (Venezuela, Iran, Nicaragua).

Monitoring and measurement

References to monitoring and measurement were limited. While some statements noted a need for WSIS action lines to be applied in more measurable and dynamic ways (South Africa, Switzerland), there were no substantive discussions on indicators, metrics, data collection, or monitoring frameworks for assessing WSIS implementation.

WSIS framework & Follow-up and review

There was strong and consistent support for the WSIS framework and its continued relevance. Several speakers reaffirmed the original WSIS outcome documents – in particular the Geneva Declaration and the Tunis Agenda – as enduring foundations of a people-centred, inclusive, and development-oriented information society. The WSIS+20 outcome document – yet to be adopted – was welcomed as reaffirming the WSIS vision, while recognising the need for the framework to adapt to changes in the digital landscape. Such adaptation should preserve the foundations of WSIS and its multistakeholder character (South Africa, Switzerland, Lesotho).

The relevance of the WSIS action lines was also reaffirmed, alongside calls to apply them in more agile, measurable, and context-responsive ways. Some delegations argued that the action lines should be operationalised more dynamically, to reflect emerging technologies such as AI while maintaining consistency with the Geneva Declaration and Tunis Agenda and with broader sustainable development objectives (South Africa, Poland, Switzerland).

Speakers also referred to institutional arrangements supporting WSIS implementation and follow-up. In addition to the repeated support for the IGF, several interventions noted the WSIS Forum, for instance in the context of its preparatory contributions to the WSIS+20 review and its continued annual convening (South Africa, Bangladesh, Russian Federation, UAE, ITU, Switzerland). References were also made to the United Nations Group on the Information Society as a coordination mechanism within the UN system, with speakers highlighting its role in facilitating coordination and increased efficiency across UN digital processes, including through the joint WSIS-GDC implementation roadmap that the draft outcome document tasks it with producing (Morocco, Republic of Korea).

Speakers repeatedly referred to the relationship between WSIS, the 2030 Agenda for Sustainable Development, and the Global Digital Compact. Several emphasised the importance of ensuring coherence and alignment among these processes, noting that WSIS remains closely linked to the implementation of the SDGs. The GDC was referenced as a related and complementary process that should reinforce and build upon existing WSIS frameworks rather than duplicate them. Calls were made for coordinated implementation, clear guidance, and avoidance of fragmentation across UN digital processes in order to ensure consistency and convergence in advancing sustainable development objectives (Albania, Spain, Switzerland, Luxembourg, Ireland, France, Under-Secretary-General).

For a detailed summary of the discussions, including session transcripts and statistics from the WSIS+20 High-Level meeting, visit our dedicated web page, where we are following the event. To explore the WSIS+20 review process in more depth, including its objectives and ongoing developments, see the dedicated WSIS+20 web page.

Twenty years after the WSIS, the WSIS+20 review assesses progress, identifies ICT gaps, and highlights challenges such as bridging the digital divide and leveraging ICTs for development. The review will conclude with a two-day UNGA high-level meeting on 16–17 December 2025, featuring plenary sessions and the adoption of the draft outcome document.

This page keeps track of the process leading to the UNGA meeting in December 2025. It also provides background information about WSIS and related activities and processes since 1998.

Weekly #242 Under-16 social media use in Australia: A delay or a ban?

5-12 December 2025


HIGHLIGHT OF THE WEEK

Under-16 social media use in Australia: A delay or a ban?

Australia made history on Wednesday as it began enforcing its landmark under-16 social media restrictions — the first nationwide rules of their kind anywhere in the world. 

The measure — a new Social Media Minimum Age (SMMA) requirement under the Online Safety Act — obliges major platforms to take ‘reasonable steps’ to delete underage accounts and block new sign-ups, backed by fines of up to AUD 49.5 million and monthly compliance reporting.

As enforcement began, eSafety Commissioner Julie Inman Grant urged families — particularly those in regional and rural Australia — to consult the newly published guidance, which explains how the age limit works, why it has been raised from 13 to 16, and how to support young people during the transition.

The new framework should be viewed not as a ban but as a delay, Grant emphasised: raising the minimum account age from 13 to 16 gives young people ‘a reprieve from the powerful and persuasive design features built to keep them hooked and often enabling harmful content and conduct.’

It has been a few days since the ban—we continue to use the word ‘ban’, as it has already become part of the vernacular—took effect. Here’s what has happened since.

Teen reactions. The shift was abrupt for young Australians. Teenagers posted farewell messages on the eve of the deadline, grieving the loss of communities, creative spaces, and peer networks that had anchored their daily lives. Youth advocates noted that those who rely on platforms for education, support networks, LGBTQ+ community spaces, or creative expression would be disproportionately affected.

Workarounds and their limits. Predictably, workarounds emerged immediately. Some teens managed to fool facial-age estimation tools by distorting their expressions; others turned to VPNs to mask their locations. However, experts note that free VPNs frequently monetise user data or contain spyware, raising new risks. And such efforts may be in vain: platforms retain an extensive set of signals they can use to infer a user’s true location and age, including IP addresses, GPS data, device identifiers, time-zone settings, mobile numbers, app-store information, and behavioural patterns. Age-related markers — such as linguistic analysis, school-hour activity patterns, face or voice age estimation, youth-focused interactions, and the age of an account — give companies additional tools to identify underage users.

Privacy and effectiveness concerns. Critics argue that the policy raises serious privacy concerns, since age-verification systems, whether based on government ID uploads, biometrics, or AI-based assessments, force people to hand over sensitive data that could be misused, breached, or normalised as part of everyday surveillance. Others point out that facial-age technology is least reliable for teenagers — the very group it is now supposed to regulate. Some question whether the fines are even meaningful, given that Meta earns roughly AUD 50 million in under two hours.

The limited scope of the rules has drawn further scrutiny. Dating sites, gaming platforms, and AI chatbots remain outside the ban, even though some chatbots have been linked to harmful interactions with minors. Educators and child-rights advocates argue that digital literacy and resilience would better safeguard young people than removing access outright. Many teens say they will create fake profiles or share joint accounts with parents, raising doubts about long-term effectiveness.

Industry pushback. Most major platforms have publicly criticised the law’s development and substance. They maintain that the law will be extremely difficult to enforce, even as they prepare to comply to avoid fines. Industry group NetChoice has described the measure as ‘blanket censorship,’ while Meta and Snap argue that real enforcement power lies with Apple and Google through app-store age controls rather than at the platform level.

Reddit has filed a High Court challenge to the ban, naming the Commonwealth of Australia and Communications Minister Anika Wells as defendants and arguing that the law has been wrongly applied to it. Reddit maintains that it is a platform for adults and lacks the traditional social media features the government has taken issue with.

Government position. The government, expecting a turbulent rollout, frames the measure as consistent with other age-based restrictions (such as the minimum drinking age of 18) and as a response to sustained public concern about online harms. Officials argue that Australia is playing a pioneering role in youth online safety — a stance drawing significant international attention.

International interest. As we previously reported, there is a small but growing club of countries seeking to ban minors from major platforms.

  • The European Parliament has proposed a minimum social media age of 16, allowing parental consent for users aged 13–15, and is exploring limits on addictive features such as autoplay and infinite scrolling.
  • In France, lawmakers have suggested banning under-15s from social media and introducing a ‘curfew’ for older teens.
  • Spain is considering parental authorisation for under-16s. 
  • Malaysia plans to introduce a ban on social media accounts for people under 16 starting in 2026.
  • Denmark and Norway are considering raising the minimum social media age to 15, with Denmark potentially banning under-15s outright and Norway proposing stricter age verification and data protections. 
  • In New Zealand, political debate has considered restrictions for minors, but no formal policy has been enacted. 
  • According to Australia’s Communications Minister, Anika Wells, officials from the EU, Fiji, Greece, and Malta have approached Australia for guidance, viewing the SMMA rollout as a potential model. 

All of these jurisdictions are now looking closely at Australia, watching for proof of concept — or failure.

The unresolved question. Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts. But the question lingers: if access to large parts of the digital ecosystem remains open, what is the practical value of fencing off only one segment of the internet?

IN OTHER NEWS LAST WEEK

This week in AI governance

National regulations

Vietnam. Vietnam’s National Assembly has passed the country’s first comprehensive AI law, establishing a risk management regime, sandbox testing, a National AI Development Fund and startup voucher schemes to balance strict safeguards with innovation incentives. The 35‑article legislation — largely inspired by EU and other models — centralises AI oversight under the government and will take effect in March 2026.

The USA. US President Donald Trump has signed an executive order targeting what the administration views as the most onerous and excessive state-level AI laws. The White House argues that a growing patchwork of state rules threatens to stymie innovation, burden developers, and weaken US competitiveness.

To address this, the order creates an AI Litigation Task Force to challenge state laws deemed obstructive to the policy set out in the executive order – sustaining and enhancing US global AI dominance through a minimally burdensome national policy framework for AI. The Commerce Department is directed to review all state AI regulations within 90 days to identify those that impose undue burdens. The order also uses federal funding as leverage, allowing certain grants to be conditioned on states aligning with national AI policy.

The UK. More than 100 UK parliamentarians from across parties are pushing the government to adopt binding rules on advanced AI systems, saying current frameworks lag behind rapid technological progress and pose risks to national and global security. The cross‑party campaign, backed by former ministers and figures from the tech community, seeks mandatory testing standards, independent oversight and stronger international cooperation — challenging the government’s preference for existing, largely voluntary regulation.

National plans and investments

Russia. Russia is advancing a nationwide plan to expand the use of generative AI across public administration and key sectors, with a proposed central headquarters to coordinate ministries and agencies. Officials see increased deployment of domestic generative systems as a way to strengthen sovereignty, boost efficiency and drive regional economic development, prioritising locally developed AI over foreign platforms.

Qatar. Qatar has launched Qai, a new national AI company designed to accelerate the country’s digital transformation and global AI footprint. Qai will provide high‑performance computing and scalable AI infrastructure, working with research institutions, policymakers and partners worldwide to promote the adoption of advanced technologies that support sustainable development and economic diversification.

The EU. The EU has advanced an ambitious gigafactory programme to strengthen AI leadership by scaling up infrastructure and computational capacity across member states. This involves expanding a network of AI ‘factories’ and antennas that provide high‑performance computing and technical expertise to startups, SMEs and researchers, integrating innovation support alongside regulatory frameworks like the AI Act. 

Australia. Australia has sealed a USD 4.6 billion deal for a new AI hub in western Sydney, partnering with private sector actors to build an AI campus with extensive GPU-based infrastructure capable of supporting advanced workloads. The investment forms part of broader national efforts to establish domestic AI innovation and computational capacity. 

Partnerships 

Canada‑EU. Canada and the EU have expanded their digital partnership on AI and security, committing to deepen cooperation on trusted AI systems, data governance and shared digital infrastructure. This includes memoranda aimed at advancing interoperability, harmonising standards and fostering joint work on trustworthy digital services. 

The International Network for Advanced AI Measurement, Evaluation and Science. The global network has strengthened cooperation on benchmarking AI governance progress, focusing on metrics that help compare national policies, identify gaps and support evidence‑based decision‑making in AI regulation internationally. This network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the USA. The UK has assumed the role of Network Coordinator.


Trump allows Nvidia to sell chips to approved Chinese customers

The USA has decided to allow the sale of H200 chips to approved customers in China, a decision that marks a notable shift in export controls.

Under the new framework, sales of H200 chips will proceed, subject to conditions including licensing oversight by the US Department of Commerce and a revenue-sharing mechanism that directs 25% of the proceeds back to the US government. 

The road ahead. The policy is drawing scrutiny from some US lawmakers and national security experts who caution that increased hardware access could strengthen China’s technological capabilities in sensitive domains.


Poland halts crypto reform as Norway pauses CBDC plans

Poland’s effort to introduce a comprehensive crypto law has reached an impasse after the Sejm failed to overturn President Karol Nawrocki’s veto of a bill meant to align national rules with the EU’s MiCA framework. 

The government argued the reform was essential for consumer protection and national security, but the president rejected it as overly burdensome and a threat to economic freedom, citing expansive supervisory powers and website-blocking provisions. With the veto upheld, Poland remains without a clear domestic regulatory framework for digital assets. In the aftermath, Prime Minister Donald Tusk has pledged to renew efforts to pass crypto legislation.

In Norway, Norges Bank has concluded that current conditions do not justify launching a central bank digital currency, arguing that Norway’s payment system remains secure, efficient and well-tailored to users.

The bank maintains that the Norwegian krone continues to function reliably, supported by strong contingency arrangements and stable operational performance. Governor Ida Wolden Bache said the assessment reflects timing rather than a rejection of CBDCs, noting the bank could introduce one if conditions change or if new risks emerge in the domestic payments landscape.

Zooming out. Both cases highlight a cautious approach to digital finance in Europe: while Poland grapples with how much oversight is too much, Norway is weighing whether innovation should wait until the timing is right.



LAST WEEK IN GENEVA

On Wednesday (3 December), Diplo, UNEP, and Giga co-organised an event at the Giga Connectivity Centre in Geneva, titled ‘Digital inclusion by design: Leveraging existing infrastructure to leave no one behind’. The event looked at realities on the ground when it comes to connectivity and digital inclusion, and at concrete examples of how community anchor institutions such as posts, schools, and libraries can contribute significantly to advancing meaningful inclusion. There was also a call for policymakers at national and international levels to keep these community anchor institutions in mind when designing inclusion strategies or discussing frameworks such as the GDC and WSIS+20.

Organisations and institutions are invited to submit event proposals for the second edition of Geneva Security Week. Submissions are open until 6 January 2026. Co-organised once again by the UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA), Geneva Security Week 2026 will take place from 4 to 8 May 2026 under the theme ‘Advancing Global Cooperation in Cyberspace’.

LOOKING AHEAD

UN General Assembly High-level meeting on WSIS+20 review

Twenty years after the conclusion of the World Summit on the Information Society (WSIS), the WSIS+20 review process will take stock of progress made in implementing the WSIS outcomes, address potential ICT gaps and areas for continued focus, and tackle challenges such as bridging the digital divide and harnessing ICTs for development.

The overall review will be concluded by a two-day high-level meeting of the UN General Assembly (UNGA), scheduled for 16–17 December 2025. The meeting will consist of plenary meetings, which will include statements in accordance with General Assembly resolution 79/277 and the adoption of the draft outcome document.

Diplo and the Geneva Internet Platform (GIP) will provide just-in-time reporting from the meeting. Bookmark our dedicated web page; more details will be available soon.



READING CORNER

Human rights are no longer abstract ideals but living principles shaping how AI, data, and digital governance influence everyday life, power structures, and the future of human dignity in an increasingly technological world.

Digital Watch newsletter – Issue 105 – Monthly, November 2025

November 2025 in retrospect

This month’s issue takes you from Washington to Geneva, from COP 30 to the WSIS+20 negotiations, tracing the major developments reshaping AI policy, online safety, and the resilience of the digital infrastructure we depend on every day.

Here is what we have in store in this edition.

Is the AI bubble about to burst? – Is AI now ‘too big to fail’? Will the US government bail out the AI giants, and what would the consequences be for the global economy?

The global race to regulate AI – Governments are rushing to set rules, from national AI strategies to new global frameworks. We present the latest initiatives.

WSIS+20 Rev 1 highlights – An overview of the document currently guiding negotiations among UN member states ahead of the General Assembly high-level meeting on 16–17 December 2025.

When digital meets climate – What UN member states discussed on AI and digital matters at COP 30.

Child safety online – From Australia to the EU, governments are rolling out new protections to shield children from online harms. We examine their approaches.

Digital outage – The Cloudflare outage exposed the fragility of dependencies within the global internet. We analyse its causes and what the incident reveals about digital resilience.

Last month in Geneva – Catch up on the discussions, events, and conclusions that shaped international digital governance.


DIGITAL GOVERNANCE

France and Germany hosted a European digital sovereignty summit in Berlin to accelerate Europe’s digital independence. They presented a roadmap with seven priorities: simplifying regulation (including postponing certain AI Act rules), ensuring fair cloud and digital markets, strengthening data sovereignty, advancing digital commons, developing open-source digital public infrastructure, creating a digital sovereignty task force, and boosting cutting-edge AI innovation. More than €12 billion in private investment was pledged. A major development accompanying the summit was the launch of the European Network for Tech Resilience and Sovereignty (ETRS), intended to reduce dependence on foreign technology (currently above 80%) through expert collaboration, technology-dependence mapping, and support for evidence-based policymaking.

TECHNOLOGIES

The Dutch government has suspended its takeover of Nexperia, a Netherlands-based chipmaker owned by China’s Wingtech, following constructive negotiations with the Chinese authorities. China has also begun releasing its chip stockpiles to ease the shortage.

Baidu unveiled two in-house AI chips: the M100 for efficient inference on mixture-of-experts models (due in early 2026) and the M300 for training trillion-parameter multimodal models (2027). The company also presented cluster architectures (Tianchi256 in the first half of 2026; Tianchi512 in the second half of 2026) to scale inference through large interconnects.

IBM presented two quantum chips: Nighthawk (120 qubits, 218 tunable couplers), enabling circuits roughly 30% more complex, and Loon, a fault-tolerance testbed with six-way connectivity and long-range couplers.

INFRASTRUCTURE

Six EU member states (Austria, France, Germany, Hungary, Italy, and Slovenia) have jointly called for the Digital Networks Act (DNA) to be reconsidered, arguing that core elements of the proposal, notably harmonised telecom-style regulation, dispute-settlement mechanisms for network fees, and broader merger rules, should instead remain under national control.

CYBERSECURITY

Roblox will introduce mandatory age estimation (starting in December in some countries, then globally in January) and segment users into strict age bands to block chats with unknown adults. Users under 13 will remain barred from private messages unless their parents opt in.

Eurofiber confirmed a breach of its French ATE customer platform and its ticketing system via third-party software, stating that services remained operational and that banking data was secure.

The FCC is set to vote on repealing the January rules under Section 105 of CALEA, which required major carriers to harden their networks against unauthorised access and interception, measures adopted after the Salt Typhoon cyberespionage campaign exposed telecom vulnerabilities.

The UK is planning a Cyber Security and Resilience Bill to strengthen critical national infrastructure and the wider digital economy against growing cyberthreats. Around 1,000 providers of essential services (health, energy, IT) would be subject to tougher standards, with a potential extension to more than 200 data centres.

ECONOMY

The UAE completed its first government transaction using the digital dirham, a central bank digital currency (CBDC) pilot under its financial infrastructure transformation programme. In addition, the UAE central bank approved the Zand AED, the first regulated, multi-chain AED-backed stablecoin, issued by the licensed bank Zand.

The Czech National Bank created a USD 1 million test portfolio of digital assets, comprising bitcoin, a US dollar stablecoin, and a tokenised deposit, to gain hands-on experience with operations, security, and anti-money-laundering, with no intention of investing actively.

Romania completed its first real-money pilot with the EU Digital Identity Wallet (EUDIW), in collaboration with Banca Transilvania and BPC, allowing a cardholder to authenticate a purchase via the wallet rather than by SMS OTP or card reader.

The European Commission opened a DMA investigation into whether Google Search unfairly penalises news publishers through its ‘site reputation abuse’ policy, which can demote media outlets hosting partner content.

On the digital strategies front, the European Commission’s Consumer Agenda 2030 sets out a plan to strengthen protection, trust, and competitiveness while simplifying regulation for businesses.

Turkmenistan adopted its first comprehensive law on virtual assets, entering into force on 1 January 2026, legalising cryptocurrency mining and allowing exchanges subject to strict state registration.

HUMAN RIGHTS

The EU Council adopted new measures to speed up the handling of cross-border data protection complaints, with harmonised admissibility criteria and strengthened procedural rights for citizens and businesses. A simplified cooperation process for straightforward cases will also reduce administrative burdens and accelerate resolutions.

India has begun implementing its Digital Personal Data Protection Act, 2023 through newly approved rules that establish the initial governance structures, including a Data Protection Board, while giving organisations additional time to comply fully with their obligations.

LEGAL

OpenAI is contesting a narrowed legal demand from the New York Times for 20 million ChatGPT conversations, part of the Times’ lawsuit over alleged misuse of its content. OpenAI warns that sharing the data could expose sensitive information and set significant precedents for how AI platforms handle user privacy, data retention, and legal liability.

A US judge allowed the Authors Guild’s lawsuit against OpenAI to proceed, denying dismissal and admitting allegations that ChatGPT’s summaries unlawfully reproduce authors’ tone, plots, and characters.

Ireland’s media regulator opened its first DSA investigation into X, examining whether users have accessible avenues of appeal and clear outcomes when content-removal requests are refused.

In a setback for the FTC, a US judge ruled that Meta does not currently hold monopoly power in social networking, rejecting a proposal that could have forced the divestiture of Instagram and WhatsApp.

SOCIOCULTURAL

The European Commission launched the Culture Compass for Europe, a framework to place culture at the heart of EU policy, promote identity and diversity, and support the creative sectors.

China’s cyberspace regulators launched a crackdown on AI deepfakes impersonating public figures in livestream sales, ordering platform clean-ups and marketer accountability.

DEVELOPMENT

Ministers from West and Central Africa adopted the Cotonou Declaration to accelerate digital transformation by 2030, aiming for a single African digital market, universal broadband, interoperable digital infrastructure, and harmonised rules on cybersecurity, data governance, and AI. The initiative emphasises human capital and innovation, with targets of equipping 20 million people with digital skills, creating two million digital jobs, and boosting African-led AI and regional digital infrastructure development.

The ITU report ‘Measuring digital development: Facts and Figures 2025’ finds that while global connectivity continues to expand (with nearly 6 billion people online in 2025), 2.2 billion people remain offline, mostly in low- and middle-income countries. Significant gaps persist in connection quality, data use, affordability, and digital skills, preventing many from benefiting fully from the digital world.

Switzerland has formally associated with Horizon Europe, Digital Europe, and Euratom R&T, granting Swiss researchers status equivalent to their EU counterparts to lead projects and secure funding across all areas from 1 January 2025.

Uzbekistan now grants full legal validity to personal data on the my.gov.uz public services portal, treating it as equivalent to paper documents (as of 1 November). Citizens can access, share, and manage their records entirely online.


Australia. Australia has unveiled a new national artificial intelligence (AI) plan to harness AI for economic growth, social inclusion, and public sector efficiency, while emphasising safety, trust, and fairness in its use. The plan mobilises substantial investment: hundreds of millions of Australian dollars for research, infrastructure, skills development, and programmes helping small and medium-sized enterprises adopt AI. The government also plans to extend access to the technology across the country.

Concrete measures include the creation of a national AI centre, support for AI adoption by businesses and non-profits, improved digital skills through training in schools and communities, and the integration of AI into public service delivery.

To ensure responsible use, the government will establish the AI Safety Institute (AISI), a national centre tasked with consolidating AI safety research, coordinating standards development, and advising government and industry on best practice. The institute will assess the safety of advanced AI models, foster resilience against misuse or accidents, and serve as a hub for international cooperation on AI governance and research.

Bangladesh. The report highlights Bangladesh’s relative strengths: a rapidly expanding e-government infrastructure and generally high public trust in digital services. However, it also offers a frank picture of structural challenges: uneven connectivity and unreliable power supply outside major urban areas, a persistent digital divide (notably along gender and urban-rural lines), limited high-end computing capacity, and insufficient data protection, cybersecurity, and AI-related skills across many parts of society.

As part of its roadmap, the country plans to prioritise governance frameworks, capacity building, and inclusive deployment, ensuring in particular that AI supports public services in health, education, justice, and social protection.

Belgium. Belgium joins a growing number of countries and public sector organisations that have restricted or blocked access to DeepSeek over security concerns. All Belgian federal government officials must stop using DeepSeek as of 1 December, and all instances of DeepSeek must be removed from official devices.

The decision follows a warning from the Centre for Cybersecurity Belgium, which identified serious data protection risks associated with the tool and flagged its use as problematic for handling sensitive government information.

Russia. At Russia’s flagship AI conference (AI Journey), President Vladimir Putin announced a national AI task force, framing it as essential to reducing dependence on foreign AI. The plan calls for building data centres (powered even by small-scale nuclear plants) and using them to host generative AI models that protect national interests. Putin also argued that only domestically developed models should be used in sensitive sectors, such as national security, to prevent data leaks.

Singapore. Singapore has created a global testing ground for securing artificial intelligence. Companies from any country can now run hands-on experiments there to ensure their AI systems work as intended.

The scheme is governed by 11 governance principles aligned with international standards, including the NIST AI Risk Management Framework and ISO/IEC 42001. Singapore hopes to bridge the gap between fragmented national AI regulations and establish common benchmarks for safety and trust.

The EU. A major political storm is brewing in the EU. The European Commission has presented what it calls the Digital Omnibus, a set of proposals to simplify its digital legislation. The initiative is welcomed by some as necessary to improve the competitiveness of EU digital players, but criticised by others for its potentially negative implications in areas such as digital rights. The package comprises the Digital Omnibus Regulation proposal and the Digital Omnibus AI Regulation proposal.

On a related note, the European Commission launched an AI Act whistleblower tool, giving EU citizens a secure, confidential channel to report suspected violations of the AI Act, including unsafe or high-risk AI deployments. With the launch, the EU aims to close gaps in the enforcement of the AI Act, strengthen the accountability of developers and deployers, and promote a culture of responsible AI use across member states.

The tool is also intended to foster transparency, enabling regulators to respond more quickly to potential violations without relying solely on audits or inspections. What are the notable developments in the EU? The Digital Omnibus AI Regulation proposal postpones the implementation of the AI Act’s ‘high-risk’ rules until 2027, giving large technology companies more time before stricter oversight takes effect. The entry into force of the high-risk AI rules will now be aligned with the availability of support tools, giving companies up to 16 months to comply. SMEs and small mid-caps will benefit from simplified documentation, broader access to regulatory sandboxes, and centralised oversight of general-purpose AI systems through the AI Office.

Cybersecurity reporting obligations are also being streamlined through a single interface for incidents falling under multiple laws, while privacy rules are clarified to support innovation without weakening GDPR protections. Cookie rules will be modernised to reduce repetitive consent requests and let users manage their preferences more efficiently.

Data access will be improved through the consolidation of EU data legislation under the Data Union Strategy, targeted exemptions for small businesses, and new guidance on contractual compliance. These measures aim to unlock high-quality datasets for AI and strengthen Europe’s innovation potential, while saving companies billions and improving regulatory clarity.

The Digital Omnibus Regulation proposal has implications for data protection in the EU. The proposed amendments to the General Data Protection Regulation (GDPR) would redefine the notion of personal data, weakening safeguards on how companies use such data, particularly for AI training. Meanwhile, cookie consent would be simplified into a ‘one-click’ model lasting up to six months.

Privacy and civil rights groups have voiced concern that the proposed GDPR changes disproportionately benefit large technology companies. A coalition of 127 organisations issued a public warning that this could amount to ‘the biggest rollback of digital fundamental rights in EU history’.

The proposals must now pass through the EU’s co-legislative process: Parliament and the Council will examine, amend, and negotiate them. Given the controversy (industry support, civil society opposition), the final outcome could look very different from the Commission’s initial proposal.

The UK. The UK government has launched a major artificial intelligence initiative to boost national AI growth, combining infrastructure investment, business support, and research funding. The immediate deployment of £150 million in Northamptonshire kicks off an £18 billion, five-year programme to strengthen national AI capabilities. Through a £100 million advance market commitment, the state will act as the first customer for domestic AI hardware start-ups, helping to de-risk innovation and boost competitiveness.

The plan includes AI Growth Zones, with a flagship site in South Wales expected to create more than 5,000 jobs, and expanded access to high-performance computing for universities, start-ups, and research organisations. A dedicated £137 million ‘AI for Science’ strand will accelerate breakthroughs in drug discovery, clean energy, and advanced materials, ensuring AI drives both economic growth and public value.

The United States. The shadow of restrictive regulatory politics hangs over the United States. Trump-aligned Republicans have once again pushed for a moratorium on state-level AI regulation. The idea is to stop states from adopting their own AI laws, arguing that a fragmented regulatory landscape would hamper innovation. One version of the proposal would tie federal broadband funding to states’ willingness to forgo AI rules, effectively penalising any state that tried to legislate. The push is far from unanimous, however: more than 260 state legislators from across the USA, Republicans and Democrats alike, have denounced the moratorium.

The president formally established the Genesis Mission by executive order on 24 November 2025, tasking the US Department of Energy (DOE) with leading a national AI-driven scientific research effort. The mission will create a unified ‘American platform for science and security’, combining the supercomputers of the DOE’s 17 national laboratories, decades of accumulated federal scientific datasets, and secure high-performance computing capacity, creating what the administration describes as ‘the world’s most complex and powerful scientific instrument ever built’.

Under the plan, AI will generate ‘foundational scientific models’ and AI agents able to automate experiment design, run simulations, test hypotheses, and accelerate discovery in key strategic fields: biotechnology, advanced materials, critical minerals, quantum information science, nuclear fission and fusion, space exploration, semiconductors, and microelectronics.

The initiative is framed as essential to energy security, technological leadership, and national competitiveness. The administration argues that despite decades of rising research funding, scientific output per dollar invested has stagnated, and that AI can radically boost research productivity within a decade.

To deliver on these ambitions, the executive order establishes a governance structure: the DOE Secretary oversees implementation; the Assistant to the President for Science and Technology handles interagency coordination; and the DOE may partner with private sector companies, universities, and other stakeholders to integrate data, compute, and infrastructure.

United Arab Emirates and Africa. The ‘AI for Development’ initiative has been announced to advance digital infrastructure across Africa, backed by a USD 1 billion commitment from the UAE. According to official statements, the initiative will allocate resources to sectors such as education, agriculture, climate adaptation, infrastructure, and governance, helping African governments adopt AI-based solutions even where national AI capacity remains limited.

While many details remain to be specified (for instance, the selection of partner countries and the governance and oversight mechanisms), the scale and ambition of the initiative signal the UAE’s intent to act not only as an AI adoption hub but also as a regional and global catalyst for AI-driven development.

Uzbekistan. Uzbekistan has announced the launch of the ‘5 million AI leaders’ project to build national capacity in the field. Under the plan, the government will integrate AI-focused curricula into schools, vocational training, and universities; train 4.75 million students, 150,000 teachers, and 100,000 civil servants; and launch large-scale competitions for AI start-ups and talent.

The programme also provides for high-performance computing infrastructure (in partnership with a major technology company), a national office for AI transfer from abroad, and cutting-edge laboratories in educational institutions, all aimed at accelerating AI adoption across sectors.

The government sees this as essential to modernising public administration and positioning Uzbekistan among the world’s top 50 AI-ready countries.

Is the artificial intelligence bubble about to burst?

The AI bubble is inflating to the point of bursting. There are five causes behind the current situation and five future scenarios showing how a potential ‘burst’ could be prevented or managed.

The AI investment frenzy did not happen in a vacuum. Several factors have contributed to the prevailing overvaluation and unrealistic expectations.

First cause: hype. AI has been presented as humanity’s inevitable future. This narrative created a strong fear of missing out (FOMO), prompting companies and governments to invest heavily in AI, often without a realistic assessment.

Second cause: diminishing returns on compute and data. The simple formula that dominated recent years was: more compute (i.e. more Nvidia GPUs) + more data = better AI. This belief led to the creation of enormous AI factories: hyperscale data centres with an alarming electricity and water footprint. Today, simply stacking more GPUs yields only marginal improvements.

Third cause: the logical and conceptual limits of large language models (LLMs). LLMs face structural limits that cannot be overcome simply by adding more data and compute. Despite the dominant narrative that superintelligence is imminent, many leading researchers doubt that today’s LLMs can simply ‘scale’ their way to human-level artificial general intelligence (AGI).

Fourth cause: the slow pace of AI transformation. Most AI investment is still based on potential, not on measurable, realised value. The technology is advancing faster than society’s capacity to absorb it. The previous AI winters of the 1970s and late 1980s followed periods of overpromising and underdelivering, leading to drastic funding cuts and industry collapse.

Fifth cause: vast cost gaps. The latest wave of open-source models has shown that models costing a few million dollars can match or outperform models costing hundreds of millions. This raises questions about the efficiency and necessity of current spending on proprietary AI.


Five scenarios describe how the current hype could evolve.

First scenario: the rational pivot (the classic fix). A market correction could steer AI development away from the assumption that more computing power automatically produces better models. Instead, the field would shift towards smarter architectures, deeper integration with human knowledge and institutions, and smaller, specialised, often open-source systems. Public policy is already moving in this direction: the US AI Action Plan treats open models as strategic assets. However, this pivot faces resistance from entrenched proprietary models, dependence on closed data, and unresolved debates over how creators of human knowledge should be compensated.

Second scenario: 'too big to fail' (the 2008 bailout scenario). Another outcome is to treat the major AI companies as essential economic infrastructure. Industry leaders are already warning of the 'irrationality' of current investment levels, suggesting that one weak quarter from a key company could shake global markets. In this scenario, governments provide implicit or explicit safety nets (cheap credit, favourable regulation, or public-private infrastructure deals) on the premise that the AI giants are systemically important.

Third scenario: geopolitical justification (China at the gates). Competition with China could become the main justification for sustained public investment. China's rapid progress, notably with low-cost open models such as DeepSeek R1, is already drawing comparisons with the 'Sputnik shock'. Support for national champions is then framed as a matter of technological sovereignty, shifting risk from investors to taxpayers.

Fourth scenario: AI monopolisation (the Wall Street bet). If smaller companies fail to monetise, AI capabilities could concentrate in the hands of a few tech giants, echoing past monopolisation in search, social media, and the cloud. Nvidia's dominance in AI hardware reinforces this dynamic. Open-source models slow consolidation but do not prevent it.

Fifth scenario: an AI winter and new digital toys. Finally, a mild AI winter could set in as investment cools and attention shifts to new frontiers: quantum computing, digital twins, immersive reality. AI would remain vital infrastructure but would no longer be the centre of speculative hype.

The coming years will show whether AI becomes another overhyped digital toy or a more measured, open, and durable part of our economic and political infrastructure.
This text is adapted from Dr Jovan Kurbalija's article 'Is the AI bubble about to burst? Five causes and five scenarios'. Please see the original article below.

www.diplomacy.edu

This text outlines five causes and five scenarios around the AI bubble and potential burst.

Recalibrating the digital agenda: Key points from the WSIS+20 Rev 1 document

A revised version of the WSIS+20 outcome document, Revision 1, was published on 7 November by the co-facilitators of the intergovernmental process. It will serve as the basis for negotiations among UN member states ahead of the General Assembly's high-level meeting on 16-17 December 2025.

While retaining the overall structure of the zero draft published in August, Rev 1 introduces several changes and new elements.

The new text includes revised and strengthened language in places, stressing the need to bridge rather than merely narrow digital divides, framed as multidimensional challenges that must be addressed to realise the WSIS vision.

At the same time, some issues have been deprioritised: for instance, references to e-waste and the call to adopt global standards for reporting environmental impacts have been dropped from the environment section.

Several new elements also appear. In the section on enabling environments, states are urged to refrain from unilateral measures that contravene international law.

The importance of inclusive participation in standard-setting is also recognised once again.

The section on financial mechanisms invites the Secretary-General to consider establishing a working group on future financial mechanisms for digital development, whose conclusions would be presented to the UN General Assembly at its 81st session.

The internet governance section now refers to the NetMundial+10 guidelines.

Language on the Internet Governance Forum (IGF) remains largely faithful to the zero draft, confirming the intention to make the forum permanent and to task the Secretary-General with submitting proposals on its future funding. New passages also invite the IGF to strengthen the participation of governments and developing-country stakeholders in debates on internet governance and emerging technologies.

Several areas have shifted in tone. Human rights language has been softened in places (for example, references to surveillance safeguards and to threats against journalists have been removed).

The framing of the interplay between WSIS and the Global Digital Compact (GDC) has also changed: the emphasis is now on alignment between the WSIS and GDC processes rather than their integration. For example, where the joint GDC-WSIS roadmap was initially meant to 'integrate GDC commitments into the WSIS architecture', it should now 'aim to strengthen coherence between the WSIS and GDC processes'. Corresponding adjustments are reflected in the roles of the Economic and Social Council and the Commission on Science and Technology for Development.

What's next? Registration is now open for the next WSIS+20 virtual stakeholder consultation, scheduled for Monday 8 December 2025, to gather feedback on the revised draft outcome document (Rev 2). Participants must register by Sunday 7 December at 23:59 (Eastern Time).

An initial response to Rev 2 and a framing document will guide the session and will be published as soon as they become available.

The consultation is part of preparations for the General Assembly's high-level meeting on the overall review of the implementation of the outcomes of the World Summit on the Information Society (WSIS+20), taking place on 16-17 December 2025.

Programming and climate: AI and digital issues at COP 30

COP 30, the 30th annual UN climate conference, officially closed last Friday, 21 November. As calm returns to Belém, we take a closer look at the outcomes and their implications for digital technologies and AI.

In agriculture, momentum is clearly building. Brazil and the United Arab Emirates unveiled AgriLLM, the first large-scale open-source language model built specifically for agriculture, developed with the support of international research and innovation partners. The goal is to give governments and local organisations a shared digital foundation for building tools that deliver relevant, locally tailored advice to farmers. In parallel, the AIM for Scale initiative aims to deliver digital advisory services, including climate forecasts and crop information, to 100 million farmers.


Cities and infrastructure are also engaging more deeply with digital transformation. Through the Infrastructure Resilience Development Fund, insurers, development banks, and private investors are pooling capital to finance climate-resilient infrastructure in emerging economies, from clean energy and water systems to the digital networks needed to keep communities connected and protected during climate shocks.

The most explicit digital agenda emerged under 'enablers and accelerators'. Brazil and its partners launched the world's first digital public infrastructure for climate action, a global initiative to help countries adopt open digital public goods in areas such as disaster response, water management, and climate-resilient agriculture. An accompanying innovation challenge is already supporting new solutions designed for large-scale deployment.

The Green Digital Action Hub was also launched; it will help countries measure and reduce technology's environmental footprint while widening access to tools that put technology to work for sustainability.

Training and capacity building received particular attention through the new AI Climate Institute, which will help countries of the Global South develop and deploy AI applications suited to local needs, especially lightweight, energy-efficient models.

The Nature's Intelligence Studio, based in the Amazon, will support nature-inspired innovation and introduce open AI tools to help tackle real-world sustainability challenges through bio-based solutions.

Finally, COP 30 achieved a first by placing information integrity firmly on the climate action agenda.

With disinformation and misinformation recognised as a major global risk, governments and partners launched a declaration and a new multistakeholder process to strengthen transparency, shared responsibility, and public trust in climate information, including the digital platforms that shape it.

The big picture. Across all these areas, COP 30 sent a clear message: the digital dimension of climate action is not optional; it is integral to climate implementation.

From Australia to the EU: New measures protect children from online harms

Under-16 bans are spreading worldwide, and Australia is going furthest. Australian regulators have now widened the scope of the ban to include platforms such as Twitch, deemed age-restricted because of its social-interaction features. Meta has begun notifying Australian users believed to be under 16 that their Facebook and Instagram accounts will be deactivated from 4 December, a week before the law formally takes effect on 10 December.

To support families through the transition, the government has set up a parents' advisory group, bringing together organisations representing diverse types of households, to help parents guide their children on online safety, communication, and safe digital connection.

The ban has already drawn opposition. Major social media platforms have criticised it but indicated they will comply, with YouTube the latest to fall in line. The ban is now being challenged in the High Court, however, by two 15-year-olds backed by the advocacy group Digital Freedom Project. They argue that the law unfairly limits under-16s' ability to take part in public debate and political expression, silencing young people on issues that affect them directly.

Malaysia also plans to bar under-16s from social media accounts from 2026. The government approved the measure to protect children from online harms such as cyberbullying, scams, and sexual exploitation. Authorities are considering approaches such as electronic age verification using identity cards or passports, although no implementation date has been set.

EU lawmakers have proposed similar protections. The European Parliament adopted a non-legislative report calling for a harmonised EU-wide minimum age of 16 for social media, video-sharing platforms, and AI assistants, with access for 13-16-year-olds allowed only with parental consent. MEPs back the development of an EU age-verification app and the European Digital Identity (eID) Wallet, while insisting that such tools do not relieve platforms of the duty to design services that are safe by default.

Beyond age restrictions, the EU is strengthening protections more broadly. Member states have agreed a Council position on a regulation to prevent and combat online child sexual abuse that sets concrete, enforceable obligations for online service providers. Platforms will have to carry out formal risk assessments to identify how their services could be used to spread child sexual abuse material (CSAM) or to solicit children, and then put in place mitigation measures ranging from safer child-default privacy settings and user reporting tools to technical safeguards. Member states will designate coordinating and competent national authorities empowered to review these risk assessments, require providers to implement mitigations and, where necessary, impose financial penalties for non-compliance.

Notably, the Council introduces a three-tier risk classification for online services (high, medium, low). Services deemed high-risk, on the basis of concrete criteria such as the type of service, can be required not only to apply stricter mitigations but also to contribute to the development of technologies that reduce those risks. Search engines can be compelled to delist results, and competent authorities can order the removal of, or the blocking of access to, CSAM. The position maintains, and seeks to make permanent, an existing temporary exemption allowing providers (for example, messaging services) to voluntarily scan content for CSAM, an exemption that expires on 3 April 2026.

To implement and coordinate enforcement of the regulation, a new regulatory body will be created: the EU Centre on Child Sexual Abuse. The Centre will process and assess information and reports submitted by platforms; maintain a database of provider reports as well as a database of indicators of child sexual abuse, which companies can use for voluntary detection; support victims in having material depicting them removed or blocked; and share relevant information with Europol and national law enforcement agencies. The Centre's seat has not yet been decided and will be negotiated with the European Parliament. The agreement reached in the Council marks a decisive step: formal trilogue negotiations (talks between the Council, Parliament, and Commission) can now begin, the Parliament having adopted its own position in November 2023.

The European Parliament's report also tackles everyday digital risks. MEPs call for a ban on the most harmful addictive practices, including infinite scroll, autoplay, reward loops, and pull-to-refresh mechanics, with other addictive features disabled by default for minors. Parliament urges a ban on engagement-based recommender algorithms for young users, while demanding that the clear rules of the Digital Services Act (DSA) be extended to video-sharing platforms. The report also targets game mechanics that mimic gambling: loot boxes, randomised in-app rewards, and pay-to-progress mechanics should be outlawed to protect the youngest users from financial and psychological entrapment. Finally, the text addresses commercial exploitation, urging a ban on platforms offering financial incentives for 'kidfluencing', the use of children as influencers.

MEPs also flagged the risks of generative AI (deepfakes, companion chatbots, AI agents, and AI nudification apps that create non-consensual manipulated images), calling for urgent legal and ethical action. Rapporteur Christel Schaldemose framed the measures as drawing a clear red line: platforms are 'not made for children', and the experiment of letting addictive, manipulative design target minors must end.

A new multilateral initiative is also underway: Australia's eSafety Commissioner, the UK's Ofcom, and the European Commission's DG CNECT will cooperate to protect children's rights, safety, and privacy online.

The regulators will enforce online safety laws, require platforms to assess and mitigate risks to children, promote privacy-preserving technologies such as age verification, and partner with civil society and academia to keep regulatory approaches grounded in reality.

A new trilateral technical group will be set up to examine how age-verification systems can work reliably and interoperably, strengthening the evidence base for future regulatory measures.

The overarching goal is to help children and families use the internet more safely and confidently, by fostering digital literacy and critical thinking and by making online platforms more accountable.

Cloud down: The great digital desert

On 18 November, Cloudflare, the invisible infrastructure behind millions of websites, suffered an outage the company described as its 'most serious since 2019'. Users around the world saw internal server errors as services such as X and ChatGPT went temporarily offline.

The cause was an internal configuration error. A routine permissions change in a ClickHouse database produced a malformed 'feature file' used by Cloudflare's bot-management tool. The file unexpectedly doubled in size and, once propagated across Cloudflare's global network, exceeded built-in limits, triggering cascading failures.
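The failure mode described above, a generated configuration artefact growing past a hard limit and being pushed fleet-wide, is a classic argument for validating such artefacts before propagation. As a purely illustrative sketch (the names, limits, and fallback logic here are invented for this newsletter, not Cloudflare's actual code), a loader might refuse a suspicious feature file and keep serving the last known-good version instead of crashing:

```python
# Illustrative sketch only: MAX_FEATURES, load_feature_file, and the
# fallback behaviour are hypothetical, not Cloudflare's real internals.

MAX_FEATURES = 200              # hard cap the consumers were built for
last_known_good = ["f1", "f2"]  # previously validated feature list

def load_feature_file(candidate, current=None):
    """Validate a newly generated feature list before propagating it.

    Falls back to the last known-good version rather than failing,
    so one bad artefact cannot take down every node it reaches.
    """
    global last_known_good
    if current is None:
        current = last_known_good
    # Guard 1: never exceed the hard limit downstream systems expect.
    if len(candidate) > MAX_FEATURES:
        return current  # fail safe: keep serving the old file
    # Guard 2: a sudden size jump (e.g. duplicated rows after a
    # permissions change altered the generating query) is suspicious.
    if len(candidate) > 2 * max(len(current), 1):
        return current
    last_known_good = list(candidate)
    return last_known_good
```

The design choice illustrated is fail-safe degradation: a bad input downgrades the service to stale-but-working state rather than propagating a crash.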

As engineers scrambled to isolate the faulty file, traffic gradually recovered. By mid-afternoon, Cloudflare had halted the propagation, replaced the corrupted file, and restarted key systems; the network was fully restored a few hours later.

The bigger picture. The incident is not isolated. Last month, Microsoft Azure suffered an hours-long outage that disrupted business customers in Europe and the USA, while Amazon Web Services (AWS) experienced intermittent disruptions affecting streaming platforms and e-commerce sites. These events, combined with the Cloudflare outage, underline the fragility of global cloud infrastructure.

The outage comes at a politically sensitive moment in the European cloud policy debate. Regulators in Brussels are already investigating AWS and Microsoft Azure to determine whether they should be designated as 'gatekeepers' under the EU's Digital Markets Act (DMA). The investigations assess whether their dominance in cloud infrastructure gives them disproportionate control, even though they do not technically meet the act's usual size thresholds.

The recurring pattern exposes a major vulnerability of the modern internet, born of over-reliance on a handful of critical providers. When one of these central pillars falters, whether through misconfiguration, a software bug, or a regional issue, the effects ripple through every layer. The very concentration of services that enables efficiency and scalability also creates single points of failure with cascading consequences.

Last month in Geneva

The digital governance scene in Geneva was busy in November. Here is what we tried to follow.


CERN unveils its AI strategy to advance research and operations

CERN has approved a comprehensive AI strategy to guide the use of AI across research, operations, and administration. The strategy brings together various initiatives in a coherent framework aimed at promoting responsible and effective AI in the service of scientific and operational excellence.

It is built around four main objectives: accelerating scientific discovery, improving productivity and reliability, attracting and developing talent, and enabling large-scale AI deployment through strategic partnerships with industry and member states.

Common tools and shared experience across sectors will strengthen the CERN community and ensure effective deployment.

Implementation will involve priority plans and collaboration with EU programmes, industry, and member states to build capacity, secure funding, and develop infrastructure. AI applications will support high-energy physics experiments, future accelerators, detectors, and data-driven decision-making.

The 2025-2026 intersessional panel of the UN Commission on Science and Technology for Development (CSTD)

The UN Commission on Science and Technology for Development (CSTD) held its 2025-2026 intersessional meeting on 17 November at the Palais des Nations in Geneva. The agenda focused on science, technology, and innovation in the age of AI, with contributions from experts in academia, international organisations, and the private sector. Delegations also reviewed progress in WSIS implementation ahead of the WSIS+20 process and received updates on the implementation of the Global Digital Compact (GDC) and on the ongoing data governance work in the CSTD's dedicated working group. The panel's findings and recommendations will be considered at the Commission's twenty-ninth session in 2026.

Fourth meeting of the UN CSTD multistakeholder working group on data governance at all levels

The CSTD multistakeholder working group on data governance at all levels met for the fourth time on 18-19 November. The programme opened with welcoming remarks and the formal adoption of the agenda. The UNCTAD secretariat then presented an overview of the contributions submitted since the last session, highlighting emerging areas of convergence and divergence among stakeholders. The meeting continued with substantive deliberations organised around four tracks covering key dimensions of data governance: principles applicable at all levels; interoperability between systems; sharing the benefits of data; and enabling safe, secure, and trusted data flows, including across borders. The discussions aim to explore practical approaches, existing challenges, and possible paths towards consensus.

After the lunch break, delegates reconvened for an afternoon-long plenary to continue the thematic exchanges, with opportunities for interaction among member states, the private sector, civil society, academia, the technical community, and international organisations.

The second day was devoted to the working group's milestones. Delegations discussed the outline, timeline, and expectations for the progress report to the General Assembly, as well as the process for selecting the working group's next chair. The session concluded with agreement on the schedule of upcoming meetings and on additional matters raised by participants.

Innovations Dialogue 2025: Neurotechnologies and their implications for international peace and security

On 24 November, UNIDIR hosted its Innovations Dialogue on neurotechnologies and their implications for international peace and security in Geneva and online. Experts from neuroscience, law, ethics, and security policy discussed developments such as brain-computer interfaces and cognitive enhancement tools, exploring both their potential applications and the challenges they present, including ethical and security considerations. The event included a poster exhibition on responsible use and governance approaches.

14th UN Forum on Business and Human Rights

The 14th UN Forum on Business and Human Rights took place from 24 to 26 November in Geneva and online, under the theme 'Accelerating action on business and human rights amid crises and transformations'. The forum addressed key issues, including protecting human rights in the age of AI and exploring human and labour rights on platforms in the Asia-Pacific region amid ongoing digital transformation. A side session also took a close look at the 'shadow work' behind artificial intelligence.


Weekly #241 Australia’s social media ban: Making it work


28 November-5 December 2025


HIGHLIGHT OF THE WEEK

Australia’s social media ban: Making it work

Australia’s under-16 social-media ban is moving from legislation to enforcement, and the first signs of impact are already visible. Ahead of the 10 December deadline, Meta has begun blocking teen users, warning that accounts flagged as belonging to under-16s will be restricted or shut down. Those mistakenly removed can appeal by submitting a government-issued ID or a video selfie age check—a process that is already prompting complaints about privacy and accuracy. YouTube, meanwhile, has criticised the framework as unrealistic and potentially harmful, arguing that overly rigid age controls could push young people toward far less safe online spaces. However, the platform will, ultimately, comply with the ban.

The Australian government remains confident the world will follow its lead, framing the ban as a model for global child-safety regulation. But with implementation underway, many are asking a basic question: how will the ban actually work in practice?


Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024 bans anyone under 16 from creating or maintaining accounts on major social-media platforms. 

  • Companies such as Meta must take ‘reasonable steps’ to verify users’ ages or face fines of up to AUD 50 million. 
  • Platforms can choose from various verification methods, including government-issued ID checks, third-party age-assurance tools, facial-analysis systems, or data-based age inference. 
  • If an account appears to belong to someone under 16, platforms must restrict it, request verification, or close it. Users who are wrongly flagged can file appeals; however, the process varies. 
  • The law applies broadly to apps with social-networking features, while some smaller platforms fall outside the scope. 

Overall, enforcement relies heavily on industry compliance and emerging age-verification technologies.

Public and expert reactions reflect this tension between intention and reality.

Supporters argue the ban protects children from cyberbullying, harmful content, and addictive platform design, while setting a global precedent for stricter regulation of Big Tech. Some mental-health professionals cautiously welcome reduced exposure to high-risk environments. 

Critics warn that the policy may isolate teenagers, limit self-expression, and disproportionately harm vulnerable groups who rely on online communities. Age-verification trials reveal accuracy problems, with systems misclassifying teens and adults, raising concerns about wrongful account closures. Privacy advocates object to the increased collection of sensitive data, including IDs and facial images. Human rights groups say the ban restricts young people’s freedoms and may create a false sense of security. Tech companies publicly question the feasibility, but many have signalled compliance to avoid heavy fines. Meanwhile, many Australian teens reportedly plan to migrate to lesser-regulated apps, potentially exposing them to greater risks. Overall, the debate centres on safety versus autonomy, privacy, and effectiveness.

Yet signs of strain are already emerging. Teens are rapidly migrating to smaller or less regulated platforms, using VPNs, borrowing adult devices, or exploiting loopholes in verification systems. So what happens next? Will the government spend its time chasing teenagers across an ever-expanding maze of apps, mirrors, clones and VPNs? Because if the history of the internet teaches anything, it’s this: once something is banned, it rarely disappears — it simply moves, mutates, and comes back wearing a different username.

IN OTHER NEWS LAST WEEK

This week in AI governance

Australia. Australia has unveiled a new National AI Plan designed to harness AI for economic growth, social inclusion and public-sector efficiency — while emphasising safety, trust and fairness in adoption. The plan mobilises substantial investment: hundreds of millions of AUD are channelled into research, infrastructure, skills development and programmes to help small and medium enterprises adopt AI; the government also plans to expand nationwide access to the technology.

Practical steps include establishing a national AI centre, supporting AI adoption among businesses and nonprofits, enhancing digital literacy through schools and community training, and integrating AI into public service delivery. 

The planned steps also include the establishment of the AI Safety Institute (AISI), which we wrote about last week. 

Uzbekistan. Uzbekistan has announced the launch of the ‘5 million AI leaders’ project to develop its domestic AI capabilities. As part of this plan, the government will integrate AI-focused curricula into schools, vocational training and universities; train 4.75 million students, 150,000 teachers and 100,000 public servants; and launch large-scale competitions for AI startups and talent.

The programme also includes building high-performance computing infrastructure (in partnership with a major tech company), establishing a national AI transfer office abroad, and creating state-of-the-art laboratories in educational institutions — all intended to accelerate adoption of AI across sectors.

The government frames this as central to modernising public administration and positioning Uzbekistan among the world’s top 50 AI-ready countries.

The country will also adopt Rules and principles of ethics in the development and use of artificial intelligence technologies, a framework which will introduce unified standards for developers, implementers and users across the country. Developers must ensure algorithmic transparency, safeguard personal data, assess risks and avoid harmful use; users must comply with legislation, respect rights, and handle data responsibly. Any harm to human rights, national security or the environment will trigger legal liability.

Belgium. Belgium joins a growing number of countries and public-sector organisations that have restricted or blocked China’s DeepSeek over security concerns. All Belgian federal government officials must cease using DeepSeek, effective 1 December, and all instances of DeepSeek must be removed from official devices.

The move follows a warning from the Centre for Cybersecurity Belgium, which identified serious data-protection risks associated with the tool and flagged its use as problematic for handling sensitive government information.

Canada. Canada has formally adopted the world’s first national standard for accessible and equitable AI with the release of CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems. The standard aims to ensure AI systems are designed to be accessible, inclusive and fair — in particular for people with disabilities — embedding accessibility and equity throughout the AI lifecycle. It provides guidance for organisations and developers on how to prevent exclusion, guarantee equitable benefits, and avoid discriminatory or exclusionary system designs. The standard was developed with input from a diverse committee, including persons with disabilities and members of equity-deserving groups. Its publication marks a major step toward ensuring that AI boosts social inclusion and digital accessibility, rather than reinforcing inequality.

The EU. The European Commission has launched a formal antitrust investigation into whether Meta’s new restrictions on AI providers’ access to WhatsApp violate EU competition rules. 

Under a Meta policy introduced in October 2025, AI companies are barred from using the WhatsApp Business Solution if AI is their primary service, although limited functions, such as automated customer support, remain allowed. The policy will take effect for existing AI providers on 15 January 2026 and has applied to newcomers since 15 October 2025. 

The Commission fears this could shut out third-party AI assistants from reaching users in the European Economic Area (EEA), while Meta’s own Meta AI would continue to operate on the platform.

The probe—covering the entire EEA except Italy, which has been conducting its own investigation since July—will examine whether Meta is abusing its dominant position in breach of Article 102 TFEU and Article 54 of the EEA Agreement.

European regulators say outcomes could guide future oversight as generative AI becomes woven into essential communications. The case signals growing concern about the concentration of power in rapidly evolving AI ecosystems.

Google DeepMind CEO Demis Hassabis stated that ‘AGI, probably the most transformative moment in human history, is on the horizon’. 


Revision 2 of the WSIS+20 outcome document released

A revised version of the WSIS+20 outcome document, Revision 2, was published on 3 December by the co-facilitators of the intergovernmental process. 

Revision 2 introduces several noteworthy changes compared to Revision 1. In the introduction, new commitments include catalysing women’s economic agency and highlighting the importance of applying a human-centric approach throughout the lifecycle of digital technologies. 

The section on digital divides is strengthened through a shift in title from bridging to closing digital divides, a new recognition that such divides pose particular challenges for developing countries, an explicit call to integrate accessibility-by-design in digital development, and a clarification that the internet and digital services need to become both fully accessible and affordable.

In the digital economy section, previous language about governments’ concerns with safeguarding employment rights and welfare has been removed. The section on social and economic development now includes new language on the need for greater international cooperation to promote digital inclusion and digital literacy, including capacity building and financial mechanisms. 

Environmental provisions are expanded: Revision 2 introduces new language emphasising responsible mining and processing practices for critical mineral resources (although it removes a reference to equitable access to such resources), and it brings back a paragraph on e-waste, restoring calls for improved data gathering, collaboration on safe and efficient waste management, and sharing of technology and best practices.

Several changes also appear in areas related to security, financing, and AI. The section on building confidence and security in the use of ICTs clarifies that such efforts must be consistent with international human rights law (not just human rights), and it restores language from the Zero Draft recognising the need to counter violence occurring through, or amplified by, technology, as well as hate speech, discrimination, misinformation, cyberbullying, and child sexual exploitation and abuse, together with commitments to establish robust risk-mitigation and redress measures. 

The paragraph on future financial mechanisms for digital development is revised to clarify that a potential task force would examine such mechanisms, and that the Secretary-General would consider establishing it within existing mandates and resources and in coordination with WSIS Action Line facilitators and other relevant UN entities. It also notes that the task force would build on and complement ongoing financing initiatives and mechanisms involving all stakeholders. 

In the AI section, requests to establish an AI research programme and an AI capacity-building fellowship are now directed to the UN Inter-Agency Working Group on AI, with the fellowship explicitly dedicated to increasing AI research expertise.

Several changes were made to the paragraphs on the Internet Governance Forum (IGF). Revision 2 adds language specifying that, in making the IGF a permanent UN forum, its secretariat would continue to be ensured by UN DESA, and that the Forum should have a stable and sustainable basis with appropriate staffing and resources, in accordance with UN budgetary procedures. This is reflected in a strengthened request for the Secretary-General to submit a proposal to the General Assembly to ensure sustainable funding for the Forum through a mix of core UN funding and voluntary contributions (whereas previous language only asked for proposals on future funding).

In the Follow-up section, the request for the Secretary-General’s report on WSIS follow-up – which also incorporates updates on the Global Digital Compact (GDC) implementation – is now set on a biennial basis, with a clear request for both the CSTD and ECOSOC to consider the report, marking a notable shift in the follow-up and review framework.


UN launches Digital Cooperation Portal to accelerate GDC action 

The UN has launched the Digital Cooperation Portal, a new platform designed to accelerate collective action on the Global Digital Compact (GDC). The portal maps digital initiatives, connects partners worldwide, and tracks progress on key priorities, including AI governance, digital public infrastructure, human rights online, and inclusive digital economies.

An integrated AI Toolbox allows users to explore AI applications that help them analyse, connect, and enhance their digital cooperation projects. By submitting initiatives, stakeholders can increase visibility, support global coordination, and join a growing network working toward an inclusive digital future.

The Portal is open to all stakeholders from September 2024 through the GDC’s high-level review in 2027.


LAST WEEK IN GENEVA

On Wednesday (3 December), Diplo, UNEP, and Giga co-organised an event at the Giga Connectivity Centre in Geneva, titled ‘Digital inclusion by design: Leveraging existing infrastructure to leave no one behind’. The event looked at realities on the ground when it comes to connectivity and digital inclusion, and at concrete examples of how community anchor institutions like post offices, schools, and libraries can contribute significantly to advancing meaningful inclusion. There was also a call for policymakers at national and international levels to keep these community anchor institutions in mind when designing inclusion strategies or discussing frameworks such as the GDC and WSIS+20.

Organisations and institutions are invited to submit event proposals for the second edition of Geneva Security Week. Submissions are open until 6 January 2026. Co-organised once again by the UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA), Geneva Security Week 2026 will take place from 4 to 8 May 2026 under the theme ‘Advancing Global Cooperation in Cyberspace’.

LOOKING AHEAD

OHCHR will hold its consultation on the rights-compatible use of digital tools in stakeholder engagement on 8–9 December, focusing on where technology can support engagement processes and where human involvement remains essential. The outcomes will feed into OHCHR’s report to the Human Rights Council in 2026.

The Inter-Parliamentary Union is organising ‘Navigating health misinformation in the age of AI’, a webinar that brings together parliamentarians, experts, and civil society to explore how misinformation and AI intersect to shape access to essential health services, particularly for women, children and adolescents. 

Registration is open for the WSIS+20 virtual consultation on Revision 2 of the draft outcome document, to be held on 8 December. The session, organised by the Informal Multistakeholder Sounding Board, will gather targeted, paragraph-based input from stakeholders and provide process updates ahead of the General Assembly’s high-level meeting. Informal negotiations on Rev.2 will continue on 9, 10 and 11 December.



READING CORNER

Rapid advances in AI are reshaping global development, raising urgent questions about whether all countries are prepared to benefit, or risk falling further behind.


Gaming and professional esports are rapidly emerging as powerful tools of global diplomacy, revealing how digital competition and shared virtual worlds can connect cultures, influence international relations, and empower new generations to shape the narratives that transcend traditional borders.


This month’s edition takes you from Washington to Geneva, COP30 to the WSIS+20 negotiations — tracing the major developments that are reshaping AI policy, online safety, and the resilience of the digital infrastructure we rely on every day.

Digital Watch newsletter – Issue 105 – November 2025

November 2025 in retrospect


Here’s what we unpacked in this edition.

Is the AI bubble about to burst? — Is AI now ‘too big to fail’? Will the US government bail out AI giants – and what would that mean for the global economy?

The global struggle to govern AI — Governments are racing to define rules, from national AI strategies to emerging global frameworks. We outline the latest moves.

WSIS+20 Rev 1 highlights — A look inside the document now guiding negotiations among UN member states ahead of the high-level meeting of the General Assembly on 16–17 December 2025.

Code meets climate — What the UN member states discussed in terms of AI and digital at COP30.

Child safety online — From Australia to the EU, governments are rolling out new safeguards to protect children from online harms. We examine their approaches.

Digital draught — The Cloudflare outage exposed fragile dependencies in the global internet. We unpack what caused it — and what the incident reveals about digital resilience.

Last month in Geneva — Catch up on the discussions, events, and takeaways shaping international digital governance.

GLOBAL GOVERNANCE

France and Germany hosted a Summit on European Digital Sovereignty in Berlin to accelerate Europe’s digital independence. They presented a roadmap with seven priorities: simplifying regulation (including delaying some AI Act rules), ensuring fair cloud and digital markets, strengthening data sovereignty, advancing digital commons, expanding open-source digital public infrastructure, creating a Digital Sovereignty Task Force, and boosting frontier AI innovation. Over €12 billion in private investment was pledged. A major development accompanying the summit was the launch of the European Network for Technological Resilience and Sovereignty (ETRS) to reduce reliance on foreign technologies—currently over 80%—through expert collaboration, technology-dependency mapping, and support for evidence-based policymaking.

TECHNOLOGIES

The Dutch government has suspended its takeover of Nexperia, a Netherlands-based chipmaker owned by China’s Wingtech, following constructive talks with Chinese authorities. China has also begun releasing stockpiled chips to ease the shortage.

Baidu unveiled two in-house AI chips: the M100 for efficient inference on mixture-of-experts models (due early 2026) and the M300 for training trillion-parameter multimodal models (2027). It also outlined clustered architectures (Tianchi256 in H1 2026; Tianchi512 in H2 2026) to scale inference via large interconnects.

IBM unveiled two quantum chips: Nighthawk (120 qubits, 218 tunable couplers), enabling ~30% more complex circuits, and Loon, a fault-tolerance testbed with six-way connectivity and long-range couplers.

INFRASTRUCTURE

Six EU states — Austria, France, Germany, Hungary, Italy and Slovenia — have jointly urged that the Digital Networks Act (DNA) be reconsidered, arguing that core elements of the proposal — including harmonised telecom-style regulation, network-fee dispute mechanisms and broader merger rules — should instead remain under national control.

CYBERSECURITY

Roblox will roll out mandatory facial age estimation (starting December in select countries, expanding globally in January) and segment users into strict age bands to block chats with adult strangers. Under-13s remain barred from private messages unless parents opt in.

Eurofiber confirmed a breach of its French ATE customer platform and ticketing system via third-party software, saying services stayed up and banking data was safe.

The FCC is set to vote on rescinding January rules under CALEA Section 105 that required major carriers to harden networks against unauthorised access and interception, measures adopted after the Salt Typhoon cyber-espionage campaign exposed telecom vulnerabilities.

The UK plans a Cyber Security and Resilience Bill to harden critical national infrastructure and the wider digital economy against rising cyber threats. About 1,000 essential service providers (health, energy, IT) would face strengthened standards, with potential expansion to 200+ data centres.

ECONOMIC

The UAE completed its first government transaction using the Digital Dirham, a CBDC pilot under the Central Bank’s Financial Infrastructure Transformation programme. In addition, the UAE’s central bank approved Zand AED, the country’s first regulated, multi-chain AED-backed stablecoin, issued by licensed bank Zand.

The Czech National Bank created a $1 million digital-assets test portfolio, holding bitcoin, a USD stablecoin, and a tokenised deposit, to gain hands-on experience with operations, security, and AML, with no plan for active investing.

Romania completed its first EU Digital Identity Wallet (EUDIW) real-money pilot, with Banca Transilvania and BPC enabling a cardholder to authenticate a purchase via the wallet instead of SMS OTP or card readers.

The European Commission has opened a DMA probe into whether Google Search unfairly demotes news publishers via its ‘site reputation abuse’ policy, which can lower rankings for outlets hosting partner content.

In terms of digital strategies, the European Commission’s 2030 Consumer Agenda outlines a plan to enhance protection, trust, and competitiveness while simplifying regulations for businesses.

Turkmenistan passed its first comprehensive virtual assets law, effective 1 January 2026, legalising crypto mining and permitting exchanges under strict state registration.

HUMAN RIGHTS

The Council of the EU has adopted new measures to accelerate the handling of cross-border data protection complaints, with harmonised admissibility criteria and strengthened procedural rights for both citizens and companies. A simplified cooperation process for straightforward cases will also reduce administrative burdens and enable faster resolutions.

India has begun enforcing its Digital Personal Data Protection Act 2023 through newly approved rules that set up initial governance structures, including a Data Protection Board, while granting organisations extra time to meet full compliance obligations.

LEGAL

OpenAI is resisting a narrowed legal demand from The New York Times for 20 million ChatGPT conversations, part of the Times’ lawsuit over alleged misuse of its content. The company warns that sharing the data could expose sensitive information and set far-reaching precedents for how AI platforms handle user privacy, data retention, and legal accountability.

A US judge let the Authors Guild’s lawsuit against OpenAI proceed, rejecting dismissal and allowing claims that ChatGPT’s summaries unlawfully replicate authors’ tone, plot, and characters.

Ireland’s media regulator has opened its first DSA investigation into X, probing whether users have accessible appeals and clear outcomes when content-removal requests are refused.

In a setback for the FTC, a US judge ruled that Meta does not currently wield monopoly power in social networking, scuttling a bid that could have forced the divestiture of Instagram and WhatsApp.

SOCIOCULTURAL

The European Commission launched the Culture Compass for Europe, a framework to put culture at the core of EU policy, foster identity and diversity, and bolster creative sectors. 

China’s cyberspace regulators launched a crackdown on AI deepfakes impersonating public figures in livestream shopping, ordering platform cleanups and marketer accountability.

DEVELOPMENT

West and Central African ministers adopted the Cotonou Declaration to accelerate digital transformation by 2030, targeting a Single African Digital Market, widespread broadband, interoperable digital infrastructure, and harmonised rules for cybersecurity, data governance, and AI. The initiative emphasises human capital and innovation, aiming to equip 20 million people with digital skills, create two million digital jobs, and boost African-led AI and regional digital infrastructure.

ITU’s Measuring digital development: Facts and Figures 2025 report finds that while global connectivity continues to expand—with nearly 6 billion people online in 2025—2.2 billion still remain offline, predominantly in low- and middle-income countries. Major gaps persist in connection quality, data usage, affordability, and digital skills, leaving many unable to fully benefit from the digital world.

Switzerland has formally associated to Horizon Europe, Digital Europe, and Euratom R&T, giving Swiss researchers EU-equivalent status to lead projects and win funding across all pillars from 1 January 2025.

Uzbekistan now grants full legal validity to personal data on the my.gov.uz public services portal, equating it with paper documents (effective 1 November). Citizens can access, share, and manage records entirely online.

Australia. Australia has unveiled a new National AI Plan designed to harness AI for economic growth, social inclusion and public-sector efficiency — while emphasising safety, trust and fairness in adoption. The plan mobilises substantial investment: hundreds of millions of AUD are channelled into research, infrastructure, skills development and programmes to help small and medium enterprises adopt AI; the government also plans to expand nationwide access to the technology.

Practical steps include establishing a national AI centre, supporting AI adoption among businesses and nonprofits, enhancing digital literacy through schools and community training, and integrating AI into public service delivery. 

To ensure responsible use, the government will establish the AI Safety Institute (AISI), a national centre tasked with consolidating AI safety research, coordinating standards development, and advising both government and industry on best practices. The Institute will assess the safety of advanced AI models, promote resilience against misuse or accidents, and serve as a hub for international cooperation on AI governance and research.

Bangladesh. A recent assessment of the country’s AI readiness highlights Bangladesh’s relative strengths: a growing e-government infrastructure and generally high public trust in digital services. However, it also candidly maps structural challenges: uneven connectivity and unreliable power supply beyond major urban areas, a persistent digital divide (especially gender and urban–rural), limited high-end computing capacity, and insufficient data protection, cybersecurity and AI-related skills in many parts of society.

As part of its roadmap, the country plans to prioritise governance frameworks, capacity building, and inclusive deployment — especially ensuring that AI supports public-sector services in health, education, justice and social protection.

Russia. At Russia’s premier AI conference (AI Journey), President Vladimir Putin announced the formation of a national AI task force, framing it as essential for minimising dependence on foreign AI. The plan includes building data centres (even powered by small-scale nuclear power) and using these to host generative AI models that protect national interests. Putin also argued that only domestically developed models should be used in sensitive sectors — like national security — to prevent data leakage.

Singapore. Singapore has launched a Global AI Assurance Sandbox, now open to companies worldwide that want to run real-world pilot tests for AI systems.

This sandbox is guided by 11 governance principles aligned with international standards — including NIST’s AI Risk Management Framework and ISO/IEC 42001. By doing this, Singapore hopes to bridge the gap between fragmented national AI regulations and build shared benchmarks for safety and trust.

The EU. A big political storm is brewing in the EU. The European Commission has rolled out what it calls the Digital Omnibus, a package of proposals aimed at simplifying its digital lawbook — a move welcomed by some as needed to improve the competitiveness of the EU’s digital actors, and criticised by others over potentially negative implications in areas such as digital rights. The package consists of the Digital Omnibus Regulation Proposal and the Digital Omnibus on AI Regulation Proposal.

On a separate, but related note, the European Commission has launched an AI whistle-blower tool, providing a secure and confidential channel for individuals across the EU to report suspected breaches of the AI Act, including unsafe or high‑risk AI deployments. With the launch of the tool, the EU aims to close gaps in the enforcement of the EU AI Act, increase the accountability of developers and deployers, and foster a culture of responsible AI usage across member states. The tool is also intended to foster transparency, allowing regulators to react faster to potential violations without relying just on audits or inspections.

What’s making waves in the EU? The Digital Omnibus on AI Regulation Proposal delays the implementation of ‘high-risk’ rules under the EU’s AI Act until 2027, giving Big Tech more time before stricter oversight takes effect. The entry into force of high-risk AI rules will now align with the availability of support tools, giving companies up to 16 months to comply. SMEs and small mid-cap companies will benefit from simplified documentation, broader access to regulatory sandboxes, and centralised oversight of general-purpose AI systems through the AI Office.

Cybersecurity reporting is also being simplified with a single-entry interface for incidents under multiple laws, while privacy rules are being clarified to support innovation without weakening protections under the GDPR. Cookie rules will be modernised to reduce repetitive consent requests and allow users to manage preferences more efficiently.

Data access will be enhanced through the consolidation of EU data legislation via the Data Union Strategy, targeted exemptions for smaller companies, and new guidance on contractual compliance. The measures aim to unlock high-quality datasets for AI and strengthen Europe’s innovation potential, while saving businesses billions and improving regulatory clarity.

The Digital Omnibus Regulation Proposal has implications for data protection in the EU. Proposed changes to the General Data Protection Regulation (GDPR) would redefine personal data, weakening the safeguards on when companies can use it — especially for AI training. Meanwhile, cookie consent is being simplified into a ‘one click’ model that lasts up to six months.

Privacy and civil rights groups expressed concern that the proposed GDPR changes disproportionately benefit large technology firms. A coalition of 127 organisations has issued a public warning that this could become ‘the biggest rollback of digital fundamental rights in EU history.’ 

These proposals must go through the EU’s co-legislative process — Parliament and Council will debate, amend, and negotiate them. Given the controversy (support from industry, pushback from civil society), the final outcome could look very different from the Commission’s initial proposal.

The UK. The UK government has launched a major AI initiative to drive national AI growth, combining infrastructure investment, business support, and research funding. An immediate £150 million GPU deployment in Northamptonshire kicks off an £18 billion programme over five years to build sovereign AI capacity. Through an advanced-market commitment of £100 million, the state will act as a first customer for domestic AI hardware startups, helping de-risk innovation and boost competitiveness.

The plan includes AI Growth Zones, with a flagship site in South Wales expected to create over 5,000 jobs, and expanded access to high-performance computing for universities, startups, and research organisations. A dedicated £137 million ‘AI for Science’ strand will accelerate breakthroughs in drug discovery, clean energy, and advanced materials, ensuring AI drives both economic growth and public value outcomes.

The USA. The shadow of regulation-limiting politics looms large in the USA. Trump-aligned Republicans have again pushed for a moratorium on state-level AI regulation. The idea is to block states from passing their own AI laws, arguing that a fragmented regulatory landscape would hinder innovation. One version of the proposal would tie federal broadband funding to states’ willingness to forego AI rules — effectively punishing any state that tries to regulate. Yet this pushback isn’t unopposed: more than 260 state lawmakers from across the US, Republican and Democrat alike, have decried the moratorium.

The President has formally established the Genesis Mission by Executive Order on 24 November 2025, tasking the US Department of Energy (DOE) with leading a nationwide AI-driven scientific research effort. The Mission will build a unified ‘American Science and Security Platform,’ combining the DOE’s 17 national laboratories’ supercomputers, federal scientific datasets that have accumulated over decades, and secure high-performance computing capacity — creating what the administration describes as ‘the world’s most complex and powerful scientific instrument ever built.’

Under the plan, AI will generate ‘scientific foundation models’ and AI agents capable of automating experiment design, running simulations, testing hypotheses and accelerating discoveries in key strategic fields: biotechnology, advanced materials, critical minerals, quantum information science, nuclear fission and fusion, space exploration, semiconductors and microelectronics.

The initiative is framed as central to energy security, technological leadership and national competitiveness — the administration argues that despite decades of rising research funding, scientific output per dollar has stagnated, and AI can radically boost research productivity within a decade.

To deliver on these ambitions, the Executive Order sets a governance structure: the DOE Secretary oversees implementation; the Assistant to the President for Science and Technology will coordinate across agencies; and DOE may partner with private sector firms, academia and other stakeholders to integrate data, compute, and infrastructure.

UAE and Africa. The AI for Development Initiative has been announced to advance digital infrastructure across Africa, backed by a US$1 billion commitment from the UAE. According to official statements, the initiative plans to channel resources to sectors such as education, agriculture, climate adaptation, infrastructure and governance, helping African governments to adopt AI-driven solutions even where domestic AI capacity remains limited.

Though full details remain to be seen (e.g. selection of partner countries, governance and oversight mechanisms), the scale and ambition of the initiative signal the UAE’s aspiration to act not just as an AI adoption hub, but as a regional and global enabler of AI-enabled development.

Uzbekistan. Uzbekistan has announced the launch of the ‘5 million AI leaders’ project to develop its domestic AI capabilities. As part of this plan, the government will integrate AI-focused curricula into schools, vocational training and universities; train 4.75 million students, 150,000 teachers and 100,000 public servants; and launch large-scale competitions for AI startups and talent.

The programme also includes building high-performance computing infrastructure (in partnership with a major tech company), establishing a national AI transfer office abroad, and creating state-of-the-art laboratories in educational institutions — all intended to accelerate adoption of AI across sectors.

The government frames this as central to modernising public administration and positioning Uzbekistan among the world’s top 50 AI-ready countries.

The AI bubble is inflating to the point of bursting. Below, we outline five causes of the current situation and five scenarios for how a potential burst could unfold or be averted.

The frenzy of AI investment did not happen in a vacuum. Several forces have contributed to our tendency toward overvaluation and unrealistic expectations.

1st cause: The media hype machine. AI has been framed as the inevitable future of humanity. This narrative has created a powerful Fear of Missing Out (FOMO), prompting companies and governments to invest heavily in AI, often without a sober reality check.

2nd cause: Diminishing returns on computing power and data. The dominant, simple formula of the past few years has been: more compute (read: more Nvidia GPUs) + more data = better AI. This belief has led to massive AI factories: hyperscale data centres with an alarming electricity and water footprint. Yet simply stacking more GPUs now yields only incremental improvements.
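This diminishing-returns dynamic can be illustrated with a toy power-law calculation. The formula and constants below are illustrative assumptions for the sake of the example, not measurements of any real model:

```python
# Toy illustration of diminishing returns under a power-law scaling law.
# Loss ~ a * C**(-alpha): each doubling of compute C buys a smaller
# absolute improvement. The constants are illustrative, not empirical.

def toy_loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical model loss as a function of training compute."""
    return a * compute ** -alpha

if __name__ == "__main__":
    prev = toy_loss(1.0)
    for doubling in range(1, 6):
        c = 2.0 ** doubling
        loss = toy_loss(c)
        # The per-doubling gain shrinks at every step.
        print(f"compute x{c:>4.0f}: loss {loss:.3f} (gain {prev - loss:.3f})")
        prev = loss
```

Under any such power law, each successive doubling of compute delivers a smaller gain than the one before, which is the core of the ‘more GPUs’ critique.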

3rd cause: Large language models’ (LLMs’) logical and conceptual limits. LLMs are encountering structural limitations that cannot be resolved simply by scaling data and compute. Despite the dominant narrative of imminent superintelligence, many leading researchers are sceptical that today’s LLMs can simply be ‘grown’ into human-level Artificial General Intelligence (AGI).

4th cause: Slow AI transformation. Most AI investments are still based on potential, not on realised, measurable value. The technology is advancing faster than society’s ability to absorb it. Previous AI winters in the 1970s and late 1980s followed periods of over-promising and under-delivering, leading to sharp cuts in funding and industrial collapse.

5th cause: Massive cost discrepancies. The latest wave of open-source models has shown that systems trained for a few million dollars can match or beat proprietary models costing hundreds of millions. This raises questions about the efficiency and necessity of current proprietary AI spending.

Five scenarios outline how the current hype may resolve.

1st Scenario: The rational pivot (the textbook solution). A market correction could push AI development away from the assumption that more compute automatically yields better models. Instead, the field would shift toward smarter architectures, deeper integration with human knowledge and institutions, and smaller, specialised, often open-source systems. Policy is already moving this way: the US AI Action Plan frames open-weight models as strategic assets. Yet this pivot faces resistance from entrenched proprietary models, dependence on closed data, and unresolved debates about how creators of human knowledge should be compensated.

2nd Scenario: ‘Too big to fail’ (the 2008 bailout playbook). Another outcome treats major AI companies as essential economic infrastructure. Industry leaders already warn of “irrationality” in current investment levels, suggesting that a weak quarter from a key firm could rattle global markets. In this scenario, governments provide implicit or explicit backstops—cheap credit, favourable regulation, or public–private infrastructure deals—on the logic that AI giants are systemically important.

3rd Scenario: Geopolitical justification (China ante portas). Competition with China could become the dominant rationale for sustained public investment. China’s rapid progress, including low-cost open models like DeepSeek R1, is already prompting “Sputnik moment” comparisons. Support for national champions is then framed as a matter of technological sovereignty, shifting risk from investors to taxpayers.

4th Scenario: AI monopolisation (the Wall Street gambit). If smaller firms fail to monetise, AI capabilities could consolidate into a handful of tech giants, mirroring past monopolisation in search, social media, and cloud. Nvidia’s dominance in AI hardware reinforces this dynamic. Open-source models slow but do not prevent consolidation.

5th Scenario: AI winter and new digital toys. Finally, a mild AI winter could emerge as investment cools and attention turns to new frontiers—quantum computing, digital twins, immersive reality. AI would remain a vital infrastructure but no longer the centre of speculative hype.

The next few years will show whether AI becomes another over-priced digital toy – or a more measured, open, and sustainable part of our economic and political infrastructure.

This text was adapted from Dr Jovan Kurbalija’s article ‘Is the AI bubble about to burst? Five causes and five scenarios’, published on www.diplomacy.edu.

A revised version of the WSIS+20 outcome document – Revision 1 – was published on 7 November by the co-facilitators of the intergovernmental process. The document will serve as the basis for continued negotiations among UN member states ahead of the high-level meeting of the General Assembly on 16–17 December 2025.

While maintaining the overall structure of the Zero Draft released in August, Revision 1 introduces several changes and new elements. 

The new text includes revised – and in several places stronger – language emphasising the need to close rather than bridge digital divides, presented as multidimensional challenges that must be addressed to achieve the WSIS vision. 

At the same time, some issues were deprioritised: for instance, references to e-waste and the call for global reporting standards on environmental impacts were removed from the environment section.

Several new elements also appear. In the enabling environments section, states are urged to take steps towards avoiding or refraining from unilateral measures inconsistent with international law. 

There is also a new recognition of the importance of inclusive participation in standard-setting. 

The financial mechanisms section invites the Secretary-General to consider establishing a task force on future financial mechanisms for digital development, with outcomes to be reported to the UNGA at its 81st session. 

The internet governance section now includes a reference to the NetMundial+10 Guidelines.

Language on the Internet Governance Forum (IGF) remains largely consistent with the Zero Draft, including with regard to making the forum a permanent one and requesting the Secretary-General to make proposals concerning the IGF’s future funding. New text invites the IGF to further strengthen the engagement of governments and other stakeholders from developing countries in discussions on internet governance and emerging technologies. 

Several areas saw shifts in tone. Language in the human rights section has been softened in parts (e.g. references to surveillance safeguards and threats to journalists have been removed).

And there is a change in how the interplay between WSIS and the GDC is framed – the emphasis is now on alignment between WSIS and GDC processes rather than integration. For instance, where the GDC–WSIS joint implementation roadmap was initially requested to ‘integrate GDC commitments into the WSIS architecture’, it is now expected to ‘aim to strengthen coherence between WSIS and GDC processes’. Corresponding adjustments are also reflected in the roles of the Economic and Social Council and the Commission on Science and Technology for Development.

What’s next? Registration is now open for the next WSIS+20 virtual stakeholder consultation, scheduled for Monday, 8 December 2025, to gather feedback on the revised draft outcome document (Rev2). Participants must register by Sunday, 7 December at 11:59 p.m. EST.

An initial Rev2 response and framework document will guide the session and will be posted as soon as it is available.

This consultation forms part of the preparations for the High-level Meeting of the General Assembly on the overall review of the implementation of the outcomes of the World Summit on the Information Society (WSIS+20), which will take place on 16 and 17 December 2025.

COP30, the 30th annual UN climate meeting, officially wrapped up last Friday, 21 November. As the dust settles in Belém, we take a closer look at the outcomes with implications for digital technologies and AI.

In agriculture, momentum is clearly building. Brazil and the UAE unveiled AgriLLM, the first open-source large language model designed specifically for agriculture, developed with support from international research and innovation partners. The goal is to give governments and local organisations a shared digital foundation to build tools that deliver timely, locally relevant advice to farmers. Alongside this, the AIM for Scale initiative aims to provide digital advisory services, including climate forecasts and crop insights, to 100 million farmers.

Cities and infrastructure are also stepping deeper into digital transformation. Through the Infrastructure Resilience Development Fund, insurers, development banks, and private investors are pooling capital to finance climate-resilient infrastructure in emerging economies — from clean energy and water systems to the digital networks needed to keep communities connected and protected during climate shocks. 

The most explicit digital agenda surfaced under the axis of enablers and accelerators. Brazil and its partners launched the world’s first Digital Infrastructure for Climate Action, a global initiative to help countries adopt open digital public goods in areas such as disaster response, water management, and climate-resilient agriculture. The accompanying innovation challenge is already backing new solutions designed to scale.

The Green Digital Action Hub was also launched and will help countries measure and reduce the environmental footprint of technology, while expanding access to tools that use technology for sustainability. 

Training and capacity building received attention through the new AI Climate Institute, which will help the Global South develop and deploy AI applications suited to local needs — particularly lightweight, energy-efficient models.

The Nature’s Intelligence Studio, grounded in the Amazon, will support nature-inspired innovation and introduce open AI tools that help match real-world sustainability challenges with bio-based solutions.

Finally, COP30 marked a first by placing information integrity firmly on the climate action agenda. With mis- and disinformation recognised as a top global risk, governments and partners launched a declaration and new multistakeholder process aimed at strengthening transparency, shared accountability, and public trust in climate information — including the digital platforms that shape it.

The big picture. Across all strands, COP30 sent a clear message: the digital layer of climate action is not optional — it is embedded in its core delivery.

Social media bans for under‑16s are advancing globally, and Australia has gone the furthest in this effort. Regulators there have now widened the scope of the ban to include platforms like Twitch, which is classified as age-restricted due to its social interaction features. Meta has begun notifying Australian users believed to be under 16 that their Facebook and Instagram accounts will be deactivated starting 4 December, a week before the law officially takes effect on 10 December.

To support families through the transition, the government has established a Parent Advisory Group, bringing together organisations representing diverse households to help carers guide children on online safety, communication, and safe ways to connect digitally.

The ban has already provoked opposition. Major social media platforms have criticised it but signalled they will comply, with YouTube being the latest to fall in line. Meanwhile, the ban now faces a constitutional challenge in the High Court from two 15‑year-olds, backed by the advocacy group Digital Freedom Project. They argue the law unfairly limits under‑16s’ ability to participate in public debate and political expression, effectively silencing young voices on issues that affect them directly.

Malaysia also plans to ban social media accounts for people under 16 starting in 2026. The Cabinet approved the measure to protect children from online harms such as cyberbullying, scams, and sexual exploitation. Authorities are considering approaches such as electronic age verification using ID cards or passports, although the exact enforcement date has not been set.

EU lawmakers have proposed similar protections. The European Parliament adopted a non-legislative report calling for a harmonised EU minimum age of 16 for social media, video-sharing platforms, and AI companions, with access for 13–16-year-olds allowed only with parental consent. Lawmakers back the Commission’s work on an EU age-verification app and the European digital identity wallet (eID), but stress that age assurance must be accurate and privacy-preserving, and that it does not absolve platforms from designing services that are safe by default.

Beyond age restrictions, the EU is strengthening broader safeguards. Member states have agreed on a Council position for a regulation to prevent and combat child sexual abuse online, setting out concrete, enforceable obligations for online providers. Platforms will have to carry out formal risk assessments to identify how their services could be used to disseminate child sexual abuse material (CSAM) or to solicit children, and then put in place mitigation measures — from safer default privacy settings for children and user reporting tools, to technical safeguards. Member states will designate coordinating and competent national authorities that can review those risk assessments, force providers to implement mitigations and, where necessary, impose penalty payments for non-compliance.

Importantly, the Council introduces a three-tier risk classification for online services (high, medium, low). Services judged high-risk — determined against objective criteria such as service type — can be required not only to apply stricter mitigations but also to contribute to the development of technologies to reduce those risks. Search engines can be ordered to delist results; competent authorities may require the removal or blocking of access to CSAM. The position keeps and aims to make permanent an existing temporary exemption that allows providers (for example, messaging services) to voluntarily scan content for CSAM — an exemption currently due to expire on 3 April 2026.

To operationalise and coordinate enforcement, the regulation foresees the creation of a new EU agency — the EU Centre on Child Sexual Abuse. The Centre will process and assess information reported by providers, operate a database of provider reports and a database of child-sexual-abuse indicators that companies can use for voluntary detection activities, help victims request removal or disabling of access to material depicting them, and share relevant information with Europol and national law enforcement. The Centre’s seat is not yet decided; that will be negotiated with the European Parliament. With the Council’s position agreed, formal trilogue negotiations with Parliament can begin (the Parliament adopted its position in November 2023).

The European Parliament’s report also addresses everyday online risks, calling for bans on the most harmful addictive practices (infinite scroll, autoplay, reward loops, pull-to-refresh) and for default disabling of other addictive features for minors; it urges outlawing engagement-based recommender algorithms for minors and extending clear DSA rules to online video platforms. Gaming features that mimic gambling — loot boxes, in-app randomised rewards, pay-to-progress mechanics — should be banned. The report also tackles commercial exploitation, urging prohibitions on platforms offering financial incentives for kidfluencing (children acting as influencers).

MEPs singled out generative AI risks — deepfakes, companionship chatbots, AI agents and AI-powered nudity apps that create non-consensual manipulated imagery — calling for urgent legal and ethical action. Rapporteur Christel Schaldemose framed the measures as drawing a clear line: platforms are ‘not designed for children’ and the experiment of letting addictive, manipulative design target minors must end.

A new multilateral initiative is also underway: the eSafety Commissioner (Australia), Ofcom (UK) and the European Commission’s DG CNECT will cooperate to protect children’s rights, safety and privacy online. 

The regulators will enforce online safety laws, require platforms to assess and mitigate risks to children, promote privacy-preserving technologies such as age verification, and partner with civil society and academia to keep regulatory approaches grounded in real-world dynamics.

A new trilateral technical group will be established to explore how age-assurance systems can work reliably and interoperably, strengthening evidence for future regulatory action.
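One direction such a group could explore is token-based age attestation, in which a trusted issuer checks a user’s documents privately and platforms verify only a boolean claim. The following is a minimal sketch under stated assumptions (a trusted issuer and a shared verification key); real systems would use public-key signatures, revocation, and anti-replay measures, and all names here are hypothetical:

```python
# Sketch of a privacy-preserving age attestation. The issuer sees the user's
# identity documents; the platform sees only a signed boolean claim.
# Illustrative only: real deployments would use asymmetric signatures.

import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the hypothetical issuer

def issue_token(over_16: bool) -> dict:
    """Issuer privately verifies the user's real document, then emits a
    token carrying only the boolean claim -- no name, no birthdate."""
    claim = {"over_16": over_16, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_token(token: dict) -> bool:
    """Platform checks integrity and learns only whether the user is over 16."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_16"]
```

The design point this illustrates is data minimisation: the platform can enforce the age rule while a tampered claim fails verification, and no identity data ever leaves the issuer.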

The overall goal is to support children and families in using the internet more safely and confidently — by fostering digital literacy, critical thinking and by making online platforms more accountable.

On 18 November, Cloudflare — the invisible backbone behind millions of websites — went down, in what the company calls its most serious outage since 2019. Users around the world saw internal-server-error messages as services like X and ChatGPT temporarily went offline.

The culprit was an internal misconfiguration. A routine permissions change in a ClickHouse database led to a malformed ‘feature file’ used by Cloudflare’s Bot Management tool. That file unexpectedly doubled in size and, when pushed across Cloudflare’s global network, exceeded built‑in limits — triggering cascading failures.
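The failure mode described above, a generated file breaching a hard limit on every node at once, suggests a general defensive pattern: validate generated configuration against known limits before propagation, and fall back to the last-known-good version when validation fails. A minimal sketch follows; the names and limits are illustrative, and this is not Cloudflare’s actual code:

```python
# Fail-safe pattern for propagating generated config files: validate
# centrally before pushing, and keep serving the last-known-good file
# instead of failing globally. Names and limits are illustrative.

MAX_FEATURES = 200  # hard cap the fleet's consumers are known to tolerate

class FeatureFileError(ValueError):
    """Raised when a generated feature file is oversized or malformed."""

def validate_feature_file(features: list[str], max_features: int = MAX_FEATURES) -> list[str]:
    """Reject a bad file *before* propagation, rather than letting every
    node in the fleet hit the built-in limit at the same moment."""
    if len(features) > max_features:
        raise FeatureFileError(
            f"feature file has {len(features)} entries, limit is {max_features}"
        )
    if any(not isinstance(f, str) or not f for f in features):
        raise FeatureFileError("feature file contains empty or non-string entries")
    return features

def propagate(candidate: list[str], last_known_good: list[str]) -> list[str]:
    """Ship the candidate if it validates; otherwise fall back."""
    try:
        return validate_feature_file(candidate)
    except FeatureFileError:
        return last_known_good
```

In this sketch, a file that unexpectedly doubles past the cap is simply never shipped: the fleet keeps running on the previous version while engineers investigate, instead of failing everywhere at once.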

As engineers rushed to isolate the bad file, traffic slowly returned. By mid‑afternoon, Cloudflare halted propagation, replaced the corrupted file, and rebooted key systems; full network recovery followed hours later.

The bigger picture. The incident is not isolated. Only last month, Microsoft Azure suffered a multi-hour outage that disrupted enterprise clients across Europe and the US, while Amazon Web Services (AWS) experienced intermittent downtime affecting streaming platforms and e-commerce sites. These events, combined with the Cloudflare blackout, underscore the fragility of global cloud infrastructure.

The outage comes at a politically sensitive moment in Europe’s cloud policy debate. Regulators in Brussels are already probing AWS and Microsoft Azure to determine whether they should be designated as ‘gatekeepers’ under the EU’s Digital Markets Act (DMA). These investigations aim to assess whether their dominance in cloud infrastructure gives them outsized control — even though, technically, they don’t meet the Act’s usual size thresholds. 

This recurring pattern highlights a major vulnerability in the modern internet, one born from an overreliance on a handful of critical providers. When one of these central pillars stumbles, whether from a misconfiguration, software bug, or regional issue, the effects ripple outward. The very concentration of services that enables efficiency and scale also creates single points of failure with cascading consequences.

The digital governance scene has been busy in Geneva in November. Here’s what we followed.

CERN unveils AI strategy to advance research and operations

CERN has approved a comprehensive AI strategy to guide its use across research, operations, and administration. The strategy unites initiatives under a coherent framework to promote responsible and impactful AI for science and operational excellence.

It focuses on four main goals: accelerating scientific discovery, improving productivity and reliability, attracting and developing talent, and enabling AI at scale through strategic partnerships with industry and member states.

Common tools and shared experiences across sectors will strengthen CERN’s community and ensure effective deployment.

Implementation will involve prioritised plans and collaboration with EU programmes, industry, and member states to build capacity, secure funding, and expand infrastructure. Applications of AI will support high-energy physics experiments, future accelerators, detectors, and data-driven decision-making.

The UN Commission on Science and Technology for Development (CSTD) 2025–2026 inter-sessional panel

The UN Commission on Science and Technology for Development (CSTD) held its 2025–2026 inter-sessional panel on 17 November at the Palais des Nations in Geneva. The agenda focused on science, technology and innovation in the age of AI, with expert contributions from academia, international organisations, and the private sector. Delegations also reviewed progress on WSIS implementation ahead of the WSIS+20 process, and received updates on the implementation of the Global Digital Compact (GDC) and ongoing data governance work within the dedicated CSTD working group. The findings and recommendations of the panel will be considered at the twenty-ninth session of the Commission in 2026.

Fourth meeting of the UN CSTD multi-stakeholder working group on data governance at all levels

The CSTD’s multi-stakeholder working group on data governance at all levels met for the fourth time from 18 to 19 November. The programme began with opening remarks and the formal adoption of the agenda. The UNCTAD secretariat then provided an overview of inputs submitted since the last session, highlighting emerging areas of alignment and divergence among stakeholders. 

The meeting moved into substantive deliberations organised around four tracks covering key dimensions of data governance: principles applicable at all levels; interoperability between systems; sharing the benefits of data; and enabling safe, secure and trusted data flows, including across borders. These discussions were designed to explore practical approaches, existing challenges, and potential pathways toward consensus.

Following a lunch break, delegates reconvened for a full afternoon plenary to continue track-based exchanges, with opportunities for interaction among member states, the private sector, civil society, academia, the technical community, and international organisations.

The second day focused on the Working Group’s upcoming milestones. Delegations considered the outline, timeline, and expectations for the progress report to the General Assembly, as well as the process for selecting the next Chair of the Working Group. The session concluded with agreement on the scheduling of future meetings and any additional business raised by participants.

Innovations Dialogue 2025: Neurotechnologies and their implications for international peace and security

On 24 November, UNIDIR hosted its Innovations Dialogue on neurotechnologies and their implications for international peace and security in Geneva and online. Experts from neuroscience, law, ethics, and security policy discussed developments such as brain-computer interfaces and cognitive enhancement tools, exploring both their potential applications and the challenges they present, including ethical and security considerations. The event included a poster exhibition on responsible use and governance approaches.

14th UN Forum on Business and Human Rights

The 14th UN Forum on Business and Human Rights was held from 24 to 26 November in Geneva and online, under the theme ‘Accelerating action on business and human rights amidst crises and transformations.’ The forum addressed key issues, including safeguarding human rights in the age of AI and exploring human rights and platform work in the Asia-Pacific region amid the ongoing digital transformation. Additionally, a side event took a closer look at the labour behind AI.

Weekly #240 Code meets climate: AI and digital at COP30

21-28 November 2025


HIGHLIGHT OF THE WEEK

Code meets climate: AI and digital at COP30

IN OTHER NEWS LAST WEEK

This week in AI governance

The UK. The UK government has launched a major AI initiative to drive national AI growth, combining infrastructure investment, business support, and research funding. An immediate £150 million GPU deployment in Northamptonshire kicks off an £18 billion programme over five years to build sovereign AI capacity. Through a £100 million advanced-market commitment, the state will act as a first customer for domestic AI hardware startups, helping de-risk innovation and boost competitiveness.

The plan includes AI Growth Zones, with a flagship site in South Wales expected to create over 5,000 jobs, and expanded access to high-performance computing for universities, startups, and research organisations. A dedicated £137 million ‘AI for Science’ strand will accelerate breakthroughs in drug discovery, clean energy, and advanced materials, ensuring AI drives both economic growth and public value outcomes.

Bangladesh. A recent report on the country’s AI readiness highlights Bangladesh’s relative strengths: a growing e-government infrastructure and generally high public trust in digital services. However, it also candidly maps structural challenges: uneven connectivity and unreliable power supply beyond major urban areas, a persistent digital divide (especially gender and urban–rural), limited high-end computing capacity, and insufficient data protection, cybersecurity and AI-related skills in many parts of society.

As part of its roadmap, the country plans to prioritise governance frameworks, capacity building, and inclusive deployment — especially ensuring that AI supports public-sector services in health, education, justice and social protection. 

Australia. Australia has launched the AI Safety Institute (AISI), a national centre tasked with consolidating AI safety research, coordinating standards development, and advising both government and industry on best practices. The Institute will assess the safety of advanced AI models, promote resilience against misuse or accidents, and serve as a hub for international cooperation on AI governance and research.

The EU. The European Commission has launched an AI whistle-blower tool, providing a secure and confidential channel for individuals across the EU to report suspected breaches of the AI Act, including unsafe or high‑risk AI deployments. The tool allows submissions in any EU official language, supports anonymity, and offers follow-up tracking, aiming to strengthen oversight and enforcement of EU AI regulations. 

With the launch, the EU aims to close gaps in the enforcement of the EU AI Act, increase the accountability of developers and deployers, and foster a culture of responsible AI use across member states. The tool is also intended to improve transparency, allowing regulators to react faster to potential violations without relying solely on audits or inspections.

United Arab Emirates. The AI for Development Initiative has been announced to advance digital infrastructure across Africa, backed by a US$1 billion commitment from the UAE. According to official statements, the initiative plans to channel resources to sectors such as education, agriculture, climate adaptation, infrastructure and governance, helping African governments to adopt AI-driven solutions even where domestic AI capacity remains limited. 

Though full details remain to be seen (e.g. selection of partner countries, governance and oversight mechanisms), the scale and ambition of the initiative signal the UAE’s aspiration to act not just as an AI adoption hub, but as a regional and global enabler of AI-enabled development.


From Australia to the EU: New measures shield children from online harms

Bans on social media use by under-16s are advancing globally, and Australia has gone the furthest in this effort. Regulators there have now widened the scope of the ban to include platforms like Twitch, which is classified as age-restricted due to its social interaction features. Meta has begun notifying Australian users believed to be under 16 that their Facebook and Instagram accounts will be deactivated starting 4 December, a week before the law officially takes effect on 10 December.

To support families through the transition, the government has established a Parent Advisory Group, bringing together organisations representing diverse households to help carers guide children on online safety, communication, and safe ways to connect digitally.

The ban has already provoked opposition. Less than two weeks before enforcement, two 15‑year-olds, backed by the advocacy group Digital Freedom Project, filed a constitutional challenge in the High Court. They argue the law unfairly limits under‑16s’ ability to participate in public debate and political expression, effectively silencing young voices on issues that affect them directly.

Malaysia also plans to ban social media accounts for people under 16 starting in 2026. The Cabinet approved the measure to protect children from online harms such as cyberbullying, scams, and sexual exploitation. Authorities are considering approaches such as electronic age verification using ID cards or passports, although the exact enforcement date has not been set.

EU lawmakers have proposed similar protections. The European Parliament adopted a non-legislative report calling for a harmonised EU minimum age of 16 for social media, video-sharing platforms, and AI companions, with access for 13–16-year-olds allowed only with parental consent. They support accurate, privacy-preserving age verification via the EU age-verification app and eID wallet, but emphasise that platforms must still design services that are safe by default.

Beyond age restrictions, the EU is strengthening broader safeguards. Member states have agreed on a Council position for a regulation to prevent and combat child sexual abuse online, requiring platforms to block child sexual abuse material (CSAM) and child solicitation, assess risks, and implement mitigation measures, including safer default settings, content controls, and reporting tools. National authorities will oversee compliance and may impose penalties, while high-risk platforms could also contribute to developing technologies to reduce risks. A new EU Centre on Child Sexual Abuse would support enforcement, maintain abuse material databases, and assist victims in removing exploitative images.

The European Parliament’s report also addresses everyday online risks, calling for bans on addictive features—such as infinite scrolling, autoplay, pull-to-refresh, reward loops, engagement-based recommendation algorithms, and gambling-like game elements like loot boxes. It urges action against kidfluencing, commercial exploitation, and generative AI risks, including deepfakes, AI chatbots, and nudity apps producing non-consensual content. Enforcement measures include fines, platform bans, and personal liability for senior managers in cases of serious or persistent breaches.


G20 leaders set digital priorities for a more inclusive global future 

This past weekend, G20 leaders convened in Africa for the first time at their annual Leaders’ Summit. Discussions focused on AI, emerging digital technologies, bridging digital divides, and the role of critical minerals.

In their joint declaration, G20 leaders emphasised the transformative potential of AI and emerging digital technologies for sustainable development and reducing inequalities. They stressed the need for international cooperation to ensure AI benefits are equitably shared and that associated risks—including human rights, transparency, accountability, safety, privacy, data protection, and ethical oversight—are carefully managed. The declaration recognised the UN as a central forum for promoting responsible AI governance globally.

The leaders welcomed initiatives launched under South Africa’s presidency, including the Technology Policy Assistance Facility (TPAF) by UNESCO, which supports countries in shaping AI policy through global experiences and research. They also highlighted the AI for Africa Initiative, designed to strengthen the continent’s AI ecosystem by expanding computing capacity, developing talent, creating representative datasets, enhancing infrastructure, and fostering Africa-centric sovereign AI capabilities, supported through long-term partnerships and voluntary contributions.

The declaration reaffirmed G20 commitments to bridging digital divides, including halving the gender digital divide by 2030, promoting universal and meaningful connectivity, and building inclusive, safe, and resilient digital economies. Leaders emphasised the role of digital public infrastructure, modernised education systems, teacher empowerment, and skills development in equipping societies for the digital age. Tourism innovation, enhanced air connectivity, market access, and digital tools for MSMEs were also noted as priorities for sustainable and inclusive economic growth.

The rising global demand for critical minerals driven by sustainable transitions, digitisation, and industrial innovation was highlighted in the declaration. Leaders acknowledged challenges faced by producer countries, including underinvestment, limited value addition, technological gaps, and socio-environmental pressures. They welcomed the G20 Critical Minerals Framework, a voluntary blueprint promoting investment, local beneficiation, governance, and resilient value chains.


Ongoing Nexperia saga: Netherlands’ chip seizure meets China’s legal challenge

Two weeks ago, the Netherlands temporarily suspended its takeover of Nexperia, the Dutch chipmaker owned by China’s Wingtech, following constructive talks with Chinese authorities. 

However, tensions have persisted. Wingtech has challenged the Dutch intervention in court, while Beijing continues to press for a full reversal. Meanwhile, Nexperia’s Dutch management has urged its Chinese units to cooperate to restore disrupted supply chains, which remain fragile after the earlier intervention. Wingtech now accuses the Dutch government of trying to permanently sever its control, leaving the situation unresolved: the company’s ownership and the stability of critical chip flows between Europe and China are still in dispute, with potential knock-on effects for global industries, notably automotive manufacturing.


LAST WEEK IN GENEVA

The 14th UN Forum on Business and Human Rights was held from Monday to Wednesday in Geneva and online, under the theme ‘Accelerating action on business and human rights amidst crises and transformations.’ The forum addressed key issues such as safeguarding human rights in the age of AI and exploring human rights and platform work in the Asia-Pacific region amid the ongoing digital shift. Additionally, a side event took a closer look at the labour behind AI.

LOOKING AHEAD

A reminder: Civil society organisations have until 30 November 2025 (this Sunday) to apply for the CADE Capacity Development Programme 2025–2026. The programme helps CSOs strengthen their role in digital governance through a mix of technical courses, diplomatic skills training, and expert guidance. Participants can specialise in AI, cybersecurity, or infrastructure policy, receive on-demand helpdesk support, and the most engaged will join a study visit to Geneva. Fully funded by the EU, the programme offers full scholarships to selected organisations, with a special welcome to those from the Global South and women-led groups.

The 2025 International AI Standards Summit will be held on 2–3 December in Seoul, jointly organised by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU), with hosting support from the Korean Agency for Technology and Standards (KATS). The summit will bring together policymakers, industry leaders, and experts to advance global AI standards, with a focus on interoperability, transparency, and human rights. By fostering international dialogue and cooperation, the event aims to lay the groundwork for responsible AI development and deployment worldwide. The event is by invitation only.

Next Wednesday (3 December), Diplo, UNEP, and Giga are co-organising an event at the Giga Connectivity Centre in Geneva, titled ‘Digital inclusion by design: Leveraging existing infrastructure to leave no one behind’. The event will explore how community anchor institutions—such as post offices, schools, and libraries—can help close digital divides by offering connectivity, digital skills, and access to essential online services. The session will feature the launch of the new UPU Digital Panorama report, showcasing how postal networks are supporting inclusive digital transformation, along with insights from Giga on connecting schools worldwide. Looking ahead to WSIS+20 and the Global Digital Compact, the discussion will consider practical next steps toward meaningful digital inclusion. The event will be held in situ, in Geneva, Switzerland.

Also on Wednesday (3 December), Diplo will be hosting an online webinar, ‘Gaming and Africa’s youth: Opportunities, challenges, and future pathways’. The session will explore how gaming can support education, mental health, and cross-border business opportunities, while addressing risks such as addiction and regulatory gaps. Participants will discuss policies, investment, and capacity-building strategies to ensure ethical and inclusive growth in Africa’s gaming sector.



READING CORNER

Dismissing AI in education is futile. How can we use technology to enhance, rather than replace, genuine learning and critical thinking skills?

Weekly #239 Digital drought: When the cloud goes offline


14-21 November 2025


HIGHLIGHT OF THE WEEK

Digital drought: When the cloud goes offline

On 18 November, Cloudflare — the invisible backbone behind millions of websites — went down in what the company calls its most serious outage since 2019. Users around the world saw internal-server-error messages as services like X and ChatGPT temporarily went offline.

The culprit was an internal misconfiguration. A routine permissions change in a ClickHouse database led to a malformed ‘feature file’ used by Cloudflare’s Bot Management tool. That file unexpectedly doubled in size and, when pushed across Cloudflare’s global network, exceeded built‑in limits — triggering cascading failures. 

As engineers rushed to isolate the bad file, traffic slowly returned. By mid‑afternoon, Cloudflare halted propagation, replaced the corrupted file, and rebooted key systems; full network recovery followed hours later.
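The failure pattern is a classic one: a consumer enforces a hard capacity limit on an internally generated file and fails outright, rather than degrading gracefully, when that limit is exceeded. A minimal sketch of this mechanism (hypothetical names and values; Cloudflare’s production systems are far more complex and not written in Python):

```python
# Illustrative sketch only: a loader enforces a hard limit on a generated
# 'feature file' and treats an oversized file as fatal, so every node that
# loads the same file fails the same way at the same time.
MAX_FEATURES = 200  # hypothetical hard-coded capacity limit


def load_feature_file(features: list[str]) -> list[str]:
    """Load bot-detection features, assuming the file never exceeds the limit."""
    if len(features) > MAX_FEATURES:
        # No graceful fallback: the oversized file crashes the consumer.
        raise RuntimeError(
            f"feature file too large: {len(features)} > {MAX_FEATURES}"
        )
    return features


# Normal operation: the generated file sits comfortably under the limit.
ok = load_feature_file([f"feature_{i}" for i in range(120)])

# An upstream database change duplicates rows, doubling the file's size.
# Every node now rejects it, turning a local glitch into a global outage.
doubled = [f"feature_{i}" for i in range(120)] * 2
try:
    load_feature_file(doubled)
    outage = False
except RuntimeError:
    outage = True
```

The broader lesson: configuration generated by trusted internal systems still needs validation and a safe fallback before being pushed fleet-wide, because a hard failure on shared input fails everywhere at once.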


The bigger picture. The incident is not isolated. Only last month, Microsoft Azure suffered a multi-hour outage that disrupted enterprise clients across Europe and the US, while Amazon Web Services (AWS) experienced intermittent downtime affecting streaming platforms and e-commerce sites. These events, combined with the Cloudflare blackout, underscore the fragility of global cloud infrastructure.

The outage comes at a politically sensitive moment in Europe’s cloud policy debate. Regulators in Brussels are already probing AWS and Microsoft Azure to determine whether they should be designated as ‘gatekeepers’ under the EU’s Digital Markets Act (DMA). These investigations aim to assess whether their dominance in cloud infrastructure gives them outsized control — even though, technically, they don’t meet the Act’s usual size thresholds. 

This recurring pattern highlights a major vulnerability in the modern internet, one born from an overreliance on a handful of critical providers. When one of these central pillars stumbles, whether from a misconfiguration, software bug, or regional issue, the effects ripple outward. The very concentration of services that enables efficiency and scale also creates single points of failure with cascading consequences.

IN OTHER NEWS LAST WEEK

This week in AI and data governance

Singapore. Singapore has launched a Global AI Assurance Sandbox, now open to companies worldwide that want to run real-world pilot tests of AI systems.

This sandbox is guided by 11 governance principles aligned with international standards — including NIST’s AI Risk Management Framework and ISO/IEC 42001. By doing this, Singapore hopes to bridge the gap between fragmented national AI regulations and build shared benchmarks for safety and trust. 

Russia. At Russia’s premier AI conference (AI Journey), President Vladimir Putin announced the formation of a national AI task force, framing it as essential for minimising dependence on foreign AI. The plan includes building data centres, some potentially powered by small-scale nuclear reactors, and using these to host generative AI models that protect national interests. Putin also argued that only domestically developed models should be used in sensitive sectors — like national security — to prevent data leakage.

The USA. The shadow of regulation-limiting politics looms large in the USA. Trump-aligned Republicans have again pushed for a moratorium on state-level AI regulation. The idea is to block states from passing their own AI laws, arguing that a fragmented regulatory landscape would hinder innovation. 

One version of the proposal would tie federal broadband funding to states’ willingness to forgo AI rules — effectively punishing any state that tries to regulate. Yet this pushback isn’t unopposed: more than 260 state lawmakers from across the US, Republican and Democrat alike, have decried the moratorium.

The EU. A big political storm is brewing in the EU. The European Commission has rolled out what it calls the Digital Omnibus, a package of proposals aimed at simplifying its digital lawbook — a move welcomed by some as needed to improve the competitiveness of the EU’s digital actors, and criticised by others over potentially negative implications in areas such as digital rights. The package consists of the Digital Omnibus Regulation Proposal and the Digital Omnibus on AI Regulation Proposal.

What’s making waves. The Digital Omnibus on AI Regulation Proposal delays the implementation of ‘high-risk’ rules under the EU’s AI Act until 2027, giving Big Tech more time before stricter oversight takes effect. The entry into force of high-risk AI rules will now align with the availability of support tools, giving companies up to 16 months to comply. SMEs and small mid-cap companies will benefit from simplified documentation, broader access to regulatory sandboxes, and centralised oversight of general-purpose AI systems through the AI Office.

Cybersecurity reporting is also being simplified with a single-entry interface for incidents under multiple laws, while privacy rules are being clarified to support innovation without weakening protections under the GDPR. Cookie rules will be modernised to reduce repetitive consent requests and allow users to manage preferences more efficiently.

Data access will be enhanced through the consolidation of EU data legislation via the Data Union Strategy, targeted exemptions for smaller companies, and new guidance on contractual compliance. The measures aim to unlock high-quality datasets for AI and strengthen Europe’s innovation potential, while saving businesses billions and improving regulatory clarity.

The Digital Omnibus Regulation Proposal has implications for data protection in the EU. Proposed changes to the General Data Protection Regulation (GDPR) would redefine personal data, weakening the safeguards on when companies can use it, especially for AI training. Meanwhile, cookie consent is being simplified into a ‘one click’ model that lasts up to six months.

Ring the alarm. Privacy and civil rights groups expressed concern that the proposed GDPR changes disproportionately benefit large technology firms. A coalition of 127 organisations has issued a public warning that this could become ‘the biggest rollback of digital fundamental rights in EU history.’ 

These proposals must go through the EU’s co-legislative process — Parliament and Council will debate, amend, and negotiate them. Given the controversy (support from industry, pushback from civil society), the final outcome could look very different from the Commission’s initial proposal.


Privacy in motion

In the EU, policymakers are making adjustments to improve the implementation of the General Data Protection Regulation (GDPR). The Council of the EU has adopted new measures to accelerate the handling of cross-border data protection complaints. Among the key changes is the introduction of harmonised criteria for determining whether a complaint is admissible, ensuring that citizens receive the same treatment no matter where they file a GDPR complaint. The rules also strengthen the rights of both complainants and companies under investigation, including clearer procedures for participation in the case and access to preliminary findings. To reduce administrative burdens, the regulation introduces a simplified cooperation procedure for straightforward cases, allowing authorities to close cases more quickly without relying on the full cooperation framework. 

India has begun implementing its new national data protection system, marking a significant step toward a more structured privacy framework. The Digital Personal Data Protection Act 2023 is now in force, following the approval of implementing rules that outline how the law will be applied. These rules establish initial institutional and procedural requirements, including the creation of a Data Protection Board, while giving organisations additional time to comply with other obligations such as consent management and breach reporting.

The common thread. Regulators in the EU and India are moving from writing the data protection rules to the rather complex task of implementing them.


Europe’s push for digital independence 

France and Germany jointly hosted the Summit on European Digital Sovereignty in Berlin to accelerate action on Europe’s digital independence. The two countries introduced a joint roadmap highlighting seven strategic priorities:

  1. Regulatory simplification: A more innovation-friendly EU framework, including a proposed 12-month postponement of AI Act high-risk requirements and targeted GDPR simplification.
  2. Fairer digital markets: Continued efforts to ensure contestable cloud and digital markets, including the European Commission’s market investigation into cloud hyperscalers.
  3. Data sovereignty: Stronger safeguards for sensitive data, protection from non-EU extraterritorial risks, and mandatory privacy-enhancing technologies aligned with cybersecurity rules.
  4. Digital commons: Advancement of the Digital Commons-EDIC initiative with partner countries.
  5. Digital public infrastructure & open source: Support for the European Digital Identity Wallet and wider deployment of open-source tools in public administrations.
  6. Digital Sovereignty Task Force: A new Franco-German body to define sovereignty indicators and propose concrete policy measures by 2026.
  7. Frontier AI: Actions to create a world-leading European environment for breakthrough AI innovation.

The summit also served as a catalyst for major private-sector commitments, with more than €12 billion in investment pledged toward key digital technologies.

German Chancellor Friedrich Merz called the summit ‘an important milestone’ toward a more sovereign and competitive digital Europe, while French President Emmanuel Macron emphasised a ‘historic convergence’ of European digital ambition.

A major development accompanying the summit was the launch of the European Network for Technological Resilience and Sovereignty (ETRS). This new coalition of leading think tanks and experts aims to enhance Europe’s capacity for innovation and reduce its reliance on foreign technologies. Founding members include Bertelsmann Stiftung (Germany), CEPS (Belgium), the AI & Society Institute (France), and the Polish Economic Institute.

The ETRS will act as a shared knowledge engine connecting academia, civil society, industry, and public institutions to support evidence-driven policymaking. From 2026 onward, it will launch expert workshops, strategic mapping of technology dependencies, and an international pool of specialists focused on digital sovereignty. The network is open to new participants, with more than a dozen already joining.

The gist of it. ‘Europe currently relies on the United States and China for more than 80 percent of its critical digital technologies, ranging from cloud computing and artificial intelligence to semiconductors,’ ETRS noted in a statement. That figure illustrates why the EU is taking this approach. For years, the EU has been a world leader in rule-making — but it has also been playing a high-stakes game of catch-up in developing the technologies it regulates. Now, the bloc is aiming to become a master of both.


Cotonou Declaration sets ambitious 2030 digital goals for West and Central Africa

West and Central African digital economy ministers have launched an ambitious initiative for digital transformation with the Cotonou Declaration, adopted at a regional summit in Cotonou, Benin, on 17–18 November 2025. The Declaration sets targets for 2030, including a Single African Digital Market, 90% broadband coverage, interoperable digital infrastructures like IDs and payments, doubled intra-African e-commerce, and harmonised frameworks for cybersecurity, data governance, and AI.

Human capital is central: 20 million people are expected to gain basic digital skills, and two million new digital jobs or entrepreneurial opportunities will be created, especially for youth and women. Ministers also pledged to strengthen innovation ecosystems and promote African-led AI, building regional cloud and data infrastructure to drive economic growth.

Investment coordination will be facilitated through national digital compacts aligning reforms, funding, and partnerships.


Dutch retreat calms Nexperia chip dispute with China

The Dutch government has suspended its takeover of Nexperia, a Netherlands-based chipmaker owned by China’s Wingtech, following constructive talks with Chinese authorities.

Dutch Economy Minister Vincent Karremans stated that the pause in the takeover is intended as a goodwill gesture, and that dialogue with China will continue — the decision was made in consultation with the EU and international partners.

The EU’s trade chief, Maroš Šefčovič, welcomed the move, saying it could help stabilise chip supply chains.

China has also begun releasing stockpiled chips to ease the shortage.

The heart of the matter. The episode underscored the fragility of global semiconductor supply chains.


LAST WEEK IN GENEVA

The UN Commission on Science and Technology for Development (CSTD) held its 2025–2026 inter-sessional panel on 17 November at the Palais des Nations in Geneva. The agenda focused on science, technology and innovation in the age of AI, with expert contributions from academia, international organisations, and the private sector. Delegations also reviewed progress on WSIS implementation ahead of the WSIS+20 process, and received updates on the implementation of the Global Digital Compact (GDC) and ongoing data governance work within the dedicated CSTD working group. The findings and recommendations of the panel will be considered at the twenty-ninth session of the Commission in 2026.

The CSTD’s multi-stakeholder working group on data governance at all levels met for the fourth time from 18 to 19 November. Delegates reviewed recent inputs, discussed principles, interoperability, benefit-sharing, and secure data flows, and planned next steps for the Working Group’s progress report to the UN General Assembly.

LOOKING AHEAD

The G20 Leaders’ Summit is scheduled for 22 and 23 November in Johannesburg, South Africa. This annual meeting brings together the heads of state and government from the 19 member countries, plus the African Union and the EU. Expected on the agenda: AI, data governance, and critical minerals. Not expected: the USA’s attendance.

The second half of ITU’s World Telecommunication Development Conference 2025 (WTDC‑25) in Baku, Azerbaijan, will be held next week. The conference brings together governments, regulators, industry, civil society, and other stakeholders to discuss strategies for universal, meaningful, and affordable connectivity. Participants are also reviewing policy frameworks, investment priorities, and digital development initiatives. They will also adopt a Declaration, Strategic Plan, and Action Plan to guide global ICT development through 2029.

On 24 November, UNIDIR will host its Innovations Dialogue on neurotechnologies and their implications for international peace and security in Geneva and online. Experts from neuroscience, law, ethics, and security policy will discuss developments such as brain-computer interfaces and cognitive enhancement tools, exploring both their potential applications and the challenges they present, including ethical and security considerations. The event includes a poster exhibition on responsible use and governance approaches.

Several national and regional IGFs will also be held next week: the Polish IGF (24-25 November), the North African IGF (24-26 November), and the Nigerian IGF (24-27 November).

DiploFoundation is inviting civil society organisations to apply by 30 November 2025 for the CADE Capacity Development Programme 2025–2026. The programme helps CSOs strengthen their role in digital governance through a mix of technical courses, diplomatic skills training, and expert guidance. Participants can specialise in AI, cybersecurity, or infrastructure policy, receive on-demand helpdesk support, and the most engaged will join a study visit to Geneva. Fully funded by the EU, the programme offers full scholarships to selected organisations, with a special welcome to those from the Global South and women-led groups.



READING CORNER

ITU’s Measuring digital development: Facts and Figures 2025 offers a snapshot of the most important ICT indicators, including estimates for the current year.


EU data protection law and competitiveness are struggling to work cohesively, according to senior Union leaders. That is why the GDPR and the ePD are next on the Digital Omnibus’ chopping block.


Is AI destroying the value of learning? From elementary classrooms to universities, educators face a new reality: students using AI to bypass critical thinking. Read Part 1 of this series


The alarmism that followed the 1972 report ‘The Limits to Growth’, which predicted civilisation’s collapse by c. 2040, pushed policy towards centralisation, token gestures, and factional debate. Aldo Matteucci analyses.