Weekly #261 Pulling at the threads of AI governance


1 – 8 May 2026


HIGHLIGHT OF THE WEEK

Pulling at the threads of AI governance 

AI governance increasingly resembles a tangled ball of yarn: regulation, cybersecurity, infrastructure, labour markets, semiconductors, and geopolitics pulling on one another simultaneously.

There is an older word for such a ball of thread: clew. Historically, a clew was not just yarn, but a guide through a maze — the thread used in mythology to navigate complexity and find a way out. It is also the root of the modern word clue.

This week in AI governance felt like standing at the entrance to a maze with a clew in hand, faced with multiple threads.


Europe’s AI Act. Europe’s AI rulebook edged forward this week after negotiators reached a provisional agreement on the latest phase of the EU AI Act omnibus discussions, which aims to simplify parts of the Union’s digital rulebook and ease implementation burdens. The provisional agreement sets new application dates of 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products.

The agreement also extends certain simplification measures beyond SMEs to small mid-caps, while keeping some safeguards, and reinforces the AI Office’s powers. The text must still be endorsed by both the Council and the European Parliament before formal adoption.

National AI strategies. National AI strategies are also becoming more assertive — and more tied to economic sovereignty. Papua New Guinea has outlined a national approach to AI focused on data sovereignty, trusted public infrastructure, and new legislation, underpinned by four elements: strengthening existing digital foundations such as SevisPass and SevisDEx; establishing a National AI Register; adopting sovereign data governance; and introducing new laws, including a National Artificial Intelligence Act and a Data Governance and Protection Act. Kazakhstan reviewed proposals to expand AI deployment across all sectors as part of its digital transformation agenda. Canada moved to strengthen domestic photonic semiconductor and AI capabilities by spinning off the National Research Council of Canada (NRC)’s Canadian Photonics Fabrication Centre (CPFC) into a commercial entity. The UAE launched a national AI security laboratory focused on certification and cyber resilience.

These initiatives vary widely in ambition and capacity, but they share a common premise: AI infrastructure, data, and chips are now viewed as strategic assets.

AI diplomacy. At the same time, AI diplomacy is accelerating. Australia and Japan expanded cooperation on economic security and critical technologies. The EU and Japan advanced joint work on AI governance and cross-border data flows. South Korea and the Netherlands discussed semiconductor and AI cooperation. India and France have discussed expanding cooperation in space, AI, applied mathematics and advanced technologies. Norway joined the Pax Silica initiative, which focuses on securing semiconductor supply chains.

Even geopolitical rivals appear to be cautiously reopening channels on AI governance. According to reports, the USA and China are considering launching formal discussions on AI, signalling recognition that some degree of coordination may become unavoidable as frontier systems grow more capable and globally consequential. 

Cybersecurity. The security picture darkened this week, with multiple warnings issued in quick succession.

The UK’s National Cyber Security Centre warned that AI systems could dramatically accelerate the discovery and exploitation of software vulnerabilities, compressing the time between disclosure and attack. Separate guidance from the NCSC examined the growing risks posed by adversarial machine learning attacks, including model manipulation, prompt injection, and data poisoning techniques designed to undermine AI systems themselves. 

Swisscom similarly warned that AI and geopolitical tensions are reshaping the cyber threat landscape, with automation, influence operations, and AI-enhanced cyber capabilities becoming increasingly intertwined. 

The Australian Securities and Investments Commission (ASIC) has urged regulated entities to strengthen cyber resilience, warning that frontier AI could intensify cyber risks by exposing vulnerabilities at greater speed, scale and sophistication. ASIC said licensees and market participants should act now to improve their cybersecurity fundamentals rather than wait as advanced AI tools reshape the threat environment.

In the USA, Microsoft, Google, and xAI agreed to provide advanced AI models for government-led security stress testing. The initiative is designed to support pre-deployment evaluations and targeted research intended to improve understanding of frontier AI capabilities and their national security implications. 

Other threads. Canada found OpenAI non-compliant in a privacy probe — scraping public data for training was overbroad and lacked consent. Meta fought an EU order in a closed hearing, seeking to avoid letting rival AI chatbots onto WhatsApp for free.

The user’s POV. For compliance teams, the EU deal offers breathing room, but not a free pass — the transparency deadlines actually tightened. For security professionals, the NCSC warnings are a call to audit ML pipelines now, not later. For everyone watching geopolitics, the US-China AI talks would be the first real signal that both capitals see cooperation as necessary. The question is whether they can agree on anything beyond the need to talk. 

The bottom line. The challenge for governments, companies, and users alike is no longer simply building AI systems. It is learning which strands matter, which knots are tightening, and which clews still lead out of the maze. 

IN OTHER NEWS LAST WEEK

Meta on trial(s)

Meta Platforms is facing growing legal and regulatory pressure both in the USA and Europe over claims that its social media platforms contribute to youth addiction and mental health problems. 

In New Mexico, the state is seeking $3.7 billion and asking the court to declare Meta a public nuisance. The lawsuit alleges that Facebook, Instagram, and WhatsApp were designed in ways that encourage addictive behaviour among minors. It also claims that these platforms failed to adequately protect young users from harmful content and exploitation. The state is requesting major changes to the platforms, including age verification and restrictions on features such as autoplay and infinite scroll for minors. Meta claims the case concerns individual users, rather than harm to the public as a whole.

Meta is also attempting to overturn a California jury verdict that found the company negligent in the design of its platforms and awarded damages to a young plaintiff who claimed that social media use contributed to her depression. Meta argues that the claims are barred by Section 230 of the Communications Decency Act and that the alleged harms were connected to online content rather than the platforms’ design features.

Why does it matter? Both cases are considered important because they may influence many similar lawsuits currently pending against social media companies.


Dutch court backs DigiD contract despite US data access fears

The District Court of The Hague has rejected an attempt by three Dutch citizens to block the government from renewing its contract with Solvinity, the company responsible for hosting and technically managing systems linked to DigiD.

The plaintiffs argued that Solvinity’s planned acquisition by US-based IT provider Kyndryl could place sensitive data from more than 16 million DigiD users under US jurisdiction, potentially exposing it to US authorities and creating risks to critical public services such as healthcare, pensions, taxes, and unemployment systems.

Despite these concerns, the court ruled in favour of the Dutch State, allowing the agreement to proceed. Judges did not accept arguments that the deal would immediately threaten data security or justify halting the contract.

What’s next? The decision leaves further scrutiny to the Investment Assessment Office, which is reviewing national security risks linked to the acquisition. 

Why does it matter? The case highlights ongoing tensions around digital sovereignty and data protection in the Netherlands.


End-to-end encrypted messaging on Instagram ends as wider encryption battles grow

As of 8 May, end-to-end encrypted messaging on Instagram is officially over. Meta has switched off the feature globally, abandoning plans to expand the privacy technology across the platform after years of promoting encrypted communication as the future of messaging. 

At the same time, Apple and Meta are opposing Canada’s proposed Bill C-22, which they say could force companies to weaken encryption or build government-access mechanisms into their products. Canadian authorities argue the bill would help law enforcement respond more quickly to security threats.  

Why does it matter? End-to-end encryption is widely seen as a core privacy protection because it limits access to message content, including by the platform itself. This week’s developments underline the questions about how major platforms prioritise privacy features, user safety, product complexity and interoperability across their messaging services.



LAST WEEK IN GENEVA

WTO resumes talks as 19 members back e-commerce moratorium pledge

The WTO’s General Council has met in Geneva for the first time since the 14th Ministerial Conference (MC14), after negotiators in Yaoundé narrowly missed agreements on several major files, including the future of the long-running e-commerce moratorium and broader WTO reform.

The newly elected chair, Ambassador Clare Kelly, said members remained committed to preserving the careful balance reached during negotiations in Cameroon and avoiding a return to earlier positions. 

The discussions follow the expiry on 31 March of the WTO moratorium on customs duties on electronic transmissions, a temporary arrangement first adopted in 1998 that prevented countries from imposing tariffs on digital trade flows such as software downloads, streaming services, and other online transmissions. Ministers at MC14 failed to agree on another extension, exposing deep divisions over how long the moratorium should continue and how governments should respond to rapid technological developments such as AI and 3D printing.

Despite the lapse, a group of 19 WTO members announced they would continue not imposing customs duties on electronic transmissions among themselves. In a joint statement circulated at the WTO, the group said the arrangement would provide predictability and certainty for businesses and consumers while multilateral negotiations continue. The group includes Argentina, Australia, Costa Rica, Ecuador, Guatemala, Iceland, Israel, Japan, South Korea, Malaysia, Mexico, New Zealand, Norway, Panama, Paraguay, Singapore, Separate Customs Territory of Taiwan, Penghu, Kinmen and Matsu, the USA, and Uruguay.

Türkiye also signalled new flexibility during the General Council meeting, announcing it would not block consensus on a temporary extension of the moratorium. However, Brazil maintained its opposition to a four-year extension of the moratorium.

What’s next? The chair announced further consultations on e-commerce and WTO reform, with plans to report back to members in July.

Geneva Cyber Week 2026

The UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA) are co-hosting Geneva Cyber Week from 4 to 8 May 2026, bringing policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives to venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.

Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change. The programme features nearly 90 events and reinforces Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance.

As part of Geneva Cyber Week, UNIDIR organised the Cyber Stability Conference 2026, on 4–5 May in Geneva and online, bringing together governments, international organisations, industry, academia, and civil society to discuss ICT security and cyber governance. Under the theme ‘Cyber governance in an era of technological revolution: Past lessons, present realities and future frontiers,’ discussions explored how international cyber stability frameworks are adapting to rapid technological change, including AI and quantum computing, while reflecting on lessons from past cyber diplomacy processes and current security challenges.

Multi-year expert meeting on investment, innovation and entrepreneurship for productive capacity-building and sustainable development, 12th session

UNCTAD’s Multi-year Expert Meeting on Investment, Innovation and Entrepreneurship for Productive Capacity-building and Sustainable Development met for its twelfth session on 4–5 May. The experts warned that AI and other strategic technologies are reshaping global investment patterns, concentrating capital in a handful of sectors and countries while leaving many developing economies behind. Discussions at the meeting focused on how developing countries can compete in AI-related sectors, strengthen domestic innovation ecosystems, and ensure that AI-driven investment translates into broader development gains.

OSCE Conference ‘Anticipating technologies – for a safe and humane future’ opens in Geneva

The Swiss Chairpersonship of the Organization for Security and Co-operation in Europe opened a two-day high-level conference on anticipatory technologies in Geneva on 7 May. The event is examining how foresight, dialogue, and international cooperation can help reduce misunderstandings, build trust, and strengthen security across the OSCE region amid rapid technological change. 

The programme includes discussions on anticipating technological change and its geopolitical impact, water and energy security in the digital age, and the role of AI in early warning and conflict prevention. 

The conference also highlights Geneva’s role as a meeting point for science and diplomacy, including through institutions such as CERN, the Geneva Science and Diplomacy Anticipator, and the Open Quantum Institute.

The event forms part of the Chairpersonship’s priority to connect scientific and technological anticipation with policy action.


READING CORNER

Quantum research access, UNESCO says, remains concentrated in wealthier economies.


Public sector transformation requires more than technology, demanding systemic reform in governance, procurement, and talent to meet citizens’ real needs.


Global labour markets face rapid transformation as the ILO highlights growing AI and skills challenges.


OPPORTUNITY

Opportunity: Become a Knowledge Fellow

Diplo is pleased to launch a new call for applications for Digital Watch Knowledge Fellows (2026), the team of collaborators behind the Digital Watch Observatory (DW). Knowledge Fellows (KF) are central to the observatory’s ability to provide comprehensive, accurate, and up-to-date coverage of specific areas of digital governance. More details on what we are looking for and what we offer in return are available here. Interested applicants are invited to apply by 31 May 2026.


Opportunity: Become a Knowledge Fellow

DiploFoundation is pleased to launch a new call for applications for Digital Watch Knowledge Fellows (2026).

What is the Digital Watch Observatory?

The Digital Watch Observatory (DW) is a comprehensive observatory and one-stop-shop source of information on digital governance. It tracks the latest developments, provides policy overviews and analysis, and curates information on key topics, technologies, processes, policy players, events, and resources.

DW is designed for diplomats, policymakers, researchers, civil society actors, business representatives, and other stakeholders who need reliable, structured, impartial, and up-to-date information on digital governance issues. 

Its content is organised around:

  • Topics, from cybercrime and freedom of expression, to data governance and critical infrastructure.
  • Technologies such as artificial intelligence, quantum computing, and semiconductors.
  • Processes including the UN Global Mechanism on ICT Security, the Internet Governance Forum, the Global Digital Compact process, and more.
  • Policy players such as countries, technical entities, business associations, UN entities, and other international and regional organisations.
  • Resources, including conventions, resolutions, laws and regulations, reports, and more.
  • Events, such as meetings, negotiations, conferences, and consultations.

This structure is complemented by:

Daily updates, regular analyses, and weekly and monthly newsletters that track and explain the most relevant developments across the digital governance landscape. 

What is the role of a Knowledge Fellow?

Knowledge Fellows (KF) are central to the observatory’s ability to provide comprehensive, accurate, and up-to-date coverage of specific areas of digital governance. 

Each KF is expected to cover one or more areas of expertise and help ensure that DW remains accurate, relevant, complete, and impartial. This means:

  • Monitoring and analysing developments related to the assigned area(s) of expertise and ensuring these are reflected in daily updates and regular analyses.
  • Keeping assigned DW pages accurate, up-to-date, and substantively strong.
  • Tracking events relevant to their area(s) of expertise and helping ensure that important meetings, negotiations and discussions are reflected in DW. 
  • Identifying key resources relevant to their area(s) of expertise such as UN resolutions and other intergovernmentally agreed documents, laws, regulations, reports, and policy papers.
  • Supporting stronger coverage of organisations, countries, and other key actors in digital governance. 
  • Contributing, when relevant, to newsletters, policy and research papers, and other knowledge products.

Knowledge Fellows may also have opportunities to contribute to Diplo’s wider knowledge ecology, including courses, discussions, and thematic initiatives.

Who should apply?

At a time when the public space is abundant with AI-generated content, we are looking for more than just someone who can use AI to summarise news or rewrite online resources. 

KF will have access to custom-made AI tools to support them in their work, but the role requires subject expertise, critical judgement, and the ability to identify what is important, what is missing, and what deserves deeper analysis.

Specifically, we are looking for applicants who:

  • Have strong expertise in digital governance, grounded in professional experience, academic research, policy engagement, or a combination of these.
  • Are interested in continuing to develop this expertise. 
  • Know where to look and what to look for in order to ensure comprehensive coverage of assigned topics, technologies, processes, etc.
  • Can identify major developments, policy controversies, key debates, and emerging trends in the digital governance landscape, and cover them accurately and impartially.

This means combining subject expertise with editorial judgement, policy awareness, and a strong sense of knowledge curation.

Applicants must also have: 

  • Availability to contribute on a regular basis. The fellowship is conducted online, with an expected commitment of at least 8 hours per week.
  • Strong analytical and writing skills in English.
  • Basic skills in using web and social media, as well as familiarity with generative AI tools.

What we offer

Digital Watch Knowledge Fellows will benefit from:

  • Onboarding and guidance on Digital Watch’s editorial and curation approach.
  • Training on observatory workflows and digital/AI tools.
  • Remuneration.
  • Visibility for their work among DW users (diplomatic communities in Geneva and other diplomatic centres, professionals from across all stakeholder groups dealing with digital topics, etc.).
  • Opportunities to promote their digital governance-related research through DW and Diplo networks.
  • Membership in a global community of experts and professionals working on digital governance.

Fellows are engaged on a consultancy/fee basis; the role does not constitute employment with DiploFoundation.

How to apply

Interested applicants are invited to complete the application form below.


Application deadline: 31 May 2026

Questions: dwapplications@diplomacy.edu


Digital Watch newsletter – Issue 109 – April 2026

April 2026 in retrospect

Dear readers,

Visionary or outlandish statements about the future are a feature of tech industry discourse. But the rapid acceleration of generative AI seems to have shortened the timeline for many of these claims. April brought another wave of high-profile predictions. While some might be tempted to dismiss them as mere hype, there’s a strong reason to assess them: the quiet danger of designing the future without meaningful public input.

South Africa unveiled its first draft national AI policy and was quickly forced to withdraw it after reviewers found a critical flaw: it was riddled with fake sources and non-existent citations, likely generated by AI. This incident is rather illustrative of a broader problem with AI-generated laws. In this issue of the newsletter, we examine how to prevent fake laws governing real life.

In early April, Anthropic announced Claude Mythos Preview, its most capable AI model to date, alongside the explicit decision not to make it publicly available. We look at the model’s capabilities, the reasons behind restricting access to the model, and the governance questions it has raised.

We invite interested readers to join our team of Knowledge Fellows. Knowledge Fellows are central to the observatory’s ability to provide comprehensive, accurate, and up-to-date coverage of specific areas of digital governance. More details on what we are looking for and what we offer in return are available in the newsletter.

Plus: April’s top digital policy developments and a Geneva wrap-up.

Technologies

The EU and the USA have launched a coordinated framework to strengthen resilience in critical minerals supply chains, combining a strategic Memorandum of Understanding (MoU) with an Action Plan. The partnership aims to secure diversified and sustainable supply chains through joint project development in the EU, the USA, and third countries, supported by coordinated investment tools, risk reduction mechanisms, and improved business linkages.

Canada and Finland have set out a new agenda for cooperation on sovereign technology and AI, positioning advanced digital capabilities as central to economic resilience, security, and strategic autonomy in a contested global environment. Announced after talks in Ottawa, the agenda spans AI adoption across government and industry, high-performance computing, telecommunications, AI gigafactories (including support for Nokia’s AI gigafactory), quantum research, critical minerals, and trusted supply chains. Both countries plan to deepen coordination on sovereign AI infrastructure, reduce technological dependencies, support small and medium-sized enterprises, and expand telecom opportunities through initiatives such as the Global Coalition on Telecommunications.

Canada is increasing support for its quantum research ecosystem through new funding announced by the Natural Sciences and Engineering Research Council of Canada, aiming to strengthen the country’s scientific capacity, innovation base, and long-term leadership in a strategically important field. The initiative will back researchers, projects, and cross-institutional collaboration, advancing both fundamental science and applied development while helping translate quantum research into practical technological progress.

The UK government has identified six frontier technologies – AI, cybersecurity, advanced connectivity, engineering biology, quantum technologies, and semiconductors – as the pillars of its 2025 Modern Industrial Strategy and Digital and Technologies Sector Plan, aiming to strengthen digital capability, economic growth, national resilience, and long-term competitiveness. The agenda prioritises investment in next-generation telecoms, including 5G and future 6G, alongside expanded compute capacity, supercomputing infrastructure, and workforce development to reinforce the UK’s position as a leading European AI hub.

Australian researchers have used a Wikipedia-based AI model to identify 100 emerging technologies gaining momentum ahead of 2026, offering a data-driven alternative to traditional forecasting methods often shaped by expert judgement. Drawing on thousands of Wikipedia entries, the analysis mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, led by reinforcement learning and followed by blockchain, 3D printing, soft robotics, augmented reality, and other fast-developing fields.

Infrastructure

European technology providers Cubbit, SUSE, Elemento, and StorPool Storage have launched a joint Disaster Recovery Pack to help organisations maintain data access and operational continuity during disruptions caused by external technology dependencies. Presented at the European Data Summit in Berlin, the solution combines storage, compute, orchestration, networking, identity, observability, and management into a single deployable cloud software stack designed to reduce fragmentation and simplify recovery planning. By enabling critical workloads to be transferred to European-based infrastructure with limited disruption, the initiative seeks to meet practical disaster recovery needs while supporting wider efforts to reduce reliance on non-European cloud providers.

A new report, citing research by the Brussels-based Future of Technology Institute, warns that most EU defence agencies remain heavily dependent on US cloud and technology providers, raising concerns over exposure to a potential ‘kill switch’ scenario in which critical services could be restricted or disabled during political or strategic tensions. Open contracting data reviewed by the institute suggests that 23 of 28 EU and UK countries rely on US firms, either directly or through EU suppliers using American cloud infrastructure, with 16 countries classified as high risk, including Germany, Finland, Poland, Denmark, Estonia, and the UK. Google Cloud, Microsoft, and Oracle are described as dominant providers in sensitive defence systems, while Austria is presented as a lower-risk case due to apparent reliance on sovereign alternatives.

Panthalassa has raised $140 million in Series B funding, led by Peter Thiel, to develop offshore systems that harness ocean wave energy to power AI computing as demand for data centre capacity accelerates. The company plans to build wave-powered nodes that generate electricity at sea, run AI computing on board, and transmit data through low-Earth-orbit satellites, offering a potential response to land-based data centres’ growing constraints on power supply, cooling, and infrastructure.

Security

By pairing AI-driven discovery with industry coordination, $100 million in usage credits, and funding for open-source security, the project Glasswing brings together major technology, cybersecurity, finance, and open-source actors, including AWS, Apple, Google, Microsoft, NVIDIA, Cisco, CrowdStrike, JPMorgan Chase, and the Linux Foundation, in a coordinated effort to use advanced AI to defend critical software infrastructure. Led by Anthropic’s Claude Mythos Preview model, the initiative aims to detect complex vulnerabilities at scale, with early findings uncovering thousands of previously unknown flaws across operating systems, browsers, and core digital infrastructure, some of which had remained hidden for decades. 

A joint CISA advisory warns that Iranian-affiliated cyber actors are targeting internet-facing programmable logic controllers across US critical infrastructure, including Rockwell Automation and Allen-Bradley CompactLogix and Micro850 devices used in government, water, energy, and industrial systems. Active since at least March 2026, the campaign has disrupted PLC functions, manipulated project files, and altered HMI and SCADA displays, causing operational and financial damage.

Canada has introduced Level 1 of the Canadian Program for Cyber Security Certification, setting a baseline of cyber requirements for suppliers working on defence contracts as cyber threats increasingly target contractors, sensitive data, and critical supply chains. Phased implementation will begin in summer 2026, with certification required at the contract award stage, giving industry time to adapt while strengthening procurement trust and operational readiness.

Europol’s 2026 Internet Organised Crime Threat Assessment warns that the EU’s cybercrime landscape is becoming more complex, industrialised, and difficult to disrupt as criminal networks exploit encryption, proxies, fragmented online spaces, and AI-enabled tools. The report identifies cybercrime enablers, online fraud, cyber-attacks, and online child sexual exploitation as major areas of concern, with AI making scams, deception, and abuse more scalable and convincing.

Norway has announced plans to introduce a ban on social media use for children under 16, placing responsibility for age verification on technology companies.

Greece is moving to tighten restrictions on minors’ use of social media, with legislation expected later this year that would introduce a ban for children under 15. The measure is set to take effect on 1 January 2027 and is intended as a framework that changes how platforms operate. Platforms would be required to implement robust age verification mechanisms, including the re-verification of existing accounts, with oversight provided by national regulators such as the Hellenic Telecommunications and Post Commission (EETT).

French President Emmanuel Macron is convening EU leaders, including Spanish Prime Minister Pedro Sanchez and representatives of Italy, the Netherlands and Ireland, to align national approaches to restricting minors’ access to social media and to press for faster EU-level action.

The UK’s Children’s Wellbeing and Schools Bill is set to expand ministers’ powers to shape how online services protect children, including by restricting access to risky platforms, features, or functions and by targeting design elements such as contact settings, live communication, location visibility, and time spent online. The draft would also bring Ofcom into a stronger advisory role, introduce a six-month timeline for regulations or a progress update, and give ministers new authority over children’s data consent, age assurance, and enforcement. The regulatory package remains unsettled for now, with Parliament still negotiating key provisions and no final law yet in place.

The European Commission has developed a standardised age-verification app intended to work across member states. The app allows users to confirm they meet age requirements to access social media platforms by providing their passport or ID number. It is designed to integrate into national digital wallets or operate as a standalone app, with a coordinated EU framework to ensure interoperability and avoid fragmented national systems. The app is open source and available for both public and private implementation, but is subject to common technical and privacy requirements. The Commission plans to establish an EU-level coordination mechanism to oversee rollout, accreditation, and cross-border usability.

The rollout has faced scrutiny from security researchers. Reported weaknesses include locally stored authentication data that can be reset or modified, allowing users to bypass PIN protections, disable biometric checks, and reset rate-limiting mechanisms by editing configuration files. This effectively enables the reuse of verified identity data under altered access controls. The criticism has triggered broader concerns among developers about the app’s architecture, including why secure hardware features were not used and whether elements such as expiring age credentials are logically necessary.

The European Commission has also recently taken preliminary action against Meta, finding that Facebook and Instagram have not effectively prevented users under 13 from accessing their services, largely because age checks can be bypassed with false birthdates and weak verification systems. 

Australia’s child-safety push is widening from social media to gaming, as regulators intensify scrutiny of how platforms protect minors from harm. On 21 April, the eSafety Commissioner issued legally enforceable transparency notices to Roblox, Minecraft, Fortnite and Steam, demanding details on how they handle risks, including child sexual exploitation, cyberbullying, hate and extremist material on services widely used by children. 

The UK Information Commissioner’s Office has launched a campaign to help parents and carers speak with primary school-aged children about online privacy, after research found that many children are sharing personal details online, while families often feel unsure how to respond. The ICO says 24% of children have shared their real name or address online, 22% have disclosed information such as health details to AI tools, and 21% of parents have never discussed online privacy with them.

Economic

The European Commission has issued a supplementary charge sheet to Meta (known as a Supplementary Statement of Objections), outlining concerns over potential restrictions on third-party AI assistants’ access to WhatsApp. Meta had previously decided to reinstate access to WhatsApp for third-party AI assistants for a fee. However, the Commission has preliminarily found that these measures remain anticompetitive and has now issued interim measures to prevent the policy changes from causing serious harm to the market. The interim measures will stay in effect until the Commission concludes its investigation and issues a final decision on Meta’s conduct.

UNCTAD reports that global trade grew by $2.5 trillion in 2025 to reach $35 trillion, reflecting continued expansion in goods and services but also a more fragile and uneven economic landscape. Rising geopolitical tensions, disrupted shipping routes, conflicts in the Middle East, and instability in key maritime corridors are driving up energy, transport, and import costs, placing heavier pressure on developing economies with limited fiscal space. Services growth has slowed, while much of the recent trade increase stems from higher prices rather than stronger volumes. East Asia and Africa remain important drivers through South–South trade and shifting supply chains, yet fragmentation, US–China decoupling, inflation, debt, and protectionism are expected to weigh on 2026 prospects.

The International Labour Organisation warns that social protection systems are failing to keep pace with fast-changing labour markets shaped by climate change, technological disruption, demographic shifts, and evolving forms of work. Its new report highlights major gaps in coverage, adequacy, and financing, leaving many workers exposed during unemployment, illness, retirement, or job transitions.

Russia is moving to criminalise large-scale unauthorised cryptocurrency activity, after a government legislative commission approved amendments that create prison sentences for organising the circulation of digital currency without central bank authorisation. The proposed Article 171.7 of the Criminal Code would punish cases involving significant harm, major illicit income, or damage to individuals, organisations, or the state with a sentence of 4 to 7 years in prison. Expected to take effect on 1 July 2027, the measure marks a sharper enforcement turn in Russia’s digital asset sector.

The European Commission has updated its technology transfer competition rules to better reflect data-driven innovation, digital markets, and modern licensing practices across the EU. The revised framework clarifies how companies can license patents, software, know-how, and data-related technologies while staying within competition law, aiming to protect collaboration and legal certainty without allowing agreements that restrict market access or innovation. Greater attention is given to digital ecosystems, standard-essential technologies, and licensing arrangements that may shape control over data, interoperability, and downstream competition.

Canada has announced C$23.8 million for the Digital Skills for Youth programme, aiming to help young people gain practical experience as AI, cybersecurity, big data, automation, and broader digital transformation reshape the labour market. Led by Industry Minister Mélanie Joly, the two-year initiative will fund training and work placements for post-secondary graduates by linking them with employers across emerging technology sectors. Eligible recipients include businesses, non-profits, public institutions, Indigenous organisations, and provincial or territorial bodies, with flexible access for participants in Yukon, the Northwest Territories, and Nunavut.

Human rights

Brazil has inaugurated its first Center for Access, Research and Innovation in Assistive Technology (Capta) at the Benjamin Constant Institute in Rio de Janeiro. Run by the Ministry of Science, Technology and Innovation (MCTI) under the National Plan for the Rights of People with Disabilities, the centre aims to foster the development, experimentation, and dissemination of assistive technologies that enhance autonomy, inclusion, and quality of life for people with disabilities. The launch marks the first of several planned centres nationwide to expand access to these technologies.

UNESCO warns that students with disabilities continue to face deep barriers in education, including inaccessible infrastructure, limited assistive technologies, insufficient teacher training, stigma, and weak data systems that leave many learners invisible in policy planning. Its findings show that exclusion often begins early and is reinforced by poverty, gender inequality, displacement, and other overlapping disadvantages, limiting access to quality learning and future opportunities. UNESCO urges governments to move beyond narrow inclusion measures by investing in accessible schools, inclusive curricula, trained educators, reliable data, and meaningful participation by persons with disabilities.

The Philippines and Bermuda have signed a memorandum of understanding to strengthen cross-border cooperation on personal data protection, linking the Philippines’ National Privacy Commission with Bermuda’s Office of the Privacy Commissioner. The agreement enables information sharing, mutual assistance in investigations, and closer coordination on data breach cases that cross jurisdictions. Beyond enforcement, the partnership supports compatible data protection mechanisms, certification frameworks, trusted data flows, training, and knowledge exchange on emerging privacy challenges.

Legal

A unanimous US Supreme Court ruling has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement. Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement. 

French authorities have summoned Elon Musk and former X chief Linda Yaccarino to give voluntary interviews in relation to a criminal investigation into whether X enabled the spread of child sexual abuse material, AI-generated deepfakes, Holocaust denial content, and other harmful or unlawful material. Musk, however, appears to have refused, failing to show up. The confrontation widened when reports emerged that the US Justice Department had declined to assist the French inquiry, arguing that the case risked crossing into the regulation of protected speech and that it would unfairly target a US company. French authorities, for their part, have framed the matter as a legitimate enforcement action under national law.

In the federal multidistrict litigation (MDL) pending in the Northern District of California involving Meta, Google (YouTube), ByteDance (TikTok), and Snap Inc., the court denied motions to dismiss filed by several school districts. That moves the case out of the pleading stage and into bellwether proceedings, where selected cases will test core liability and damages theories. The plaintiffs’ main argument is product design-based. They claim the platforms were engineered to maximise engagement among minors despite internal awareness of mental health risks. They link this to reported increases in anxiety, depression, and behavioural disruption in school environments. The causal chain is disputed, but that is the core theory being advanced. The MDL is large, with over 2,300 related actions across six states, making it one of the more significant litigations in this area. The upcoming June bellwether trial is expected to be the first real test of these claims and will likely influence both settlement pressure and the broader direction of the MDL.

Raine v. OpenAI is proceeding as a standalone case in California, not part of any MDL. The complaint alleges that Adam Raine’s use of ChatGPT shifted from academic purposes to emotional reliance, with escalating mental health disclosures allegedly met by responses that reinforced dependence rather than directing him outward. The plaintiffs argue this was a foreseeable result of engagement-oriented design. They bring claims including wrongful death and seek injunctive relief for stronger safeguards. While no trial date has been formally set, the case remains in its early procedural stage in California and may proceed toward trial in late 2026 or 2027, depending on pretrial developments.

Sociocultural

The European Commission has launched a Mediterranean digital transformation programme for North African and Middle Eastern countries, marking the first digital initiative under the Pact for the Mediterranean. The programme aims to support inclusive and sustainable growth by improving access to digital services, aligning telecommunications regulation with EU standards, and strengthening national regulatory authorities. Cybersecurity is a core priority, with support for stronger national frameworks, institutional capacity, and coordinated responses to digital threats.

The European Commission’s first monitoring results under the revised Code of Conduct on Countering Illegal Hate Speech Online+ show that major platforms are making progress in reviewing reported illegal hate speech within 24 hours, while gaps remain in accuracy, consistency, and reporting practices. Based on independent monitoring and company data, the assessment found that many notifications were handled within agreed timelines, but a notable share of cases were disputed or wrongly classified. Linked to the Digital Services Act’s co-regulatory model, the exercise acts as a practical test of platform accountability, transparency, and compliance with EU and national law.

The UK government is planning measures that could make senior technology executives face criminal charges, including prison sentences, if their companies fail to remove non-consensual intimate images when required by regulators. The move builds on existing obligations that already require platforms to take down such material within strict timeframes or face significant penalties, including fines of up to 10% of global turnover or even service blocking. The latest step goes further: instead of relying solely on corporate sanctions, it introduces personal criminal accountability at the executive level. This type of liability is likely to accelerate compliance in ways that financial penalties alone have not, and may serve as an example to other jurisdictions. The policy is part of a broader tightening of the UK’s online safety framework, driven by persistent concerns over revenge porn and the rapid proliferation of AI-generated intimate imagery.

National plans and initiatives

India. India has set up a Technology and Policy Expert Committee under the Ministry of Electronics and Information Technology to help shape the country’s AI governance framework and advise the new AI Governance and Economic Group. Bringing together government, academia, industry, and policy expertise, the body is meant to translate fast-moving technical and regulatory issues into practical guidance, bringing a more structured and adaptive approach to AI governance aligned with India’s economic and social priorities.

South Africa. South Africa has withdrawn its draft national AI policy after it was discovered that the document contained fake, AI-generated citations, undermining the credibility of the proposed framework. The government said the lapse occurred due to a failure to verify references and stressed that stronger human oversight is required in policy processes involving AI tools. The withdrawal delays plans to establish new AI governance institutions and incentives, and the policy will now be redrafted.

Sovereignty 

The UK. The government is planning to back British strengths in the parts of the AI stack where the UK can build real leverage, according to Liz Kendall, Secretary of State for Science, Innovation and Technology. Kendall rejected technological isolationism, instead championing AI sovereignty for Britain: reducing over-dependencies, backing domestic firms with a £500 million Sovereign AI Fund, and launching a new AI Hardware Plan in June 2026 to capture chip market share. She also advocated collaboration with other middle powers, including on setting standards for how AI is deployed.

The government has also launched a £500 million Sovereign AI Fund to accelerate domestic AI startups and strengthen national technological autonomy. The initiative combines direct equity investment with access to national compute infrastructure, fast-tracked visas for global talent, and procurement pathways into public services. It targets early-stage to growth companies in areas such as AI infrastructure, life sciences and advanced computing, with the explicit goal of ensuring that high-potential firms scale and remain anchored in the UK rather than relocating abroad. 

Papua New Guinea. The government has issued new guidance on AI and data sovereignty, setting out principles for ensuring that national data assets remain under domestic control. The framework emphasises governance over data storage, processing and cross-border transfers, particularly where public-sector or sensitive datasets are involved.  

Russia. Russia is advancing a draft AI regulatory framework that would formalise oversight of AI development and deployment, aligning with broader efforts to strengthen digital sovereignty and state control over emerging technologies. The proposals focus on risk management, national standards and reducing dependence on foreign AI systems, while supporting domestic innovation. The move fits into Moscow’s broader strategy to tighten control over digital infrastructure and cross-border data flows.

Partnerships

South Korea–France. South Korea and France are deepening cooperation through a new strategic AI and technology partnership, aimed at strengthening joint research, industrial collaboration and standard-setting across emerging technologies. The initiative reflects a broader effort to align capabilities in semiconductors, data infrastructure and advanced computing, while positioning both countries more competitively in the global AI landscape.

The EU–Morocco. The European Commission and Morocco have launched a digital dialogue to deepen strategic cooperation on emerging technologies, digital transformation, and innovation-led development. Focused on AI, digital infrastructure, start-up support, research collaboration, and stronger AI ecosystems, the initiative aims to turn digital technologies into drivers of economic and social progress. Greater interoperability of digital public services and expanded knowledge exchange are also central to the partnership, reflecting a shared interest in more connected, efficient, and inclusive digital governance.

Legal

The USA. A federal appeals court in Washington, D.C. has declined to block the Pentagon’s national-security blacklisting of Anthropic, allowing the designation to remain in force while litigation continues. The ruling contrasts with a separate decision by a California judge who had earlier blocked part of the government’s action, highlighting a growing judicial split over the unprecedented move.

Paraguay. Paraguay has adopted new rules for the use of AI in its courts, with UNESCO support, marking a notable step in judicial AI governance. The framework, approved by the Supreme Court of Justice, limits AI to a supporting role in data processing, information management, and assisted decision-making, while requiring human oversight, transparency, accountability, and disclosure when AI tools influence judicial processes. The rules align Paraguay’s approach with UNESCO’s guidance on AI in courts and underscore a wider trend toward rights-based, trust-focused AI deployment in public institutions.

Belgium. Belgium’s data protection authority has released a new information brochure titled ‘The Impact of Artificial Intelligence (AI) on Privacy’, providing guidance on risks such as bias, privacy violations and misuse of generative AI systems. The document is intended to raise awareness among organisations and the public, and to support compliance with EU data protection and AI governance frameworks.

Safety and security

The EU. EU member states and European Parliament lawmakers have failed to reach an agreement on revisions to the EU Artificial Intelligence Act, after 12 hours of negotiations over proposed changes under the Commission’s Digital Omnibus package. Disagreements centred on whether sectors already covered by existing product and safety regulations should be exempt from certain parts of the AI framework. Lawmakers warned that the latest deadlock risks creating legal uncertainty for companies already preparing for compliance, while privacy and civil society groups cautioned that proposed relaxations could weaken core safeguards. Talks will, however, resume in May.

Kazakhstan. Kazakhstan has introduced mandatory audits for high-risk AI systems, requiring developers to obtain a positive audit assessment before their systems can be listed as ‘trusted’ by sectoral authorities. The government will publish and regularly update official lists of approved systems, based on applications that include documentation on ownership, functionality and use conditions, reviewed within strict timelines. The move aims to build trust and standardise best practices in AI deployment, signalling a more structured and compliance-driven approach to high-risk AI governance.

New Zealand, the UK, Singapore. New Zealand’s National Cyber Security Centre, the UK National Cyber Security Centre, and Singapore’s Cyber Security Agency have issued coordinated warnings that frontier AI is reshaping the cyber threat landscape by lowering barriers to sophisticated attacks, accelerating vulnerability discovery, and compressing the window between disclosure and exploitation. All three stress the dual-use nature of AI, urging organisations to reassess outdated risk models and prioritise rapid patching, continuous monitoring, stronger identity and access controls, and reduced attack surfaces to counter increasingly automated and faster-moving cyber threats across both public and private sectors.

The USA. US cybersecurity officials are considering reducing the patching deadline for actively exploited flaws to just three days, citing the accelerating speed at which AI systems can identify and weaponise vulnerabilities. The proposed shift would initially apply to federal civilian agencies but could redefine baseline incident response expectations across government and critical infrastructure. Agencies argue that traditional patch cycles are no longer compatible with current exploit timelines, while industry warns that such compressed deadlines may exceed the capacity of complex and legacy IT environments.

Meanwhile, Washington is quietly reversing course on its standoff with Anthropic. The White House is drafting executive guidance that would allow federal agencies to work with Anthropic again, despite the company previously being labelled a supply-chain risk by the Pentagon. The shift reflects internal fractures: while parts of the defence establishment remain wary, others see excluding frontier models like Mythos as strategically costly.

Mythos. Anthropic has launched an investigation after a small group of users gained unauthorised access to its powerful Mythos AI model via a third-party contractor environment. The access reportedly occurred just as the company began rolling out a limited preview of the model to selected organisations under Project Glasswing. The unauthorised users are believed to have operated through a private Discord group, using a mix of tactics, including contractor access and open-source intelligence tools, to gain access to the system. Mythos was intentionally restricted due to its ability to accelerate cyberattacks and was provided to a limited number of partners, yet it appears to have leaked almost immediately through the partner ecosystem rather than through a direct breach. The window during which Mythos’ capabilities remain contained may prove far shorter than anticipated.

Content governance

China. The Cyberspace Administration of China has warned several ByteDance-owned platforms, including CapCut, Catbox and the Dreamina AI system, over failures to properly label AI-generated and synthetic content. The regulator said inspections found violations of cybersecurity and generative AI regulations, prompting enforcement measures such as mandatory rectification, warnings and disciplinary action against responsible personnel.

Development

Ghana. The Ghanaian Ministry of Communication, Digital Technology and Innovations has launched a public-sector AI capacity development programme in collaboration with the Government of Japan and the UN Development Programme. The programme is designed to equip public officials with knowledge of AI and its applications in governance. It focuses on improving decision-making and service delivery, drawing on experience from the UN and Japan.

UNESCO-Latin America & Caribbean. UNESCO has launched a regional AI in Education Observatory for Latin America and the Caribbean, designed to support evidence-based policymaking and track the impact of AI on education systems. The initiative aims to build capacity, share best practices and guide responsible integration of AI tools in schools and learning environments. 

UNESCO–Oxford. UNESCO and the University of Oxford have launched a global AI course for courts. The programme trains judges and legal professionals to assess algorithmic tools, identify bias, and ensure compliance with human rights standards in increasingly digitalised judicial processes. It introduces practical frameworks for evaluating AI outputs in legal contexts, with a strong focus on maintaining judicial independence, transparency and accountability as AI becomes embedded in evidence handling and decision-support systems.

Commonwealth. The Commonwealth Secretariat has launched a capacity-building programme on the use of AI in election management, training electoral officials from member states on how AI tools can support voter education, administrative efficiency and data analysis while safeguarding electoral integrity. The initiative focuses on practical applications of AI in electoral processes, including risks such as misinformation, bias and automation of sensitive decision-support functions. It emphasises that AI should remain assistive rather than substitutive in democratic processes, with human oversight positioned as central to maintaining trust, legitimacy and accountability in elections.

Australia. Under its national AI workforce strategy, Australia is expanding targeted upskilling programmes for learners and workers to address structural skill gaps created by AI-driven labour market shifts. The approach prioritises integration of AI literacy into education and vocational pathways, alongside employer-linked training to support adaptation in high-exposure sectors. It frames AI as a general-purpose technology requiring continuous reskilling rather than one-off training, with policy attention on inclusion, transition support and alignment between education systems and emerging digital economy demands.

Pakistan. Pakistan has approved the establishment of an AI Education Authority alongside plans for virtual schools. The reforms aim to scale AI-driven learning systems, support personalised education delivery and standardise digital curricula across regions. The initiative is framed within broader efforts to modernise the education sector, strengthen digital access, and build national capacity for AI adoption in public education, while addressing disparities in learning outcomes through technology-enabled delivery models.

Last month, South Africa unveiled its first draft national AI policy, aiming to position the country as a continental leader in innovation. The plan included ambitious new institutions: a National AI Commission, an Ethics Board, and tax breaks for private sector collaboration.

But just days later, the celebration turned sour.

According to Reuters, South Africa’s government was forced to withdraw the draft after reviewers discovered a fatal flaw: the policy was riddled with fabricated sources and citations that simply did not exist. The research supporting the country’s AI strategy had likely been generated by an AI.

This isn’t a minor typo: the AI hallucinated policies and supporting sources. That is not surprising, as LLMs are advanced guessing machines, not providers of verified facts. Even when fabricated, their output can look perfectly correct and legitimate.

South Africa’s Minister of Communications and Digital Technologies, Solly Malatsi, acknowledged the failure with refreshing honesty:

‘The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.’

He noted that this lapse ‘has compromised the integrity and credibility of the draft policy.’

Why does it matter? We are not highlighting South Africa to single it out or cause embarrassment. We are shining a spotlight on the problem with AI-generated laws. South Africa’s incident is not an exception. As policymakers rush to keep up with technology, we are seeing more examples of AI-drafted regulations being submitted for review. For instance, a federal judge in California sanctioned two law firms for submitting a legal brief containing fake citations generated by AI.

The problem isn’t that AI is used. The danger lies in how it is being used.

Legal documents and policies require precision, grounding, and contextualisation. Generic AI models often fail at all three:

  1. Lack of precision: AI frequently provides vague, generic answers to specific legal questions. Laws need pointed, solid definitions; AI prefers probabilistic guesswork.
  2. No grounding: Most AI models cannot provide a verifiable link to the exact sentence of a law or regulation, and they often mix up laws across countries and jurisdictions.
  3. Zero context: AI frequently lacks the specific political, social, or historical context of policies and regulations. Temporal context is also missing: how legal issues evolved over the course of drafting and negotiation.
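The grounding gap, at least for citations, is partly mechanisable. A hypothetical sketch of an automated pre-publication check that matches every citation extracted from a draft against a trusted index (the citation markup, the index contents, and the flagged act are invented for illustration):

```python
import re

# Hypothetical sketch: flag citations in a draft that a trusted index
# cannot confirm. A real system would query a legal/academic database;
# here a small set of real South African statutes stands in for it.
TRUSTED_INDEX = {
    "Protection of Personal Information Act 4 of 2013",
    "Electronic Communications Act 36 of 2005",
}

CITATION_RE = re.compile(r"\[cite:\s*([^\]]+)\]")  # invented markup

def unverified_citations(draft: str) -> list[str]:
    """Return citations in the draft that the trusted index cannot confirm."""
    cited = CITATION_RE.findall(draft)
    return [c.strip() for c in cited if c.strip() not in TRUSTED_INDEX]

draft = (
    "Building on [cite: Protection of Personal Information Act 4 of 2013] "
    "and [cite: National AI Harmonisation Act 12 of 2021] ..."
)
# The second citation is an invented, hallucinated-style act -- it is
# not in the index, so the check flags it for human review.
flagged = unverified_citations(draft)
assert flagged == ["National AI Harmonisation Act 12 of 2021"]
```

Such a check cannot tell whether a citation actually supports a claim, but it would catch references that simply do not exist.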

How to fix the problem (without banning AI). The solution lies in a two-pronged approach: developing institutional AI and increasing AI literacy.

If South Africa had had institutional AI anchored in local knowledge and context, such hallucinations could have been avoided. Moreover, AI would then be a genuinely useful tool reflecting the topical and temporal context of policy development and law drafting.

But more importantly, we need to build AI competencies among policymakers. This requires a shift in pedagogy. We cannot teach policymakers to simply use AI; they must understand how it works.

As Minister Malatsi stated: 

‘This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility.’

If we fail to build precise, grounded AI tools and train policymakers to use them properly, we won’t just have fake citations in a draft. We will have fake laws governing real people.

Read the original ‘When AI writes the rules: How to avoid fake laws governing real life’ blog post by Dr Jovan Kurbalija.

Visionary or outlandish statements about the future are a feature of tech industry discourse. But the rapid acceleration of generative AI seems to have shortened the timeline for many of these claims. 

April brought another wave of high-profile predictions. While some might be tempted to dismiss them as mere hype, there is a strong reason not to. These ideas come from people who are not only building the platforms and technologies we rely on so much, but are also spending capital to turn their visions of what the future should look like into reality. When tech leaders float their ideas, they begin to steer real-world resources and regulatory conversations. And therein lies the quiet danger: designing the future without meaningful public input.

The co-founder of Palantir, Alex Karp, and Palantir’s Head of Corporate Affairs, Nicholas W. Zamiska, published a set of 22 propositions drawn from their upcoming book, The Technological Republic. It did not arrive quietly. Critics called it ‘technofascism’ and ‘what evil would tweet.’

Their vision is organised around duty, hard power, and scepticism toward modern democratic culture. They argue that Silicon Valley owes a moral debt to the country that made its rise possible, and that the engineering elite has an affirmative obligation to participate in national defence. They question the all‑volunteer force, suggesting that national service should be a universal duty so that the next war involves shared risk. Soft power and soaring rhetoric, they write, have been exposed as insufficient. Free societies need hard power, and in this century, hard power will be built on software.

When it comes to AI weapons, Karp and Zamiska are blunt: they will be built regardless of Western debates. The only question is by whom and for what purpose. The authors also defend Elon Musk against what they see as cultural snickering, arguing that we should applaud those who attempt to build where the market has failed to act. At the same time, they reject what they call vacant pluralism, insisting that not all cultures are equally productive and that elite intolerance of religious belief is a sign of intellectual closure.

What Karp and Zamiska do not offer is much economic policy. Their technological republic is organised around security and technological power, not redistribution. The state exists to be defended. The individual exists to serve.


Around the same time, OpenAI released its own policy document, Industrial Policy for the Intelligence Age. It is longer, somewhat softer, full of phrases like ‘public wealth fund’ and ‘right to AI.’ It asks for a democratic conversation about AI industrial policy, regulation, ethics and economy. 

OpenAI’s document starts from a different problem. Superintelligence—AI systems capable of outperforming the smartest humans, even when those humans are assisted by AI—is coming. Market forces alone cannot manage the transition, OpenAI argues. Drawing parallels to the Progressive Era and the New Deal, the company proposes ambitious public‑private collaboration. 

On the economic side, this includes giving workers a formal voice in how AI is deployed in workplaces, microgrants to help workers become AI‑first entrepreneurs, a right to AI as foundational access comparable to literacy or electricity, shifting taxation from payroll to capital gains and automated labour, creating a Public Wealth Fund to give citizens a direct stake in AI‑driven growth, and converting efficiency gains into shorter workweeks or better benefits. 

On the resilience side, OpenAI proposes safety systems for cyber and biological risks, an AI trust stack for verification, auditing regimes for frontier models, model‑containment playbooks for dangerous AI, and guardrails for government use. The company acknowledges it does not have all the answers and invites feedback. 

Similarities and differences. Where Karp and Zamiska talk about duty and war, OpenAI talks about transitions and safety nets. Yet both reject the current political order as inadequate. Both see technology as the primary vector of power. And both propose new forms of obligation—national service in one case, a right to AI and portable benefits in the other.

Taken together, these two documents are not opposing manifestos. They are different dialects of the same emerging language: tech leaders no longer see themselves as toolmakers. They see themselves as institutional designers. And a courtroom battle between Elon Musk and Sam Altman is about to decide how enforceable their original promises really are.

Promises, promises. Elon Musk is suing Sam Altman over whether OpenAI was fraudulently diverted from its original nonprofit mission. Musk argues that he was misled and that OpenAI’s leadership abandoned its promise to serve humanity, pivoting instead toward commercialisation through partnerships and products like ChatGPT. He seeks to remove Altman and President Greg Brockman, force structural changes to OpenAI’s governance, and win up to $150 billion in damages for OpenAI’s nonprofit arm. OpenAI rejects this narrative, framing the case as a competitive dispute—Musk raised objections, they say, only after OpenAI’s success and the emergence of his own AI venture, xAI, which has filed for an IPO. OpenAI itself is rumoured to be considering an IPO in late 2026 or 2027. The court will have to weigh early emails, funding discussions, and conflicting interpretations of what ‘open’ and ‘nonprofit’ were supposed to mean.

If the court rules that shifting toward profit violated founding principles, many similar hybrid organisations may need to restructure. If the current model is upheld, it will solidify the reality that market logic and commercial interest drive AI development. Because advanced AI is expensive to build and operate, companies need pricing tiers to cover costs and make a profit. And because the underlying models and infrastructure are valuable competitive assets, firms have incentives to lock users in and limit disclosure to maintain their advantage. That means that users could be facing more tiered access, stronger platform lock‑in, and less visibility into how systems operate.

So what could societies do? Karp and Zamiska and OpenAI share a premise that is rarely stated outright: that the existing legal and political order is too slow or too confused to manage the technologies now emerging.

If we assume they are even partially right, the solution cannot be handing design authority to the same firms that profit from those technologies. Three measured steps are worth considering.

First, separate policy design from corporate strategy. Any company that holds major public contracts in areas such as defence, health, or border control should not be the source of the policies used to regulate that company’s activities. 

Second, codify accountability. If AI developers claim public-interest missions, those claims need legal and regulatory grounding, not just branding. The Musk-OpenAI case may accelerate this, but policymakers cannot outsource the task to courts.

Third, broaden participation. OpenAI’s call for public input points in the right direction, but mechanisms matter. Without meaningful inclusion—across labour, civil society, and smaller economies—participation risks becoming procedural rather than substantive.

We are not about to wake up in a technological republic overnight. But it is already clear that tech oligarchs are no longer just building products; they are articulating political and social orders. Modern societies will have to work out what type of legal and policy order is needed, and how to deal with the growing power of tech companies and their leaders.

On 7 April 2026, Anthropic announced Claude Mythos Preview, its most capable AI model to date, alongside the explicit decision not to make it publicly available. Claude Mythos Preview is a general-purpose, unreleased frontier model that, in Anthropic’s own words, reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans in finding and exploiting software vulnerabilities.

Anthropic’s published benchmarks show Mythos Preview scored 93.9% on the SWE-bench Verified test, 97.6% on the USAMO 2026 mathematics evaluation, and significantly outperformed all previously released models in cybersecurity-specific assessments. The SWE-bench Verified score is roughly double the 2024 state of the art and was achieved in an agentic context, where the model autonomously resolved real software engineering issues from production codebases.

On the USAMO 2026 evaluation, Mythos Preview scored more than 55 percentage points higher than Opus 4.6, which scored 42.3%. On GPQA Diamond, a graduate-level scientific reasoning benchmark, Mythos Preview scored 94.6%. On Terminal-Bench 2.0, which evaluates system administration and command-line proficiency, it scored 82.0%, a 16.6-point lead over Opus 4.6. On the cybersecurity benchmark Cybench, the model scored 100% on the first attempt, making it no longer useful as a discriminating evaluation.

Cybersecurity capabilities

The decision not to release Mythos Preview publicly is linked to concerns about its advanced capabilities, particularly in high-risk domains such as cybersecurity, as well as broader considerations related to safety and potential misuse.

Notably, these capabilities are not the result of targeted training: they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy. The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them.

During internal testing, Mythos Preview identified thousands of zero-day vulnerabilities across every major operating system and every major web browser, as well as other critical software, many of them high severity and previously undetected for years. Anthropic engineers with no formal security training could ask Mythos to find remote code execution vulnerabilities overnight and have a complete, working exploit the following morning. This accessibility dimension poses a distinct governance concern. Traditionally, sophisticated cyberattacks have required highly skilled teams, extensive planning, and deep technical expertise. Models with these capabilities may lower those barriers substantially, bringing such attacks within reach of smaller state actors and non-state actors.

Anthropic has disclosed only a fraction of what it says it has found during internal testing. Over 99% of the vulnerabilities discovered by Mythos remained unpatched at the time of the 7 April announcement.

Project Glasswing

Anthropic launched Project Glasswing as a structured access mechanism to use Claude Mythos Preview for defensive cybersecurity purposes. The initiative brings together Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks as launch partners, with access also extended to over 40 additional organisations that build or maintain critical software infrastructure.

Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities in their foundational systems, with work expected to focus on local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing. Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts. Following the initial research preview period, access to the model will be available to participants at $25 per million input tokens and $125 per million output tokens across the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
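To put the announced rates in perspective, here is a minimal cost sketch at the stated prices of $25 per million input tokens and $125 per million output tokens. The token volumes in the example are hypothetical illustrations, not figures from the announcement.

```python
# Back-of-the-envelope cost estimate for post-preview access to
# Claude Mythos Preview at the announced rates. Only the two per-token
# rates come from the announcement; all other numbers are hypothetical.

INPUT_RATE = 25.0 / 1_000_000    # USD per input token
OUTPUT_RATE = 125.0 / 1_000_000  # USD per output token

def usage_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a given token volume at the published rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: an overnight vulnerability-hunting run over a large codebase,
# assuming (hypothetically) 200M input tokens and 20M output tokens.
cost = usage_cost(200_000_000, 20_000_000)
print(f"${cost:,.2f}")  # 200 * $25 + 20 * $125 = $7,500.00
```

At these rates, such a hypothetical run would cost $7,500, suggesting that the $100M in committed usage credits could fund on the order of ten thousand runs of this size across the Glasswing partners.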


Anthropic has also donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable open-source software maintainers to respond to the changing cybersecurity landscape.

Within 90 days, Anthropic has committed to reporting publicly on what it has learned, as well as the vulnerabilities fixed and improvements made that can be disclosed. The company also intends to collaborate with leading security organisations to produce practical recommendations covering vulnerability disclosure processes, software update processes, open-source and supply-chain security, and patching automation, among other areas.

Anthropic has stated that Project Glasswing is a starting point, and that in the medium term, an independent, third-party body bringing together private and public sector organisations might be the ideal home for continued work on large-scale cybersecurity projects.

Project Glasswing raises a governance question for the industry, as cyber-capable AI systems may become useful security tools and a source of misuse risk at the same time. Its structure also reveals tensions, as it concentrates several roles, including discovery, disclosure coordination, and capability gatekeeping, in a single organisation. Entities such as Anthropic and major cloud providers control critical components of the Glasswing ecosystem, raising questions about power and governance that, for financial institutions in particular, translate into systemic risk.

We also wrote about the Glasswing project and its implications in our Weekly newsletter in early April.

Geopolitical dimensions

Claude Mythos has sharpened attention on the competitive and geopolitical dimensions of frontier AI development. Project Glasswing’s launch partners exclude Anthropic’s rival OpenAI, which is reported to be approximately six months behind Anthropic in developing a model with comparable offensive cyber capabilities.

Senior policy voices have positioned Mythos within the broader competition between Western AI companies and China’s rapidly evolving AI ecosystem, with implications for national security, enterprise adoption, and technological leadership. A security researcher assessed a concurrent source code leak from Anthropic as a geopolitical accelerant, noting that such exposures compress the timeline for adversaries to replicate technological advantages currently held by Western laboratories.

Many defence organisations still rely on legacy software and infrastructure not designed with AI-driven threats in mind. Models capable of autonomously identifying hidden flaws in older code may expose weaknesses in critical defence networks around the world. The difficulty of containment at the geopolitical level is reflected in usage patterns. Access restriction at the laboratory level does not translate reliably into containment across jurisdictions when the same underlying models are accessible via cloud infrastructure spanning multiple countries and regulatory environments.

The limits of voluntary AI governance

The Claude Mythos case has clarified, with considerable precision, what voluntary AI governance can and cannot achieve. A responsible laboratory can make a unilateral decision not to release a dangerous system. It can support coordinated vulnerability disclosure, engage governments proactively, and produce detailed public documentation of a model’s capabilities and risks. All of these have occurred with Mythos, and represent meaningful progress relative to the governance environment of a few years ago.

What voluntary frameworks cannot do is bind competitors who operate under different assumptions. Anthropic’s RSP version 3.0 acknowledges this directly by removing the commitment to withhold unsafe models if another laboratory releases a comparable model first. The competitive structure of the AI industry means that restraint by one actor does not prevent the underlying capability from eventually proliferating. Voluntary governance frameworks work best when they generate shared norms across an industry. When the industry is structured around intense competition among a small number of organisations, voluntary restraint by a single actor does not resolve the broader question of access.

Analysts note that what Mythos does today in a restricted environment, publicly available models are likely to replicate within one to two model generations. The next phase of the EU AI Act takes effect in August 2026, introducing automated audit trails, cybersecurity requirements for AI systems classified as high risk, incident reporting obligations, and penalties of up to 3% of global revenue. The EU framework represents a shift toward binding governance, but its scope relative to the pace and international distribution of frontier AI development remains to be demonstrated.

The way forward

Anthropic acknowledges that capabilities like those demonstrated by Mythos will proliferate beyond actors committed to deploying them safely, with potential fallout for economies, public safety, and national security. The company’s response, taken in aggregate, reflects a serious attempt to manage that risk within the constraints of voluntary frameworks and private decision-making. The Responsible Scaling Policy, Project Glasswing, proactive government briefings, and the detailed system card are each substantive contributions. They are also all products of a single private entity’s judgement, operating without binding external accountability.

The Mythos case does not so much call for a different assessment of Anthropic’s conduct as it does a clear-eyed view of what voluntary governance can realistically sustain at the frontier of AI development. Governments on both sides of the Atlantic were briefed informally about a model whose capabilities are consequential for critical infrastructure and national security. No binding notification requirement existed. No independent technical authority had prior access. No international coordination mechanism was in place.

No single organisation can solve these challenges alone. Frontier AI developers, software companies, security researchers, open-source maintainers, and governments all have essential roles to play. The Mythos case has made that observation not merely a statement of aspiration but a policy problem that requires concrete institutional responses. Whether those responses will take shape before the next capability threshold is reached is the question now facing policymakers.

This text is an adaptation of Reyhan Damalan’s text ‘Claude Mythos Preview sets new benchmark for AI capability and raises governance questions’.


29th session of the CSTD

The 29th session of the Commission on Science and Technology for Development (CSTD) took place from 20 to 24 April 2026 at the Palais des Nations in Geneva, Switzerland.

For its 29th session, the programme addressed the priority theme of ‘Science, Technology and Innovation in the Age of Artificial Intelligence’, and members heard a presentation of the report on technical cooperation activities in science, technology and innovation.

CSTD members also reviewed progress in implementing and following up on the outcomes of the World Summit on the Information Society at the regional and international levels.

Ultimately, CSTD members adopted two resolutions: one on WSIS and one on Science, Technology & Innovation for Development.


Image credit: UNCTAD Innovation X post

Shaping Switzerland’s AI Summit Strategy 

A report intended to inform strategic planning for the AI Summit Geneva 2027 has been made public. It synthesises inputs from a multistakeholder roundtable and more than 50 written submissions to shape Switzerland’s strategy for hosting the summit.

The core finding of ‘Shaping Switzerland’s AI Summit Strategy’ is that Switzerland’s comparative advantage lies not in technological scale, but in trusted convening, pragmatic governance, and institutional credibility. Its neutrality, strong institutions, research base (e.g. ETH/EPFL), and Geneva’s multilateral ecosystem position it as a facilitator of practical, cross-sector cooperation. However, gaps remain in investment and in scaling innovations to market.

Two priority issue clusters dominate. First, trusted and sovereign AI infrastructure, including open models, interoperability, and reducing dependence on dominant providers—alongside a noted gap in Switzerland’s access to production-grade AI compute. Second, AI’s impact on human rights, security, and humanitarian law, particularly in relation to military use, surveillance, and preservation of human agency. Cross-cutting concerns include AI literacy, SME adoption, public-sector readiness, and equitable access for developing countries. 

Strategically, contributors highlighted, Geneva 2027 should be framed as a platform for implementation, delivering a limited set of practical, internationally reusable tools backed by an inclusive preparatory process and follow-up mechanisms.

Geneva Cyber Week 2026

The UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA) are co-hosting Geneva Cyber Week from 4 to 8 May 2026, bringing together policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives at venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.
Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change. The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance.

Diplo is pleased to launch a new call for applications for Digital Watch Knowledge Fellows (2026).

What is the Digital Watch Observatory?

The Digital Watch Observatory (DW) is a comprehensive observatory and one-stop-shop source of information on digital governance. It tracks the latest developments, provides policy overviews and analysis, and curates information on key topics, technologies, processes, policy players, events, and resources.

DW is designed for diplomats, policymakers, researchers, civil society actors, business representatives, and other stakeholders who need reliable, structured, impartial, and up-to-date information on digital governance issues. 

Its content is organised around:

  • Topics, from cybercrime and freedom of expression, to data governance and critical infrastructure.
  • Technologies such as artificial intelligence, quantum computing, and semiconductors.
  • Processes including the UN Global Mechanism on ICT Security, the Internet Governance Forum, the Global Digital Compact process, and more.
  • Policy players such as countries, technical entities, business associations, UN entities, and other international and regional organisations.
  • Resources, including conventions, resolutions, laws and regulations, reports, and more.
  • Events, such as meetings, negotiations, conferences, and consultations.

This structure is complemented by daily updates, regular analyses, and weekly and monthly newsletters that track and explain the most relevant developments across the digital governance landscape.

What is the role of a Knowledge Fellow?

Knowledge Fellows (KF) are central to the observatory’s ability to provide comprehensive, accurate, and up-to-date coverage of specific areas of digital governance. 

Each KF is expected to cover one or more areas of expertise and help ensure that DW remains accurate, relevant, complete, and impartial. This means:

  • Monitoring and analysing developments related to the assigned area(s) of expertise and ensuring these are reflected in daily updates and regular analyses.
  • Keeping assigned DW pages accurate, up-to-date, and substantively strong.
  • Tracking events relevant to their area(s) of expertise and helping ensure that important meetings, negotiations and discussions are reflected in DW. 
  • Identifying key resources relevant to their area(s) of expertise such as UN resolutions and other intergovernmentally agreed documents, laws, regulations, reports, and policy papers.
  • Supporting stronger coverage of organisations, countries, and other key actors in digital governance. 
  • Contributing, when relevant, to newsletters, policy and research papers, and other knowledge products.

Knowledge Fellows may also have opportunities to contribute to Diplo’s wider knowledge ecology, including courses, discussions, and thematic initiatives.

Who should apply?

At a time when the public space is abundant with AI-generated content, we are looking for more than just someone who can use AI to summarise news or rewrite online resources. 

KF will have access to custom-made AI tools to support them in their work, but the role requires subject expertise, critical judgement, and the ability to identify what is important, what is missing, and what deserves deeper analysis.

Specifically, we are looking for applicants who:

  • Have strong expertise in digital governance, grounded in professional experience, academic research, policy engagement, or a combination of these.
  • Are interested in continuing to develop this expertise. 
  • Know where to look and what to look for in order to ensure comprehensive coverage of assigned topics, technologies, processes, etc.
  • Can identify major developments, policy controversies, key debates, and emerging trends in the digital governance landscape, and cover them accurately and impartially.

This means combining subject expertise with editorial judgement, policy awareness, and a strong sense of knowledge curation.

Applicants must also have: 

  • Availability to contribute on a regular basis. The fellowship is conducted online, with an expected commitment of at least 8 hours per week.
  • Strong analytical and writing skills in English.
  • Basic skills in using web and social media, as well as familiarity with generative AI tools.

What we offer

Digital Watch Knowledge Fellows will benefit from:

  • Onboarding and guidance on Digital Watch’s editorial and curation approach.
  • Training on observatory workflows and digital/AI tools.
  • Remuneration.
  • Visibility for their work among DW users (diplomatic communities in Geneva and other diplomatic centres, professionals from across all stakeholder groups dealing with digital topics, etc.).
  • Opportunities to promote their digital governance-related research through DW and Diplo networks.
  • Membership in a global community of experts and professionals working on digital governance.

Fellows are engaged on a consultancy/fee basis; the role does not constitute employment with DiploFoundation.

How to apply

Interested applicants are invited to complete the application form.
Application deadline: 31 May 2026


Weekly #260 Mission, money, and the future of OpenAI


24 – 30 April 2026

Note to our readers: This issue comes to your inbox today, Thursday, rather than tomorrow, 1 May, in observance of Labour Day. Expect the next issue next Friday, as is customary.


HIGHLIGHT OF THE WEEK

Mission, money, and the future of OpenAI

For the second week in a row, technocrats take centre stage in our Weekly newsletter. This time, we’re spotlighting a billionaire row: a courtroom battle between Elon Musk and Sam Altman is putting the origins and future of OpenAI under scrutiny.

At the centre of the dispute is a fundamental question: was OpenAI meant to remain a nonprofit serving humanity, or was a shift toward a profit-driven model always part of the plan?

Musk, a cofounder, argues he was misled. He claims that OpenAI’s leadership abandoned its original mission and pivoted toward commercialisation, particularly through partnerships and products like ChatGPT. His lawsuit seeks sweeping remedies: removing Altman and President Greg Brockman, forcing structural changes to OpenAI’s governance, and winning up to $150 billion in damages for OpenAI’s nonprofit arm.

OpenAI, backed by Microsoft, rejects this narrative. Its legal team frames the case as a competitive dispute—arguing that Musk raised objections only after OpenAI’s success and the emergence of rival efforts, such as his own AI venture. In court, both sides are leaning heavily on early emails, funding discussions, and conflicting interpretations of what ‘open’ and ‘nonprofit’ were supposed to mean in practice.


The big (business) picture. First, this case could redefine the nonprofit–for-profit hybrid model that underpins much of today’s AI ecosystem. OpenAI’s structure—a nonprofit overseeing a capped-profit entity—has been widely copied or studied. If the court rules that such a transition violated founding principles, it could force a rethink across the industry, especially for organisations balancing public-interest missions with the massive capital demands of AI development.

Second, the trial may set a precedent for AI governance and accountability. Musk’s argument hinges on the idea that AI labs developing potentially transformative—or risky—technologies should be bound by enforceable commitments to the public good. If courts start treating these commitments as legally binding rather than aspirational, companies could face stricter scrutiny over how they deploy and monetise AI.

Third, there are implications for competition in AI markets. OpenAI’s partnerships, particularly with major tech players, have already raised questions about the concentration of power. A ruling that forces structural separation could reshape the competitive landscape.

It bears saying that Musk’s xAI has filed for an initial public offering. OpenAI is rumoured to be considering an IPO of its own, slated for anywhere between Q4 2026 and mid-to-late 2027.

The user’s POV. If OpenAI is forced to prioritise its nonprofit mission more strictly, users might see greater transparency—for example, more openness about how models are trained, how decisions are made, or how risks are managed. On the other hand, limiting commercial incentives could slow down development or reduce the scale of investment, potentially affecting how quickly tools improve.

If the current model is upheld, it will underline that market logic and commercial interest will drive AI development. In practical terms, users could face more tiered access, stronger platform lock-in, and less visibility into how systems operate.  

Beyond that, if the spat amuses you, The Verge has reporters in the courtroom offering coverage and witty commentary.

IN OTHER NEWS LAST WEEK

This week in AI governance

The USA. Washington is quietly reversing course on its standoff with Anthropic. The White House is drafting executive guidance that would allow federal agencies to work with Anthropic again, despite the company previously being labelled a supply-chain risk by the Pentagon. The shift reflects internal fractures: while parts of the defence establishment remain wary, others see excluding frontier models like Mythos as strategically costly.

The UK. The government is planning to back British strengths in the parts of the AI stack where the UK can build real leverage, Liz Kendall, the UK’s Secretary of State for Science, Innovation and Technology, stated. Kendall rejected technological isolationism, instead championing AI sovereignty for Britain: reducing over-dependencies, backing domestic firms with a £500 million Sovereign AI fund, and launching a new AI Hardware Plan in June 2026 to capture chip market share. Kendall also advocated collaboration with other middle powers, including on setting the standards for how AI is deployed.   

The EU. EU member states and European Parliament lawmakers have failed to reach an agreement on revisions to the EU Artificial Intelligence Act, after 12 hours of negotiations over proposed changes under the Commission’s Digital Omnibus package. Disagreements centred on whether sectors already covered by existing product and safety regulations should be exempt from certain parts of the AI framework. Lawmakers warned that the latest deadlock risks creating legal uncertainty for companies already preparing for compliance, while privacy and civil society groups cautioned that proposed relaxations could weaken core safeguards. Talks will, however, resume next month.

South Africa. South Africa has withdrawn its draft national AI policy after it was discovered that the document contained fake, AI-generated citations, undermining the credibility of the proposed framework. The government said the lapse occurred due to a failure to verify references and stressed that stronger human oversight is required in policy processes involving AI tools. The withdrawal delays plans to establish new AI governance institutions and incentives, and the policy will now be redrafted.

China. The Cyberspace Administration of China has warned several ByteDance-owned platforms, including CapCut, Catbox and the Dreamina AI system, over failures to properly label AI-generated and synthetic content. The administration said inspections found violations of cybersecurity and generative AI regulations, prompting enforcement measures such as mandatory rectification, warnings and disciplinary action against responsible personnel.


The EU-USA critical minerals alliance for the technological future

The EU and the USA have launched a coordinated framework to strengthen resilience in critical minerals supply chains, combining a strategic Memorandum of Understanding (MoU) with an Action Plan.

The MoU establishes a broad strategic partnership covering the entire critical minerals value chain—from exploration and extraction to processing, recycling, and recovery. It frames critical minerals as strategic assets underpinning defence readiness, technological development, and economic resilience. The partnership aims to secure diversified and sustainable supply chains through joint project development in the EU, US, and third countries, supported by coordinated investment tools, risk reduction mechanisms, and improved business linkages.

Beyond supply security, the MoU introduces cooperation on market governance and resilience tools. This includes addressing non-market practices and export restrictions, promoting standards-based and transparent markets, improving permitting processes, coordinating on stockpiling and crisis response, and strengthening oversight of strategic asset sales. It also expands cooperation on innovation, recycling, geological mapping, and investment coordination. The agreement is explicitly non-binding, relying on domestic implementation and voluntary coordination.

The Action Plan operationalises these commitments by outlining steps toward a potential plurilateral trade initiative with like-minded partners. It explores coordinated trade instruments such as border-adjusted price floors, standards-based markets, price gap subsidies, and offtake agreements, initially focused on selected minerals. It also proposes harmonised standards, investment screening coordination, joint R&D, stockpiling cooperation, and rapid response mechanisms to supply disruptions. Implementation is led by USTR and DG TRADE, with links to broader multilateral efforts such as the G7.

Why does it matter? This initiative reflects ever-intensifying geopolitical competition over control of critical minerals, which are essential inputs for semiconductors, batteries, defence systems, and clean energy technologies. Supply chains are currently highly concentrated, particularly in the processing and refining stages, creating strategic vulnerabilities for both the EU and the USA. The two sides say as much themselves: by aligning trade tools, standards, and investment screening, the EU and USA aim to safeguard their technological future (including the energy, automotive, and electronics sectors), defence readiness, and economic resilience against external disruptions.


Europe’s growing age verification push for platform use

The European Commission has urged member states to rapidly roll out an EU age-verification app that allows users to prove they meet minimum age requirements without revealing personal data such as identity or exact date of birth. The system is designed to integrate with national digital identity wallets and can either operate as a standalone application or be embedded into existing e-ID infrastructure.

This initiative is part of a broader EU enforcement effort under the Digital Services Act (DSA), which requires platforms to take stronger measures to protect children online. The Commission has also recently taken preliminary action against Meta, finding that Facebook and Instagram have not effectively prevented users under 13 from accessing their services, largely because age checks can be bypassed with false birthdates and weak verification systems. 

At the same time, several European countries are moving toward stricter national rules that go beyond platform compliance. Norway has announced plans to introduce a ban on social media use for children under 16, placing responsibility for age verification on technology companies. Greece is considering measures that would restrict anonymity online and strengthen digital identity requirements. Under the Greek plan, social media platforms would, from 2027, be required to block access for users under 15 using age verification systems rather than self-declared age data.


Australia reshapes news bargaining rules

Australia’s government has proposed a new Media Bargaining Incentive designed to force large digital platforms to financially support local journalism—or pay a levy.

Under the plan, tech companies with significant Australian revenue (over $250 million annually) would face a charge of up to 2.25% of their Australian revenue if they do not reach commercial agreements with at least four news organisations. The revenue collected would be redistributed to media outlets, with allocations linked partly to newsroom staffing levels. 

These agreements would be “super-deductible”, meaning firms could offset up to 150% of their value (or 170% for smaller publishers) against the levy. In practice, this makes negotiating with media outlets cheaper than paying the tax itself.
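
The arithmetic behind this incentive can be sketched in a few lines. The revenue and deal figures below are hypothetical; only the 2.25% rate and the 150%/170% multipliers come from the proposal:

```python
def levy_outcome(revenue_m, deal_spend_m, rate=0.0225, multiplier=1.5):
    """Illustrative model of the proposed Media Bargaining Incentive.

    revenue_m:    Australian revenue, in A$ millions (hypothetical input)
    deal_spend_m: value of commercial deals struck with news outlets
    multiplier:   super-deduction rate (1.5 = 150%; 1.7 for smaller publishers)
    Returns (gross_levy, residual_levy, total_cost) in A$ millions.
    """
    gross_levy = revenue_m * rate
    offset = min(deal_spend_m * multiplier, gross_levy)  # deals offset the levy
    residual_levy = gross_levy - offset
    return gross_levy, residual_levy, deal_spend_m + residual_levy

# A hypothetical platform with A$1,000m in Australian revenue:
# - with no deals, it owes the full 2.25% levy (A$22.5m);
# - A$15m in deals yields a 150% offset (A$22.5m) that wipes out the levy,
#   so the total cost is A$15m, i.e. cheaper than paying the tax.
no_deals = levy_outcome(1000, 0)
with_deals = levy_outcome(1000, 15)
```

On these assumed numbers, every dollar spent on news deals removes A$1.50 of levy liability, which is why negotiating agreements is designed to be cheaper than paying the charge.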

The government proposes the measure as a correction to an imbalance in the digital economy. Communications Minister Anika Wells argued that large platforms benefit directly from journalism flowing through their feeds and should therefore contribute to its production, especially as news consumption shifts overwhelmingly to social media.

The reaction from Big Tech has been sharp. Meta dismissed the measure as a ‘government-mandated transfer of wealth’, arguing that news organisations voluntarily publish content on its platforms because they derive value from it. It also warned that the scheme resembles a digital services tax. Google also rejected the policy, pointing to its existing commercial deals with more than 90 Australian news businesses and arguing that the proposal misunderstands how the advertising market and news consumption have evolved. Both companies also criticised the policy’s selective scope, which excludes major platforms such as Microsoft, Snapchat, and OpenAI.

Australian media organisations, by contrast, strongly support the move. In a joint statement, outlets including the ABC, News Corp Australasia, Nine, SBS, and others described the proposal as a critical step to ensuring the sustainability of journalism.

What’s next? The draft legislation now enters consultation, open until 18 May 2026, with lobbying from both tech firms and media organisations expected to intensify as the details are finalised.



LOOKING AHEAD

It will be a busy week in Geneva as Geneva Cyber Week 2026 unfolds, organised by the UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA) under the overarching theme ‘Advancing Global Cooperation in Cyberspace’. Discussions will cover topics such as cyber norms and international cooperation, AI governance and regulation, critical infrastructure protection, cyber capacity building, incident response, and the security implications of emerging technologies, including artificial intelligence and quantum computing. Today (30 April) is the last day to register for the event.

As part of Geneva Cyber Week, UNIDIR will organise the Cyber Stability Conference 2026, on 4–5 May in Geneva and online, bringing together governments, international organisations, industry, academia, and civil society to discuss ICT security and cyber governance. Under the theme “Cyber governance in an era of technological revolution: Past lessons, present realities and future frontiers,” discussions will explore how international cyber stability frameworks are adapting to rapid technological change, including AI and quantum computing, while reflecting on lessons from past cyber diplomacy processes and current security challenges.

Meanwhile, RightsCon 2026, which was scheduled to kick off in Lusaka, Zambia, on 5 May, will not proceed either in person or online; the Zambian government has stated that the conference has been deferred to a later date.


READING CORNER

AI systems are increasingly capable of producing legal language and rules that look authoritative, including cases where outputs have echoed or fabricated legal references, as highlighted in South Africa. The real question, writes Jovan Kurbalija, is how societies can distinguish between useful AI assistance and ‘fake laws’ and why human institutions must remain the final gatekeepers of legitimacy and enforcement.


In this blog, Slobodan Kovrlija examines how open-weight AI is empowering emerging economies to build sovereign agricultural and health tools, from Kenya’s crop diagnostics to Zambia’s maternal care.

Weekly #259 The ‘Technological Republic’ tech oligarchs imagine


17 – 24 April 2026


HIGHLIGHT OF THE WEEK

The ‘Technological Republic’ tech oligarchs imagine

Last week, the Technological Republic Manifesto by Palantir’s founder, Alex Karp, triggered an avalanche of comments and criticism as he challenged many pillars of our society, from equality and inclusion to security and democracy. 

In 22 points extracted from his book, ‘Technological Republic’, Karp mixes national security, techno-optimism, and democracy scepticism.

Palantir has already been at the centre of controversies around the use of its security products – Gotham, Foundry, and Maven – in the Gaza war and by the US security apparatus in anti-migration enforcement and criminal investigations.

The new Manifesto added to the controversy as the company moved from the business realm to the ideological and political realms.  

Cas Mudde labelled the Manifesto ‘Technofascism pure!’, while Yanis Varoufakis said that ‘if Evil could tweet, this is what it would!’


The backlash has been especially strong in the UK, where Palantir already holds major public contracts. Critics, including MPs and campaign groups, argue that the company’s ideology sits uneasily with its presence in sensitive parts of the state, from health data to policing and defence. Palantir, by contrast, says its systems improve efficiency, resilience, and public services.

Why does it matter? Palantir’s Manifesto states, in sharper and blunter terms than usual, the growing power of tech companies in shaping society both nationally and internationally. Modern societies will have to work out what type of legal and policy order is needed to deal with the growing power of tech companies and their leaders.

IN OTHER NEWS LAST WEEK

This week in AI governance

Paraguay. Paraguay has adopted new rules for the use of AI in its courts, with UNESCO support, marking a notable step in judicial AI governance. The framework, approved by the Supreme Court of Justice, limits AI to a supporting role in data processing, information management, and assisted decision-making, while requiring human oversight, transparency, accountability, and disclosure when AI tools influence judicial processes. The rules align Paraguay’s approach with UNESCO’s guidance on AI in courts and underscore a wider trend toward rights-based, trust-focused AI deployment in public institutions.

India. India has set up a Technology and Policy Expert Committee under the Ministry of Electronics and Information Technology to help shape the country’s AI governance framework and advise the new AI Governance and Economic Group. Bringing together government, academia, industry, and policy expertise, the body is meant to translate fast-moving technical and regulatory issues into practical guidance, bringing a more structured and adaptive approach to AI governance aligned with India’s economic and social priorities.

Mythos. Anthropic has launched an investigation after a small group of users gained unauthorised access to its powerful Mythos AI model via a third-party contractor environment. The access reportedly occurred just as the company began rolling out a limited preview of the model to selected organisations under Project Glasswing. The unauthorised users are believed to have operated through a private Discord group, using a mix of tactics, including contractor access and open-source intelligence tools, to gain access to the system. Mythos was intentionally restricted due to its ability to accelerate cyberattacks and was provided to a limited number of partners, yet it appears to have leaked almost immediately through the partner ecosystem rather than through a direct breach. The window during which Mythos’ capabilities remain contained may prove far shorter than anticipated.


EU’s defence cloud reliance raises ‘kill switch’ fears

A new report says most EU defence agencies remain heavily dependent on US cloud providers, exposing critical systems to the risk of a foreign ‘kill switch’ and sharpening concerns over Europe’s digital sovereignty.

According to the findings, 23 of 28 countries studied rely on US tech for defence functions, with 16 assessed as high risk, prompting renewed debate over whether sensitive public infrastructure, including security and defence systems, should move faster toward sovereign or air-gapped alternatives.


France vs X, a transatlantic showdown 

France’s criminal investigation into X has evolved into a transatlantic dispute over platform governance and state authority.

How did it all begin? The case began with a French probe into whether the platform enabled the spread of child sexual abuse material, AI-generated deepfakes, Holocaust denial content, and other harmful or unlawful material, and later intensified with a search of X’s Paris offices and summonses for Elon Musk and former X chief Linda Yaccarino to give voluntary interviews – a request Musk appears to have refused by not showing up.

And then. The confrontation widened when reports emerged that the US Justice Department had declined to assist the French inquiry, arguing that the case risked crossing into the regulation of protected speech and that it would unfairly target a US company. French authorities, however, have framed the matter as a legitimate enforcement action under national law.


Australia also targets games in child safety crackdown

Australia’s child-safety push is widening from social media to gaming, as regulators intensify scrutiny of how platforms protect minors from harm. On 21 April, the eSafety Commissioner issued legally enforceable transparency notices to Roblox, Minecraft, Fortnite and Steam, demanding details on how they handle risks, including child sexual exploitation, cyberbullying, hate and extremist material on services widely used by children.

Seen in context. This is part of a broader tightening of enforcement around Australia’s under-16 social media rules, which took effect on 10 December 2025 and require age-restricted platforms to take reasonable steps to prevent underage children from creating and holding accounts. Yet regulators say compliance remains uneven: in March, eSafety flagged serious concerns about Facebook, Instagram, Snapchat, TikTok and YouTube, warning that many children could still access platforms by simply self-declaring they were older than 16.


Microsoft bets A$25 billion on Australia’s AI future

Microsoft has announced a A$25 billion investment in Australia by 2029, its largest in the country, to expand local AI and cloud infrastructure, strengthen cybersecurity, and train three million Australians in workforce-ready AI skills. 

The plan will increase Azure AI supercomputing capacity, expand Microsoft’s Australian cloud footprint by more than 140%, and deepen cooperation with the Australian Government, including the Australian AI Safety Institute and the Microsoft–Australian Signals Directorate Cyber Shield. 

Framed as support for Australia’s National AI Plan, the package links AI growth with cyber resilience, digital sovereignty, responsible deployment, and broader access to skills across schools, nonprofits, workers, government, and industry.


UK fortifying child safety online with new powers

The UK’s Children’s Wellbeing and Schools Bill would reportedly expand ministers’ powers to shape how online services protect children, including by restricting access to risky platforms, features, or functions and by targeting design elements such as contact settings, live communication, location visibility, and time spent online.

The draft would also bring Ofcom into a stronger advisory role, introduce a six-month timeline for regulations or a progress update, and give ministers new authority over children’s data consent, age assurance, and enforcement.

Why does it matter? Taken together, the amendments point to a more interventionist and fine-grained model of child online safety, focused not only on harmful content but also on the design and governance of children’s digital environments. The regulatory package remains unsettled for now, with Parliament still negotiating key provisions and no final law yet in place.



LAST WEEK IN GENEVA

Shaping Switzerland’s AI Summit Strategy 

A report intended to inform strategic planning for the AI Summit Geneva 2027 has been made public. It synthesises inputs from a multistakeholder roundtable and more than 50 written submissions to shape Switzerland’s strategy for hosting the summit.

The core finding of ‘Shaping Switzerland’s AI Summit Strategy’ is that Switzerland’s comparative advantage lies not in technological scale, but in trusted convening, pragmatic governance, and institutional credibility. Its neutrality, strong institutions, research base (e.g. ETH/EPFL), and Geneva’s multilateral ecosystem position it as a facilitator of practical, cross-sector cooperation. However, gaps remain in investment and in scaling innovations to market.

Two priority issue clusters dominate. First, trusted and sovereign AI infrastructure, including open models, interoperability, and reducing dependence on dominant providers—alongside a noted gap in Switzerland’s access to production-grade AI compute. Second, AI’s impact on human rights, security, and humanitarian law, particularly in relation to military use, surveillance, and preservation of human agency. Cross-cutting concerns include AI literacy, SME adoption, public-sector readiness, and equitable access for developing countries. 

Strategically, contributors highlighted, Geneva 2027 should be framed as a platform for implementation, delivering a limited set of practical, internationally reusable tools backed by an inclusive preparatory process and follow-up mechanisms.

29th session of the CSTD

The 29th session of the Commission on Science and Technology for Development (CSTD) is ending today (Friday). The programme addressed the priority theme of ‘Science, Technology and Innovation in the Age of Artificial Intelligence’ and also reviewed progress in the implementation of and follow-up to the outcomes of the World Summit on the Information Society at the regional and international levels. We’ll have more on the outcomes next week.


READING CORNER

Anthropic’s Mythos model is a cyber-offensive AI built to probe critical infrastructure. Why does this reality expose the flaws in current AI governance?


Anthropic’s Claude Mythos Preview is its most capable model to date, withheld from public release and made available only to a closed partner network amid concerns about its cybersecurity capabilities and governance implications.

Weekly #258 European firms build a digital ‘backup generator’


10 – 17 April 2026


HIGHLIGHT OF THE WEEK

European firms build a digital ‘backup generator’

A group of European firms has unveiled what it effectively describes as a digital ‘backup generator’—a full-stack recovery system designed to keep critical services running if access to foreign technology providers is disrupted.

Developed by Cubbit, Elemento, SUSE, and StorPool Storage, the so-called ‘Disaster Recovery Pack’ was launched on 15 April in Berlin at the European Data Summit hosted by the Konrad-Adenauer-Foundation.


The Pack bundles together storage, compute, orchestration, networking, identity, observability, and management into a pre-integrated, deployable system. Organisations can use the system to identify critical services, build a sovereign recovery setup, and shift key operations to a fully European stack in the event of disruption. 

The aim is not to replace existing infrastructure outright, but to give organisations a ready-to-activate fallback environment—one that can be tested in advance and scaled progressively across workloads. 

While the notion of a foreign vendor ‘kill switch’ remains contested, the underlying concern—loss of access to critical services due to external legal, political, or commercial decisions—has gained traction across European policy circles. 

That concern is reinforced by market structure. US firms, including Google, Amazon Web Services, and Microsoft, continue to dominate Europe’s cloud ecosystem, while payment systems and software layers remain similarly concentrated.

In this context, resilience is increasingly framed not as full technological independence, but as the ability to withstand disruption without systemic failure. 

Why does it matter? Initiatives like the Disaster Recovery Pack could form the backbone of a more resilient European digital ecosystem—one designed not to eliminate dependencies, but to manage them on Europe’s own terms.

IN OTHER NEWS LAST WEEK

This week in AI governance

South Africa. South Africa has unveiled a draft national AI policy proposing new institutions — including a National AI Commission, an AI Ethics Board and a regulatory authority — alongside incentives such as tax breaks and grants to boost local innovation. The plan aims to position the country as a continental AI leader while addressing governance, infrastructure and data sovereignty concerns.

Russia. Russia is advancing a draft AI regulatory framework that would formalise oversight of AI development and deployment, aligning with broader efforts to strengthen digital sovereignty and state control over emerging technologies. The proposals focus on risk management, national standards and reducing dependence on foreign AI systems, while supporting domestic innovation. The move fits into Moscow’s wider strategy of tightening control over digital infrastructure and cross-border data flows.

UNESCO — Latin America & Caribbean. UNESCO has launched a regional AI in Education Observatory for Latin America and the Caribbean, designed to support evidence-based policymaking and track the impact of AI on education systems. The initiative aims to build capacity, share best practices and guide responsible integration of AI tools in schools and learning environments. 

Belgium. Belgium’s data protection authority has released a new information brochure titled ‘The Impact of Artificial Intelligence (AI) on Privacy’, providing guidance on risks such as bias, privacy violations and misuse of generative AI systems. The document is intended to raise awareness among organisations and the public, and to support compliance with EU data protection and AI governance frameworks.

Kazakhstan. Kazakhstan has introduced mandatory audits for high-risk AI systems, requiring developers to obtain a positive audit assessment before their systems can be listed as ‘trusted’ by sectoral authorities. The government will publish and regularly update official lists of approved systems, based on applications that include documentation on ownership, functionality and use conditions, reviewed within strict timelines. The move aims to build trust and standardise best practices in AI deployment, signalling a more structured and compliance-driven approach to high-risk AI governance.

Ghana. The Ghanaian Ministry of Communication, Digital Technology and Innovations has launched a public-sector AI capacity development programme in collaboration with the Government of Japan and the United Nations Development Programme. The programme is designed to equip public officials with knowledge of AI and its applications in governance. It focuses on improving decision-making and service delivery, drawing on experience from the UN and Japan.


EU develops age verification app

The European Commission has developed a standardised age-verification app intended to work across member states. The app allows users to confirm they meet age requirements to access social media platforms by providing their passport or ID number. It is designed to integrate into national digital wallets or operate as a standalone app, with a coordinated EU framework to ensure interoperability and avoid fragmented national systems.

The app is open source and available for both public and private implementation, but is subject to common technical and privacy requirements. The Commission plans to establish an EU-level coordination mechanism to oversee rollout, accreditation, and cross-border usability.
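
The privacy idea at the core of such an app can be sketched in a few lines. This is a conceptual illustration only, not the EU app’s actual protocol: an issuer that has verified the user’s ID once signs an age-threshold claim, and the platform checks that claim without ever seeing the birth date. Real wallets use public-key signatures and selective disclosure; a shared-secret HMAC stands in here only to keep the sketch stdlib-only, and all names are invented.

```python
import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's signing key

def issue_age_token(birth_date: date, threshold: int, today: date) -> dict:
    """Issuer side: checks the ID once, then emits only a yes/no threshold claim."""
    age = (today - birth_date).days // 365  # approximate age in whole years
    claim = json.dumps({"over": threshold, "ok": age >= threshold}, sort_keys=True)
    mac = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "mac": mac}  # contains no name and no date of birth

def verify_age_token(token: dict) -> bool:
    """Platform side: checks the claim's integrity, learns only the yes/no answer."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["mac"]) and json.loads(token["claim"])["ok"]
```

The platform never handles the underlying identity document; tampering with the claim invalidates the signature, and an underage user simply receives a token whose answer is negative.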

This technical and regulatory push is unfolding alongside political coordination among member states. Several member states are already preparing to integrate the app into national digital identity wallets, with France, Denmark, Greece, Italy, Spain, Cyprus and Ireland cited as front-runners. French President Emmanuel Macron is convening EU leaders, including Spanish Prime Minister Pedro Sanchez and representatives of Italy, the Netherlands and Ireland, to align national approaches to restricting minors’ access to social media and to press for faster EU-level action.

Yes, but. The rollout is already facing scrutiny. Shortly after Ursula von der Leyen described the app as technically ready and privacy-preserving, a security researcher claimed its protections could be bypassed in minutes. The critique points to structural design issues rather than isolated bugs. Reported weaknesses include locally stored authentication data that can be reset or modified, allowing users to bypass PIN protections, disable biometric checks, and reset rate-limiting mechanisms by editing configuration files. This effectively enables the reuse of verified identity data under altered access controls. 

The criticism has triggered broader concerns among developers about the app’s architecture, including why secure hardware features were not used, and whether elements like expiring age credentials are logically necessary. 
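
The class of weakness reported can be illustrated with a toy example. This is not the app’s actual code; the file and field names are invented. The point is general: when lockout counters live in a file on a device the user controls, the ‘rate limit’ is only advisory.

```python
import json

STATE_FILE = "pin_state.json"  # hypothetical local state file on the device

def record_failed_attempt(path=STATE_FILE):
    """App side: count failed PIN entries locally and lock after three."""
    with open(path) as f:
        state = json.load(f)
    state["failed_attempts"] += 1
    state["locked"] = state["failed_attempts"] >= 3
    with open(path, "w") as f:
        json.dump(state, f)
    return state["locked"]

def reset_state(path=STATE_FILE):
    """User/attacker side: the same file is writable, so the lock is trivially undone."""
    with open(path, "w") as f:
        json.dump({"failed_attempts": 0, "locked": False}, f)
```

Because nothing server-side or hardware-backed enforces the counter, editing the file re-opens the PIN to unlimited guessing, which is why critics asked why secure hardware features were not used to anchor this state.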


UK threatens jail for tech executives over failure to remove non-consensual intimate images

The UK government is planning measures that could make senior technology executives face criminal charges, including prison sentences, if their companies fail to remove non-consensual intimate images when required by regulators.

The move builds on existing obligations that already require platforms to take down such material within strict timeframes or face significant penalties, including fines of up to 10% of global turnover or even service blocking. 

Why does it matter? The latest step goes further: instead of relying solely on corporate sanctions, it introduces personal criminal accountability at the executive level. This type of liability is likely to accelerate compliance in ways that financial penalties alone have not, and may serve as an example to other jurisdictions. 

Zooming out. The policy is part of a broader tightening of the UK’s online safety framework, driven by persistent concerns over revenge porn and the rapid proliferation of AI-generated intimate imagery. 


EU blocks Meta’s WhatsApp third-party AI access changes with interim antitrust measures

The European Commission has issued a supplementary charge sheet to Meta (called Supplementary Statement of Objections), outlining concerns over potential restrictions on third-party AI assistants’ access to WhatsApp. 

Previously, Meta had decided to reinstate access to WhatsApp for third-party AI assistants for a fee. However, the Commission has preliminarily found that these measures remain anticompetitive and has now issued interim measures to prevent the policy changes from causing serious harm to the market.

The interim measures would stay in effect until the Commission concludes its investigation and issues a final decision on Meta’s conduct.



LOOKING AHEAD

The 29th session of the Commission on Science and Technology for Development (CSTD) is scheduled to take place from 20 to 24 April 2026 at the Palais des Nations in Geneva, Switzerland.

For its 29th session, the programme will address the priority theme of ‘Science, Technology and Innovation in the Age of Artificial Intelligence’ and will also review progress in the implementation of and follow-up to the outcomes of the World Summit on the Information Society at the regional and international levels.

The session will include presentations on technical cooperation activities and the work of the multistakeholder Working Group on Data Governance, as relevant to development objectives. Participation is expected from representatives of national governments, international organisations, civil society and the private sector.


READING CORNER

An analysis of why the EU AI Act’s high-risk obligations are delayed by 16 months and how US federal intervention is dismantling state-level AI safety laws, creating a global governance vacuum.

Weekly #257 AI meets cybersecurity as project Glasswing takes flight


3 – 9 April 2026


HIGHLIGHT OF THE WEEK

AI meets cybersecurity as project Glasswing takes flight

This week, a veritable who’s who of tech—Amazon, Apple, Google, Microsoft, NVIDIA, and a dozen other giants—joined the Anthropic-led cybersecurity project Glasswing.

The launch partners will use Anthropic’s unreleased Claude Mythos Preview as part of their defensive security work, a tool the company claims can already identify software vulnerabilities at a level surpassing that of most human experts.

The premise is straightforward and difficult to dispute: If AI systems can find and exploit vulnerabilities at scale, then those same capabilities should be deployed defensively, before less scrupulous actors gain access. Anthropic frames this as a narrow window of opportunity. Mythos Preview, it argues, has already uncovered thousands of high-severity vulnerabilities across major operating systems and browsers—an assertion that, if accurate, signals a step-change in the automation of software exploitation.


Yet the announcement also raises questions that go beyond the promises.

There is the question of verification. Claims that a model can ‘surpass all but the most skilled humans’ at vulnerability discovery are inherently difficult to evaluate externally, particularly when the system itself is not publicly available. 

A second, more systemic issue is coordination. If AI accelerates the rate at which vulnerabilities are found, it may also overwhelm remediation efforts, effectively creating a patching bottleneck.

The model remains unreleased, accessible only to a curated group of partners and selected infrastructure maintainers. This controlled access concentrates a powerful capability in the hands of a small set of actors. Smaller vendors, public institutions, and under-resourced open-source projects may benefit indirectly from disclosed fixes, but they are unlikely to operate on equal footing.

It is also worth noting that all core partners in Project Glasswing—from Amazon Web Services and Google to Microsoft, Apple, and Cisco—are headquartered in the United States. That matters, because access to the most sensitive capability—the model itself—appears tightly governed and selectively distributed. Even if non-US entities participate, they are unlikely to do so on equal terms. It reflects where frontier AI development and much of the global cybersecurity industry are currently anchored, but it also reinforces the geopolitical framing that increasingly surrounds these technologies. 

That said, it would be misleading to see this as purely exclusionary. If the initiative results in patched vulnerabilities, improved open-source security, and shared findings, its effects will be globally distributed—whether or not governance is.

IN OTHER NEWS LAST WEEK

This week in AI governance

South Korea–France. South Korea and France are deepening cooperation through a new strategic AI and technology partnership, aimed at strengthening joint research, industrial collaboration and standard-setting across emerging technologies. The initiative reflects a broader effort to align capabilities in semiconductors, data infrastructure and advanced computing, while positioning both countries more competitively in the global AI landscape.

The USA. A federal appeals court in Washington, D.C. has declined to block the Pentagon’s national-security blacklisting of Anthropic, allowing the designation to remain in force while litigation continues. The ruling contrasts with a separate decision by a California judge who had earlier blocked part of the government’s action, highlighting a growing judicial split over the unprecedented move.

OpenAI has released a policy document entitled ‘Industrial Policy for the Intelligence Age: Ideas to Keep People First’. The document argues that while superintelligence promises extraordinary benefits, it also carries serious risks: job displacement, misuse by bad actors, loss of human control, and concentration of power and wealth. The proposals are organised into two sections. First, building an open economy: giving workers a voice in AI deployment, treating AI access as a fundamental right, creating a ‘Public Wealth Fund’ to give citizens direct stakes in AI growth, converting efficiency gains into shorter workweeks, and building adaptive safety nets that trigger automatically when disruption occurs. Second, building a resilient society: developing containment playbooks for dangerous AI, creating verifiable trust stacks for content, strengthening independent auditing of frontier models, mandating incident reporting, and building international information-sharing networks.

The EU. If you want to let European lawmakers know what you think of the implementation of the bloc’s AI Act, there is still a bit of time. The feedback period on the draft Implementing Regulation related to the oversight of general-purpose AI models under Regulation (EU) 2024/1689 (the EU AI Act) will remain open until tonight, 9 April (midnight).


US Supreme Court narrows ISP liability, sharpens focus on intent with AI implications

A unanimous US Supreme Court ruling this week has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement.

Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement. 

The decision could have implications beyond ISPs, particularly in the escalating copyright battle between publishers/authors and generative AI firms. 

The key distinction raised is that broadband networks function as neutral conduits, whereas large language models are built specifically to produce fluent, human-like writing, including prose, poetry, and dialogue that can resemble the work of human authors. If a subscriber uses broadband to pirate a novel, the ISP did not build its network to enable that outcome; an AI model prompted to write in a specific author’s style, by contrast, is designed to fulfil that request.


US agencies warn of cyber intrusions into critical infrastructure systems

A joint cybersecurity advisory issued by the Federal Bureau of Investigation, Cybersecurity and Infrastructure Security Agency, National Security Agency, and several sector-specific partners warns US organisations of an ongoing campaign by actors targeting industrial control systems across US critical infrastructure.

The activity focuses on internet-exposed operational technology (OT), particularly programmable logic controllers (PLCs), which are widely used to automate industrial processes in sectors such as energy, water and wastewater systems, and government services.

According to the advisory, the attackers exploit the PLCs’ direct exposure to the internet, gaining initial access by scanning for internet-facing PLCs and connecting through commonly used industrial communication ports. Once access is established, the actors interact with device project files and manipulate data displayed on human-machine interfaces (HMI) and supervisory control and data acquisition (SCADA) systems. This enables them to disrupt industrial processes in real time. In several confirmed cases, such intrusions have resulted in operational disruption and financial loss, underscoring the tangible, physical-world impact of these cyber operations.

The campaign appears to be part of a broader escalation in Iranian-linked cyber activity, likely tied to geopolitical tensions involving the USA and its allies. The advisory links the activity to previously identified advanced persistent threat (APT) groups associated with Iran’s Islamic Revolutionary Guard Corps (IRGC).


Greece sets ‘digital age of majority,’ moving to ban under-15s from social media

Greece is moving to tighten restrictions on minors’ use of social media, with legislation expected later this year that would introduce a ban for children under 15. The measure is set to take effect on 1 January 2027 and is intended to be a framework that changes how platforms operate. 

Platforms would be required to implement robust age verification mechanisms, including the re-verification of existing accounts, with oversight provided by national regulators such as the Hellenic Telecommunications and Post Commission (EETT). 

The measure applies to social networking services where users create profiles, publish content, and interact publicly, while excluding private communication services.

The big picture. The proposal reflects an emerging policy pattern across Europe, where governments are increasingly willing to intervene more directly in platform access for minors. Athens is also seeking to elevate the issue at the European level. ‘Our goal is to push the European Union in this direction as well,’ Prime Minister Kyriakos Mitsotakis noted in a video about the measure posted on TikTok.


Brazil launches first national centre for assistive technology

Brazil has inaugurated its first Center for Access, Research and Innovation in Assistive Technology (Capta) at the Benjamin Constant Institute in Rio de Janeiro. Run by the Ministry of Science, Technology and Innovation (MCTI) under the National Plan for the Rights of People with Disabilities, the centre aims to foster the development, experimentation, and dissemination of assistive technologies that enhance autonomy, inclusion, and quality of life for people with disabilities.

The launch marks the first of several planned centres nationwide to expand access to these technologies.

Yes, but. The long-term impact will depend on sustained investment and the ability to scale these centres nationwide.


UPCOMING EVENTS

WTO deadlock, AI boom: Unpacking MC14 and looking ahead

Diplo, the Digital Trade and Data Governance Hub, and the Geneva Internet Platform will co-organise a webinar on 14 April (next Tuesday) that unpacks digital trade developments from the 14th WTO Ministerial Conference (MC14) in Yaoundé and looks ahead to their implications for the rapidly expanding AI economy. As digital trade rules take shape through multiple channels, understanding the intersection between trade policy and AI governance becomes increasingly urgent. The speakers will explore:

  • What to expect at the next General Council meeting in May and beyond
  • The main outcomes and sticking points from MC14
  • What the lapse of the e-commerce moratorium means — and what it does not mean
  • How the plurilateral JSI e-commerce agreement may shape digital trade going forward
  • The specific implications for AI development, including data flows, tariffs on digital services, and regulatory coherence

Registration for the event is open.


READING CORNER

The European Union is progressing into the implementation phase of its Artificial Intelligence Act, with emerging obligations for providers of general-purpose AI models. Guidance from the European Commission and the AI Office outlines compliance expectations as the EU operationalises its risk-based AI governance framework.

Digital Watch newsletter – Issue 108 – Monthly, March 2026

Looking back at March 2026

In our March 2026 monthly newsletter, we looked at the deadlock at the WTO’s MC14 over the e-commerce moratorium, which led to its expiry even as a coalition proposed a plurilateral digital trade agreement. We examined what this means and what comes next.

Two recent verdicts by US juries found Meta and YouTube liable for harms caused to minors, including exposure to sexual content and social media addiction. Taken together, these cases move beyond questions of content moderation to the very design of the platforms.

The long-awaited Global Mechanism has finally been launched, creating the first standing UN forum on ICT security since 1998, but its inaugural session left many questions open about how it will work in practice. Here is why this matters and what to watch as the Mechanism takes shape.

Will we one day buy intelligence by the metre? That is far from certain, but the very idea raises questions about control, access, and how we will measure and consume intelligence in the future.

Plus: the main digital policy developments in March and a roundup from Geneva.


TECHNOLOGIES

A new five-year development plan approved by lawmakers in Beijing emphasises innovation and cutting-edge technologies to drive future economic growth and global leadership, prioritising AI, robotics, aerospace, biotechnology, and quantum computing while reducing dependence on foreign technologies. It also provides for increased funding, with science spending set to rise by around 10% per year and overall R&D spending by at least 7%.

The UK government has announced up to £2 billion in funding for quantum technologies, including more than £1 billion over the next four years, along with a new procurement programme called ProQure intended to foster the development of quantum computing in the UK. The funding will support several areas: more than £500 million for quantum computing, £125 million for quantum networks, and £205 million for quantum sensing and navigation, plus smaller allocations for research hubs, infrastructure, skills, and commercialisation.

The EU has launched a €180 million call for projects to strengthen the resilience of undersea internet cables by supporting backup systems, alternative routes, and redundancy measures. The funding aims to reduce the risk of outages and external threats to critical undersea infrastructure, reflecting the EU’s growing focus on digital resilience, cybersecurity, and technological sovereignty.

China has authorised NEO, a brain-computer interface developed by Neuracle, for use beyond clinical trials to help people with severe paralysis regain hand mobility. The implant captures brain signals when users imagine moving their hand and translates them into commands for a robotic glove; early trial results show improved ability to perform everyday tasks such as grasping objects, eating, and drinking.

SECURITY

US President Donald Trump has released his administration’s national cybersecurity strategy, which sets priorities across six policy areas: offensive and defensive cyber operations, federal network security, critical infrastructure protection, regulatory reform, leadership in emerging technologies (notably AI), and workforce skills development.

Mr Trump also signed an executive order the same day, directing the attorney general to prioritise cybercrime prosecutions, instructing agencies to review tools for combating international criminal organisations, and giving the Department of Homeland Security expanded training responsibilities. The strategy document runs to five pages of substantive text, which administration officials describe as deliberately general. The White House said more detailed implementation guidance would follow.

The pro-Iranian hacking group Handala has claimed responsibility for a cyberattack on the US medical device giant Stryker. The group said the attack was retaliation for a missile strike on a primary school in Iran. Stryker confirmed the cyberattack in a statement, saying that order processing, manufacturing, and shipments had been disrupted but that connected products were unaffected. The FBI has since seized four websites linked to Handala and to Iran’s Ministry of Intelligence and Security (MOIS).

Iran’s Revolutionary Guards have threatened to target major US technology companies, including Apple, Google, Meta, Microsoft, Intel, Oracle, Nvidia, Tesla, and Palantir, if further Iranian leaders are assassinated, accusing the companies of helping to identify assassination targets. The Iranian military also claimed to have targeted Israeli communications, telecommunications, and industrial centres in response to attacks on Iranian infrastructure.

Long constrained by a defensive security doctrine, Japan will introduce ‘cyber counterstrike’ powers from October. The change falls under Japan’s Active Cyber Defence law, adopted in 2025 and being implemented in stages through 2027.

The EU has imposed sanctions in response to cyberattacks targeting its member states and partners, blacklisting the Chinese companies Integrity Technology Group and Anxun Information Technology, the Iranian company Emennet Pasargad, and Anxun’s co-founders. The sanctions provide for asset freezes and travel bans on the individuals concerned. EU citizens and entities are also prohibited from making funds available to the designated companies.

Dutch authorities report that hackers suspected of links to Russia have launched large-scale phishing operations targeting diplomats, military personnel, government officials, and journalists. Rather than breaking the encryption of messaging apps, the attackers trick users into sharing verification codes or linking their devices, allowing them to take over accounts and access sensitive conversations.

Portuguese intelligence services have issued a similar alert, describing a global campaign by foreign state-backed actors seeking access to the messaging accounts of officials and others holding confidential information. Once they control an account, the hackers can read conversations, access shared files, and use the compromised profile to target further victims with new phishing attempts.

The EU has launched its ‘ProtectEU’ counter-terrorism agenda to sharpen vigilance against evolving threats, with a particular focus on how terrorists use digital tools such as social media, AI, encrypted platforms, crypto-assets, and drones. The plan combines stronger intelligence and Europol support, stricter enforcement of online content rules under the Digital Services Act (DSA), protection of public spaces and critical infrastructure, and closer international cooperation.

INTERPOL has launched a new global taskforce at the 2026 Global Fraud Summit, part of a more coordinated, data-driven response to the rapid worldwide expansion of financial fraud. The taskforce, set up jointly by the UK Home Office and INTERPOL, is codenamed Operation Shadow Storm. It will target fraud hubs and their links to cybercrime and human trafficking, using tools such as payment-blocking mechanisms and international intelligence-sharing networks. Its initial focus will be on dismantling criminal operations in Southeast Asia.

In parallel, major technology and consumer companies, including Google, Amazon, Meta, and OpenAI, signed the industry accord against online scams and fraud at the 2026 Global Fraud Summit. The companies committed to deploying proactive safety measures and AI-based detection systems; strengthening information sharing between the private sector and law enforcement to better identify and combat fraud; improving resilience through advanced defensive technologies and rapid-response mechanisms; and raising public awareness to help people recognise and avoid scams.

The EU has failed to reach agreement on extending the temporary rules that allow online platforms to detect child sexual abuse material, meaning the current framework is set to expire in April. The existing rules, in force since 2021, let technology companies voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material. But negotiations between the European Parliament and member states stalled on key questions, notably whether the measures should apply to encrypted services. Attention now turns to the long-delayed permanent framework (the Child Sexual Abuse Regulation).

Brazil has enacted a new law to strengthen children’s online protection, marking a major shift in the country’s regulation of digital platforms. The legislation, known as the ECA Digital, imposes obligations such as age verification, stricter content oversight, and mechanisms to remove harmful content involving minors without requiring a court order. The law also targets platform design, requiring companies to limit features that can encourage compulsive use by children, such as excessive notifications, profiling for targeted advertising, and design elements that prolong user engagement. It allows the authorities to impose warnings and fines of up to $10 million for violations. In serious cases, courts can order the suspension or banning of platforms operating in Brazil.

Indonesia’s minister of communication and digital affairs has signed a government decree banning children under 16 from holding accounts on high-risk digital platforms. The measure would reportedly cover YouTube, TikTok, Facebook, Instagram, Threads, X, Bigo Live, and Roblox. Implementation will begin gradually from 28 March.

In Ecuador, the issue is framed through a security lens. A proposed ban for under-15s is driven by fears that criminal groups are using the platforms to contact and recruit minors. This shifts the rationale for the measure from well-being to crime prevention, positioning social media restrictions as part of a broader security response.

A proposal to ban under-16s from social media was rejected by British MPs, by 307 votes to 173. However, a government-backed pilot is currently testing different forms of restriction (full bans, time limits, and curfews) over six weeks. Participants will be surveyed before and after the experiment to assess behavioural and practical outcomes, including how easily the restrictions can be enforced and how readily teenagers circumvent the controls.

Australia’s eSafety Commissioner reports that platforms have removed or restricted access to millions of accounts belonging to users under 16, in line with the country’s social media ban, but that serious compliance problems persist, including weak age-verification systems and reporting tools that are difficult for parents to use. Investigations into five major platforms continue, with enforcement decisions expected by mid-2026.

Austria is considering banning social media use by children under 14, joining a broader international trend towards stricter online safety rules for young people. The government says the measure is intended to protect children from the platforms’ addictive design, violence, disinformation, and harmful beauty standards. It also plans to introduce a new school subject on media and democracy to strengthen digital literacy.

France is examining a new bill that would ban children under 15 from social media, while proposing a digital curfew for older teenagers and extending school mobile phone restrictions to secondary schools. The legislation is part of a broader push to tackle online harms affecting young people, including cyberbullying, harmful content, and excessive screen exposure, and aligns with similar child safety measures already in place in countries such as Australia.

A Swiss survey has revealed deep public distrust of large technology companies such as Google, TikTok, and Meta, with most respondents viewing them as profit-driven, politically influential, and a source of dependence on foreign powers. At the same time, a majority still sees digitalisation as broadly positive, but wants the state to play a bigger role in ensuring that AI, algorithms, and digital platforms do not harm democracy or society.

Australia has begun enforcing new online child safety rules that require platforms, including social networks, app stores, gaming services, search engines, pornography sites, and AI chatbots, to implement age-verification measures and prevent minors from accessing harmful or explicit content, including sexualised or self-harm-related chatbot interactions. The eSafety Commissioner oversees enforcement, and companies face fines of up to AUD 49.5 million per breach for non-compliance.

European Commission President Ursula von der Leyen has convened the first meeting of the dedicated expert group on the online safety of children, announced in her 2025 State of the Union address. The group will provide specialist advice on protecting and empowering children online and will examine the possibility of harmonising age limits for access to social media. It aims to present a report with recommendations to the Commission president by summer 2026.

ECONOMY

The EU and Canada have opened negotiations on a digital trade agreement to develop the digital dimension of their existing trade relationship, with the aim of establishing clearer rules for cross-border digital trade. The talks cover issues such as paperless trading, recognition of electronic signatures and digital contracts, duty-free treatment of electronic transmissions, and restrictions on data localisation and forced source-code transfer requirements, while preserving governments’ ability to regulate the digital economy.

The EU and Australia have consolidated their ties through a new security and defence partnership, the conclusion of free trade agreement negotiations, and the launch of talks on Australia’s association with Horizon Europe. Together, these initiatives aim to broaden cooperation on cybersecurity, crisis management, AI and other emerging technologies, data flows, critical raw materials, and trade, marking a wider strategic alignment that goes beyond the purely economic.

Australia is moving towards a national licensing regime for cryptocurrency exchanges and tokenisation platforms within its financial services regulation, following a Senate committee’s recommendation to pass the Digital Assets Regulatory Framework Bill 2025. The proposal would bring more of the crypto sector under formal regulation, although industry associations warn that overly broad definitions could inadvertently capture some infrastructure providers and wallet-related services.

Meta has announced that third-party AI chatbots will again be allowed to operate via WhatsApp in Europe for a fee, reversing earlier restrictions that limited access for rival chatbot services on the platform. Under the new arrangement, companies will be able to distribute general-purpose AI chatbots through the WhatsApp Business API for 12 months. The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform’s ecosystem.

Google will overhaul its Play Store rules after settling a long-running dispute with Epic Games, the maker of Fortnite. The changes include cutting commissions on in-app purchases to 20%, adding a 5% fee for developers using Google’s billing system, and reducing subscription fees to 10%, while making it easier to install alternative app stores on Android. As part of the settlement, Epic will bring Fortnite back to the Play Store while continuing to develop its own Android app store.

The WTO meeting ended without agreement on extending the moratorium on e-commerce customs duties, while a group of members proposed a separate digital trade agreement. Read more in our dedicated article.

The ECB has launched Appia, a strategy to develop tokenised financial markets in Europe, with Pontes as a distributed ledger technology (DLT)-based settlement solution that will link tokenised market infrastructure to the Eurosystem and enable pilot projects from the third quarter of 2026. The plan aims to ease the transition from traditional finance to tokenised markets while preserving financial stability, central bank settlement, and interoperability; it is now open for public consultation.

A joint ILO–World Bank study finds that AI will have uneven effects on employment across 135 economies. Advanced economies are more exposed, particularly in administrative and professional occupations, while developing countries risk disruption without comparable productivity gains, as they often lack the infrastructure, internet access, and skills needed to benefit. The report stresses that the consequences will depend less on AI itself than on connectivity, training, job design, and social protections.

LEGAL

The World Data Organization (WDO) has been launched in Beijing as a new international non-profit platform focused on global data development and governance, with the stated aim of narrowing the global digital divide, supporting the digital economy, and improving international cooperation on issues such as cross-border data flows, privacy protection, and security. The initiative is part of a broader push to embed data governance more structurally in global digital policymaking.

A Luxembourg court has overturned Amazon’s €746 million GDPR fine, not because the alleged privacy violations have gone away, but because it found procedural flaws in the regulator’s sanction proceedings, notably in assessing Amazon’s degree of responsibility. The case will now be sent back to Luxembourg’s data protection authority for re-examination.

Italy’s data protection authority has fined Intesa Sanpaolo €31.8 million after an employee repeatedly accessed thousands of customer accounts without authorisation and the bank failed to detect it in time. Regulators said the case exposed serious shortcomings in internal oversight, risk controls, confidentiality safeguards, accountability, and breach notification.

DEVELOPMENT

Malta has launched the SMART Food project, a Maltese-Italian initiative that uses AI and blockchain to build a digital platform for tracking food products from production to consumption. The aim is to improve traceability, transparency, safety, sustainability, and trust in the agri-food sector, while helping consumers and producers access real-time product information.

China has revised the rules for its 2026 national agricultural census, broadening its scope to cover not only agriculture but also rural industrial development and village construction, while introducing new data collection methods such as remote sensing. The updated rules also strengthen data quality controls, confidentiality obligations, and penalties for falsifying statistics, reflecting a push to diversify rural data collection and strengthen state oversight.

The UK and the Philippines have concluded a new partnership agreement to expand cooperation on digital education and education technology, pairing UK expertise and financial support with the Philippines’ education priorities. The initiative aims to improve access to digital learning tools, skills development, and edtech, while strengthening broader bilateral ties in innovation and capacity building.

SOCIOCULTURAL

The first transparency reports under the Code of Conduct on Disinformation linked to the EU's Digital Services Act have been published; signatories include major platforms and civil society actors, which describe the measures they say they are taking against disinformation, notably around the war in Ukraine and electoral integrity. The reports matter because they are the first published since the Code was formally recognised under the Digital Services Act (DSA) in February 2025, marking the shift from a largely voluntary regime to a more structured co-regulatory one built on commitments, reporting, and audits.

The EU is reviewing X's proposal to change its Blue-Check verification system after finding that paid verification without rigorous identity checks could mislead users under the Digital Services Act. X was fined €120 million in December and given 60 working days to submit remedies, which the Commission is now assessing while the company also challenges the decision in court.

UNESCO launched a South Africa-focused research initiative on the governance of harmful online content under its EU-backed Social Media 4 Peace programme, examining hate speech, disinformation, regulatory gaps, and platform governance. The goal is to produce concrete, rights-based recommendations to strengthen digital governance, platform accountability, freedom of expression, and access to information in the local context.

Spain launched HODIO, a digital tool for measuring hate speech on social media. Combining AI, data analysis, and human expertise, the tool will publish semi-annual reports ranking platforms by users' exposure to harmful content, with the aim of informing policymaking and prompting companies to act. Critics, however, have raised concerns about HODIO's transparency and about how the authorities will define and classify hate speech, warning that poorly defined criteria could undermine freedom of expression.


National frameworks, strategies, and guidelines

USA. The US government unveiled a national AI strategic framework setting out a comprehensive approach to AI across federal agencies. The framework sets priorities for responsible AI development, data governance, workforce training, and international collaboration, while emphasising ethical safeguards, public-interest outcomes, and national security. It also calls for accelerated investment in AI research and deployment, and for coordinated oversight mechanisms to ensure transparency and accountability across federal AI systems.

Egypt. On 14 March 2026, Egypt published its National Guidelines for Trustworthy and Responsible AI. The guidelines serve as a national reference for the responsible development, deployment, and oversight of AI in the public and private sectors, ensuring safe, ethical, and transparent use of AI while supporting innovation in line with Egypt's Vision 2030 and its national AI strategy. Complementing the national AI governance framework, which defines what must be governed, the guidelines specify how to comply, offering methodologies, indicators, and checklists for implementing ethical principles. Aimed at data scientists, compliance officers, and developers, they provide practical guidance on protecting individual rights, promoting societal well-being, strengthening accountability and transparency, and fostering safety-driven innovation. They also align Egypt with international standards and engage government entities, private companies, and community actors in responsible AI governance.

South Korea. South Korea unveiled a national strategy to become one of the world's top three AI powers by 2028. The plan combines investment in digital infrastructure, data systems, and next-generation connectivity. Authorities aim to expand networks by building out 5G capacity and preparing for commercial 6G deployment by 2030. Cybersecurity and data integration are also core priorities to support a stronger digital ecosystem. The strategy provides for talent development at all levels of education and investment in key technologies such as semiconductors and quantum computing. AI adoption is expected to spread across all sectors, including manufacturing, healthcare, and agriculture.

Sovereignty

The EU. Tensions are emerging within the EU over AI infrastructure investment: France, Poland, Austria, and Lithuania are pushing for part of the €20 billion AI Gigafactory project to be reserved for European technologies, while Germany is sceptical about tying the project to digital sovereignty goals. Meanwhile, Germany is pursuing a major expansion of its domestic data centres and AI processing capacity, backed by regulatory reforms, tax incentives, and land allocations to attract investment, with the aim of reducing dependence on foreign providers.

Russia. The Russian government is proposing rules that could ban or restrict foreign AI tools such as ChatGPT, Claude, and Gemini if they do not store Russian users' data domestically and comply with Moscow's regulatory requirements. The proposals, from the Ministry of Digital Development, advance Russia's push for a sovereign internet, intended to shield citizens from 'hidden manipulation' and uphold 'traditional Russian spiritual and moral values'. Under the draft rules, cross-border AI systems that transmit user data abroad would face restrictions, while foreign models that can run entirely within Russian infrastructure, such as Qwen or DeepSeek, could be deployed safely.

Content policy

The EU. The European Commission published a second draft of its code of practice on marking and labelling AI-generated content, part of efforts to help companies comply with the transparency requirements of Article 50 of the EU AI Act. Section 1 of the code focuses on providers of generative AI systems and proposes a tiered approach to marking AI-generated content, including digitally signed metadata, imperceptible watermarks, and, optionally, fingerprinting or logging. Providers must also make detection tools available so that users and authorities can verify whether content has been AI-generated or manipulated. Section 2 addresses deployers of AI systems and requires clear disclosure, through visible and accessible labels, when deepfakes or AI-generated text intended to inform the public have been artificially created or manipulated.

The European Council endorsed proposals to ban AI-generated child sexual abuse material (CSAM), adjust compliance timelines for high-risk AI, and streamline the AI Act, including exemptions for certain SMEs, registration obligations, and clarified oversight responsibilities. The measures form part of Europe's broader effort to secure sovereign AI infrastructure and ensure safe, responsible AI deployment.

Netherlands, France. A Dutch court ordered xAI and its chatbot Grok to refrain from creating or distributing non-consensual sexual images. The ruling requires Grok's operators to implement technical measures blocking prompts or outputs likely to generate non-consensual intimate images. The decision was framed as necessary to uphold personal rights and dignity in the digital age, setting a precedent that could influence European courts confronting AI-driven harms.

Separately, the Paris prosecutor's office said the controversy over Grok's sexual deepfakes may have been deliberately amplified. The alleged aim was to inflate the value of X and xAI ahead of June 2026, when the new entity resulting from the SpaceX-xAI merger is expected to go public.

Safety

Australia. The eSafety Commissioner found that AI companion chatbots, including Character.AI, Nomi, Chai, and Chub AI, are failing to protect children from harmful content, with inadequate safeguards against sexually explicit material and child sexual exploitation. Most platforms relied on self-declared age verification, lacked meaningful monitoring of AI inputs and outputs, and did not consistently provide links to crisis or mental health support services. Commissioner Julie Inman Grant warned that, as children increasingly turn to AI companions for emotional support, the absence of robust safeguards around self-harm, suicide, and illegal content poses serious risks, with non-compliance exposing providers to civil penalties under Australia's age-restricted content codes.

The UK. The Secretary of State for Science, Innovation and Technology called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade. The minister urged tech companies to implement Ofcom's recommendations in 'A safer life online for women and girls', which outline steps such as conducting risk assessments focused on women and girls, assessing abuse-prone features before launch, setting strong default privacy settings, demonetising content that incites violence, limiting the visibility of misogynistic content in search results and recommendation feeds, and introducing rate limits to counter coordinated harassment. The guidelines are expected to be implemented by the end of 2026.

USA. The US government faces two lawsuits from the AI company Anthropic after the Pentagon classified the firm as a supply-chain risk, effectively excluding its technology from defence contracts.

The Department of Justice argues that the designation is lawful and justified on national security grounds, citing Anthropic's refusal to allow its AI to be used for autonomous weapons and domestic surveillance. Anthropic counters that the measure is unlawful and retaliatory, targeting its policy positions rather than any genuine security risk.

In the California case, a federal judge temporarily barred the government from enforcing the designation. The court held that Anthropic's conduct did not meet the legal criteria under Section 3252, which is limited to covert hostile threats such as sabotage or subversion of systems, not public policy positions or contract disputes. The ruling also highlighted procedural failings, including an inadequate risk assessment, the absence of interagency consultation, and the failure to consider less restrictive measures.

The judge also raised constitutional concerns, noting that the designation may have been influenced by Anthropic's statements and that the company was likely denied due process. Evidence of immediate, serious harm, including lost contracts, reputational damage, and disrupted business relationships, justified a preliminary injunction, though a final ruling could take months.

Meanwhile, Anthropic has brought a second case in Washington, D.C., challenging its supply-chain classification before a three-judge panel of the D.C. Circuit Court of Appeals, specifically contesting the legal basis invoked under the federal supply chain security statute (FASCA). The litigation has drawn broad support from the tech sector, with companies including Microsoft, Google, Amazon, and OpenAI backing Anthropic's case through amicus curiae briefs. Industry leaders warn that the government's classification could set a precedent that destabilises the US AI ecosystem and disrupts vendors working with both public- and private-sector AI systems.


'We envision a future where intelligence will be a public utility, like electricity or water, and people will buy it from us through a meter to use however they see fit,' OpenAI CEO Sam Altman said recently.

At first glance, this might sound like a vision of empowerment: on-demand access to superhuman reasoning, available to anyone with enough money to pay for it. But Altman's metaphor is telling. Utilities do not belong to the community; they are controlled by powerful providers who set the rates, the terms, and the infrastructure.

Our knowledge is already being commodified by tech companies and the advertising industry. But what OpenAI's CEO hints at is a world in which intelligence itself is entrusted to a handful of platforms.

AI's capture of intelligence challenges a pillar of civilisation built over millennia: that knowledge defines what it means to be human.

Altman is thus not merely describing a business model; he is sketching a new social order, one in which intelligence is centralised, privatised, and then sold back to humanity by the big AI companies.


This future is not inevitable. The struggle over human intelligence and knowledge, that is, over who owns the capacity to think, know, and decide, is not yet over.

The real alternative to having our knowledge captured and metered back to us is not to renounce AI; it is to treat AI as an extension of our personal knowledge, shared across communities, countries, and humanity as a whole, according to our preferences.

Communities, universities, companies, and countries can build bottom-up AI rooted in their own languages, values, and knowledge systems. Open-source models have made human-centred AI technically feasible and financially affordable. The result would be a distributed ecosystem in which AI strengthens human communities instead of subordinating them.

This text is adapted from Dr Jovan Kurbalija's blog post 'The war we do not see: the battle for the future of human knowledge'.


Two recent verdicts from US courts are beginning to redraw the limits of social media platforms' liability, with implications that reach well beyond the individual cases.

In New Mexico, a jury ordered Meta to pay $375 million after finding that the company had misled users about the safety of its platforms for children. The lawsuit, brought by Attorney General Raul Torrez, accused Meta of violating state consumer protection laws by misrepresenting the safety of its platforms for minors while building features and algorithms that, prosecutors argued, drive prolonged use and expose children to significant risks. Those risks include addiction, exposure to harmful sexual content, unwanted private communications from adults, sleep disruption from compulsive use, and environments where predators can operate with relative ease. Jurors reviewed internal studies and testimony from former employees, including whistleblower Arturo Béjar, suggesting the company knew of these risks but failed to adequately warn the public or act to mitigate the harms. Meta rejected the verdict and plans to appeal.

Meanwhile, a Los Angeles judge reached a similar conclusion in a different context, finding that Meta and Google-owned YouTube had been negligent in the design and operation of their platforms in a case centred on social media addiction. The lawsuit, brought by a young woman identified as K.G.M., argued that compulsive use of the platforms during her adolescence contributed to depression, anxiety, and body dysmorphia. The judge sided with the plaintiff, awarding $6 million in damages and apportioning 70% of the liability to Meta and 30% to Google. Both companies said they would appeal, arguing that mental health outcomes cannot be attributed to a single platform.

Why does it matter? The financial penalties in these cases are modest for companies of this size. The broader significance of the rulings lies elsewhere.


Historically, platforms have relied on legal protections, notably Section 230 of the US Communications Decency Act, to shield themselves from liability for user-generated content. These rulings, however, are beginning to test a different theory: that liability can flow not only from what users post, but from how platforms structure, recommend, and amplify that content.

The distinction matters because it strikes at the heart of the current social media business model. Platforms such as Meta and Google are designed to maximise user engagement (time spent, interactions, and content consumption), which in turn drives advertising revenue. To achieve this, they rely on recommender systems, frictionless interfaces, and behavioural design features such as autoplay, infinite scroll, and push notifications. These are not incidental features; they are central to how the platforms retain users and monetise attention.

The emerging legal argument is that some of these design choices may actively contribute to harm, particularly for minors. The New Mexico case focused on exposure to harmful and exploitative content; the Los Angeles case on compulsive use and its mental health effects. But both converge on a similar point: the platform's architecture itself, and not just isolated content failures, can create foreseeable risks.

If this reasoning takes hold in the courts, it would put new pressure on tech companies. The issue is not the size of any single fine, but the cumulative effect of thousands of similar lawsuits, rising compliance costs, and the possibility that precedent-setting rulings redefine which design practices are acceptable. Engagement-maximising systems, long regarded as a competitive advantage, could become a source of legal vulnerability.

This creates a structural tension. Reducing harm may require curbing precisely the features that make platforms so effective at capturing attention. Even modest declines, applied across billions of users, can translate into significant revenue impacts.

The way forward. Companies are unlikely to abandon their core models outright; adaptation is the more probable response. That could mean re-optimising algorithms towards safer forms of interaction, segmenting products by age with stricter defaults for minors, and investing in more robust safety and audit mechanisms. A gradual shift towards other revenue streams, such as subscriptions, creator monetisation, or commerce integrations, could also reduce reliance on purely attention-based advertising.

Legal strategy will also play a role. Meta and Google are both appealing the verdicts, and future rulings will determine how far courts are willing to go in attributing harm to design choices. Companies are likely to strengthen disclosures, expand parental controls, and document internal risk assessments to demonstrate due diligence. Such measures may not eliminate liability, but they can shape how it is interpreted.

Ultimately, the key question is whether these cases are outliers or the start of a broader legal shift.


The UN launches a global mechanism on ICT security, but its future remains uncertain

After nearly three decades of stop-and-go cybersecurity negotiations at the UN, the long-awaited Global Mechanism on ICT security has finally come into being.

It is the first standing forum of its kind since ICT security discussions began in 1998, and its mere existence says much about how far the talks have come.

But while the launch was hailed as a breakthrough, the organisational session quickly brought things back down to earth. Beyond what had already been sketched out in Annex C and in the final report of the Open-Ended Working Group (OEWG), how the Mechanism would actually organise itself remained unclear.


The session raised plenty of questions about structure, priorities, and procedures, but produced few concrete answers, leaving a lingering doubt: the Mechanism now exists, but what it will do, and how, remains largely to be defined.

A new body, a new mandate, and a newly elected chair, Egriselda López of El Salvador, brought renewed optimism to the Global Mechanism's first organisational session. Yet within minutes it became clear that the Global Mechanism was not starting from a blank page, but rather inheriting the OEWG's long list of disagreements.

Russia opened the discussion by contesting the legitimacy of the chair's appointment, arguing that it had been steered solely by UNODA and had thus limited states' participation in the process. It used the opportunity to stress that all decisions under the new process must be consensus-based and fully intergovernmental.

Key items on the agenda

On the provisional agenda for the Mechanism's July session, the chair circulated a draft structured around the five pillars of the framework for responsible state behaviour in the use of ICTs. However, Iran and Russia argued that the wording of agenda item 5 did not faithfully reflect paragraph 9 of Annex C of the OEWG's final report and asked for a correction during the session. The EU and Canada rejected the request, arguing that the draft already referenced all relevant documents and that singling out one paragraph would itself amount to renegotiation. The USA reserved its position entirely, preferring that the July plenary adopt its own agenda. No consensus was reached, and the chair will continue consultations before July.

The Mechanism has inherited many unresolved substantive debates from its predecessors.

On international law, it is widely accepted that much remains to be done, but views diverge on how to go about it. Most delegations came out clearly in favour of strengthening the existing normative framework and reaffirming the applicability of the UN Charter to cyberspace.

A large majority of states favoured keeping the Mechanism action-oriented, with a strong emphasis on practicality and on implementing the agreed frameworks on international law, norms, confidence-building measures, and capacity building (Chile, Nauru, Portugal, Switzerland, the UK, Estonia, Italy, Australia, the Democratic Republic of the Congo, Antigua and Barbuda, Sudan, Vanuatu, Albania, Vietnam, India, Greece, Rwanda, the Dominican Republic, North Macedonia, Kiribati).

In particular, some delegations argued for applying the framework to concrete scenarios in order to drive implementation (Japan, the Netherlands, the UK, Sudan). China was the only delegation to stress that developing the framework is just as important as implementing it.

The EU highlighted the norms checklist, a hotly debated issue under the previous mechanism, as an area needing further work.

For many states, however, a fundamental concern remains: capacity-building initiatives risk stalling without reliable funding. Many delegations, mainly from developing countries, therefore urged the Global Mechanism to prioritise operationalising the UN Voluntary Fund, which had been proposed but left pending by the OEWG.

Dedicated thematic groups: who, what, and how

The often overloaded agendas and lengthy delegation statements at OEWG plenaries left little room for in-depth technical analysis, frustrating many delegations at the gap between consensus language and concrete action.

The dedicated thematic groups (DTGs) were created precisely to address this, establishing an informal technical forum to advance practical initiatives already endorsed, such as the Global ICT Security Cooperation and Capacity-Building Portal. The practicalities of setting up and running the groups, however, are set to be hotly contested, as they will shape what gets on the agenda, who drives it, and whether the new system can deliver concrete results over the long term.

Who will lead the dedicated thematic groups (DTGs)?

The question that dominated the session and generated the most debate was who would appoint the co-facilitators of the two dedicated thematic groups. The chair proposed naming two co-facilitators per DTG, one from a developed country and one from a developing country, drawing on General Assembly practice whereby the chair appoints co-facilitators for intergovernmental processes. She signalled her intention to hold broad informal consultations before making the appointments, and committed to geographical balance, gender parity where possible, and relevant technical expertise as selection criteria.

Who holds these positions matters considerably: the co-facilitators will steer the groups' discussions, shape their agendas, and channel recommendations to the plenary.

A broad coalition of states backed the chair's approach, including the EU, speaking on behalf of its member states, and several like-minded countries such as France, Germany, Australia, the UK, the Netherlands, Switzerland, Japan, Egypt, Senegal, Nigeria, Malaysia, Moldova, and others. Egypt and Senegal were among the most direct, stressing that any delay in getting the Mechanism running would squander the intersessional period and erode its credibility, particularly in the eyes of developing countries eager to move from procedure to substance.

Un autre groupe d’États, mené par la Russie et soutenu par l’Iran, la Chine, la Biélorussie, le Nicaragua et Cuba, a fait valoir que la nomination des cofacilitateurs devait être approuvée par consensus par les États membres plutôt que décidée unilatéralement par la présidence. La Russie a soutenu que les cofacilitateurs des groupes de discussion thématiques (DTG) traitaient de questions politiques de fond et constituaient donc des responsables dont la nomination nécessitait un accord collectif. La Russie a également avancé un argument géographique : la désignation d’un cofacilitateur issu d’un pays développé et d’un autre issu d’un pays en développement par groupe de travail thématique (DTG) continue de favoriser de manière disproportionnée les États développés, qui représentent moins d’un cinquième des membres de l’ONU. L’Iran a ajouté que le projet de texte initial du GTCNL avait explicitement autorisé le président à nommer les facilitateurs des DTG, mais que cette disposition avait été délibérément supprimée au cours des négociations, ce qui témoignait d’un manque d’accord sur la question.

La présidente a confirmé son intention de consulter tous les États membres de manière informelle avant de présenter des candidats et a invité les délégations à faire preuve de souplesse compte tenu de l’urgence de mettre en route les travaux du mécanisme. La Russie a ensuite déclaré comprendre que les candidats seraient désignés à l’issue d’une large consultation, suivie d’une approbation par consensus, mais la présidente n’a ni confirmé ni infirmé cette interprétation.

La question est en fait reportée à la période intersessionnelle, ce qui signifie que la composition des équipes de direction du DTG reste en suspens et nécessitera la poursuite des efforts diplomatiques d’ici juillet.

What will the DTGs discuss?

A closely related debate concerned who decides which topics the groups will actually take up. Several Western and like-minded delegations (e.g. Germany, France, Canada, the UK, and Australia) maintained that this is a prerogative of the Chair and the co-facilitators, to be exercised in close consultation with states. These delegations proposed ransomware and the protection of critical infrastructure as natural starting points, citing how frequently both feature in national statements and OEWG discussions.

Iran and Russia insisted that topics must be determined by consensus among all member states. Argentina argued that the plenary should retain control of the agenda rather than cede too much responsibility to the co-facilitators.

Morocco, for its part, advocated a bottom-up model in which the DTGs define their own priority sub-topics from the outset, based on preferences expressed by member states, so as to preserve regional balance and ownership.

In that sense, the credibility of the DTGs rests on a delicate balance: they must be ambitious enough to turn discussion into action, yet focused enough on issues with broad support for their outputs to be endorsed in plenary.

No decision was taken. For private sector and civil society organisations with specific thematic priorities, the window remains open: states are currently receptive to suggestions on which topics the DTGs should prioritise.

Colombia presented a procedural proposal that drew broadly positive reactions from delegations. It recommended that:

  • DTG mandates be time-bound, with clearly defined and measurable outputs;
  • DTG 1 address specific topics in rotation rather than its entire mandate at once; and
  • DTG outputs systematically distinguish between recommendations enjoying consensus and those still under development.

Senegal added a complementary point: reports should capture both areas of agreement and divergence, preserving a record of the discussions even where no consensus is reached. Both proposals reflect a broader concern that, without structured outputs and clear timelines, the mechanism risks replicating the OEWG’s endless deliberations without producing actionable results.

How will the DTGs feed into the plenary?

Another question concerned how the DTGs’ work feeds into the plenary. Brazil made clear that, without a defined protocol for transmitting DTG reports to the plenary and formally accepting their recommendations, the groups risk becoming mere discussion forums disconnected from the mechanism’s official outcomes. Its proposed solution, which has yet to gather support, is to keep DTG discussions largely informal while adding a brief formal segment dedicated to decision-making.

Stakeholder participation

The role of non-governmental actors in the groups has been a long-standing point of contention, and arguably the most politically charged. The effective participation of relevant stakeholders remains uncertain.

Some delegations took a more conciliatory stance, acknowledging that stakeholders can improve the quality of deliberations (Sudan, Antigua and Barbuda) and contribute to more concrete outcomes (Vietnam, Dominican Republic), while stressing the importance of preserving the intergovernmental character of the process (Sudan, Vietnam).

Canada and like-minded states argued that the July 2025 consensus clearly provides for states to nominate experts for DTG briefings and for the full stakeholder community to take part in all DTG discussions.

Iran contested this, asserting that the agreed stakeholder modalities apply equally to the DTGs. Russia likewise argued that expert briefings from external stakeholders are a possibility rather than a standard feature, and that inviting external briefers requires member-state agreement on a case-by-case basis.

How this is resolved will directly determine the degree of access the private sector, the technical community, and civil society organisations have to the DTG process in practice.

What now?

The session closed without decisions on its two most consequential questions: the appointment of the co-facilitators and the plenary’s provisional agenda. The Chair will hold informal intersessional consultations on both points and will publish a programme-of-work document in all UN languages before July.

The Secretariat will open an annual stakeholder accreditation period in the coming weeks; stakeholders wishing to take part in the plenary sessions and review conferences can consult the Digital Watch Observatory web page, where we are tracking the process, for details.

The underlying tension remains unresolved, and how it is handled during the intersessional period will largely determine whether the July plenary can open with the mechanism’s operational foundations in place.

The Chair also confirmed the two key dates for 2026:

For stakeholders following these discussions or wishing to contribute to them, these are the dates to keep in mind when planning.

Deadlock at WTO: Moratorium lapse meets plurilateral momentum

At the 14th Ministerial Conference of the World Trade Organization (MC14) in Yaoundé, Cameroon, digital trade dominated the agenda through two distinct tracks, each pointing in a different direction and illustrating both the limits and the evolution of the multilateral system.

The moratorium on customs duties on electronic transmissions. This long-standing moratorium, renewed every two years since 1998, expired on 31 March after members failed to reach consensus on the length of a new extension.

While some members, notably the USA, sought a longer-term solution, others have traditionally argued for a shorter renewal period, reflecting a desire for caution given the rapid pace of technological change and the need to preserve future policy flexibility.

During MC14, Brazil was the leading voice for this position, stressing the need for caution in the face of developments such as AI and 3D printing, and suggesting that a shorter extension, with an option for review, would allow members to reassess as the digital landscape evolves. Efforts to find common ground ultimately failed for lack of time.

This outcome also meant that the broader discussions on WTO reform, which had been politically linked to the approval of the moratorium, remained unresolved.

This is not the first time the moratorium has lapsed; it also expired at the 1999 Seattle Ministerial Conference before being reinstated at Doha two years later. Its current lapse does not mean that tariffs will automatically be imposed.

It does, however, create policy space for countries to consider introducing tariffs if they are not bound by trade agreements prohibiting duties on electronic transmissions.

The plurilateral Agreement on Electronic Commerce. In parallel, however, a different dynamic unfolded. A coalition of 66 WTO members announced they would move forward with implementing the Agreement on Electronic Commerce concluded in 2024 under the Joint Statement Initiative on e-commerce (JSI), through interim arrangements.

A reminder: The WTO’s Joint Statement Initiatives (JSIs) allow a group of WTO members to advance on specific issues without waiting for the entire organisation to reach consensus. They are open to any WTO member.

Australia, Japan, and Singapore, as co-convenors of the JSI on e-commerce, confirmed that the pact, which aims to facilitate digital trade and prohibit customs duties on e-commerce transactions, will enter into force once 45 members have formally notified their acceptance.


What’s next for e-commerce discussions? Discussions on the moratorium, WTO reform, and the future of the Work Programme on e-commerce (WPEC) are expected to continue at the next General Council meeting, in May in Geneva.

In the meantime, JSI members will continue to push for the agreement’s incorporation into the WTO legal architecture.

The JSIs and their outcomes face opposition from a number of WTO members. These countries argue that the JSIs themselves lack legal status, as they were not launched by consensus. They likewise contend that JSI outcomes are not consensus-based and constitute neither multilateral nor plurilateral agreements within the meaning of Article IV of the WTO’s founding treaty, the Marrakesh Agreement.

India, for example, objected to the incorporation into the WTO rulebook of the agreement reached in another plurilateral negotiation, on investment facilitation for development. It argued that bringing such frameworks into the WTO rulebook risked undermining the organisation’s founding principles, and called for discussions on safeguards and legal guarantees before any specific plurilateral outcome is incorporated into the WTO.

Last month in Geneva

The Data Technology Seminar 2026, organised by the European Broadcasting Union, took place from 10 to 12 March in Geneva. The event brought together media professionals and technology experts to discuss how AI and data systems are developed, governed, and deployed within public service media. Sessions covered topics such as AI strategy and governance, metadata platforms, hybrid search, audience personalisation, and the use of generative AI in editorial and production workflows.

The World Intellectual Property Organization (WIPO) launched the AI Infrastructure Interchange (AIII) on 17 March in Geneva and online. The programme featured keynote speeches, panel discussions, and presentations on the role of technical collaboration among creators, rights holders, and technology companies. Participants also discussed the goals of the AIII initiative and the creation of a technical exchange network to support ongoing expert dialogue on practical challenges and opportunities.

The Geneva Graduate Institute hosted a lunchtime discussion on 23 March examining shifting transatlantic dynamics at the intersection of US policy and the global influence of major technology platforms. The discussion focused on how recent political developments in the USA and the concentration of technological power are shaping Europe’s position, particularly on questions of dependency, regulation, and strategic autonomy.

The International Labour Organization (ILO) hosted a session on 25 March on the macroeconomic impacts of AI, featuring a new World Bank Group model that treats AI as a structural transformation of production. The tool simulates how AI adoption affects sectors, occupations, and prices, helping policymakers assess the implications for growth, equity, and structural change. An initial case study in Poland will explore its application, with prospects for use in other emerging and middle-income economies.

On 30 and 31 March, the International Telecommunication Union (ITU) held a two-day workshop in Geneva on ‘Trustable and Interoperable Digital Identities for Human and Agentic AI’. It brought together stakeholders from governments, industry, academia, and standards bodies to examine technical approaches related to trust frameworks, trust management, security, and interoperability, and to explore actionable recommendations and consolidated insights to advance standardisation work in the field.

The Inter-Parliamentary Union (IPU) hosted a webinar on ‘Building AI Literacy in Parliaments’ on Wednesday, 1 April 2026, exploring how parliaments can develop training and resources to support AI literacy among MPs, parliamentary staff, and IT teams. The webinar highlighted the IPU Guidelines for AI in parliaments, emphasising that AI literacy should reach all roles within parliaments.

Weekly #256 UN kicks off Global Mechanism on ICT security, road ahead murky


27 March – 3 April 2026


HIGHLIGHT OF THE WEEK

UN kicks off Global Mechanism on ICT security, road ahead murky

After almost three decades of stop-start cybersecurity negotiations at the UN, the long-anticipated Global Mechanism on ICT security has finally kicked off.

It is the first permanent forum of its kind since discussions on ICT security began back in 1998, and its mere existence says a lot about how far those talks have come.

But if the launch felt like a breakthrough, the organisational session quickly brought things back down to earth. Beyond what was already sketched out in Annex C and the OEWG’s Final Report, it remained unclear how the mechanism would actually function in practice. 

When the member states agreed to establish the Global Mechanism in July 2025, they also envisioned that the mechanism would meet in plenary and have dedicated thematic groups (DTGs). These groups are intended to enable more in-depth discussions and build on the outcomes of the plenary. The practicalities of how the dedicated thematic groups should be set up and administered were hotly contested at the organisational session, as they will influence what gets on the agenda, who drives it, and whether this new system can deliver real outcomes over time. No decision was made on any of these matters.

A long-standing point of contention, and possibly the most politically charged, was the role of non-governmental actors in the groups. Is stakeholder participation a possibility or a standard feature? Does inviting external briefers require member-state agreement on a case-by-case basis? How this is resolved will directly determine the degree of access the private sector, technical community, and civil society organisations have to the DTG process in practice.

The mechanism inherited many unresolved substantive debates from its predecessors. On international law, there is widespread agreement that considerable work remains to be done, but little agreement on how to carry it out. A broad majority of states expressed support for ensuring that the mechanism remains action-oriented, with a strong focus on practicality and the implementation of agreed frameworks on international law, norms, CBMs, and capacity-building. Many delegations, primarily from developing countries, urged the Global Mechanism to prioritise the operationalisation of the UN Voluntary Fund, which was tabled but left unresolved by the OEWG.


What now? The session closed without resolution on any of its most consequential questions. The Chair will convene informal intersessional consultations to resolve outstanding issues before July, when the mechanism will hold its first substantive session.

We’ll be monitoring the process closely on our dedicated Digital Watch Observatory web page.

IN OTHER NEWS LAST WEEK

Anthropic scores a temporary win against the US government

A California judge has temporarily blocked the US government from enforcing the ‘supply chain risk’ designation against Anthropic, finding that the company’s actions do not meet the legal definition under Section 3252 of Title 10 of the United States Code.

That law defines a supply chain risk as the potential for an adversary to sabotage, maliciously interfere with, or subvert a covered system—covert acts, not public or negotiated positions. The court rejected the notion that questioning or resisting contract terms automatically makes a vendor an adversary.

The ruling emphasises procedural requirements: even when Section 3252 allows bypassing standard debarment processes, the government must document risk assessments, consult relevant agencies, and consider less restrictive alternatives. The court found these safeguards were likely ignored in Anthropic’s case.

Additionally, the judge noted that the designation appeared to be influenced by Anthropic’s public statements and its refusal to support certain government AI uses, raising First Amendment concerns. Anthropic was likely denied due process, receiving neither adequate notice nor a meaningful chance to respond before facing substantial economic and reputational consequences.

The court also found that Anthropic demonstrated irreparable harm, including immediate loss of contracts, damaged business relationships, and reputational impact, supporting the temporary block on the government’s enforcement of the designation.

The Court questioned whether the scope of the government’s special national security authorities was appropriate in these circumstances, emphasising that such powers are generally intended for clear and serious risks.

The verdict, for now. Anthropic’s request for a preliminary injunction in this lawsuit against the administration was granted. The injunction does not resolve the case on the merits; it temporarily stops the contested measures. 

What happens next? The administration has appealed, and the appellate court, the US Court of Appeals for the Ninth Circuit, will ultimately decide on the matter. However, a final verdict in this case could be months away. 

Meanwhile, the government is also facing another lawsuit from Anthropic, filed in Washington, D.C. In that case, the company is challenging its supply chain designation before a three-judge panel at the D.C. Circuit Court of Appeals, specifically contesting the legal authority invoked under the Federal Acquisition Supply Chain Security Act (FASCSA).

Why does it matter? The case highlights broader issues regarding the limits of federal power over private technology companies and the protection of constitutional rights, with potential implications for future government interactions with the tech industry. 


Iran issues warning to major US tech firms

Iran’s Revolutionary Guard has issued a statement warning that major US technology companies, including Apple, Google, Meta, Intel, Oracle and Nvidia, could face retaliatory action if further Iranian leaders are killed in targeted assassinations. 

‘These companies, starting from 8:00 pm (1630 GMT) Tehran time on Wednesday, April 1, should expect the destruction of their relevant units in exchange for every assassination in Iran.’

The group alleges that these firms are the ‘main element in designing and tracking assassination targets.’

Iran also claimed to have conducted drone strikes against communications and industrial sites in Israel.


Reining in social media for minors as trust in platforms erodes

The first country to introduce a social media ban for minors is now assessing how that ban is working. Early results released by Australia’s eSafety Commissioner show significant action by platforms to prevent users under 16 from holding accounts, but also ongoing challenges in fully enforcing the restrictions. By mid-December 2025, around 4.7 million accounts were removed or restricted, with more than 300,000 additional accounts blocked by March 2026. Despite these reductions, many children continue to retain accounts, create new ones, or pass age assurance checks. Regulators identified several compliance concerns, including platforms that allow repeated attempts at age verification and encourage some users to update their ages. Reporting systems for underage accounts were often difficult to access, particularly for parents.

Indonesia is also checking on progress: its social media restrictions for under-16s went into effect last week, and Meta and Google have already been found non-compliant. Indonesia’s Communication and Digital Minister noted that the two companies were summoned on Monday to undergo checks. Failure to implement the curbs, the ministry has noted, may result in sanctions or even a block on the platform in the country.

Australia’s social media ban for minors has inspired many countries to follow suit. One of them is France, which is moving toward restricting social media use for children under 15, as its Senate approved a plan that differs from an earlier, stricter version passed by the National Assembly. While the National Assembly has backed a strict approach, requiring platforms to delete existing accounts and block new under-15 users, the Senate has proposed a more flexible, two-tier system that would limit only harmful platforms and allow access to others with parental consent. The two versions must now be reconciled, meaning the final shape of the law remains uncertain. Key questions—particularly around how age verification will work—are still unresolved and tied to ongoing EU-level discussions, pushing any real implementation to at least 2027.

The MP who introduced the bill warned that this is a matter of public health, noting that ‘When similar questions arose with products like alcohol or tobacco, we collectively chose to prohibit them, because we considered them public health issues.’

Austria’s government also announced plans to ban under-14s from using social media. The government plans to present a draft law by the end of June. ‘We will no longer stand by as these platforms make our children addicted and, in many cases, ill,’ the Vice Chancellor noted.

After a US jury found Meta and YouTube liable in a social media addiction case, the concept of ‘social media addiction’ is likely to gain more legal and policy traction. In Italy, senators have introduced a draft law that directly targets the role of platforms in shaping user behaviour, proposing limits on default profiling and greater transparency around how algorithms curate content. Backed by the opposition Democratic Party, the proposal shifts responsibility toward platform design itself, arguing that recommendation systems are not neutral tools but deliberate corporate choices with real-world consequences.

It’s therefore unsurprising that a new survey in Switzerland revealed a widespread mistrust of big tech, with a large majority of respondents viewing these companies as primarily profit-driven. Concerns range from the impact on children and growing dependence on foreign tech firms to fears about the broader effects of digitalisation on democracy. 


Deadlock at WTO: Moratorium lapse meets plurilateral momentum

At the 14th Ministerial Conference of the World Trade Organization (MC14) in Yaoundé, Cameroon, digital trade dominated the agenda through two parallel tracks—each pointing in a different direction and illustrating both the limits and evolution of the multilateral system.

The moratorium on customs duties on electronic transmissions. The long-standing moratorium—renewed every two years since 1998—expired on 31 March after members failed to reach consensus on the length of a new extension, with differing views among members preventing a deal.

Some members, including the USA, pushed for a longer-term solution, while others, led during the talks by Brazil, favoured shorter renewals to preserve regulatory flexibility in light of rapid technological change, including AI and 3D printing.

In parallel, however, a different dynamic unfolded. A coalition of 66 WTO members announced they would move forward with implementing the plurilateral Agreement on Electronic Commerce concluded in 2024 by the Joint Statement Initiative on e-commerce (JSI), through interim arrangements. 

Why does it matter? The lapse does not automatically trigger tariffs, but it creates policy space for countries to impose them. The outcome also meant that a broader set of discussions on WTO reform, which had been politically linked to the approval of the moratorium, remained unresolved. 

What’s next for e-commerce discussions? Discussions on the moratorium, the WTO reform, and the future of the Work Programme on e-commerce (WPEC) are expected to continue at the next General Council meeting in May in Geneva. In the meantime, JSI members will continue to seek inclusion of the Agreement under the WTO legal architecture.

For a deeper understanding of MC14 outcomes and implications, join the 14 April webinar ‘WTO deadlock, AI boom: Unpacking MC14 and looking ahead’ co-organised by Diplo, the Digital Trade and Data Governance Hub, and the Geneva Internet Platform. Registrations for the event are open.


China launches World Data Organization

The World Data Organization (WDO) was formally established in Beijing, presenting itself as the first international, non-governmental platform dedicated specifically to global data development and governance. 

Conceived as a multistakeholder forum, the organisation aims to facilitate dialogue, rule-making, and cooperation, with a stated focus on bridging the global data divide, unlocking the value of data, and supporting the digital economy. 

Its inaugural assembly adopted the organisation’s charter, appointed leadership, and set out priorities around capacity building, regulatory exchange, and technological collaboration. 

Why does it matter? There is currently no single global body exclusively dedicated to data governance as a whole—covering economic value, governance rules, development, security, and cross-border flows in an integrated manner. The emergence of the World Data Organization (WDO) is significant because it seeks to occupy that space, positioning itself as a dedicated platform for data governance coordination.  

At the same time, it reflects broader geopolitical dynamics: China is not only participating in rule-making but actively building platforms that could influence how digital governance evolves, particularly for developing countries seeking alternatives or complements to existing frameworks. 



LAST WEEK IN GENEVA

Last Monday and Tuesday (30 and 31 March), ITU held a two-day workshop on ‘Trustable and Interoperable Digital Identities for Human and Agentic AI’ in Geneva. It brought together stakeholders from governments, industry, academia, and standards bodies to examine technical approaches related to trust frameworks, trust management, security, and interoperability; and to investigate actionable recommendations and consolidated insights to advance standardisation work in the field. 

The 2026 Global Digital Economy Conference held its Geneva branch event on Tuesday (31 March), gathering political leaders, business executives, and academics to discuss the development of the global digital economy under the theme ‘Digital Intelligence Without Boundaries: Friendship and Win-Win.’ The event featured high-level dialogues, the launch of the Geneva Office of the Global Digital Economy City Alliance, industry insights on China-Europe cooperation, and targeted networking to foster partnerships.

The Inter-Parliamentary Union (IPU) hosted a webinar on ‘Building AI Literacy in Parliaments’ on Wednesday (1 April) to explore how parliaments can develop training and resources to support AI literacy among members, parliamentary staff, and IT teams. The webinar highlighted the IPU Guidelines for AI in parliaments, emphasising that AI literacy should reach all roles within parliaments.

To prepare for the 2027 Geneva AI Summit, the Swiss Government invited ICT4Peace to organise and host a launch event at GenAI Zürich yesterday (2 April). The event brought together 40 participants from government, business, academia, and civil society to begin shaping the Summit’s objectives and exploring potential concrete outcomes. Participants discussed a set of guiding questions to shape the focus and outcomes of the 2027 summit. These included identifying areas where international dialogue and cooperation are needed, defining potential political and practical outcomes, and exploring Switzerland’s strengths in facilitating multistakeholder engagement. The discussions also addressed identifying potential partners, resolving areas of disagreement around specific policy objectives, and developing concrete tools and solutions to present as Swiss contributions at the summit.


READING CORNER
World Data Organization

Beijing hosted the founding assembly of the first international organisation dedicated specifically to data governance and development.

X TikTok

As diplomacy migrates from the deliberate silence of morning cables to the relentless vertical scroll of TikTok, explore the new privatisation of statecraft.

Digital Watch newsletter – Issue 108 – March 2026

March 2026 in retrospect

In our March 2026 monthly newsletter, we observed the deadlock at WTO MC14 over the WTO e-commerce moratorium, which led to its lapse, even as a coalition advanced a plurilateral digital trade deal. We examined what it means and what comes next.

Two recent US jury verdicts found Meta and YouTube liable for harms to minors, including exposure to sexual content and social media addiction. Taken together, the cases move beyond questions of content moderation and into the design of the platforms themselves.

The long-awaited Global Mechanism has finally launched, creating the UN’s first permanent forum on ICT security since 1998—but its inaugural session left many questions about how it will actually work. Here’s why it matters and what to watch as the Mechanism takes shape.

Will we one day buy intelligence on a meter? It is not yet certain, but the mere idea raises questions about control, access, and how we measure and consume intelligence in the future.

Plus: March’s top digital policy developments and a Geneva wrap-up.

Technologies

 A new five-year development plan approved by lawmakers in Beijing centres on innovation and advanced technology to drive future economic growth and global leadership, prioritising AI, robotics, aerospace, biotech, and quantum computing while reducing reliance on foreign tech. It also boosts funding, with science spending set to rise by ~10% annually and overall R&D by at least 7%.

The UK government has announced up to £2 billion for quantum technologies, including more than £1 billion over the next four years, alongside a new procurement programme called ProQure to help scale quantum computing in the UK. The funding will support several areas: over £500 million for quantum computing, £125 million for quantum networking, and £205 million for quantum sensing and navigation, plus smaller allocations for research hubs, infrastructure, skills, and commercialisation.

The EU has opened a €180 million funding call to strengthen the resilience of subsea internet cables by supporting backup systems, alternative routes, and redundancy measures. The funding is meant to reduce the risk of outages and external threats to critical undersea infrastructure, reflecting the EU’s growing concern with digital resilience, cybersecurity, and technological sovereignty.

China has approved NEO, a brain–computer interface developed by Neuracle, for use beyond clinical trials to help people with severe paralysis regain hand movement. The implant reads brain signals when users imagine moving their hand and translates them into commands for a robotic glove, with early trial results showing improved ability to perform everyday tasks such as grasping, eating, and drinking.

Security

US President Donald Trump released his administration’s national cybersecurity strategy, outlining priorities across six policy areas: offensive and defensive cyber operations, federal network security, critical infrastructure protection, regulatory reform, emerging technology leadership (including in AI), and workforce development. 

Trump also signed an executive order the same day, directing the attorney general to prioritise cybercrime prosecution, tasking agencies with reviewing tools to counter international criminal organisations, and assigning the Department of Homeland Security expanded training responsibilities. The strategy document spans five pages of substantive text, with administration officials describing it as intentionally high-level. The White House stated that more detailed implementation guidance would follow.

Pro-Iranian hacker group Handala claimed responsibility for a cyberattack on US medical device giant Stryker, stating that the attack was retaliation for a missile strike on an elementary school in Iran. Stryker confirmed the cyberattack in a statement, noting that order processing, manufacturing, and shipping were disrupted, but that connected products had not been impacted. The FBI subsequently seized four websites tied to Handala and to Iran’s Ministry of Intelligence and Security (MOIS).

Iran’s Revolutionary Guard has threatened to target major US tech companies, including Apple, Google, Meta, Microsoft, Intel, Oracle, Nvidia, Tesla, and Palantir, if more Iranian leaders are killed, accusing the companies of helping identify assassination targets. Iran’s army also claimed to have targeted Israeli communications, telecommunications and industrial centres in response to attacks on Iranian infrastructure.

Long constrained by a defensive security doctrine, Japan will introduce ‘hack-back’ powers from October. The change comes as part of Japan’s ‘Active Cyber Defence’ law, which was passed in 2025 and is rolling out in incremental stages through 2027.

The EU has imposed sanctions over cyberattacks targeting its member states and partners, listing China-based Integrity Technology Group and Anxun Information Technology, as well as Iran-based Emennet Pasargad, along with Anxun’s co-founders. The sanctions entail an asset freeze and a travel ban for the listed individuals. EU citizens and entities are additionally prohibited from making funds available to the designated companies.

Authorities in the Netherlands reported that hackers, believed to be linked to Russia, have launched large-scale phishing operations aimed at diplomats, military personnel, government officials, and journalists. Rather than breaking messaging apps’ encryption, the attackers trick users into sharing verification codes or linking devices, allowing them to take over accounts and access sensitive conversations.

Portugal’s intelligence service has issued a similar alert, describing a global campaign by foreign state-backed actors seeking access to the messaging accounts of officials and others with privileged information. Once inside an account, attackers can read chats, access shared files, and use the compromised profile to target additional victims through further phishing attempts.

The EU has launched its ProtectEU counterterrorism agenda to strengthen preparedness against evolving threats, with a strong focus on how terrorists use digital tools such as social media, AI, encrypted platforms, crypto-assets, and drones. The plan combines stronger intelligence and Europol support, tougher enforcement of online content under the DSA, protection of public spaces and critical infrastructure, and closer international cooperation.

INTERPOL has launched a new global task force at the Global Fraud Summit 2026 as part of a more coordinated, data-driven response to the rapid global expansion of financial fraud. The task force is jointly developed by the UK’s Home Office and INTERPOL and is codenamed Operation Shadow Storm. The task force will target scam centres and their links to cybercrime and human trafficking, using tools such as stop-payment mechanisms and international intelligence-sharing networks. The initial focus of the task force will be dismantling criminal operations across Southeast Asia.

Simultaneously, major technology and consumer-facing companies, including Google, Amazon, Meta, and OpenAI, have signed the ‘Industry Accord Against Online Scams and Fraud’ at the Global Fraud Summit 2026. The companies pledged to focus on deploying proactive security measures and AI-driven detection systems; strengthening information sharing between industry and law enforcement to better identify and respond to fraud; enhancing resilience through advanced defensive technologies and rapid response mechanisms; and improving public education to help individuals recognise and avoid scams.

The EU has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April. The existing rules, in place since 2021, permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material. But negotiations between the European Parliament and member states stalled over key issues, especially whether such measures should apply to encrypted services. Attention now shifts to the long-delayed permanent framework (the Child Sexual Abuse Regulation).

Brazil has started enforcing a new law aimed at strengthening protections for children online, marking a significant shift in how digital platforms are regulated in the country. The legislation, known as ECA Digital, introduces obligations such as age verification, stricter content moderation, and mechanisms to remove harmful material involving minors without requiring a court order. The law also targets platform design, requiring companies to limit features that may encourage compulsive use among children, such as excessive notifications, profiling for targeted advertising, and design elements that prolong user engagement. The law allows authorities to impose warnings and fines of up to $10 million for violations. In severe cases, courts may order the suspension or banning of platforms operating in Brazil. 

Indonesia’s Communication and Digital Affairs Minister signed a government regulation that means children under 16 can no longer have accounts on high-risk digital platforms. This will reportedly include YouTube, TikTok, Facebook, Instagram, Threads, X, Bigo Live and Roblox. Implementation of the regulation will begin gradually from 28 March.

In Ecuador, the issue is framed in terms of security. A proposed ban on under-15s is linked to concerns that platforms are being used by criminal groups to contact and recruit minors. This shifts the rationale away from well-being and toward crime prevention, positioning social media restrictions as part of a broader security response.

A proposed social media ban for under-16s has been rejected by UK MPs, with 307 voting against and 173 in favour. However, a government-backed pilot is trialling different forms of restriction (full bans, time limits, and curfews) for six weeks. Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

Australia’s eSafety Commissioner states that platforms have removed or restricted millions of under-16 accounts under the country’s social media age ban, but serious compliance problems remain, including weak age-assurance systems and reporting tools that are hard for parents to use. Investigations into five major platforms are continuing, with enforcement decisions expected by mid-2026.

Austria plans to ban social media use for children under 14, joining a broader international move toward stricter youth online-safety rules. The government says the measure is meant to protect children from addictive platform design, violence, misinformation, and harmful beauty standards, and it also plans to add a new school subject on media and democracy to strengthen digital literacy.

France is considering a new law to ban social media for children under 15, while also proposing a digital curfew for older teens and extending school phone restrictions to high schools. The regulation reflects a broader push for stronger regulation of online harms affecting young people, including cyberbullying, harmful content, and excessive screen exposure, and aligns with similar child-safety measures already seen in countries such as Australia.

A Swiss survey found strong public mistrust of major tech companies such as Google, TikTok, and Meta, with most respondents viewing them as profit-driven, politically influential, and a source of dependence on foreign powers. At the same time, a majority still sees digitalisation as broadly positive, but wants the state to play a stronger role in ensuring that AI, algorithms, and digital platforms do not harm democracy or society.

Australia has begun enforcing new online child-safety rules that require platforms, including social media, app stores, gaming services, search engines, pornography sites, and AI chatbots, to use age-assurance measures and block minors from harmful or explicit content, including sexual and self-harm-related chatbot interactions. The eSafety Commissioner oversees the rules, and companies can face penalties of up to AUD 49.5 million per breach for non-compliance.

European Commission President Ursula von der Leyen convened the first meeting of the Special Panel on child safety online, announced in her 2025 State of the Union address. The panel will provide expert guidance on protecting and empowering children online and explore potential harmonised age limits for social media access. The panel aims to present a report with recommendations to the Commission President by summer 2026.

Economic

The EU and Canada have begun negotiations on a Digital Trade Agreement to expand the digital side of their existing trade relationship, aiming to set clearer rules for cross-border digital commerce. The talks cover issues such as paperless trade, recognition of e-signatures and digital contracts, no customs duties on electronic transmissions, and limits on data-localisation and forced source-code transfer requirements, while still preserving governments’ ability to regulate the digital economy.

The EU and Australia have deepened ties through a new Security and Defence Partnership, the conclusion of free trade agreement negotiations, and the launch of talks on Australia’s accession to Horizon Europe. Together, these moves are meant to expand cooperation on cybersecurity, crisis response, AI and other emerging technologies, data flows, critical raw materials, and trade, signalling a broader strategic alignment beyond economics alone.

Australia is moving toward a national licensing regime for crypto exchanges and tokenisation platforms under its financial services framework, following a Senate committee’s recommendation to pass the Digital Assets Framework Bill 2025. The proposal would bring more of the crypto sector under formal regulation, though industry groups warn that broad definitions could unintentionally capture some infrastructure providers and wallet-related services.

Meta has announced that third-party AI chatbots will once again be allowed to operate through WhatsApp in Europe for a fee, reversing earlier restrictions that limited access to rival chatbot services on the platform. Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months. The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem. 

Google will overhaul its Play Store policies after settling a long-running dispute with Epic Games, creator of Fortnite. The changes include lowering in-app purchase commissions to 20%, adding a 5% fee for developers using Google’s billing system, and reducing subscription fees to 10%, alongside making it easier to install alternative app stores on Android. As part of the deal, Epic will return Fortnite to the Play Store while continuing to develop its own Android app store.

The WTO meeting ended without agreement on extending the e-commerce duty moratorium, while a group of members advanced a separate digital trade arrangement. Read more in our dedicated text.

The ECB has launched Appia, a roadmap for developing Europe’s tokenised financial markets, with Pontes as a DLT-based settlement solution linking tokenised-market infrastructure to the Eurosystem and enabling pilots from Q3 2026. The plan is meant to support the shift from traditional finance to tokenised markets while preserving financial stability, central bank settlement, and interoperability, and it is now open for public consultation.

A joint ILO–World Bank study finds that AI will affect jobs unevenly across 135 economies. Advanced economies face higher exposure, especially in clerical and professional work, while developing countries risk disruption without comparable productivity gains because they often lack the infrastructure, internet access, and skills needed to benefit. The report argues that outcomes will depend less on AI alone than on connectivity, training, job design, and social protections.

Legal

The World Data Organisation (WDO) was launched in Beijing as a new international non-profit platform focused on global data development and governance, with the stated aim of narrowing the global data divide, supporting the digital economy, and improving international cooperation on issues such as cross-border data flows, privacy, and security. The initiative reflects a broader push to make data governance a more structured part of global digital policymaking.

A Luxembourg court has annulled Amazon’s €746 million GDPR fine, not because the alleged privacy violations disappeared, but because it found the regulator’s penalty process was flawed, especially in how Amazon’s level of fault was assessed. The case will now return to Luxembourg’s data protection authority for reassessment.

Italy’s data protection authority has fined Intesa Sanpaolo €31.8 million after an employee repeatedly accessed thousands of customer accounts without authorisation, and the bank failed to detect it in time. Regulators said the case exposed serious weaknesses in internal monitoring, risk controls, confidentiality safeguards, accountability, and breach notification.

Development

Malta has launched the SMART Food project, a Malta–Italy initiative using AI and blockchain to build a digital platform that tracks food products from production to consumption. The aim is to improve traceability, transparency, safety, sustainability, and trust in the agri-food sector, while helping consumers and producers access real-time product information.

China has revised its rules for the 2026 national agricultural census, expanding the census to cover not only agriculture but also rural industrial development and village construction, while introducing new data-collection methods such as remote sensing. The updated rules also tighten data-quality controls, confidentiality obligations, and penalties for falsifying statistics, reflecting a stronger emphasis on both broader rural data collection and stricter state oversight.

The UK and the Philippines have agreed on a new partnership to expand digital education and edtech cooperation, combining UK expertise and investment support with Philippine education priorities. The initiative focuses on improving access to digital learning tools, skills development, and education technology, while strengthening broader bilateral ties in innovation and capacity-building.

Sociocultural

The first transparency reports under the EU’s Digital Services Act-linked Code of Conduct on Disinformation have been published, with signatories, including major platforms and civil-society actors, outlining the measures they say they are taking against disinformation, especially around the war in Ukraine and election integrity. These are the first reports since the Code gained formal recognition under the DSA in February 2025, marking a shift from a mainly voluntary scheme to a more structured co-regulatory system based on commitments, reporting, and auditing.

The EU is reviewing X’s proposal to change its blue-check verification system after finding that paid verification without meaningful identity checks could mislead users under the Digital Services Act. X had been fined €120 million in December and given 60 working days to submit corrective measures, which the Commission is now assessing while the company also challenges the decision in court.

UNESCO has launched a South Africa-focused research initiative on the governance of harmful online content under its Social Media 4 Peace programme, supported by the EU, to study hate speech, disinformation, regulatory gaps, and platform governance. The aim is to produce practical, rights-based recommendations that strengthen digital governance, platform accountability, freedom of expression, and access to information in the local context.

Spain has launched HODIO, a digital tool to measure hate speech across social media. Combining AI, data analysis, and expert review, it will publish biannual reports ranking platforms by users’ exposure to harmful content, aiming to inform policymaking and pressure companies to act. However, critics have raised concerns about the transparency of HODIO and how authorities will define and classify hate speech, warning that poorly defined criteria could infringe on freedom of expression.

National frameworks, strategies and guidelines

USA. The US government has unveiled a National AI Policy Framework outlining a comprehensive strategy for AI across federal agencies. The policy sets priorities for responsible AI development, data governance, workforce training and international collaboration, while emphasising ethical safeguards, public‑interest outcomes and national security. The framework also calls for accelerated investment in AI research and deployment, alongside coordinated oversight mechanisms to ensure transparency and accountability in federal AI systems.

Egypt. On 14 March 2026, Egypt published the National Guidelines for Trustworthy and Responsible AI. The Guidelines provide a national reference for the responsible development, deployment, and oversight of AI across public and private sectors, ensuring AI use is safe, ethical, and transparent while supporting innovation aligned with Egypt’s Vision 2030 and the National AI Strategy. Complementing the National AI Governance Framework, which defines what should be governed, these Guidelines specify how to comply, offering methodologies, metrics, and checklists to operationalise ethical principles. Targeted at data scientists, compliance officers, and developers, they provide actionable directions to protect individual rights, promote societal well-being, enhance accountability and transparency, and foster innovation grounded in safety. The guidelines also align Egypt with international standards and engage government entities, private enterprises, and community actors in responsible AI governance. 

South Korea. South Korea has unveiled a national strategy to become one of the world’s top three AI powers by 2028. The plan combines investment in digital infrastructure, data systems and next-generation connectivity. Authorities aim to expand networks by advancing 5G capabilities and preparing for the commercial deployment of 6G by 2030. Cybersecurity and data integration are also key priorities to support a stronger digital ecosystem. The strategy includes developing talent across education levels and investing in core technologies such as semiconductors and quantum computing. AI adoption is expected to expand across sectors, including manufacturing, healthcare and agriculture.

Sovereignty

The EU. Tensions are emerging in the EU over AI infrastructure investment, with France, Poland, Austria, and Lithuania pushing to reserve part of the €20 billion AI Gigafactory project for European technologies, while Germany is sceptical about linking the project to digital sovereignty goals. Meanwhile, Germany is pursuing a major expansion of domestic data centres and AI processing power, supported by regulatory reforms, tax incentives, and land allocation to attract investment, aiming to reduce reliance on foreign providers.

Russia. The Russian government is proposing rules that could ban or restrict foreign AI tools such as ChatGPT, Claude and Gemini if they fail to store Russian user data domestically and comply with Moscow’s regulatory requirements. The proposals, from the Ministry for Digital Development, aim to extend Russia’s push for a sovereign internet, protecting citizens from ‘covert manipulation’ and enforcing ‘traditional Russian spiritual and moral values.’ Under the draft rules, cross-border AI systems that transmit user data abroad would face restrictions, whereas foreign models that can operate entirely within Russian infrastructure, such as Qwen or DeepSeek, could be deployed safely.

Content policy

The EU. The European Commission has released a second draft of its Code of Practice on marking and labelling AI-generated content, part of efforts to help companies comply with transparency requirements under Article 50 of the EU Artificial Intelligence Act. Section 1 of the code focuses on providers of generative AI systems and proposes a multi-layered approach to marking AI-generated content, including digitally signed metadata, imperceptible watermarking, and optional fingerprinting or logging. Providers are also expected to make detection tools available so users and authorities can verify whether content was generated or manipulated by AI. Section 2 addresses deployers of AI systems, requiring clear disclosure when deepfakes or AI-generated text intended to inform the public have been artificially generated or manipulated, using visible and accessible labels.

The European Council has endorsed proposals to ban the use of AI to generate child sexual abuse material (CSAM) and non-consensual sexual content, adjust high-risk AI compliance timelines, and streamline the AI Act, including exemptions for some SMEs, registration requirements, and clarified oversight responsibilities. These moves reflect Europe’s broader effort to secure sovereign AI infrastructure and ensure safe, accountable AI deployment.

Netherlands, France. A Dutch court has ordered xAI and its Grok chatbot not to create or distribute non‑consensual sexual images. The judgement requires Grok’s operators to implement technical measures to block prompts or outputs capable of producing non‑consensual intimate imagery. The decision was framed as a necessary enforcement of personal rights and dignity in the digital age, setting a potentially influential precedent for European courts grappling with AI‑generated harm.

Meanwhile, the Paris prosecutor’s office said that the controversy surrounding sexually explicit deepfakes generated by Grok may have been deliberately amplified. The alleged reason was to artificially boost the value of X and xAI ahead of June 2026, when the new entity created by the merger between SpaceX and xAI is planned to be listed on the stock market.

Security

Australia. The eSafety Commissioner found that AI companion chatbots, including Character.AI, Nomi, Chai and Chub AI, are failing to protect children from harmful content, with weak safeguards against sexually explicit material and child sexual exploitation. Most platforms relied on self-declared age verification, lacked meaningful monitoring of AI inputs and outputs, and did not consistently provide links to crisis or mental health support. Commissioner Julie Inman Grant warned that as children increasingly use AI companions for emotional support, the absence of robust safety measures on self-harm, suicide and unlawful content poses serious risks, with non-compliance subject to civil penalties under Australia’s Age-Restricted Material Codes.

The UK. The Secretary of State for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade. The secretary urged tech companies to implement Ofcom’s guidance ‘A Safer Life Online for Women and Girls’, which outlines steps such as conducting risk assessments focused on women and girls, pre-launch abusability evaluations of features, strong default privacy settings, demonetising content promoting abuse, limiting the visibility of misogynistic content in search and recommendation feeds, and implementing rate limits to curb coordinated harassment. The guidelines should be implemented by the end of 2026 at the latest.

The USA. The US government is facing two lawsuits from AI firm Anthropic after the Pentagon designated the company a supply-chain risk, effectively barring its technology from defence contracts. 

The Department of Justice argues the designation is lawful and grounded in national security, citing Anthropic’s refusal to allow its AI to be used for autonomous weapons and domestic surveillance. Anthropic, in turn, claims the move is unlawful and retaliatory, targeting its policy positions rather than any genuine security risk.

In the California case, a federal judge has temporarily blocked the government from enforcing the designation. The court found that Anthropic’s conduct does not meet the legal threshold under Section 3252, which is limited to covert adversarial threats such as sabotage or system subversion—not public stances or contract disputes. The ruling also highlights procedural failures, including insufficient risk assessment, lack of interagency consultation, and failure to consider less restrictive measures.

The judge further raised constitutional concerns, noting the designation may have been influenced by Anthropic’s speech and that the company was likely denied due process. Evidence of immediate and significant harm—lost contracts, reputational damage, and disrupted business relationships—justified granting a preliminary injunction, though a final ruling may take months.

In parallel, Anthropic is pursuing a second case in Washington, D.C., challenging its supply-chain designation before a three-judge panel at the D.C. Circuit Court of Appeals, specifically contesting the legal authority invoked under the Federal Acquisition Supply Chain Security Act (FASCSA). The legal dispute has drawn support from across the tech sector, with companies including Microsoft, Google, Amazon, and OpenAI backing Anthropic’s legal challenge through amicus filings. Industry leaders warn that the government’s designation could set a precedent that destabilises the US AI ecosystem and disrupts suppliers working with both government and private-sector AI systems.

‘We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter and use it for whatever they want to use it for,’ Sam Altman, CEO of OpenAI, recently stated.

On the surface, this could sound like a vision of empowerment: on-demand access to superhuman reasoning, available to anyone with enough money to buy it. But Altman’s metaphor is precise. Utilities are not owned by the public; they are controlled by powerful providers who set the rates, terms, and infrastructure.

Our knowledge is already becoming commodified by tech companies and the advertising industry. But what OpenAI’s CEO suggests is a world in which intelligence itself is outsourced to a handful of platforms.

The AI monopolisation of intelligence challenges one of the pillars of civilisation built over millennia: that knowledge defines what it means to be human.

Altman is therefore not just describing a business model; he is also outlining a new social order, one in which intelligence is centralised, privatised, and sold back to humanity by major AI companies. 


Not an inevitable future. The battle for human intelligence and knowledge  – for who owns the capacity to think, to know, to decide – is not yet over. 

The real alternative to monopolising and metering our knowledge back to us is not no AI; it is AI as an extension of our personal knowledge, shared with communities, countries, and humanity, according to our preferences. 

Communities, universities, companies, and countries can build bottom-up AI rooted in their own languages, values, and knowledge systems. Open-source models have made human-centred AI technically possible and financially affordable. This would lead to a distributed ecosystem in which AI strengthens human communities rather than subordinates them.

This text is an adaptation of Dr Jovan Kurbalija’s blogpost ‘The war we’re not watching: The fight for the future of human knowledge’.

Two recent US jury verdicts are beginning to redraw the boundaries of responsibility for social media platforms, with implications that extend well beyond the individual cases. 

In New Mexico, a jury ordered Meta to pay $375 million after finding it misled users about the safety of its platforms for children. The lawsuit, brought by Attorney General Raul Torrez, accused Meta of violating the state’s consumer protection laws by misrepresenting how safe its platforms are for minors while building features and algorithms that, in prosecutors’ view, entice prolonged use and expose children to significant risks. Those risks include addiction-like engagement, exposure to harmful sexual content, unwanted private communications with adults, sleep disruption from compulsive use, and environments where predators can operate with relative ease. Jurors were presented with internal research and testimony from former employees, including whistle-blower Arturo Béjar, suggesting the company was aware of these risks but failed to adequately warn the public or mitigate harm. Meta has rejected the verdict and plans to appeal.

Simultaneously, a Los Angeles jury reached a related conclusion in a different context. It found Meta and YouTube—owned by Google—negligent in the design and operation of their platforms in a case focused on social media addiction. The lawsuit, brought by a young woman identified as K.G.M., argued that compulsive use of these platforms during her teenage years contributed to depression, anxiety, and body dysmorphia. The jury agreed, awarding $6 million in damages and assigning 70% of the liability to Meta and 30% to Google. Both companies have said they will appeal, maintaining that mental health outcomes cannot be attributed to a single platform.

Why does it matter? The financial penalties in these cases are small for companies of this scale. The broader significance of the verdicts lies elsewhere. 


Historically, platforms have relied on legal protections—most notably Section 230 of the US Communications Act—to shield themselves from liability for user-generated content. These rulings, however, begin to test a different theory: that liability can arise not just from what users post, but from how platforms structure, recommend, and amplify content.

This distinction matters because it targets the core of the modern social media business model. Platforms like Meta and Google are built around maximising user engagement—time spent, interactions, and content consumption—which in turn drives advertising revenue. To achieve this, they rely on recommendation systems, frictionless interfaces, and behavioural design features such as autoplay, infinite scroll, and push notifications. These are not incidental elements; they are foundational to how platforms retain users and monetise attention.

The emerging legal argument is that some of these design choices may actively contribute to harm, particularly for minors. In the New Mexico case, the focus was on exposure to harmful and exploitative content. In Los Angeles, the emphasis was on compulsive use and its mental health effects. But both cases converge on a similar point: that platform architecture itself—not just isolated content failures—can create foreseeable risks.

If this reasoning gains traction in courts, it introduces a new kind of pressure on technology companies. The issue is not the size of any single fine, but the cumulative effect of thousands of similar lawsuits, rising compliance costs, and the possibility of precedent-setting rulings that reshape acceptable design practices. Engagement-maximising systems, long treated as a competitive advantage, could become a source of legal vulnerability.

That creates a structural tension. Reducing harmful outcomes may require dialling back precisely those features that make platforms so effective at capturing attention. Even modest declines, when applied across billions of users, can translate into significant revenue impacts.

The path forward. Companies are unlikely to abandon their core models outright. A more probable response is adaptation. This could include re-optimising algorithms toward safer forms of engagement, segmenting products by age with stricter defaults for minors, and investing in more robust safety and audit mechanisms. There may also be a gradual shift toward alternative revenue streams—such as subscriptions, creator monetisation, or commerce integrations—to reduce reliance on pure attention-based advertising.

Legal strategy will also play a role. Both Meta and Google are appealing these verdicts, and future rulings will determine how far courts are willing to go in attributing harm to design choices. Companies are likely to strengthen disclosures, expand parental controls, and document internal risk assessments to demonstrate due diligence. Such measures may not eliminate liability, but they can shape how responsibility is interpreted.

Ultimately, the key question is whether these cases represent isolated outcomes or the beginning of a broader legal shift.  

After almost three decades of stop-start cybersecurity negotiations at the UN, the long-anticipated Global Mechanism on ICT security has finally kicked off.

It is the first permanent forum of its kind since discussions on ICT security began back in 1998, and its mere existence says a lot about how far those talks have come.

But if the launch felt like a breakthrough, the organisational session quickly brought things back down to earth. Beyond what was already sketched out in Annex C and the OEWG’s Final Report, it remained unclear how the Mechanism would actually organise itself in practice.


The session raised plenty of questions—about structure, priorities, and process—but offered few real answers, leaving the sense that while the Mechanism now exists, what it will do and how it will do it is still very much up for grabs.

A new body, a new mandate, and a newly elected Chair, Egriselda López of El Salvador, injected renewed optimism into the Global Mechanism’s first organisational session. Yet, within minutes, it became evident that the Global Mechanism did not start with a blank slate, but rather inherited the OEWG’s long list of disagreements. 

Russia opened the discussion by disputing the legitimacy of the Chair nomination, which they claimed was guided solely by the UNODA and thus limited state participation in the process. They used this opportunity to stress that all decisions under the new process must be based on consensus and be completely intergovernmental. 

The substantive issues on the agenda

For the provisional agenda of the mechanism’s July session, the Chair circulated a draft agenda organised around the five pillars of the framework for responsible state behaviour in the use of ICTs. However, Iran and Russia argued that the wording of agenda item 5 did not precisely reflect paragraph 9 of Annex C of the OEWG final report and called for correction at this session. The EU and Canada rejected this, arguing the draft already referenced all relevant documents and that isolating one paragraph would itself constitute renegotiation. The USA reserved its position entirely, preferring that the July plenary adopt its own agenda. No consensus was reached, and the Chair will continue consultations before July.

The mechanism inherited many unresolved substantive debates from its predecessors. 

On international law, there is widespread agreement that considerable work remains to be done, but little agreement on how to carry it out. The majority of delegations have shown clear support for strengthening the existing normative framework and reaffirming the UN Charter’s application to cyberspace.

A broad majority of states expressed support for ensuring that the mechanism remains action-oriented, with a strong focus on practicality and the implementation of agreed frameworks on international law, norms, CBMs, and capacity-building (Chile, Nauru, Portugal, Switzerland, the United Kingdom, Estonia, Italy, Australia, the Democratic Republic of the Congo, Antigua and Barbuda, Sudan, Vanuatu, Albania, Vietnam, India, Greece, Rwanda, the Dominican Republic, North Macedonia, Kiribati).

In particular, some delegations advocated for applying the framework to concrete scenarios as a way to stimulate implementation (Japan, the Netherlands, the United Kingdom, Sudan).  China was the only delegation to emphasise that further development of the framework is equally important alongside its implementation.

The EU highlighted the norm checklist, a hotly debated issue in the previous mechanism, as an area for further improvement. 

However, to many states, a fundamental concern remains. Capacity building initiatives risk stalling without reliable funding, so many delegations, primarily from developing countries, urged the Global Mechanism to prioritise the operationalisation of the UN Voluntary Fund, which was tabled but left unresolved by the OEWG.

Dedicated thematic groups: Who, what and how

The often broad agenda and long-winded statements of delegations in OEWG plenary sessions left little room for technical depth, leaving many delegations frustrated with the gap between consensus language and concrete action. 

The dedicated thematic groups (DTGs) were created precisely to address this issue, by setting up an informal, technical forum to advance practical initiatives already agreed on, such as the Global ICT Security Cooperation and Capacity Building Portal. However, the practicalities of how they should be set up and administered are likely to be hotly contested, as they will influence what gets on the agenda, who drives it, and whether this new system is capable of delivering real outcomes over time.

Who will lead DTGs?

The dominant and most contested question of the session was who would appoint the co-facilitators for the two Dedicated Thematic Groups. The Chair proposed appointing two co-facilitators per DTG: one from a developed country, one from a developing country, drawing on GA practice, under which the Chair appoints co-facilitators for intergovernmental processes. She indicated her intention to hold broad informal consultations before making appointments, and committed to geographic balance, gender parity where practicable, and relevant technical expertise as selection criteria. 

Who ends up in these roles matters considerably: the co-facilitators will steer the DTG discussions, shape their agendas, and channel recommendations to the plenary.

A broad coalition of states supported the Chair’s approach, including the EU, speaking on behalf of its member states and several aligned countries such as France, Germany, Australia, the United Kingdom, the Netherlands, Switzerland, Japan, Egypt, Senegal, Nigeria, Malaysia, Moldova, and others. Egypt and Senegal were among the most direct, noting that delays in operationalising the mechanism would waste the intersessional period and erode its credibility, particularly for developing countries eager to move from procedure to substance.

Another group of states, led by Russia and supported by Iran, China, Belarus, Nicaragua, and Cuba, argued that co-facilitator appointments must be approved by member states by consensus rather than made unilaterally by the Chair. Russia contended that DTG co-facilitators handle substantive political matters and therefore constitute officials whose appointment requires a collective agreement. Russia also raised a geographic argument: assigning one developed-country and one developing-country co-facilitator per DTG still disproportionately favours developed states, which represent less than one-fifth of UN membership. Iran added that the early OEWG draft text had explicitly authorised the Chair to appoint DTG facilitators, but that this provision was deliberately removed during negotiations, signalling a lack of agreement on the matter.

The Chair affirmed her intention to consult all member states informally before presenting candidates and called on delegations to show flexibility given the urgency of getting the mechanism’s work underway. Russia subsequently stated its understanding that candidates would be determined through broad consultation, followed by consensus-based approval, but the Chair neither confirmed nor rejected this interpretation. 

The question is effectively deferred to the intersessional period, meaning the composition of the DTG leadership teams remains unresolved and will require continued diplomatic engagement before July.

What will DTGs discuss?

A closely related debate concerned who decides what the DTGs will actually discuss. Several Western and like-minded delegations (e.g., Germany, France, Canada, the United Kingdom, and Australia) highlighted that it is a prerogative of the Chair and co-facilitators, to be exercised in close consultation with states. These delegations proposed ransomware and critical infrastructure protection as natural starting points, citing their frequency across national statements and OEWG discussions. 

Iran and Russia emphasised that topics must be determined by consensus among all member states. Argentina argued that the plenary should maintain control over the agenda rather than ceding too much responsibility to the co-facilitators. 

Morocco instead advocated a bottom-up model in which DTGs define their own priority subtopics from the start, based on member states’ expressed preferences to maintain regional balance and ownership. 

In this sense, the DTGs’ credibility hinges on a delicate balance, having to be ambitious enough to move conversations into action but also focused enough on issues with broad support so that their outputs survive in plenary. 

No decision was taken. For industry and civil society organisations with specific thematic priorities, this remains an active opening: states are currently receptive to input on which topics the DTGs should prioritise.

Colombia put forward a process proposal that drew broadly positive reactions across delegations. It recommended that:

  • DTG mandates be time-limited with clearly defined and measurable outputs; 
  • DTG 1 address specific rotating subjects rather than its entire mandate simultaneously, and 
  • DTG outputs systematically distinguish between recommendations on which consensus exists and those still under development. 

Senegal made a complementary point: reports should document both areas of agreement and divergence, preserving a record of discussions even when no consensus was reached. Both proposals reflect a wider concern that, without structured outputs and clear timelines, the mechanism risks reproducing the open-ended deliberation of the OEWG without generating implementable results.

How will DTGs feed into the plenary?

Another issue discussed was how DTG work feeds into plenary work. Brazil made it clear that without a defined protocol for elevating DTG reports to the plenary and formally accepting their recommendations, the groups risk becoming talking shops disconnected from the mechanism’s official conclusions. Their proposed solution, which has yet to gain broad support, is to keep DTG conversations primarily informal but include a short formal section for decision-making. 

Stakeholder participation

A long-standing point of contention, and possibly the most politically charged, was the role of non-governmental actors in the groups. The effective participation of interested stakeholders remains uncertain. 

Some delegations adopted a more accommodating stance, recognising that stakeholders can enhance the quality of deliberations (Sudan, Antigua and Barbuda) and contribute to more practical outcomes (Vietnam, Dominican Republic), while underscoring the importance of preserving the intergovernmental nature of the process (Sudan, Vietnam). 

Canada and like-minded states argued that the July 2025 consensus clearly provides for states to nominate experts for DTG briefings and for the wider stakeholder community to participate throughout DTG discussions. 

Iran contested this, asserting that stakeholder modalities agreed for the mechanism apply equally to DTGs. Russia also argued that expert briefings from external stakeholders are a possibility rather than a standard feature, and that inviting external briefers requires member-state agreement on a case-by-case basis. 

How this is resolved will directly determine the degree of access the private sector, technical community, and civil society organisations have to the DTG process in practice.

What’s next? 

The session closed without resolution on its two most consequential questions: co-facilitator appointments and the provisional plenary agenda. The Chair will convene informal intersessional consultations on both and issue a programme of work document before July in all UN languages. 

The Secretariat will open an annual stakeholder accreditation window in the coming weeks; stakeholders wishing to participate in plenary sessions and review conferences can monitor the Digital Watch Observatory web page, where we track the process, for details. 

The broader tension remains unresolved, and how it is managed in the intersessional period will largely determine whether the July plenary can open with the mechanism’s operational foundations in place.

The Chair also confirmed the two key dates for 2026: 

For stakeholders tracking or seeking to contribute to these discussions, these are the dates to plan around.

At the 14th Ministerial Conference of the World Trade Organization (MC14) in Yaoundé, Cameroon, digital trade dominated the agenda through two parallel tracks—each pointing in a different direction and illustrating both the limits and evolution of the multilateral system.

The moratorium on customs duties on electronic transmissions. The long-standing moratorium—renewed every two years since 1998—expired on 31 March after members failed to reach consensus on the length of a new extension.

While some members, particularly the USA, sought a longer-term solution, others have traditionally advocated a shorter renewal period, reflecting a desire for caution given the rapid pace of technological change and the need to preserve policy flexibility for the future.

During MC14, Brazil was the leading voice, emphasising the importance of caution in light of developments such as AI and 3D printing, suggesting that a shorter extension with room for review would allow members to reassess as the digital landscape evolves. Efforts to find a middle ground ultimately fell short as time ran out.

The outcome also meant that a broader set of discussions on WTO reform, which had been politically linked to the approval of the moratorium, remained unresolved. 

This is not the first time the moratorium lapsed; it happened at the 1999 Seattle ministerial, before the moratorium was reinstated at Doha two years later. The current expiry of the moratorium does not mean tariffs will automatically be imposed.

Still, it creates policy space for some countries to consider introducing tariffs if they are not bound by trade agreements that prohibit customs duties on electronic transmissions.

Plurilateral Agreement on E-commerce. In parallel, however, a different dynamic unfolded. A coalition of 66 WTO members announced they would move forward with implementing the plurilateral Agreement on Electronic Commerce concluded in 2024 by the Joint Statement Initiative on e-commerce (JSI), through interim arrangements. 

Reminder: WTO Joint Statement Initiatives (JSIs) are a way for a group of World Trade Organization members to move forward on specific issues without waiting for the entire organisation to reach a consensus. They are open to any WTO Member. 

Australia, Japan, and Singapore, serving as co-convenors of the JSI on e-commerce, confirmed that the pact, which aims to facilitate digital trade and prohibit duties on e-commerce transactions, will enter into force once 45 members have formally notified their acceptance.


What’s next for e-commerce discussions? Discussions on the moratorium, the WTO reform, and the future of the Work Programme on e-commerce (WPEC) are expected to continue at the next General Council meeting in May in Geneva.

In the meantime, JSI members will continue to seek inclusion of the Agreement under the WTO legal architecture.

The JSIs and their outcomes face opposition from a number of WTO members. The JSIs themselves, these countries argue, lack legal status because they were not launched by consensus. Similarly, these countries claim that the outcomes of JSIs are not based on consensus and are neither multilateral agreements nor plurilateral agreements as defined in Article IV of the agreement that established the WTO – the Marrakesh Agreement.

For instance, India registered its dissent against incorporating into the WTO rulebook the agreement reached in another plurilateral negotiation, on Investment Facilitation for Development.

The country argued that incorporating such frameworks into the WTO rulebook risks eroding the organisation’s foundational principles. It asked for a discussion of guardrails and legal safeguards before integrating any specific plurilateral outcome into the WTO.

The Data Technology Seminar 2026, organised by the European Broadcasting Union, took place from 10 to 12 March in Geneva. The event brought together media professionals and technology experts to discuss how AI and data systems are being developed, governed, and deployed in public service media. Sessions explored topics such as AI strategy and governance, metadata platforms, hybrid search, audience personalisation, and the use of generative AI in editorial and production workflows.

The World Intellectual Property Organization (WIPO) launched the AI Infrastructure Interchange (AIII) on 17 March in Geneva and online. The programme included keynote remarks, panel discussions, and presentations addressing the role of technical collaboration between creators, rightsholders, and technology companies. Participants also discussed the objectives of the AIII initiative and the establishment of a Technical Exchange Network intended to support ongoing expert dialogue on practical challenges and opportunities. 

The Geneva Graduate Institute organised a briefing lunch on 23 March to examine evolving transatlantic dynamics at the intersection of US politics and the global influence of major technology platforms. The discussion explored how recent political developments in the USA and the concentration of technological power shape Europe’s position, including questions of dependency, regulation, and strategic autonomy.

The International Labour Organization (ILO) hosted a session on the macroeconomic impacts of AI on 25 March, showcasing a new World Bank Group model that treats AI as a structural transformation of production. The tool simulates how AI adoption affects sectors, occupations, and prices, helping policymakers assess implications for growth, equity, and structural change. A first case study in Poland will explore its application, with potential use in other emerging and middle-income economies.

On 30 and 31 March, the International Telecommunication Union (ITU) held a two-day workshop on ‘Trustable and Interoperable Digital Identities for Human and Agentic AI’ in Geneva. It brought together stakeholders from governments, industry, academia, and standards bodies to examine technical approaches related to trust frameworks, trust management, security, and interoperability, and to investigate actionable recommendations and consolidated insights to advance standardisation work in the field. 

The Inter-Parliamentary Union (IPU) hosted a webinar on ‘Building AI Literacy in Parliaments’ on Wednesday, 1 April 2026, exploring how parliaments can develop training and resources to support AI literacy among members, parliamentary staff, and IT teams. The webinar highlighted the IPU Guidelines for AI in parliaments, emphasising that AI literacy should reach all roles within parliaments.
