weekly newsletter


Digital Watch newsletter – Issue 107 – February 2026

February 2026 in retrospect

Child safety online is in the spotlight, as the LA addiction trial and Santa Fe consumer protection trial kick off, marking the first time social media platforms are defending themselves before a jury. Meanwhile, more countries are considering bans on children’s access to social media.

This month’s highlights:

Road to Geneva 2027: Ten key signposts ahead of the next global AI Summit, powered by DiploAI research.

Why cyberspace doesn’t exist: 30 years after the Declaration of the Independence of Cyberspace, we examine how the myth of cyberspace as a separate world shapes AI governance today.

Anthropic vs. Pentagon: AI firm Anthropic was barred from Pentagon work after refusing to waive its ethical safeguards—a major Silicon Valley-military showdown.

Tech sovereignty: From European strategic autonomy to US pushback, digital sovereignty continues to dominate 2026.

Technologies

A Dutch appeals court ordered a full investigation into Nexperia and upheld earlier decisions suspending its former CEO, who is linked to the company’s Chinese parent, Wingtech. The court’s ruling strengthens oversight of the company’s governance and operations, reflecting broader European concerns about foreign ownership and control of strategically sensitive semiconductor assets, particularly for the automotive and technology sectors. Nexperia has pledged full cooperation with the probe, while Wingtech criticised the decision as harmful to the global industry.

Taiwan Semiconductor Manufacturing Company (TSMC) announced it will produce advanced 3-nanometre AI chips at its second fabrication plant in Kumamoto, Japan. The expansion underscores surging global demand for AI processors and aligns with Japan’s strategy to strengthen domestic semiconductor capabilities and diversify critical production outside traditional hubs.

The UK and Bulgaria have agreed to deepen cooperation on semiconductors, with the UK’s Science and Technology Network and Department for Business and Trade linking British compound-semiconductor expertise to Bulgaria’s manufacturing base. The partnership’s headline outcome is progress toward a €350 million ‘Green Silicon Carbide’ wafer factory in Bulgaria, alongside new R&D and industry tie-ups, including a research memorandum between the Universities of Glasgow and Sofia and an MoU between TechWorks UK and Bulgaria’s BASEL. Both sides frame the push as strengthening European supply-chain resilience and skills.

A US national-security review is holding up licences for Nvidia to ship its H200 AI chips to China, leaving Chinese customers unable to place or confirm orders nearly two months after the White House signalled approval in principle. While the Commerce Department has completed its assessment, other agencies, including State, Defence and Energy, are still negotiating additional safeguards and potential conditions such as shipment allocation, testing and end-use reporting. The delays are already disrupting demand expectations and production planning across Nvidia’s supply chain and pushing Chinese firms to explore alternative ways to secure AI chips.

Read about this month’s AI governance developments in our dedicated AI newsletter section.

Infrastructure

Engineers are retrieving TAT‑8, the first fibre-optic transatlantic cable, from the Atlantic seabed more than three decades after it revolutionised global communications. Though retired in 2002 after an irreparable fault, the cable has remained submerged until now. The operation clears the seabed for new infrastructure and recovers glass fibre, copper, and steel components for recycling amid global metal shortages.

SpaceX’s Starlink has received regulatory approval to operate in Vietnam, expanding its global footprint and reinforcing its role as an alternative connectivity provider in tightly regulated markets. In parallel, a Russian official acknowledged that Starlink systems had been down for two weeks in parts of Russia.

A coalition of major technology companies announced the creation of the Trusted Tech Alliance (TTA) and introduced five core principles to define what constitutes ‘trusted’ digital infrastructure: transparent corporate governance, secure development and independent assessment, supply chain oversight, ecosystem openness, and adherence to the rule of law and data protection standards. The initiative positions itself as a response to rising geopolitical fragmentation and growing scrutiny over the security of critical digital systems. 

Cybersecurity

The Ministry of Public Security of the People’s Republic of China has drafted a law allowing authorities to impose exit bans of up to three years on convicted cybercriminals and those who support or facilitate such activities. It would also bar entry to offenders, extend jurisdiction over Chinese nationals abroad, target foreign entities deemed to harm national interests, and tighten controls on online content deemed false or disruptive. The proposal could affect global businesses, cross-border cooperation, and the international mobility of technology professionals.

The USA and Israel launched coordinated military strikes against Iran, accompanied by cyber operations conducted by USCYBERCOM. These cyber operations, together with space operations, disrupted Iran’s communications. Iran’s cyber response has so far been limited to proxy-driven distributed denial-of-service attacks, GPS spoofing near the Strait of Hormuz, and the compromise of IP cameras to support missile operations, alongside a nationwide internet blackout.

The European Commission has launched ProtectEU, a new counterterrorism agenda that sharpens the bloc’s response to evolving threats, especially those amplified by digital tools, by boosting intelligence analysis and Europol support, tightening cooperation with platforms to remove extremist content faster, and stepping up enforcement of the Digital Services Act. It also proposes an EU Online Crisis Response Framework to coordinate with tech companies during security incidents, and expands measures to protect public spaces and critical infrastructure and to disrupt terrorist financing, including via crypto-assets.

Read about this month’s child safety developments in our dedicated child safety newsletter section.

Economic

The European Commission and the COMESA Competition and Consumer Commission are separately investigating concerns that Meta is abusing a dominant position by restricting third-party AI assistants’ access to WhatsApp while privileging its own Meta AI. The European Commission has already formally notified Meta that it has breached EU competition law and is considering interim measures to prevent continued exclusion and protect competitive entry.

France is intensifying its stance against ultra-low-cost online retailers, with Minister Serge Papin declaring 2026 a ‘year of resistance’ to platforms such as Shein. The government argues that global marketplaces benefit from looser regulatory standards than physical French shops. Paris is appealing a court decision that allowed Shein to continue operating despite the sale of inappropriate products, and is preparing legislation to let authorities suspend online platforms without prior judicial approval, expanding executive powers over the digital economy.

Türkiye’s ruling AK Party has introduced a draft bill to formalise crypto taxation by tying digital-asset rules to the Capital Markets Law and requiring licensed platforms to withhold a 10% tax on crypto gains and income every quarter for individuals and companies, including residents and non-residents. The proposal also adds a 0.03% transaction tax on crypto service providers, obliges investors using unlicensed platforms to declare gains annually, and would let the president adjust the withholding rate from 0% to 20% based on factors such as token type or holding period, with the new taxation regime set to take effect two months after publication if the bill passes.
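As a rough illustration of the arithmetic the draft bill describes, here is a minimal sketch of the two levies; the function names and example figures are hypothetical, while the rates (10% quarterly withholding on gains, 0.03% transaction tax on service providers, and the 0%–20% presidential adjustment band) are those reported above:

```python
# Illustrative sketch only: names and example amounts are hypothetical;
# rates follow the draft bill as reported (10% withholding, 0.03%
# provider transaction tax, adjustable withholding band of 0%-20%).

def quarterly_withholding(gains: float, rate: float = 0.10) -> float:
    """Tax a licensed platform would withhold on a quarter's crypto gains."""
    if not 0.0 <= rate <= 0.20:  # presidential adjustment band in the draft
        raise ValueError("rate must be within the 0%-20% band")
    return gains * rate

def provider_transaction_tax(volume: float, rate: float = 0.0003) -> float:
    """0.03% transaction tax levied on a crypto service provider's volume."""
    return volume * rate

# Example: 50,000 lira of quarterly gains; 1,000,000 lira of traded volume.
print(round(quarterly_withholding(50_000), 2))       # 5000.0
print(round(provider_transaction_tax(1_000_000), 2))  # 300.0
```

If the president lowered the withholding rate to 0% for, say, long-held tokens, the same function would simply be called with `rate=0.0`.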

South Korea’s finance minister Koo Yun-cheol has pledged urgent reforms to how government agencies handle seized and state-held crypto after multiple custody failures, including a case in which Seoul police reportedly lost access to 22 BTC (about $1.4 million) when private keys were not properly retained and a third party was allowed to manage the assets. Prosecutors are also investigating alleged bribery linked to the incident, and the finance ministry says the government holds crypto only through lawful enforcement actions such as seizures in tax and criminal cases.

Russia’s central bank says it is intensifying its crackdown on crypto-enabled pyramid schemes, reporting that two-thirds of such operators now rely on cryptocurrency and that victims’ funds were routed to more than 4,600 fraudster-controlled wallets in 2025. The regulator says it identified 7,087 online scams last year, blocked 21,500 scam-linked webpages and social posts, and is urging Russians to use only licensed investment providers as authorities tighten oversight of online fraud spread via social media, chat apps and phone calls.

Human rights

The EU has dropped plans to revise the GDPR’s definition of ‘personal data’ from the draft GDPR omnibus package after strong pushback from national regulators and civil society, opting to keep the regulation’s current scope intact. Attention now shifts to upcoming European Data Protection Board guidance on pseudonymisation, which is expected to clarify how key safeguards should be applied in practice, signalling a broader preference for regulatory clarity and implementation guidance over reopening foundational privacy concepts in legislation.

Negotiations between Australia and the USA over expanded biometric data sharing have raised alarm among privacy advocates and legal commentators. Reports suggest that discussions could broaden US access to sensitive Australian biometric records — including facial images, fingerprints, and identity data — bypassing traditional case-by-case legal cooperation frameworks. 

Italy’s privacy watchdog has ordered Amazon Italia Logistics to stop processing sensitive employee data at its Passo Corese site and to halt the use of data collected via surveillance cameras near restrooms and break areas, after finding the company recorded information such as workers’ health conditions, union/strike activity, and private family details and retained it for up to 10 years, far beyond what authorities say employers may lawfully gather for workplace management.

Nigeria’s Data Protection Commission (NDPC) has launched an inquiry into the Chinese e-commerce giant Temu over suspected violations of Nigeria’s data protection law. Authorities are probing the company’s data-handling practices, specifically, alleged non-transparent data processing, intrusive surveillance mechanisms, cross-border transfers, and possible failure to limit data collection. Temu has pledged cooperation as regulators warn that non-compliance could trigger legal penalties and set precedents for data governance in Africa’s largest digital market.

A draft Law on Cognitive Sovereignty and Protection of Human Attention has been introduced in the Argentine Chamber of Deputies. It suggests establishing a regulatory framework recognising cognitive autonomy as a legally protected good under constitutional and international human rights law, requiring transparency, opt-in personalisation, and non-algorithmic alternatives on mass-reach platforms. It also suggests imposing strict ‘safe by default’ settings for minors, banning behavioural profiling and targeted advertising for users under 13, and mandating impact assessments, registry obligations, audits, and annual transparency reporting.

Sociocultural

UNESCO and Hamad Bin Khalifa University (HBKU) have launched a UNESCO Chair on Digital Technologies and Human Behaviour in Qatar to research how emerging technologies shape daily life, with a focus on digital well-being, ethical design, and healthier online environments; the programme will tackle issues such as internet addiction, cyberbullying, and misinformation, and aims to link research with policy dialogue among governments, international organisations, and academia to promote more responsible technology development.

The UK is introducing legislation requiring tech companies to remove non-consensual intimate images within 48 hours of being reported. Under the updated Crime and Policing Bill, firms that fail to comply risk fines of up to 10% of global revenue or potential service restrictions, with enforcement overseen by Ofcom.

The US Department of State is reportedly preparing to launch ‘freedom.gov,’ an online portal designed to help users worldwide, including in Europe and elsewhere, circumvent local content restrictions and access blocked material, including content their governments classify as hate speech or terrorist propaganda.

The EU is investigating whether Shein’s design elements, such as gamified engagement and opaque recommendation algorithms, undermine consumer safety and transparency obligations under the DSA. Investigators will examine whether Shein has failed to prevent the sale of illegal products — including items that may constitute child sexual abuse material. Shein’s risk-mitigation systems, product removal processes, and compliance with requirements to offer non-profiling recommendation options will be evaluated, with potential fines of up to 6% of global turnover for confirmed breaches.

The European Commission has preliminarily concluded that TikTok’s design violates the bloc’s Digital Services Act (DSA) due to features that the Commission considers addictive, such as infinite scroll, autoplay, push notifications, and its highly personalised recommender system. According to the Commission, existing safeguards on TikTok—such as screen-time management and parental control tools—do not appear sufficient to mitigate the risks associated with these design choices. At this stage, the Commission indicates that TikTok would need to modify the core design of its service. Possible measures include phasing out or limiting infinite scroll, introducing more effective screen-time breaks, including at night, and adjusting its recommender system to reduce addictive effects.

Development

Malaysia has imposed an immediate, total ban on all e-waste imports, reclassifying electronic waste under an ‘absolute prohibition’ in its import rules, after a widening corruption investigation into oversight of the sector. Authorities say the move is meant to stop foreign dumping and protect public health and national security, while warning that enforcement will be tightened to prevent smuggling. The probe has reportedly led to the detention of senior environmental officials and asset freezes.

Gabon has imposed an indefinite suspension of social media platforms, citing the spread of false information, cyberbullying and the unauthorised disclosure of personal data. Gabon’s media regulator, the High Authority for Communication (HAC), stated that existing moderation measures were not working and that the shutdown was necessary to stop violations of Gabon’s 2016 Communications Code.

The European Commission has proposed opening negotiations to bring Albania, Bosnia and Herzegovina, Kosovo, Montenegro, North Macedonia, and Serbia into the EU’s ‘Roam Like at Home’ regime. If implemented, citizens and businesses would be able to make calls, send texts, and use mobile data across borders at domestic rates, both when visiting the EU and when EU citizens travel in the region. The Commission has adopted proposals for negotiating mandates and is now seeking approval from the Council of the EU to begin formal talks.

The government of Cabo Verde has launched Gov.CV, a unified digital portal designed to centralise public services and streamline interactions between the state, citizens, and businesses. By consolidating services, the government expects reductions in processing times, fewer redundancies, and a more transparent user experience. 

Global governance

The UN. The General Assembly approved the creation of a historic global scientific advisory body on AI, the Independent International Scientific Panel on Artificial Intelligence, tasked with providing independent, evidence‑based assessments of AI technologies, risks, opportunities, and impacts.

The first of its kind, the panel is mainly tasked with ‘issuing evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue on AI Governance. The panel will also ‘provide updates on its work up to twice a year to hear views through an interactive dialogue of the plenary of the General Assembly with the Co-Chairs of the Panel’. At the panel’s inaugural meeting, UN Secretary-General António Guterres told experts that they have a huge responsibility to help shape how the technology is used ‘for the benefit of humanity’.

India AI Impact Summit 2026. In a first for the Global South, India hosted the world’s biggest AI summit at Bharat Mandapam. 

The New Delhi Frontier AI Impact Commitments were unveiled at the summit’s opening, centring on two core priorities. The first, advancing understanding of real-world AI usage, seeks to generate anonymised and aggregated data to inform policymaking on AI’s impact on jobs, skills and productivity. The aim is to support evidence-based regulation and economic planning as AI adoption accelerates.

The second, strengthening multilingual and contextual evaluations, centres on improving AI performance across underrepresented languages and cultural contexts. Participating organisations will collaborate with governments and local ecosystems to develop datasets, benchmarks and evaluation expertise, with a particular emphasis on the Global South.

At the heart of the summit was Prime Minister Narendra Modi’s unveiling of the ‘MANAV Vision’, a human-centred approach to AI governance. Framed as a series of principles aimed at placing people at the centre of AI development and deployment, MANAV stands for:

  • Moral and ethical systems — ensuring AI is guided by ethical norms
  • Accountable governance — transparent rules and oversight mechanisms
  • National sovereignty — rights over data and digital assets
  • Accessible and inclusive AI — avoiding monopolies and broadening participation
  • Valid and legitimate systems — lawful and verifiable technologies.

Modi described this framework as essential to preventing future disparities in AI’s impact and ensuring technology serves humanity’s welfare. He also emphasised that AI should be a medium for inclusion and empowerment, particularly for the Global South, rather than a tool that concentrates power among a few actors.

The 2027 edition of the summit will be hosted by Switzerland. Read more about the road to the 2027 summit in our dedicated newsletter section.

Investments and national plans

The USA. Seven tech giants (Google, Meta, Microsoft, Oracle, OpenAI, Amazon, and xAI) have signed the White House’s ratepayer protection pledge, committing major technology companies to cover the additional electricity costs associated with their AI infrastructure and, in some cases, to invest in dedicated energy generation rather than relying solely on the public grid.

Germany. Germany has unveiled plans for a ‘Sovereign AI Factory’, a government‑backed initiative to develop sovereign AI models and infrastructure tailored to local language, cultural context and industrial needs. The project will support domestic innovation by providing compute resources, datasets and certification frameworks that conform to European safety and privacy standards, with the aim of reducing reliance on non‑EU AI providers. Berlin says the factory will also serve as a collaborative platform for research institutions and industry to co‑design secure, interoperable AI systems for public and private sectors.

Pakistan. Pakistan’s government has pledged major investment in AI by 2030, rolling out a comprehensive national strategy to accelerate digital transformation across the economy. The plan focuses on building AI capacity in key sectors — including agriculture, healthcare and education — through funding for research hubs, public‑private partnerships and targeted upskilling programmes. Officials say the investment is intended to attract foreign direct investment, boost exports and position Pakistan as a regional tech player, while also addressing ethical and governance frameworks to guide responsible AI deployment.

Slovenia. Slovenia has set out an ambitious national AI vision, outlining strategic priorities such as human‑centric AI, robust ethical frameworks, and investment in research and talent. The roadmap emphasises collaboration with European partners and adherence to international standards, positioning Slovenia as a proactive voice in shaping AI governance dialogues.

Partnerships

South Korea and Singapore. South Korea and Singapore have launched a Korea-Singapore AI Alliance, a bilateral initiative to deepen cooperation in AI and related technologies. Announced at the Korea-Singapore AI Connect Summit in Singapore, the alliance aims to create an open innovation ecosystem that connects capital, talent and technology between the two nations, with the goal of enhancing competitiveness in the global AI market and supporting joint development of AI solutions that address regional and global challenges. The partnership includes a pledge to establish a US$300 million global AI investment fund in Singapore by 2030 to support startups and collaborative research. 

Content governance

China. A court in eastern China has set an early legal precedent by limiting developer liability for AI hallucinations, ruling that developers are not automatically responsible unless users can prove fault and demonstrable harm. Judges characterised AI firms as service providers, requiring claimants to show both provider fault and actual injury from erroneous outputs, a framework intended to balance innovation incentives with user protection.

DPAs. Data protection authorities from 61 jurisdictions and the European Data Protection Supervisor (EDPS) issued a joint statement warning about AI tools that generate realistic images of identifiable individuals without consent. They raised concerns about privacy, dignity, and child safety, noting that such technologies—often embedded in social media—enable non-consensual intimate imagery and other harmful content. Authorities stressed that AI systems must comply with data protection laws and that certain uses may constitute criminal offences. Organisations were urged to implement safeguards, ensure transparency, enable swift content removal, and engage proactively with regulators to protect fundamental rights.

India. India has begun enforcing a three-hour removal rule for AI-generated deepfake content, requiring platforms and intermediaries to take down specified material within 180 minutes of notification or face regulatory sanctions. The accelerated timeframe is designed to blunt the rapid spread of deceptive, synthetic media amid heightened concerns about misinformation and social disruption.

Global coalition on child safety. A broad coalition of child rights advocates, digital safety organisations and policymakers has called on governments to ban ‘nudification’ AI tools, urging criminalisation of software that converts clothed images into sexually explicit versions without consent. The group argues that existing content moderation approaches are insufficient to protect minors and stresses that pre-emptive legal prohibitions are needed to prevent widespread exploitation.

UNICEF. The UN Children’s Fund (UNICEF) has called on governments to criminalise the creation, possession and distribution of AI-generated child sexual abuse content, warning of a sharp rise in sexually explicit deepfakes involving children and urging stronger safety-by-design practices and robust content moderation. A study cited by the agency found that at least 1.2 million children in 11 countries reported their images being manipulated into explicit AI deepfakes, with ‘nudification’ tools that strip or alter clothing posing heightened risks. UNICEF stressed that sexualised deepfakes of minors should be treated as child sexual abuse material under the law and urged digital platforms to prevent circulation rather than merely remove content after the fact.

Spain. In Spain, Prime Minister Pedro Sánchez has ordered prosecutors to investigate X, Meta, and TikTok over the alleged circulation of AI-generated child sexual abuse material (CSAM). The probe follows reports that platform systems may have enabled the creation and spread of sexually explicit deepfake imagery involving minors. Spanish authorities are examining whether companies failed to prevent the distribution of such content and whether AI tools embedded in or linked to the platforms contributed to the harm.

The UK. Britain is partnering with Microsoft, academics, and tech experts to develop a deepfake detection system to combat harmful AI-generated content. The government’s framework will standardise how detection tools are evaluated against real-world threats such as impersonation and sexual exploitation, building on recent legislation criminalising the creation of non-consensual intimate synthetic imagery. Officials cited a dramatic increase in deepfakes shared online in recent years as motivation for the initiative.

Grok/X. The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanded investigation into the platform. Musk and former CEO Linda Yaccarino have been summoned for voluntary interviews. X denied any wrongdoing and called the raid an ‘abusive act of law enforcement theatre’, while Musk described it as a ‘political attack’.

The UK Information Commissioner’s Office (ICO) opened a formal investigation into X and xAI over whether Grok’s processing of personal data complies with UK data protection law, namely core data protection principles—lawfulness, fairness, and transparency—and whether its design and deployment included sufficient built-in protections to stop the misuse of personal data for creating harmful or manipulated images.

Ireland’s Data Protection Commission (DPC) has initiated a large-scale GDPR investigation into X’s AI chatbot Grok, after reports that its generative AI capabilities have been used to produce harmful, non-consensual and sexualised content involving personal data. This probe, triggered by widespread controversy over Grok’s image outputs, unfolds alongside evidence that the chatbot has been gaining market share in the USA as global regulators scrutinise its compliance with fundamental data protection standards.

Brazil’s National Data Protection Agency and National Consumer Rights Bureau have ordered X to stop serving explicit image generation via its Grok AI, citing risks of harmful outputs reaching minors and contravention of local digital safety norms. The directive demands immediate technical measures to block certain prompts and outputs as part of ongoing scrutiny of platform content moderation practices.

Meanwhile, Indonesia has restored access to Grok after banning it in January, having received guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Chile. Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI. The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the USA or Europe. President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development. Latam-GPT is not designed as a conversational tool but rather as a vast dataset that serves as the foundation for future applications. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

Safety and security

International experts. The second International AI Safety Report 2026 has been published. The report synthesises evidence on AI capabilities — such as improved reasoning and task performance — alongside emerging risks like deepfakes, cyber misuse and emotional reliance on AI companions, while noting uneven reliability and ongoing challenges in managing risks. It aims to equip policymakers with a science-based foundation for regulatory and governance decisions without prescribing specific policies.

The UN. AI governance was a key focus at the recent UN Special Dialogue entitled ‘From Principles to Practice: Special Dialogue on Artificial Intelligence and Preventing and Countering Violent Extremism’. Diplomats and experts discussed how AI is reshaping global stability, conflict dynamics and international law. Participants highlighted risks from autonomous systems and misinformation campaigns and stressed the need for multilateral cooperation and shared norms to mitigate emerging threats.

The EU. The European Commission has confirmed it will again delay publishing guidance on high-risk AI systems under the EU AI Act. The guidelines were due by 2 February 2026, but will now follow a revised timeline. The delay marks the second missed deadline and adds to broader implementation setbacks surrounding the EU AI Act. 

Intellectual property rights

The UK. The UK Supreme Court has ruled that AI-assisted inventions can qualify for patents when the human contributor’s inventive role is identifiable and substantial, a decision legal experts say will boost innovation by clarifying intellectual property protections in hybrid human-AI development. The judgement aims to incentivise investment in AI research while maintaining established patentability standards.

Future of work

South Korea. South Korea has launched a labour‑government body to address the pressures of AI automation on the workforce, creating a cross‑sector council tasked with forecasting trends in job displacement and recommending policy responses. The initiative brings together labour unions, industry leaders and government ministries to coordinate reskilling and upskilling programmes, strengthen social safety nets, and explore income support models for workers affected by automation. 

The trials begin 

The LA addiction trial. A landmark trial opened in Los Angeles, USA, in February 2026 against Meta and YouTube, centring on claims that their platforms are deliberately designed to be addictive and have harmed young users’ mental health. 

The plaintiff, Kaley, 20 years old in 2026, alleges that Instagram and YouTube caused her anxiety, body dysmorphia, and suicidal thoughts. Her lawyers likened features like infinite scroll, autoplay, likes, and beauty filters to a ‘digital casino’ for children, citing internal documents showing the platforms targeted young users and even used YouTube as a ‘digital babysitter.’

Kaley also initially sued Snap Inc. and TikTok, but these platforms reached confidential settlements before trial, leaving Meta and YouTube as the remaining defendants to face a jury. 

Meta and YouTube’s defence argued that social media was not responsible for Kaley’s struggles, citing her difficult family background, therapists’ records, and the availability of safety tools. 

YouTube highlighted that Kaley’s average daily usage of the platform has been 29 minutes since 2020, compared the service to other entertainment offerings, and argued that she is not addicted.

Meta CEO Mark Zuckerberg took the stand and insisted that Instagram prohibits users under 13 and that enforcing age limits is challenging, as many minors lie about their birth dates. He highlighted ongoing efforts to reduce screen time and improve safety features. Still, internal documents presented in court suggested that early teen engagement had been a strategic priority. The case is being closely watched as a potential blueprint for platform accountability regarding addictive features.

The plaintiff’s psychiatrist, Virginia Burke, also took the stand. Burke testified that plaintiff Kaley’s social media use contributed to her mental health issues, citing online bullying. However, Burke noted that Kaley also enjoyed creating and sharing video art, though she was frustrated when others claimed credit for it. Burke stated that social media addiction is not yet a widely recognised diagnosis in psychiatry and is absent from the latest Diagnostic and Statistical Manual, the key text for US mental health professionals.

Kaley herself testified that from a young age, she spent nearly all her time on platforms such as YouTube and Instagram, describing an inability to limit her use, even when she experienced bullying. She said she secretly retrieved her phone at night after her mother confiscated it, became distressed when denied access to social media apps, withdrew from family interactions, and believed her health, sleep, grades, and overall well-being would have been better without social media.

The trial is expected to last until the end of March 2026.

The Santa Fe consumer protection trial. Another trial opened in Santa Fe, New Mexico, USA, after more than two years of pre-trial wrangling. The lawsuit, filed in 2023 by New Mexico Attorney General Raúl Torrez, has finally reached a jury.

The lawsuit accuses Meta of violating the state’s consumer protection laws by misrepresenting how safe its platforms are for minors while building features and algorithms that, in prosecutors’ view, entice prolonged use and expose children to significant risks. Those risks include addiction-like engagement, exposure to harmful sexual content, unwanted private communications with adults, sleep disruption from compulsive use, and environments where predators can operate with relative ease. 

On 3 March, state attorneys introduced recorded depositions of Meta’s CEO Mark Zuckerberg and Head of Instagram Adam Mosseri, seeking to show that the company was aware of metrics and research indicating serious child safety problems but did not sufficiently act or warn users and families. In those depositions, prosecutors pressed Meta’s executives on issues like safety priorities versus corporate profits, the scope of their platforms’ reach among teens, and specific product choices — from recommendation systems to cosmetic filters — that might affect teen well-being. 

Meta’s attorneys told the jury during opening statements that the company has implemented numerous safety tools and content-moderation systems. Meta maintains it has not engaged in deception and that risk disclosures and safety measures have been ongoing. 

The Oakland school districts trial. Another bellwether trial is expected to start in Oakland, California, in June, the first to represent school districts that have sued social media platforms over harms to children. 

The heart of the matter. Approximately 1,600 plaintiffs — including individuals, more than 350 families, 250 school districts, and state attorneys general — have filed claims against Meta Platforms, Snap Inc., TikTok, and Google. Because of the volume, the lawsuits are coordinated in a Judicial Council Coordination Proceeding (JCCP). From this coordinated group, 22 bellwether trials have been selected, with three — in Los Angeles, Santa Fe, and Oakland — scheduled to proceed first.

A bellwether trial is a test case chosen from a large group of similar lawsuits to be tried first, to gauge how juries respond to the evidence and legal arguments. Its outcome does not decide the other cases, but it signals how future trials or settlement negotiations may unfold.

This is why the Los Angeles and Santa Fe trials have drawn much attention—it is expected that their outcome will influence future platform design practices.


The bans club grows

The momentum to ban children from accessing social media continues, as 10 more nations weigh legislative measures and enforcement tools.

Portugal’s parliament has approved a law restricting social media access for minors under 16, requiring express and verified parental consent for accessing platforms like Instagram, TikTok, and Facebook. Access will be controlled through the Digital Mobile Key, Portugal’s national digital ID system, ensuring effective age verification and platform compliance. The law strengthens protections amid growing concerns over social media’s impact on young people’s mental health, and detailed implementation and enforcement rules are now set for parliamentary committee review.

In Spain, Prime Minister Pedro Sánchez’s government has proposed legislation that would ban social media access for users under 16, framing the measure as a necessary child-protection tool against addiction, exploitation, and harmful content. Under the draft plan, platforms must deploy mandatory age-verification systems designed as enforceable barriers rather than symbolic safeguards—signalling a shift toward stronger regulatory enforcement rather than voluntary compliance by tech companies. Proposals also include legal accountability for technology executives over unlawful or hateful material that remains online.

Türkiye’s ruling AK Party has proposed a bill banning social media access for children under 15, requiring platforms to implement age verification. Platforms will also have to offer a separate, child-specific version of their services for older minors aged 15–18. Officials cited the Turkish Penal Code, which limits criminal liability for children under 15, as justification for drawing the line at age 15.

Poland’s ruling coalition is currently drafting a law that would ban social media use for children under 15. Lawmakers aim to finalise the law by late February 2026 and potentially implement it by Christmas 2027. Poland aims to update its digital ID app, mObywatel, to enable users to verify their age. 

Slovenia is preparing draft legislation to ban minors under 15 from accessing social media, a move initiated by the Education Ministry.

Greece is reportedly close to announcing a ban on social media use for children under 15. The Ministry of Digital Governance intends to rely on the Kids Wallet application, introduced last year, as a mechanism for enforcing the measure instead of developing a new control framework. 

In Austria, the government is actively debating a prohibition on social media use for children under 14. State Secretary for Digital Affairs Alexander Pröll confirmed the policy is under discussion with the aim of bringing it into force by the start of the school year in September 2026. 

Germany is weighing limits on children’s access to social media, as the ruling party urges the federal government to introduce a legal minimum age of 14. Chancellor Friedrich Merz has signalled support for the proposal, saying he has considerable sympathy for the idea.

The UK has launched a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy. Potential measures could include setting a minimum age limit for social media, restricting harmful features such as infinite scrolling, and examining protections against children sending or receiving explicit images. The consultation will also explore restrictions on children’s use of AI chatbots and limits on VPN use where it undermines safety protections. The government intends to act swiftly on its findings within months by introducing targeted legal powers that can be enacted rapidly as technology evolves. 

The EU as a whole is revisiting the idea of an EU-wide social media age restriction. The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday, 10 February. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

These individual national efforts unfold against a backdrop of increasing international regulatory coordination. On 3 February 2026, the European Commission convened with Australia’s eSafety Commissioner and the UK’s Ofcom to share insights on age assurance measures—technical and policy approaches for verifying users’ ages and enforcing age‑appropriate restrictions online. The meeting followed a joint communication signed at the end of 2025, where the three regulators pledged ongoing collaboration to strengthen online safety for children, including exploring effective age‑assurance technologies, enforcement strategies, and the role of data and independent research in regulatory action.

The big picture. The membership of the ban club has reached double digits. We’ll continue following the developments.

Zooming out. These initiatives across multiple nations confirm that Australia’s social media ban was not an isolated policy experiment, but rather the beginning of a global bandwagon effect. This momentum is particularly striking given that Australia’s own ban is not yet widely deemed a success—its effectiveness and broader impacts are still being studied and debated. 

The developments come just as Australia’s eSafety report notes that tech giants—including Apple, Google, Meta, Microsoft, Discord, Snap, Skype and WhatsApp—have made only limited progress in combating online child sexual exploitation and abuse (CSEA), despite being legally required under Australia’s Online Safety Act to report on their measures.

From bans to compliance

Beyond outright bans, a second regulatory front is taking shape: tightening the legal and technical conditions in which platforms process minors’ data and host content that may be harmful to children.

Meanwhile, enforcement action in the UK underscores the financial and reputational risks of non-compliance. The UK privacy watchdog fined Reddit £20 million for unlawfully processing children’s personal data and failing to protect under-13 users. The regulator found Reddit lacked ‘robust age assurance mechanisms’ and relied on easily bypassed self-declaration, meaning it had no lawful basis to handle children’s data and exposed them to potentially harmful content. Reddit also did not complete a required data protection impact assessment before 2025. The fine is the largest issued by the ICO over children’s privacy. Reddit plans to appeal.

The data protection authority of Türkiye has opened a new review into how major social media platforms manage children’s personal data. The Personal Data Protection Authority is reviewing how children’s personal data is processed on TikTok, Instagram, Facebook, YouTube, X and Discord and what safeguards are in place. Separately, the ruling Justice and Development Party (AKP) is expected to introduce a family package that would require identity verification for every account through phone numbers or the e-Devlet system. Children under 15 would not be allowed to create profiles, and further limits could apply to users under 18.

In Brazil, a new legislative proposal—Bill No. 730/2026—has been introduced to the Chamber of Deputies. The proposal mandates age-verification mechanisms aligned with Brazil’s data protection law (Law No. 13.709/2018), prioritising data minimisation, pseudonymisation, and limited retention. It also bans direct or indirect monetisation involving children under 14 and requires prior judicial authorisation for remunerated artistic work by adolescents aged 14–18, subject to safeguards. If adopted, the bill would formalise baseline compliance requirements for platforms operating in the Brazilian market.

China’s draft measures that classify and regulate online information that may harm minors’ physical and mental health went into effect on 1 March. Covered content includes material inducing unsafe behaviour, extreme emotions, discrimination, unhealthy lifestyles, irrational consumption, celebrity worship, or distorted values such as hedonism and pseudoscience. The rules also restrict the misuse of minors’ images and personal data. Providers of algorithmic recommendation systems and generative AI services are required to strengthen and refine their security governance frameworks and technical safeguards, and are prohibited from promoting or distributing online content that could negatively impact minors’ physical or psychological well-being.

Thirty years ago, on 8 February 1996, two developments kicked off a powerful narrative about the internet: that it occupied a realm apart from ordinary law and politics. These were the Declaration of the Independence of Cyberspace and the US Communications Decency Act (CDA). 

Declaration of the Independence of Cyberspace. In Davos, John Perry Barlow’s Declaration of the Independence of Cyberspace asserted that the ‘Governments of the Industrial World’ have ‘no sovereignty’ in cyberspace. 

This vision spawned a generation of thought arguing that the internet meant the ‘end of geography.’ Thousands of articles, books, theses and speeches have argued that we need new governance for the ‘brave new world’ of the digital.

This intellectual and policy house of cards was built on the assumption that there is cyberspace beyond physical space. It was (and is) a wrong assumption. There is no cyberspace. Every email, every post, every AI query is ultimately a physical event: pulses of electrons carrying bits and bytes through cables under the ocean, Wi-Fi, data servers, and internet infrastructure.

The CDA and its Section 230. On the same day as Barlow’s declaration, President Clinton signed into law the US Communications Decency Act (CDA), which had been adopted by the US Congress. Buried within it was Section 230, which granted internet platforms an unprecedented immunity: they could not be treated as publishers or speakers of the content they hosted.

For the first time in history, commercial entities were granted a broad shield from liability for the very business from which they profited. It was a departure from the long tradition of legal liability, for example, of a newspaper for the text it publishes or of broadcasters for their transmissions.

This provision was justified as a way to protect a nascent industry from crippling litigation. At the time, internet companies were small and experimental. The immunity enabled rapid growth and innovation. 

Over time, however, those start-ups became some of the most valuable corporations in history, with global reach and market capitalisations of trillions of dollars. The legal framework, however, largely remained intact, even as internet companies developed sophisticated algorithms that curate, amplify, and monetise user content at scale. This divergence created a central tension in contemporary law and economics: immensely powerful intermediaries operating with limited accountability for systemic effects.

The convergence of the two. The conceptual separation of ‘cyberspace’ made this arrangement easier to defend. If the internet were a new world, exceptional rules seemed justified.

But critics quickly challenged that reasoning. US judge Frank H. Easterbrook argued that we do not need internet law, just as we never needed a ‘law of the horse’ when horses were the dominant mode of transportation. The internet should be regulated by applying existing legal principles. Law regulates relationships among people and institutions, regardless of the technologies they use. The medium may change; the underlying principles endure.

Experience has largely vindicated that view. Digital technologies have not dissolved geography; they have intensified it. States assert jurisdiction over data flows, content moderation, taxation, competition, and security. High-precision geolocation, data localisation requirements, and national regulatory regimes demonstrate that the internet operates squarely within territorial boundaries.

However, Section 230 remains in force, extending into the age of AI. Companies developing large language models and other AI systems often rely on intermediary protections and analogous doctrines to limit liability. As a result, AI tools can be deployed globally with comparatively limited ex ante oversight. Yet their outputs can shape public discourse, influence elections, affect mental health, and generate economic disruption.

The central question is not whether innovation should be constrained, but whether it should be aligned with established principles of responsibility. Technologies do not exist outside society; they are embedded within it. If an entity designs, deploys, and profits from a system, it should bear responsibility for its foreseeable impacts. The age of legal exceptionalism should end. 

Last month saw further developments pointing to digital sovereignty as the prevailing trend, carrying over from December 2025 into January and February 2026.

The European Commission has begun testing the open-source Matrix protocol as a possible alternative to proprietary messaging platforms for internal communication. Matrix’s federated architecture allows communications to be hosted on European infrastructure and governed under EU rules, aligning with broader efforts to build sovereign digital public services and reduce reliance on external platforms.

The Commission also unveiled EURO‑3C, a new initiative worth €75 million under the Horizon Europe programme to build Europe’s first large‑scale federated telco‑edge‑cloud infrastructure. By federating existing national infrastructures across borders, EURO‑3C aims to reduce dependence on non‑EU hyperscalers and fortify the EU’s role in cloud, edge computing and AI infrastructure.

In France, the government has taken a hard line on control of satellite infrastructure, another cornerstone of digital sovereignty. Paris blocked the sale of ground-station assets owned by Eutelsat to an external investor, arguing that such infrastructure underpins both civilian and military space communications and must remain under domestic authority. French officials described these facilities as critical to strategic autonomy, in part because Eutelsat represents one of Europe’s few genuine competitors to US-led satellite constellations such as Starlink.

In Russia, the telecommunications regulator Roskomnadzor has tightened restrictions on Telegram, slowing delivery of media and limiting certain features to pressure users toward domestic alternatives. Roskomnadzor stated that Telegram is not taking meaningful measures to combat fraud, is failing to protect users’ personal data, and is violating Russian laws. Telegram’s founder has condemned the measures as authoritarian, warning they may interfere with essential communication services.

This crackdown has escalated with the full blocking of Meta’s WhatsApp, which 100 million Russians use. Authorities justified the ban by pointing to WhatsApp’s refusal to meet Russian legal requirements. Users are being encouraged to adopt government-supported platforms that critics say enable state surveillance, raising concerns about privacy and access to independent communication channels. Meta called the ban harmful to both safety and privacy.

Despite these moves, Russia is pausing aggressive action against Google, citing the country’s dependence on Android devices and warning that a sudden ban could disrupt millions of users. Officials indicated that any transition to domestic alternatives will be gradual, reflecting a cautious approach to reducing reliance on foreign tech.

Meanwhile, in the Netherlands, digital sovereignty has moved to the forefront of parliamentary debate. Lawmakers have renewed calls to shift public and private-sector data away from US-based cloud services, citing risks under US legislation such as the Cloud Act. Concerns have intensified following the proposed acquisition of Solvinity, which hosts parts of the Dutch DigiD digital identity system, by a US firm. MPs emphasised the need for stronger safeguards, the promotion of European or Dutch cloud alternatives, and the updating of procurement rules to protect sensitive data.

As European policymakers weigh strategic autonomy and regulatory control, Washington is simultaneously stepping up efforts to counter what it views as potentially disruptive measures on a global level. 

An internal US State Department cable seen by Reuters directs US diplomats to actively oppose foreign data sovereignty and data localisation laws and to promote a more assertive US international data policy. The cable, signed on 18 February, argues that such regulations could disrupt global data flows, raise costs, create unnecessarily burdensome compliance requirements, and hamper cloud and AI services. The directive also encourages advocacy for the Global Cross‑Border Privacy Rules Forum (CBPR) as an alternative mechanism supporting data flow with privacy protections.

Zooming out. This move underscores rising tensions with Europe’s regulatory push around privacy and digital sovereignty and reflects a move toward defending US tech interests abroad.


The big picture. The common thread is clear: Digital sovereignty is now a key consideration for governments worldwide. The approaches may differ, but the goal remains largely the same – to ensure that a nation’s digital future is shaped by its own priorities and rules. But true independence is hampered by deeply embedded global supply chains, prohibitive costs of building parallel systems, and the risk of stifling innovation through isolation. While the strategic push for sovereignty is clear, untangling from interdependent tech ecosystems will require years of investment, migration, and adaptation. The current initiatives mark the beginning of a protracted and challenging transition.

The big picture. Looking beyond the transatlantic sparring, the uncomfortable reality is that most countries cannot have full authority over their national digital space, and trying to do so can be economically and politically costly. But the alternative is not helpless dependence. There is limited room to manoeuvre, as Dr Jovan Kurbalija explains in his blog ‘Digital sovereignty stack: Infrastructure, services, data, and AI knowledge.’ 

In a high-stakes showdown that is redrawing the battle lines between Silicon Valley and the US military, AI firm Anthropic found itself exiled from the Pentagon after refusing to waive its ethical safeguards.

How it started: A contractual dispute. The Pentagon sought assurances that Anthropic’s model, Claude, could be used for ‘all lawful purposes.’ Anthropic pushed back, arguing that such wording did not sufficiently restrict uses the company considers high-risk, particularly mass domestic surveillance and fully autonomous weapons systems. The company requested clearer guardrails.

How it escalated: A supply chain risk designation. Officials signalled that Anthropic risked losing federal contracts and could even face designation as a supply chain risk. The administration ultimately moved ahead with that step, ordering agencies to halt use of Anthropic’s systems and providing a limited six-month wind-down period for existing arrangements.

Supply-chain risk designations are typically reserved for foreign-adversary threats, not for domestic firms negotiating contract terms. Experts and Anthropic argue that such a designation has no clear precedent and is likely to be challenged in court as legally unsound.

Enter: The rival company. Into this vacuum stepped OpenAI. Shortly after Anthropic’s blacklisting, OpenAI reached its own arrangement with the Pentagon. The company publicly emphasised that it maintains safety ‘red lines,’ including restrictions related to mass surveillance and the requirement for human oversight in the use of force. CEO Sam Altman indicated that OpenAI shares many of the same ethical concerns Anthropic had raised. The Pentagon accepted OpenAI’s framework, raising questions about why similar safeguards proved untenable in Anthropic’s case.

The reactions. The episode reverberated throughout the tech world. Major industry groups, including representatives from Amazon, Nvidia, Apple, and others, warned the government against broad use of supply-chain risk designations for US tech companies, fearing chilling effects on innovation and public-private cooperation. 

Investors in Anthropic also pushed for de-escalation, worried that the standoff could harm the company’s enterprise business and IPO prospects if key contracts were lost. 

However, the day after Anthropic lost its Pentagon contract, Claude hit number one in US app downloads, while US uninstalls of ChatGPT’s mobile app jumped 295% day-over-day. This suggests that Claude may not lose popularity with users.

Why does it matter? Ultimately, the dispute exposed a deeper structural tension. Advanced AI systems are increasingly central to military planning, logistics, intelligence analysis, and battlefield decision-making. At the same time, leading AI firms have articulated ethical boundaries around surveillance, lethal autonomy, and dual-use risks. 

The confrontation between Anthropic and the Pentagon crystallised the question of who determines those boundaries when national security and corporate governance collide.

Yet the fact that Anthropic and the Pentagon are reportedly returning to the negotiating table underscores that the issue is far from resolved.

Decoding the UN CSTD Working Group on Data Governance | Part 3

On 10 February, Diplo, the Open Knowledge Foundation, and the Geneva Internet Platform co-organised an online event, ‘Decoding the UN CSTD Working Group on Data Governance | Part 3’, which reviewed progress and prospects of the UN Multi-Stakeholder Working Group on Data Governance. Participants said discussions had intensified in recent months following initial procedural delays. However, deep divergences remain. One fault line concerns whether data governance should focus primarily on individual privacy protections or incorporate broader societal and collective rights in data. Another centres on whether interoperability should be treated as an intrinsic public good or assessed in light of potential risks such as market concentration and data extractivism. A third area of contention involves cross-border data flows. Some frame free data flows with trust as essential for integration into global innovation markets. Others argue that asymmetries in infrastructure and bargaining power require preserving regulatory autonomy for developing countries, including the ability to pursue data sovereignty policies aligned with national development priorities.

Speakers emphasised that capacity building has emerged as an area of growing convergence. Governments across regions have called for technical, institutional, and policy support to develop interoperable systems and strengthen domestic data governance frameworks. While the final report will be brief, members indicated it would reflect divergent views without privileging majority positions. Discussions continue over whether the outcome should include recommendations or remain descriptive.

The working group is scheduled to meet again in Geneva in March, as it moves toward finalising a document that proponents hope will guide international data governance debates in the years ahead.


Menaces hybrides: Comment améliorer la résilience face à ce phénomène? (Hybrid threats: How can resilience against this phenomenon be improved?)

The UN Institute for Disarmament Research (UNIDIR), in partnership with the Organisation internationale de la Francophonie (OIF), held an event to explore the phenomenon of hybrid threats. Experts recommended strengthening multilateral governance with harmonised norms for space and online platforms, building societal resilience through information and enforcement, and protecting critical infrastructure via cybersecurity and operational safeguards. Supporting less-equipped states with technical, regulatory, and risk-management tools is essential, alongside strategic signalling to make the consequences of unacceptable actions clear. Across all these measures, the guiding principle is integration—space, information, and cyber domains must be managed together to maintain global stability and resilience.

Launch of the World Intellectual Property Report 2026: Technology on the Move

The World Intellectual Property Organization (WIPO) launched the 2026 edition of its World Intellectual Property Report, entitled ‘Technology on the Move’, on Tuesday, 17 February, in Geneva and online. The report analyses how technologies spread globally and the implications for economic development. It reveals a dramatic acceleration in global technology diffusion: older technologies like the telegraph and automobile took decades to diffuse, whereas contemporary digital innovations, such as generative AI, reach users worldwide within days thanks to mature global digital infrastructure. Adoption gaps between advanced and developing economies have narrowed for recent technologies, and usage intensity differences are diminishing, especially for digital technologies. However, significant disparities remain, notably in Africa, where infrastructure and access gaps persist. Innovation leadership remains concentrated in a handful of economies, including the USA, Western Europe, Japan, and China. Successful diffusion depends on four key factors: technology characteristics, information flow, absorptive capacity, and public policy and IP frameworks. The report stresses that deliberate policy and investment are essential to translate rapid diffusion into inclusive economic development and growth.

In 2027, Geneva will host the next global AI Summit, arriving at a moment when governments, businesses, and communities worldwide are deep into AI-driven transformation. 

Previous hosts brought their own distinctiveness: from Bletchley Park’s focus on existential risk to Seoul’s innovation-security balance, Paris’s economic and societal lens, and New Delhi’s emphasis on development and inclusion. Switzerland now has an opportunity to shape the next phase of AI governance and ensure that the 2027 AI Summit is more than just an event.

We suggest ten signposts on the Road to the 2027 Geneva AI Summit, supported by DiploAI research, training, and policy monitoring via the Digital Watch Observatory.


Innovation. AI is fundamentally about innovation, both technological and, increasingly, societal. The next wave of innovation will involve activating the knowledge of citizens and institutions through data labelling, embedding reinforcement learning into pedagogical practices, and developing knowledge graphs. Switzerland has long ranked among the world’s top innovators, favouring grounded, low-hype developments that address real needs and unexplored niches.

Governance. Existing international governance frameworks are likely to shape AI policy in Geneva, given the city’s concentration of organisations spanning trade, health, telecommunications, labour, and security. The new International Scientific Panel on AI can draw lessons from the Geneva-based IPCC’s experience at the science–diplomacy interface. Switzerland’s bottom-up policymaking model supports citizen inclusion in AI debates, while its cautious, gap-based regulatory tradition aligns with emerging calls for pragmatic, proportionate AI governance.

Subsidiarity. The principle of subsidiarity, central to Swiss societal organisation, holds that decision-making should occur as close as possible to the citizens and communities concerned. Applied to AI, this approach would counter the concentration of power in a few major platforms by rooting AI development in the local communities where knowledge is created through everyday interactions.

EspriTech. AI is prompting renewed reflection on fundamental questions of humanity, free will, and ethics, leading societies to revisit their cultural, religious, and philosophical foundations. Drawing on EspriTech and the intellectual legacy of Geneva’s thinkers, these lessons can help fine-tune debate on AI and humanity.

Trust. Switzerland, as a country with high trust capital, can foster a ‘trust but verify’ approach ahead of the 2027 Summit. Trust can be rebuilt through a fully informed and realistic discussion of AI risks, which have gradually recalibrated from 2023’s focus on existential risks (survival of humanity) to the current primacy of existing risks (education, disinformation, jobs) and growing concerns of exclusion risks (monopolisation of AI by a few actors). 

Apprenticeship. The AI apprenticeship model, inspired by the Swiss tradition of learning by doing with mentorship, is emerging as an effective way to train in AI. Ahead of the 2027 Geneva AI Impact Summit, it can strengthen the AI knowledge and capacities of diplomats, civil society, and local communities.

Humanity. The 2027 AI Summit needs to give concrete meaning to the call for AI to serve humanity’s core interests. Switzerland has been ‘walking the talk’ of human-centred society in politics, education, social care, and the economy. This centuries-long experience can help fine-tune the critical connections between AI and human civilisation by both sharing some lessons learned and experimenting with new approaches and practices for the AI era. 

Institutions. Institutions are an important carrier of societal memory and knowledge. AI should be considered a creative change agent that can, among other things, preserve institutional memory and strengthen the capacity to respond to societal needs.

Multilateralism. The AI Summit can help clarify the purpose of international organisations in the AI era. AI can, for example, help foster a new level of legitimacy for international processes by ensuring that contributions to public consultations are properly traced and reflected in policy documents.

Sovereignty. As geopolitical tensions rise, questions of AI and digital sovereignty are becoming more relevant, highlighting the need for agency, self-determination, and responsible management of the knowledge that drives AI rather than isolation. Ahead of the 2027 Summit, Swiss experience can support more informed discussions on the technical, legal, and knowledge dimensions of AI sovereignty within an interdependent framework.

Read the suggestions in full at our dedicated web page.
