
Weekly #249 Why cyberspace doesn’t exist


6 – 13 February 2026


HIGHLIGHT OF THE WEEK

Why cyberspace doesn’t exist

Thirty years ago, on 8 February 1996, two developments kicked off a powerful narrative that the internet occupied a realm apart from ordinary law and politics: the Declaration of the Independence of Cyberspace and the US Communications Decency Act (CDA).

Declaration of the Independence of Cyberspace. In Davos, John Perry Barlow’s Declaration of the Independence of Cyberspace asserted that the ‘Governments of the Industrial World’ have ‘no sovereignty’ in cyberspace. 

This vision spawned a generation of thought arguing that the internet meant the ‘end of geography.’ Thousands of articles, books, theses and speeches have since argued that we need new governance for the ‘brave new world’ of the digital.

This intellectual and policy house of cards was built on the assumption that there is a cyberspace beyond physical space. It was (and is) a wrong assumption. There is no cyberspace. Every email, every post, every AI query is ultimately a physical event: pulses of electrons carrying bits and bytes through cables under the ocean, Wi-Fi networks, data servers, and the rest of the internet’s infrastructure.

The CDA and its Section 230. On the same day as Barlow’s declaration, President Clinton signed into law the US Communications Decency Act (CDA), which had been adopted by the US Congress. Buried within it was Section 230, which granted internet platforms an unprecedented immunity: they could not be treated as publishers or speakers of the content they hosted.

For the first time in history, commercial entities were granted a broad shield from liability for the very business from which they profited. It was a departure from the long tradition of legal liability, for example, of a newspaper for the text it publishes or of broadcasters for their transmissions.

This provision was justified as a way to protect a nascent industry from crippling litigation. At the time, internet companies were small and experimental. The immunity enabled rapid growth and innovation. 

Over time, however, those start-ups became some of the most valuable corporations in history, with global reach and market capitalisations of trillions of dollars. The legal framework largely remained intact, even as internet companies developed sophisticated algorithms that curate, amplify, and monetise user content at scale. This divergence created a central tension in contemporary law and economics: immensely powerful intermediaries operating with limited accountability for systemic effects.


The convergence of the two. The conceptual separation of ‘cyberspace’ made this arrangement easier to defend. If the internet was a new world, exceptional rules seemed justified.

But critics quickly challenged that reasoning. US judge Frank H. Easterbrook argued that we do not need internet law, just as we never needed a ‘law of the horse’ when horses were the dominant mode of transportation. The internet should be regulated by applying existing legal principles. Law regulates relationships among people and institutions, regardless of the technologies they use. The medium may change; the underlying principles endure.

Experience has largely vindicated that view. Digital technologies have not dissolved geography; they have intensified it. States assert jurisdiction over data flows, content moderation, taxation, competition, and security. High-precision geolocation, data localisation requirements, and national regulatory regimes demonstrate that the internet operates squarely within territorial boundaries.

However, Section 230 remains in force, extending into the age of AI. Companies developing large language models and other AI systems often rely on intermediary protections and analogous doctrines to limit liability. As a result, AI tools can be deployed globally with comparatively limited ex ante oversight. Yet their outputs can shape public discourse, influence elections, affect mental health, and generate economic disruption.

The central question is not whether innovation should be constrained, but whether it should be aligned with established principles of responsibility. Technologies do not exist outside society; they are embedded within it. If an entity designs, deploys, and profits from a system, it should bear responsibility for its foreseeable impacts. The age of legal exceptionalism should end. 

IN OTHER NEWS LAST WEEK

This week in AI governance

The UN. The General Assembly approved the creation of a historic global scientific advisory body on AI, the Independent International Scientific Panel on Artificial Intelligence. The first of its kind, the panel’s main task is to ‘issue evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue on AI Governance. The panel will also ‘provide updates on its work up to twice a year to hear views through an interactive dialogue of the plenary of the General Assembly with the Co-Chairs of the Panel’.

AI governance was a key focus at the recent UN Special Dialogue entitled ‘From Principles to Practice: Special Dialogue on Artificial Intelligence and Preventing and Countering Violent Extremism’. Diplomats and experts discussed how AI is reshaping global stability, conflict dynamics and international law. Participants highlighted risks from autonomous systems and misinformation campaigns and stressed the need for multilateral cooperation and shared norms to mitigate emerging threats.

Germany. Germany has unveiled plans for a ‘Sovereign AI Factory’, a government‑backed initiative to develop sovereign AI models and infrastructure tailored to local language, cultural context and industrial needs. The project will support domestic innovation by providing compute resources, datasets and certification frameworks that conform to European safety and privacy standards, with the aim of reducing reliance on non‑EU AI providers. Berlin says the factory will also serve as a collaborative platform for research institutions and industry to co‑design secure, interoperable AI systems for public and private sectors.

Pakistan. Pakistan’s government has pledged major investment in AI by 2030, rolling out a comprehensive national strategy to accelerate digital transformation across the economy. The plan focuses on building AI capacity in key sectors — including agriculture, healthcare and education — through funding for research hubs, public‑private partnerships and targeted upskilling programmes. Officials say the investment is intended to attract foreign direct investment, boost exports and position Pakistan as a regional tech player, while also addressing ethical and governance frameworks to guide responsible AI deployment.

Slovenia. Slovenia has set out an ambitious national AI vision, outlining strategic priorities such as human‑centric AI, robust ethical frameworks, and investment in research and talent. The roadmap emphasises collaboration with European partners and adherence to international standards, positioning Slovenia as a proactive voice in shaping AI governance dialogues at the upcoming summit.

Chile. Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI. The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the USA or Europe. President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development. Latam-GPT is not designed as a conversational tool but rather as a vast dataset that serves as the foundation for future applications. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

India. India has begun enforcing a three-hour removal rule for AI-generated deepfake content, requiring platforms and intermediaries to take down specified material within 180 minutes of notification or face regulatory sanctions. The accelerated timeframe is designed to blunt the rapid spread of deceptive, synthetic media amid heightened concerns about misinformation and social disruption.

Brazil. Brazil’s National Data Protection Agency and National Consumer Rights Bureau have ordered X to stop serving explicit image generation via its Grok AI, citing risks of harmful outputs reaching minors and contravention of local digital safety norms. The directive demands immediate technical measures to block certain prompts and outputs as part of ongoing scrutiny of platform content moderation practices.

Global coalition on child safety. A broad coalition of child rights advocates, digital safety organisations and policymakers has called on governments to ban ‘nudification’ AI tools, urging criminalisation of software that converts clothed images into sexually explicit versions without consent. The group argues that existing content moderation approaches are insufficient to protect minors and stresses that pre-emptive legal prohibitions are needed to prevent widespread exploitation.

The UK. The UK Supreme Court has ruled that AI-assisted inventions can qualify for patents when the human contributor’s inventive role is identifiable and substantial, a decision legal experts say will boost innovation by clarifying intellectual property protections in hybrid human-AI development. The judgment aims to incentivise investment in AI research while maintaining established patentability standards.

South Korea. South Korea has launched a labour‑government body to address the pressures of AI automation on the workforce, creating a cross‑sector council tasked with forecasting trends in job displacement and recommending policy responses. The initiative brings together labour unions, industry leaders and government ministries to coordinate reskilling and upskilling programmes, strengthen social safety nets, and explore income support models for workers affected by automation.


Child safety online: The momentum holds steady

Bans, bans, bans. The ban club just keeps growing, as Portugal’s parliament has approved a law restricting social media access for minors under 16, requiring express and verified parental consent for accessing platforms like Instagram, TikTok, and Facebook. Access will be controlled through the Digital Mobile Key, Portugal’s national digital ID system, ensuring effective age verification and platform compliance. The law strengthens protections amid growing concerns over social media’s impact on young people’s mental health, and detailed implementation and enforcement rules are now set for parliamentary committee review.

Czech Prime Minister Andrej Babiš publicly endorsed a proposal to ban children under 15 from using major social platforms, framing it as a protective measure against damaging effects on mental health and well-being. The government is actively considering legislation this year that could formalise such restrictions.

The EU as a whole is revisiting the idea of an EU-wide social media age restriction. The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday, 10 February. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

The big picture. The membership of the ban club has reached double digits. We’ll continue following the developments.

The addiction trial begins. In the USA, a landmark trial opened in Los Angeles this week against Meta and YouTube, centring on claims that their platforms are deliberately designed to be addictive and have harmed young users’ mental health. 

The plaintiff, Kaley, now 20, alleges that Instagram and YouTube caused her anxiety, body dysmorphia, and suicidal thoughts. Her lawyers likened features like infinite scroll, autoplay, likes, and beauty filters to a ‘digital casino’ for children, citing internal documents showing the platforms targeted young users and even used YouTube as a ‘digital babysitter.’

Meta and YouTube’s defence argued that social media was not responsible for Kaley’s struggles, citing her difficult family background, therapists’ records, and the availability of safety tools. YouTube highlighted that Kaley’s usage has averaged 29 minutes a day since 2020 and compared the platform to other entertainment services, emphasising that she is not addicted. Meta stressed that Instagram offered creative outlets and new tools to manage screen time, and that social media may have provided support during family difficulties.

What’s next? Executives, including Meta CEO Mark Zuckerberg, Instagram CEO Adam Mosseri, and YouTube CEO Neal Mohan, are expected to testify in the coming weeks.

Meanwhile, across the Atlantic, the British government has launched a campaign called ‘You Won’t Know Until You Ask’ to encourage parents to talk with their children about the harmful content they might encounter online. It will include guidance to parents on safety settings, conversation prompts, and age-appropriate advice for tackling misinformation and harmful content.

Zooming out. Government research found that roughly half of parents had never had such conversations. Among those who have, almost half say the conversations are one-offs or rare. This points to a need to normalise frequent conversations about online content.


Russia and the Netherlands make moves for digital sovereignty

In Russia, authorities have intensified efforts to control the country’s digital communication landscape, reflecting a broader push for ‘sovereign’ internet infrastructure. 

The Russian communications regulator Roskomnadzor has tightened restrictions on Telegram, slowing delivery of media and limiting certain features to pressure users toward domestic alternatives. Roskomnadzor stated that Telegram is not taking meaningful measures to combat fraud, is failing to protect users’ personal data, and is violating Russian laws. Telegram’s founder has condemned the measures as authoritarian, warning they may interfere with essential communication services.

This crackdown has escalated with the full blocking of Meta’s WhatsApp, used by some 100 million Russians. Authorities justified the ban by pointing to WhatsApp’s refusal to meet Russian legal requirements. Users are being encouraged to adopt government-supported platforms that critics say enable state surveillance, raising concerns about privacy and access to independent communication channels. Meta called the ban harmful to both safety and privacy.

Despite these moves, Russia is pausing aggressive action against Google, citing the country’s dependence on Android devices and warning that a sudden ban could disrupt millions of users. Officials indicated that any transition to domestic alternatives will be gradual, reflecting a cautious approach to reducing reliance on foreign tech.

Meanwhile, in the Netherlands, digital sovereignty has moved to the forefront of parliamentary debate. Lawmakers have renewed calls to shift public and private-sector data away from US-based cloud services, citing risks under US legislation such as the Cloud Act. Concerns have intensified following the proposed acquisition of Solvinity, which hosts parts of the Dutch DigiD digital identity system, by a US firm. MPs emphasised the need for stronger safeguards, the promotion of European or Dutch cloud alternatives, and the updating of procurement rules to protect sensitive data.


EU challenging Meta’s grip on AI access in WhatsApp

The European Commission has formally notified Meta that it has breached EU competition law by blocking third‑party AI assistants from accessing WhatsApp, limiting in‑app AI interactions to Meta’s own Meta AI.

Regulators argue Meta likely holds a dominant position in consumer messaging within the EU and that its restrictions could cause serious and irreparable market harm by foreclosing rivals’ access to WhatsApp’s large user base. 

The Commission is considering interim measures to prevent continued exclusion and protect competitive entry.



LOOKING AHEAD

The UN Institute for Disarmament Research (UNIDIR), in partnership with the Organisation internationale de la Francophonie (OIF), will hold an event to explore the phenomenon of hybrid threats, examining their main types and impacts. The event will be held on 16 February (Monday), in Geneva. Registration for the event is open. 

The India AI Impact Summit 2026 will be held on 19–20 February 2026 in New Delhi, India, under the auspices of the Ministry of Electronics and Information Technology (MeitY). The summit brings together stakeholders to explore how AI can be developed and deployed to generate positive societal, economic, and environmental outcomes. Structured around the guiding principles of People, Planet, and Progress, the summit’s programme focuses on thematic areas such as human capital and inclusion, safe and trusted AI, innovation and resilience, democratising AI resources, and AI for economic growth and social good.

The World Intellectual Property Organization (WIPO) will launch the 2026 edition of its World Intellectual Property Report, entitled ‘Technology on the Move’, on 17 February (Tuesday) in Geneva and online. The programme for the launch includes opening remarks by WIPO leadership, a keynote address on the diffusion of generative AI in the global economy, a presentation of the World Intellectual Property Report 2026 by the WIPO Economics and Data Analytics team, and an industry panel discussion exploring perspectives on technology diffusion.



READING CORNER

The AI agent social network Moltbook is fuelling the hype around autonomous ecosystems while raising security and digital reality concerns.

AI and Accessibility

From the RYO bionic hand to AI smart glasses, explore how AI is shifting assistive tech from compensation to empowerment while raising vital governance questions.