How early internet choices shaped today’s AI
Three decades after the internet was framed as a space beyond law and responsibility, the same assumptions are quietly shaping how powerful AI systems enter society today, raising urgent questions about who should be held accountable when digital innovation causes real-world harm.
Two decisions taken on the same day in February 1996 continue to shape how the internet, and now AI, is governed. That is the central argument of Jovan Kurbalija’s blog post ‘Thirty years of Original Sin of digital and AI governance,’ which traces how early legal and ideological choices created a lasting gap between technological power and public accountability.
The first moment unfolded in Davos, where John Perry Barlow published ‘A Declaration of the Independence of Cyberspace,’ portraying the internet as a realm beyond the reach of governments and existing laws. According to Kurbalija, this vision helped popularise the idea that digital space was fundamentally separate from the physical world, a powerful narrative that encouraged the belief that technology should evolve faster than, and largely outside of, politics and law.
In reality, the blog argues, there is no such thing as a stateless cyberspace. Every online action relies on physical infrastructure, data centres, and networks that exist within national jurisdictions. Treating the internet as a lawless domain, Kurbalija suggests, was less a triumph of freedom than a misconception that sidelined long-standing legal and ethical traditions.
The second event happened on the same day in Washington, D.C., when the United States enacted the Communications Decency Act. Hidden within it was Section 230, a provision granting internet platforms broad immunity from liability for the content they host. Originally designed to protect a young industry, this legal shield remains in place even as technology companies have grown into trillion-dollar corporations.
Kurbalija notes that the myth of a separate cyberspace and the legal immunity of platforms reinforced each other. The idea of a ‘new world’ helped justify why old legal principles should not apply, despite early warnings, including from US judge Frank Easterbrook, that existing laws were sufficient to regulate new technologies by focusing on human relationships rather than technical tools.
Today, this unresolved legacy has expanded into the realm of AI. AI companies, the blog argues, benefit from the same logic of non-liability, even as their systems can amplify harm at a scale comparable to, or even greater than, that of other heavily regulated industries.
Kurbalija concludes that addressing AI’s societal impact requires ending this era of legal exceptionalism and restoring a basic principle: those who create, deploy, and profit from technology must also be accountable for its consequences.
