How early internet choices shaped today’s AI

Two decisions taken on the same day in February 1996 continue to shape how the internet, and now AI, is governed today. That is the central argument of Jovan Kurbalija’s blog ‘Thirty years of Original Sin of digital and AI governance,’ which traces how early legal and ideological choices created a lasting gap between technological power and public accountability.

The first moment unfolded in Davos, where John Perry Barlow published his Declaration of the Independence of Cyberspace, portraying the internet as a realm beyond the reach of governments and existing laws. According to Kurbalija, this vision helped popularise the idea that digital space was fundamentally separate from the physical world, a powerful narrative that encouraged the belief that technology should evolve faster than, and largely outside of, politics and law.

In reality, the blog argues, there is no such thing as a stateless cyberspace. Every online action relies on physical infrastructure, data centres, and networks that exist within national jurisdictions. Treating the internet as a lawless domain, Kurbalija suggests, was less a triumph of freedom than a misconception that sidelined long-standing legal and ethical traditions.

The second event happened the same day in Washington, D.C., when the United States enacted the Communications Decency Act. Hidden within it was Section 230, a provision that granted internet platforms broad immunity from liability for the content they host. While originally designed to protect a young industry, this legal shield remains in place even as technology companies have grown into trillion-dollar corporations.

Kurbalija notes that the myth of a separate cyberspace and the legal immunity of platforms reinforced each other. The idea of a ‘new world’ helped justify why old legal principles should not apply, despite early warnings, including from US judge Frank Easterbrook, that existing laws were sufficient to regulate new technologies by focusing on human relationships rather than technical tools.

Today, this unresolved legacy has expanded into the realm of AI. AI companies, the blog argues, benefit from the same logic of non-liability, even as their systems can amplify harm at a scale comparable to, or even greater than, that of other heavily regulated industries.

Kurbalija concludes that addressing AI’s societal impact requires ending this era of legal exceptionalism and restoring a basic principle that those who create, deploy, and profit from technology must also be accountable for its consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York moves toward data centre moratorium as energy fears grow

Lawmakers in New York have proposed a three-year moratorium on permits for new data centres amid pressure to address the strain that large AI facilities place on local communities.

The proposal mirrors similar moves in several other states and reflects rising concern that rapidly expanding infrastructure may raise electricity costs and worsen environmental conditions rather than support balanced development.

Politicians from both major parties have voiced unease about the growing power demand created by data-intensive services. Figures such as Bernie Sanders and Ron DeSantis have warned that unchecked development could drive household bills higher and burden communities.

More than 230 environmental organisations recently urged Congress to consider a national pause to prevent further disruption.

The New York bill, sponsored by Liz Krueger and Anna Kelles, aims to give regulators time to build strict rules before major construction continues. Krueger described the state as unprepared for the scale of facilities seeking entry, arguing that residents should not be left covering future costs.

Supporters say a temporary halt would provide time to design policies that protect consumers rather than encourage unrestrained corporate expansion.

Governor Kathy Hochul recently announced the Energize NY Development initiative, intended to modernise the grid connection process and ensure large energy users contribute fairly.

The scheme would require data centre operators to take on greater financial responsibility as New York reassesses its approach to extensive AI-driven infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto.com CEO launches ai.com AI agent platform

Kris Marszalek, CEO of Crypto.com, has launched ai.com, a platform enabling users to create personal AI agents for everyday digital tasks. The rollout marks Marszalek’s expansion beyond crypto infrastructure into autonomous AI systems.

The beta debut was promoted through a high-profile television commercial aired during Super Bowl 60 on NBC, leveraging one of the world’s largest broadcast audiences. Early access lets users reserve usernames while waiting for their customised AI agents to be deployed.

Marszalek said the long-term goal is a decentralised network of self-improving AI agents that handle email, scheduling, shopping, and travel planning. The initiative aims to accelerate the development of artificial general intelligence through distributed AI agent networks.

The launch arrives amid intensifying competition in the AI agent sector. Major tech firms are launching agent platforms and large ad campaigns, signalling rising commercial momentum behind autonomous digital assistants.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sainsbury’s ejects shopper after facial recognition misidentification

A data professional, Warren Rajah, was escorted out of a Sainsbury’s supermarket in south London after staff incorrectly believed he matched an offender flagged by Facewatch facial recognition technology.

Facewatch later confirmed that there were no alerts or records associated with him, and Sainsbury’s attributed the incident to human error rather than a software fault.

Rajah described the experience as humiliating and ‘Orwellian’, criticising the lack of explanation, absence of a transparent appeals process, and the requirement to submit personal identification to a third party to prove he was not flagged.

He expressed particular concern about the impact such incidents could have on vulnerable customers.

The case highlights broader debates around the deployment of facial recognition in retail, where companies cite reductions in theft and abuse while civil liberties groups warn of misidentification, insufficient staff training, and the normalisation of privatised biometric surveillance.

UK regulators have reiterated that retailers must assess misidentification risks and ensure robust safeguards when processing biometric data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI assistants drive a powerful shift in modern work

AI assistants have become a standard feature of modern working life, increasingly used across business, education, and government for writing, analysis, research, and learning tasks. Their widespread adoption reflects a broader shift in how digital tools support productivity and knowledge work.

As their use expands, AI literacy is emerging as a key professional competence. Understanding how to work effectively with AI assistants is becoming essential for workforce readiness, skills development, and long-term employability.

The growing reliance on AI assistants also raises important questions around responsibility and oversight. While these tools can significantly improve efficiency, they generate content rather than verified facts, making human judgment, accountability, and fact-checking indispensable.

Understanding how AI assistants function is therefore critical. Built on large language models, they predict language patterns rather than think or reason like humans. This technical reality underpins discussions on transparency, reliability, and appropriate use in professional contexts.

In parallel, AI assistants are moving from standalone chatbots into embedded features within workplace software, including documents, spreadsheets, and collaboration platforms. This shift strengthens their role as in-context work tools, while also increasing the need for clear organisational guidelines on their use.

The AI assistant ecosystem is also expanding globally, with platforms offering different approaches to privacy, integration, and governance. This diversity gives users more choice but complicates alignment across regulatory and organisational environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI drives structural change in customer care

Customer care is undergoing structural change as agentic AI moves from experimental pilots to large-scale deployment. Advances in AI capabilities, combined with growing organisational readiness, are enabling companies to integrate AI systems directly into core customer service operations, particularly in call centres.

The increasing use of agentic AI is elevating customer care to a strategic management issue. Senior leadership, including CEOs, is paying closer attention to customer operations as a source of resilience, efficiency, and competitive differentiation, rather than viewing it solely as a support function.

At the same time, a growing divide is emerging between organisations that can scale AI effectively and those that remain at an early stage of adoption. AI leaders are investing in internal capabilities, governance structures, and workforce readiness, allowing them to deploy AI consistently across customer interactions.

Agentic AI is increasingly shaping end-to-end customer care models. Instead of being used for isolated automation tasks, AI systems are becoming the coordinating layer for customer service, managing interactions across channels and supporting more complex service processes.

Automation levels in customer care are rising rapidly. Some organisations are automating a majority of customer contacts, driven by improvements in natural language processing, decision-making, and integration with enterprise systems. This trend is changing how customer demand is managed at scale.

Human roles in customer care are evolving alongside automation. AI tools are being used to support agents in decision-making, reduce handling time, and improve service consistency. As a result, human agents are increasingly focused on cases requiring judgement, empathy, and contextual understanding.

Despite the rapid adoption of AI, customer satisfaction remains the primary objective. Efficiency gains, cost reduction, and revenue growth are important outcomes, but they are increasingly assessed based on their impact on customer experience and service quality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Robots edge closer to human-like movement

Engineers are working to make robots move with greater balance and fluidity, bringing machines closer to human-like motion. Progress depends heavily on actuators, the components that convert energy into precise physical movement.

Traditional electric motors have enabled many robotic breakthroughs, yet limitations in efficiency, safety and responsiveness remain clear. Machines often consume too much power, overheat at small sizes and lack the flexibility needed for smooth interaction.

Major manufacturers including Schaeffler and Hyundai Mobis are now designing advanced actuators that provide better control, real-time feedback and improved energy efficiency. Such innovations could allow humanoid robots to operate safely alongside workers and perform practical industrial tasks.

Researchers are also experimenting with softer materials and air-powered systems that behave more like muscles than rigid machinery. Continued advances could eventually produce robots capable of natural, graceful movement, opening new possibilities for everyday use.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Cyber Startup Programme unveiled as Infosecurity Europe boosts early innovation

Infosecurity Europe has launched a new Cyber Startup Programme to support early-stage cybersecurity innovation and strengthen ecosystem resilience. The initiative will debut at Infosecurity Europe 2026, offering founders and investors a dedicated experience focused on emerging technologies and growth.

The programme centres on a new Cyber Startups Zone, an exhibition area showcasing young companies and novel security solutions. Founders will gain industry visibility, along with tailored ticket access and curated networking.

Delivery will take place in partnership with UK Cyber Flywheel, featuring a dedicated founder- and investor-focused day on Tuesday 2 June. Sessions will cover scaling strategies, go-to-market planning, funding, and live pitching opportunities.

Infosecurity Europe will also introduce the Cyber Startup Award 2026, recognising early-stage firms with live products and growth potential. Finalists will pitch on stage, with winners receiving exhibition space, PR support, and a future-brand workshop.

Alongside the programme, the Cyber Innovation Zone, delivered with the UK Department for Science, Innovation and Technology, will spotlight innovative UK cybersecurity businesses and emerging technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European tech strategy advances with Germany’s new AI factory

Germany has launched one of Europe’s largest AI factories to boost EU-wide sovereign AI capacity. Deutsche Telekom unveiled the new ‘Industrial AI Cloud’ in Munich, in partnership with NVIDIA and Polarise.

Designed to deliver high-performance AI computing for industry, research, and public institutions, the platform keeps data operations under European jurisdiction. Company executives described the project as proof that Europe can build large-scale AI infrastructure aligned with its regulatory and sovereignty goals.

The AI factory runs on nearly 10,000 NVIDIA Blackwell GPUs, providing up to 0.5 exaFLOPS of computing power. Telekom said the capacity would be sufficient to support hundreds of millions of users accessing AI services simultaneously across the EU.

Officials in Germany framed the AI factory initiative as a strategic investment in technological leadership and digital independence. The infrastructure operates under German and EU data protection rules, positioning compliance and security as core competitive advantages.

Industrial applications are central to the project, with companies such as Siemens integrating simulation tools into the platform. The AI factory also runs on renewable energy, uses river water cooling, and plans to reuse waste heat within Munich’s urban network.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI transforms finance systems

Organisations undergoing finance transformations are discovering that traditional system cutovers rarely go as planned. Hidden manual workarounds and undocumented processes often surface late, creating operational risks and delays during ERP migrations.

Agentic AI is emerging as a solution by deploying autonomous software agents that discover real workflows directly from system data. Scout agents analyse transaction logs to uncover hidden dependencies, allowing companies to build more accurate future systems based on actual operations.

Simulator agents stress-test new systems by continuously generating thousands of realistic transactions. When problems arise, agents analyse errors and automatically recommend fixes, turning testing into a continuous improvement process rather than a one-time checkpoint.

Sentinel agents monitor financial records in real time to detect discrepancies before they escalate into compliance risks. Leaders say the approach shifts focus from single go-live milestones to ongoing resilience, with teams increasingly managing intelligent systems instead of manual processes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!