EU prepares tougher rules for older data centres

The European Commission is preparing more stringent requirements for ageing data centres rather than allowing legacy infrastructure to operate under looser rules.

A draft strategy tied to the EU’s tech sovereignty package signals that older sites will face higher efficiency expectations and stricter sustainability checks as part of an effort to modernise the digital backbone of the EU.

The proposal outlines minimum performance standards for new data centres by 2030, aiming to align the entire sector with the bloc’s climate and resilience goals. Officials want to reduce energy waste and improve monitoring across facilities that have long operated without uniform benchmarks.

The draft points to an expanded role for the Cloud and AI Development Act, which is expected to frame future obligations for cloud providers instead of relying on fragmented national measures.

Brussels sees consistent rules as essential for supporting secure cloud services, AI infrastructure and cross-border digital operations.

The strategy underscores that modernisation is central to the EU’s vision of tech sovereignty. Older centres would need upgrades to maintain compliance, ensuring that Europe’s digital infrastructure remains competitive, efficient and less dependent on external providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI ethics as societal infrastructure in the digital era

In recent days, social media has been alight with discussions of Silicon Valley, the 2014 series whose portrayal of AI and ethical dilemmas now feels remarkably prophetic. Fans and professionals alike are highlighting how the show’s depiction of AI, automated agents, and ethical quandaries mirrors today’s real-world challenges.

From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.

While the show dramatises these dilemmas for entertainment, the real world is now facing the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing their predictions to life, raising urgent ethical questions for developers, policymakers, and society alike.

Balancing technological progress with societal values is essential, as intelligent technologies must align with society, guided by AI ethics.
Source: Freepik

The rise of AI ethics: from niche concern to central requirement

The growing influence of AI on society has propelled ethics from a theoretical discussion to a central factor in technological decision-making. Initially confined to academic debate, ethics in AI is now a guiding force in technological development. The impact of AI is becoming tangible across society, from employment and finance to online content.

Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks. 

The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values, demonstrating accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.


Functions of AI ethics: trust, guidance, and societal risk

Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build public trust between developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.

For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest. 

By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. This approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.


The politics of AI ethics: regulatory theatre and corporate influence

Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.

Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures. 

Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.


AI ethics as a lens for technology and society

The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits. 

AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise; it reflects evolving expectations about the role technology should play in human life.


AI ethics as early-warning governance for social impact

AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust. 

Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.


The bridge between technological power and social legitimacy

AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable. 

Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.

Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.

As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes federated cloud plan to reduce dependence on foreign tech

Europe is building a federated cloud and AI infrastructure intended to reduce reliance on US and Chinese technology providers and avoid ongoing strategic vulnerability.

The project, known as EURO-3C, was announced in Barcelona by Telefónica and is backed by the European Commission. More than seventy organisations across telecommunications, technology and emerging companies have joined the effort.

Architects of the scheme argue that linking national infrastructures into a shared network of nodes offers a realistic path forward, particularly as Europe cannot easily create a hyperscale cloud provider from scratch.

The initiative follows a series of US cloud outages that exposed the risks of excessive dependence on external infrastructure and raised questions about sovereignty, resilience and long-term competitiveness.

Commission officials described the programme as a way to build a secure cross-border digital ecosystem that supports industries such as automotive, e-health, public administration and sovereign government cloud.

Telefónica stressed that agentic AI, capable of taking autonomous actions, will play a central role in enabling Europe to develop technology rather than import it.

The partners view the project as a foundation for a unified and independent digital environment that strengthens industrial supply chains and prepares European sectors for the next phase of cloud and AI adoption.

They present the initiative as a significant step toward reducing strategic exposure while stimulating domestic innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe turns to satellite networks as Deutsche Telekom expands Starlink collaboration

Deutsche Telekom is turning to satellite connectivity to address Europe’s persistent mobile coverage gaps, rather than relying solely on terrestrial networks.

The company announced a partnership with Starlink during the Mobile World Congress in Barcelona, arguing that non-terrestrial networks can help reach remote forests, mountains and islands that remain underserved despite broad coverage elsewhere.

The collaboration aims to support direct-to-device satellite links by 2028, enabling future smartphones to connect to Starlink’s MSS spectrum without additional hardware.

Telecommunications leaders describe the plan as a step toward an ‘everywhere network’, extending reliable service to areas long constrained by topographical and conservation barriers. The partnership follows earlier joint work with SpaceX to eliminate dead zones.

Deutsche Telekom is also increasing its use of agentic AI, integrating autonomous network-enhancing systems intended to improve translation, search and service features across devices.

Executives say these capabilities work even on older phones, reducing dependence on apps and creating a more inclusive digital environment.

Although committed to European digital sovereignty, the company insists that global collaboration remains necessary for long-term competitiveness.

Leadership argues that precise regulation and controlled data environments aligned with European standards can balance international cooperation with privacy protection. They remain confident that European technology firms and start-ups will continue driving meaningful innovation across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT to Claude migration trend gains momentum

More users are exploring how to switch from ChatGPT to Claude while preserving their existing chat history and preferences. Rather than starting over with a new AI assistant, many want to migrate context and maintain continuity.

The first step is gathering your data from ChatGPT. In Settings, open Personalisation, then review the Memory section to copy any stored preferences you want to retain. You can also export your full chat history through Data Controls by selecting ‘Export Data’.

ChatGPT will generate downloadable files containing your conversations. If you prefer a lighter approach, manually copy key discussions or ask ChatGPT to summarise your main preferences, frequently discussed topics, and custom instructions.

Once your information is ready, open Claude and enable Memory under Settings and Capabilities. Start a new conversation and paste your summaries using a prompt such as ‘Here is important context about me. Please update your memory accordingly.’

After transferring the data, verify that Claude has stored the information accurately. If you plan to leave ChatGPT entirely, review and delete saved memory entries before removing your account to ensure your data is cleared.
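For those comfortable with a script, the manual steps above can be partially automated. The sketch below assumes the ChatGPT export archive contains a `conversations.json` file whose entries carry a `title` field (check your own export, as the format may vary); the helper name `summarize_export` is illustrative, not part of any official tool.

```python
import json

def summarize_export(path="conversations.json", limit=5):
    """Build a short context prompt for Claude from a ChatGPT data export.

    Assumes the export provides a conversations.json whose entries
    include a 'title' field; verify this against your own archive.
    """
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    titles = [c.get("title", "untitled") for c in conversations[:limit]]
    lines = ["Here is important context about me. "
             "Please update your memory accordingly."]
    lines += [f"- Frequently discussed topic: {t}" for t in titles]
    return "\n".join(lines)
```

The resulting text can be pasted into a new Claude conversation once Memory is enabled, mirroring the manual prompt suggested above.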

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ClawJacked flaw let attackers hijack AI agents through the browser

A high-severity vulnerability dubbed ‘ClawJacked’ has been discovered in OpenClaw, an open-source AI agent framework that lets developers run autonomous AI assistants locally.

The flaw, uncovered by Oasis Security, allowed malicious websites to silently hijack a user’s local AI agent instance and steal sensitive data, all triggered by a single browser visit.

The attack exploited OpenClaw’s local WebSocket gateway, which assumed that traffic from localhost could be trusted. A malicious website could open a WebSocket connection to the gateway, brute-force the password at hundreds of guesses per second, with no rate limiting applied to local connections, and then silently register as a trusted device without any user prompt.
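The missing control here was basic brute-force throttling on local connections. The sketch below is a minimal, generic illustration of such a failed-attempt lockout, not OpenClaw’s actual code or the patch it shipped; the class name `LoginThrottle` and its parameters are assumptions for the example.

```python
import time

class LoginThrottle:
    """Illustrative failed-attempt throttle for a local gateway.

    After max_attempts failures within `window` seconds, further
    attempts from that client are rejected until old failures expire.
    """

    def __init__(self, max_attempts=5, window=60.0, clock=time.monotonic):
        self.max_attempts = max_attempts
        self.window = window
        self.clock = clock
        self._failures = {}  # client id -> list of failure timestamps

    def allow(self, client):
        # Drop failures older than the window, then check the count.
        now = self.clock()
        recent = [t for t in self._failures.get(client, [])
                  if now - t < self.window]
        self._failures[client] = recent
        return len(recent) < self.max_attempts

    def record_failure(self, client):
        self._failures.setdefault(client, []).append(self.clock())
```

With a limit like this in place, a browser-based attacker making hundreds of guesses per second would be locked out after a handful of failures, even when connecting from localhost.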

Once inside, attackers gained admin-level access to the AI agent, connected devices, logs, and configuration data. Oasis Security responsibly disclosed the flaw, and OpenClaw issued a patch within 24 hours, releasing version 2026.2.26.

Security experts are urging organisations to update immediately, audit the permissions held by their AI agents, and apply strict governance policies, treating AI agents as non-human identities that require the same oversight as human users or service accounts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why detecting deepfakes is no longer enough to stay secure

Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.

Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.

Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.

According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.

Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.

Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Yale expert warns against overtrusting AI health chatbots

More than 40 million people use ChatGPT alone for health information every day, and both ChatGPT and Claude have recently launched services specifically designed to give consumers health advice.

Yale School of Medicine clinician-educator Shaili Gupta warns that whilst chatbots can democratise access to health information, the risks of overtrust are significant.

Gupta notes that AI chatbots are deliberately designed to feel personal, trained to use pronouns like ‘you’ and ‘I’, which makes users more likely to treat them as authoritative voices rather than information tools.

She cautions against the ‘three C’s’: chatbots that are too competent, too cogent, or too concrete, as these are the most likely to lead patients into harmful health decisions.

Human clinicians, Gupta argues, remain difficult to replace not only because they conduct physical examinations, but also because they bring instinct, experience, and genuine relatability to patient care. She recommends using chatbots for efficiency and general information, whilst leaving diagnosis firmly in the hands of medical professionals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing robotics market positions Qualcomm for next technology wave

Qualcomm expects robotics to become a significant business opportunity within two years, according to chief executive Cristiano Amon. The company is increasingly expanding beyond smartphones as it searches for new long-term growth markets.

Earlier this year, Qualcomm introduced its Dragonwing processor designed specifically for robotics applications. The chipset aims to operate across multiple robotic platforms using a scalable approach similar to its successful mobile processor strategy.

Industry enthusiasm for robotics has grown alongside rapid advances in AI technologies. Often described as ‘physical AI’, these systems allow robots to interpret surroundings and perform complex tasks more effectively.

Market forecasts suggest strong future demand, with analysts predicting robotics could develop into a multi-trillion-dollar global industry. Technology leaders across the semiconductor sector increasingly view intelligent machines as a major next computing platform.

Robotics innovation featured prominently at Mobile World Congress in Barcelona, where companies showcased emerging autonomous machines. Growing investment highlights intensifying competition to shape the future of AI-powered automation worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches ProtectEU counterterrorism agenda

The European Commission has unveiled a new counterterrorism agenda under the ProtectEU initiative, outlining measures to strengthen the EU’s response to evolving security threats. Officials say the strategy aims to improve preparedness, reinforce cooperation and protect citizens and businesses from emerging forms of terrorism and violent extremism.

Authorities warn that technological change is reshaping the threat landscape. Terrorist groups increasingly exploit digital tools such as social media, AI and encrypted platforms for recruitment, propaganda and fundraising.

New risks also include the potential misuse of drones, crypto-assets and 3D-printed weapons, while radicalisation of minors online has become a growing concern across Europe.

The agenda proposes stronger capabilities for anticipating threats through expanded intelligence analysis and enhanced support for Europol, including greater use of open-source intelligence. Additional research funding will explore the security implications of emerging technologies, while new initiatives aim to strengthen early prevention efforts and community engagement to counter radicalisation, particularly among young people.

Online safety forms another key priority. The Commission plans to intensify cooperation with digital platforms to remove extremist content more quickly and to strengthen enforcement of the Digital Services Act. A new EU Online Crisis Response Framework is also proposed to improve coordination between authorities and technology companies during security incidents.

Measures targeting the physical environment will focus on protecting public spaces and critical infrastructure, including investments in security projects and stronger monitoring of individuals suspected of terrorism.

The strategy also seeks to improve the tracking of terrorist financing, including through cryptocurrencies, and to expand cooperation with international partners, such as countries in the Western Balkans and the Mediterranean region.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!