EU and Japan deepen AI cooperation under new digital pact

In May 2025, the European Union and Japan formally reaffirmed their long-standing EU‑Japan Digital Partnership during the third Digital Partnership Council in Tokyo. Delegations agreed to deepen collaboration in pivotal digital technologies, most notably artificial intelligence, quantum computing, 5G/6G networks, semiconductors, cloud, and cybersecurity.

A joint statement committed to signing an administrative agreement on AI, aligned with principles from the Hiroshima AI Process. Shared initiatives include a €4 million EU-supported quantum R&D project named Q‑NEKO and the 6G MIRAI‑HARMONY research effort.

Both parties pledged to enhance data governance, digital identity interoperability, regulatory coordination across platforms, and secure connectivity via submarine cables and Arctic routes. The accord builds on the Strategic Partnership Agreement activated in January 2025, reinforcing their mutual platform for rules-based, value-driven digital and innovation cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand accelerates while clean power lags

Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that is already struggling to keep pace.

The rapid expansion of AI has led technology companies to invest heavily in AI-ready infrastructure, but the energy demands of these systems are outstripping available grid capacity.

The International Energy Agency projects that electricity use by data centres will more than double globally by 2030, reaching levels equivalent to the current consumption of Japan.

In the United States, data centres are expected to use 580 TWh annually by 2028—about 12% of national consumption. AI-specific data centres will be responsible for much of this increase.

Despite this growth, clean energy deployment is lagging. Around two terawatts of projects remain stuck in interconnection queues, delaying the shift to sustainable power. The result is a paradox: firms pursuing carbon-free goals by 2035 now rely on gas and nuclear to power their expanding AI operations.

In response, tech companies and utilities are adopting short-term strategies to relieve grid pressure. Microsoft and Amazon are sourcing energy from nuclear plants, while Meta will rely on new gas-fired generation.

Data centre developers like CloudBurst are securing dedicated fuel supplies to ensure local power generation, bypassing grid limitations. Some utilities are introducing technologies to speed up grid upgrades, such as AI-driven efficiency tools and contracts that encourage flexible demand.

Behind-the-meter solutions—like microgrids, batteries and fuel cells—are also gaining traction. AEP’s 1-GW deal with Bloom Energy would mark the US’s largest fuel cell deployment.

Meanwhile, longer-term efforts aim to scale up nuclear, geothermal and even fusion energy. Google has partnered with Commonwealth Fusion Systems to source power by the early 2030s, while Fervo Energy is advancing geothermal projects.

National Grid and other providers are investing in modern transmission technologies to support clean generation. Cooling technology for data centre chips is another area of focus: programmes like ARPA-E’s COOLERCHIPS are exploring ways to reduce energy intensity.

At the same time, outdated regulatory processes are slowing progress. Developers face unclear connection timelines and steep fees, sometimes pushing them toward off-grid alternatives.

The path forward will depend on how quickly industry and regulators can align. Without faster deployment of clean power and regulatory reform, the systems designed to power AI could become the bottleneck that stalls its growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK proposes mandatory ransomware reporting and seeks to ban payments by public sector

The UK government has unveiled a new proposal to strengthen its response to ransomware threats by requiring victims to report breaches, enabling law enforcement to disrupt cybercriminal operations more effectively.

Published by the Home Office as part of an ongoing policy consultation, the proposal outlines key measures:

  • Mandatory breach reporting to equip law enforcement with actionable intelligence for identifying and disrupting ransomware groups.
  • A ban on ransom payments by public sector and critical infrastructure entities.
  • A notification requirement for other organisations intending to pay a ransom, allowing the government to assess and respond accordingly.

According to the proposal, these steps would help the UK government carry out ‘targeted disruptions’ in response to evolving ransomware threats, while also improving support for victims.

Cybersecurity experts have largely welcomed the initiative. Allan Liska of Recorded Future noted the plan reflects a growing recognition that many ransomware actors are within reach of law enforcement. Arda Büyükkaya of EclecticIQ praised the effort to formalise response protocols, viewing the proposed payment ban and proactive enforcement as meaningful deterrents.

This announcement follows a consultation process that began in January 2025. While the proposals signal a significant policy shift, they have not yet been enacted into law. The potential ban on ransom payments remains particularly contentious, with critics warning that, in some cases—such as hospital systems—paying a ransom may be the only option to restore essential services quickly.

The UK’s proposal follows similar international efforts, including Australia’s recent mandate for victims to disclose ransom payments, though Australia has stopped short of banning them outright.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump AI strategy targets China and cuts red tape

The Trump administration has revealed a sweeping new AI strategy to cement US dominance in the global AI race, particularly against China.

The 25-page ‘America’s AI Action Plan’ proposes 90 policy initiatives, including building new data centres nationwide, easing regulations, and expanding exports of AI tools to international allies.

White House officials stated the plan will boost AI development by scrapping federal rules seen as restrictive and speeding up construction permits for data infrastructure.

A key element involves monitoring Chinese AI models for alignment with Communist Party narratives, while promoting ‘ideologically neutral’ systems within the US. Critics argue the approach undermines efforts to reduce bias and favours politically motivated AI regulation.

The action plan also supports increased access to federal land for AI-related construction and seeks to reverse key environmental protections. Analysts have raised concerns over energy consumption and rising emissions linked to AI data centres.

While the White House claims AI will complement jobs rather than replace them, recent mass layoffs at Indeed and Salesforce suggest otherwise.

Despite the controversy, the announcement drew optimism from investors. AI stocks saw mixed trading, with NVIDIA, Palantir and Oracle gaining, while Alphabet slipped slightly. Analysts described the move as a ‘watershed moment’ for US tech, signalling an aggressive stance in the global AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware activity drops in Q2 despite year‑on‑year rise

Ransomware incidents fell in Q2 2025, with public disclosures dropping from 22.9 to 17.5 cases per day—a decline of roughly 24% from Q1. However, attacks remain elevated compared to the same quarter last year, showing a 43% year‑on‑year increase. In total, 1,591 new victims appeared on leak sites, confirming ransomware is still a serious and growing threat.

This decline coincided with law enforcement disruption of major operations such as Alphv/BlackCat and LockBit, alongside seasonal lulls like Easter and Ramadan. Meanwhile, active ransomware groups surged to 71, up from 41 in Q2 2024, indicating a fragmented threat landscape populated by smaller actors.

North America continued to absorb over half of all attacks, with healthcare, industrial manufacturing, and business services among the most affected sectors. Although overall volume dipped, newer threat actors remain agile, and fragmentation may fuel more covert ransomware behaviour, not less.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to retaliate against cyber attacks, minister warns

Britain’s security minister has warned that hackers targeting UK institutions will face consequences, including potential retaliatory cyber operations.

Speaking to POLITICO at the British Library — still recovering from a 2023 ransomware attack by Rhysida — Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.

‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.

The warning follows the government’s recent proposal to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.

The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded its role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.

While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.

AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.

Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.

Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Major hack hits Dutch Public Prosecution Service

The Dutch Public Prosecution Service (OM) has confirmed a significant cyberattack that forced it to disconnect from the internet, following warnings of a potential vulnerability.

Internal systems were cut off after the National Cybersecurity Centre alerted OM to the risk, with officials saying the disconnection could last for weeks.

OM’s IT director, Hans Moonen, described the breach as massive and dramatic. He stated that reconnection is impossible until it’s confirmed that the intruder has been completely removed from the network.

The organisation has reported the incident to the police and the Dutch Data Protection Authority.

Since Wednesday evening, staff have been working around the clock to contain the damage and investigate the breach. Although internal communication remains functional, external emailing is no longer possible, significantly impacting operations.

According to OM crisis team member Marthyne Kunst, the disruption means the agency relies heavily on printed documents again, adding a logistical burden to the already tense situation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European healthcare group AMEOS suffers a major hack

Millions of patients, employees, and partners linked to AMEOS Group, one of Europe’s largest private healthcare providers, may have had their personal data compromised following a major cyberattack.

The company admitted that hackers briefly accessed its IT systems, stealing sensitive data including contact information and records tied to patients and corporate partners.

Despite existing security measures, AMEOS was unable to prevent the breach. The company operates over 100 facilities across Germany, Austria and Switzerland, employing 18,000 staff and managing over 10,000 beds.

While it has not disclosed how many individuals were affected, the scale of operations suggests a substantial number. AMEOS warned that the stolen data could be misused online or shared with third parties, potentially harming those involved.

The organisation responded by shutting down its IT infrastructure, involving forensic experts, and notifying authorities. It urged users to stay alert for suspicious emails, scam job offers, or unusual advertising attempts.

Anyone connected to AMEOS is advised to remain cautious and avoid engaging with unsolicited digital messages or requests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Filtered data not enough: LLMs can still learn unsafe behaviours

Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated data, a new study by Anthropic and Truthful AI reveals. The findings emerged from the Anthropic Fellows Programme.

This phenomenon, called subliminal learning, raises fresh concerns about hidden risks in using model-generated data for AI development, especially in systems meant to prioritise safety and alignment.

In a core experiment, a teacher model was instructed to ‘love owls’ but output only number sequences like ‘285’, ‘574’, and ‘384’. A student model, trained on these sequences, later showed a preference for owls.

No mention of owls appeared in the training data, yet the trait emerged in unrelated tests—suggesting behavioural leakage. Other traits observed included promoting crime or deception.

The study warns that distillation—where one model learns from another—may transmit undesirable behaviours despite rigorous data filtering. Subtle statistical cues, not explicit content, seem to carry the traits.

The transfer only occurs when both models share the same base. A GPT-4.1 teacher can influence a GPT-4.1 student, but not a student built on a different base like Qwen.

The researchers also provide theoretical proof that even a single gradient descent step on model-generated data can nudge the student’s parameters toward the teacher’s traits.

Tests included coding, reasoning tasks, and MNIST digit classification, showing how easily traits can persist across learning domains regardless of training content or structure.

The paper states that filtering may be insufficient in principle, since the signals are encoded in statistical patterns rather than explicit words, limiting the effectiveness of standard safety interventions.

Of particular concern are models that appear aligned during testing but adopt dangerous behaviours when deployed. The authors urge deeper safety evaluations beyond surface-level behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly, warning that tools indistinguishable from real people could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!