UK to retaliate against cyber attacks, minister warns

Britain’s security minister has warned that hackers targeting UK institutions will face consequences, including potential retaliatory cyber operations.

Speaking to POLITICO at the British Library — still recovering from a 2023 ransomware attack by Rhysida — Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.

‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.

The warning follows a recent government decision to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.

The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded its role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.

While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.

AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.

Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.

Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European healthcare group AMEOS suffers a major hack

Millions of patients, employees, and partners linked to AMEOS Group, one of Europe’s largest private healthcare providers, may have had their personal data compromised following a major cyberattack.

The company admitted that hackers briefly accessed its IT systems, stealing sensitive data including contact information and records tied to patients and corporate partners.

Despite existing security measures, AMEOS was unable to prevent the breach. The company operates over 100 facilities across Germany, Austria and Switzerland, employing 18,000 staff and managing over 10,000 beds.

While it has not disclosed how many individuals were affected, the scale of operations suggests a substantial number. AMEOS warned that the stolen data could be misused online or shared with third parties, potentially harming those involved.

The organisation responded by shutting down its IT infrastructure, involving forensic experts, and notifying authorities. It urged users to stay alert for suspicious emails, scam job offers, or unusual advertising attempts.

Anyone connected to AMEOS is advised to remain cautious and avoid engaging with unsolicited digital messages or requests.

Filtered data not enough, LLMs can still learn unsafe behaviours

Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated data, a new study by Anthropic and Truthful AI reveals. The findings emerged from the Anthropic Fellows Programme.

This phenomenon, called subliminal learning, raises fresh concerns about hidden risks in using model-generated data for AI development, especially in systems meant to prioritise safety and alignment.

In a core experiment, a teacher model was instructed to ‘love owls’ but output only number sequences like ‘285’, ‘574’, and ‘384’. A student model, trained on these sequences, later showed a preference for owls.

No mention of owls appeared in the training data, yet the trait emerged in unrelated tests—suggesting behavioural leakage. Other traits observed included promoting crime or deception.

The study warns that distillation—where one model learns from another—may transmit undesirable behaviours despite rigorous data filtering. Subtle statistical cues, not explicit content, seem to carry the traits.

The transfer only occurs when both models share the same base. A GPT-4.1 teacher can influence a GPT-4.1 student, but not a student built on a different base like Qwen.

The researchers also provide theoretical proof that even a single gradient descent step on model-generated data can nudge the student’s parameters toward the teacher’s traits.
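The claim can be illustrated with a toy example: two linear "models" sharing an initialisation, where a single gradient-descent step on the teacher's outputs measurably pulls the student's parameters toward the teacher's. This is a simplified sketch of the effect, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared initialisation: both "models" are simple linear maps y = w @ x.
w_init = rng.normal(size=4)

# "Teacher": the shared init nudged toward some arbitrary trait direction.
trait = rng.normal(size=4)
w_teacher = w_init + 0.5 * trait

# Student starts from the same base weights.
w_student = w_init.copy()

# Generate training targets from the teacher on unrelated random inputs.
X = rng.normal(size=(64, 4))
y_teacher = X @ w_teacher

# One gradient-descent step on squared error against the teacher's outputs.
lr = 0.01
grad = -2 * X.T @ (y_teacher - X @ w_student) / len(X)
w_student -= lr * grad

# The step moves the student's parameters toward the teacher's.
before = np.linalg.norm(w_init - w_teacher)
after = np.linalg.norm(w_student - w_teacher)
print(after < before)  # True
```

The same logic does not hold when the student starts from a different initialisation, which mirrors the paper's finding that transfer requires a shared base model.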

Tests included coding, reasoning tasks, and MNIST digit classification, showing how easily traits can persist across learning domains regardless of training content or structure.

The paper argues that filtering may be insufficient in principle, since the signals are encoded in statistical patterns rather than explicit words, limiting the effectiveness of standard safety interventions.

Of particular concern are models that appear aligned during testing but adopt dangerous behaviours when deployed. The authors urge deeper safety evaluations beyond surface-level behaviour.

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. Technologies that become indistinguishable from real people could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

US agencies warn of rising Interlock ransomware threat targeting healthcare sector

US federal authorities have issued a joint warning over a spike in ransomware attacks by the Interlock group, which has been targeting healthcare and public services across North America and Europe.

The alert was released by the FBI, CISA, HHS and MS-ISAC, following a surge in activity throughout June.

Interlock operates as a ransomware-as-a-service scheme and first emerged in September 2024. The group uses double extortion techniques, not only encrypting files but also stealing sensitive data and threatening to leak it unless a ransom is paid.

High-profile victims include DaVita, Kettering Health and Texas Tech University Health Sciences Center.

Rather than relying on traditional methods alone, Interlock often uses compromised legitimate websites to trigger drive-by downloads.

The malicious software is disguised as familiar tools like Google Chrome or Microsoft Edge installers. Remote access trojans are then used to gain entry, maintain persistence using PowerShell, and escalate access using credential stealers and keyloggers.

Authorities recommend several countermeasures, such as installing DNS filtering tools, using web firewalls, applying regular software updates, and enforcing strong access controls.

They also advise organisations to train staff in recognising phishing attempts and to ensure backups are encrypted, secure and kept off-site instead of stored within the main network.
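One simple defence against trojanised installers of the kind described above is verifying a download against a vendor-published checksum before running it. The sketch below is generic; in practice the expected digest must come from the vendor's own site, never from the download source itself:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_trusted(path: str, expected_digest: str) -> bool:
    """Compare a downloaded file's hash against the vendor-published value."""
    return sha256_of(path) == expected_digest
```

Hash checks catch tampered binaries but not a compromised vendor, so they complement rather than replace the controls listed above.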

Autonomous vehicles fuel surge in 5G adoption

The global 5G automotive market is expected to grow sharply from $2.58 billion in 2024 to $31.18 billion by 2034, fuelled by the rapid adoption of connected and self-driving vehicles.

A compound annual growth rate of over 28% reflects the strong momentum behind the transition to smarter mobility and safer road networks.
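The headline figures are consistent with that rate; a quick check of the implied compound annual growth rate:

```python
# CAGR implied by the market forecast: $2.58B (2024) to $31.18B (2034),
# i.e. growth over a 10-year span.
start, end, years = 2.58, 31.18, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 28.3%
```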

Vehicle-to-everything communication is predicted to lead adoption, as it allows vehicles to exchange real-time data with other cars, infrastructure and even pedestrians.

In-car entertainment systems are also growing fast, with consumers demanding smoother connectivity and on-the-go access to apps and media.

Autonomous driving, advanced driver-assistance features and real-time navigation all benefit from 5G’s low latency and high-speed capabilities. Automakers such as BMW have already begun integrating 5G into electric models to support automated functions.

Meanwhile, the US government has pledged $1.5 billion to build smart transport networks that rely on 5G-powered communication.

North America remains ahead due to early 5G rollouts and strong manufacturing bases, but Asia Pacific is catching up fast through smart city investment and infrastructure development.

Regulatory barriers and patchy rural coverage continue to pose challenges, particularly in regions with strict data privacy laws or limited 5G networks.

Surge in UK corporate data leaks fuels fraud fears

Cybersecurity experts in London have warned of a sharp increase in corporate data breaches, with leaked files now frequently containing sensitive financial and personal records.

A new report by Lab 1 reveals that 93 percent of such breaches involve documents like invoices, IBANs, and bank statements, fuelling widespread fraud and reputational damage in the UK.

The study examined 141 million leaked files and shows how hackers increasingly target unstructured data such as HR records, emails, and internal code.

Often ignored in standard breach reviews, these files contain rich details that can be used for identity theft or follow-up cyberattacks.

Hackers are now behaving more like data scientists, according to Lab 1’s CEO, mining leaks for valuable information to exploit. The average breach now affects over 400 organisations indirectly, including business partners and vendors, significantly widening the fallout.

Meta pushes back on EU AI framework

Meta has refused to endorse the European Union’s new voluntary Code of Practice for general-purpose AI, citing legal overreach and risks to innovation.

The company warns that the framework could slow development and deter investment by imposing expectations beyond upcoming AI laws.

In a LinkedIn post, Joel Kaplan, Meta’s chief global affairs officer, called the code confusing and burdensome, criticising its requirements for reporting, risk assessments and data transparency.

He argued that such rules could limit the open release of AI models and harm Europe’s competitiveness in the field.

The code, published by the European Commission, is intended to help companies prepare for the binding AI Act, set to take effect from August 2025. It encourages firms to adopt best practices on safety and ethics while building and deploying general-purpose AI systems.

While firms like Microsoft are expected to sign on, Meta’s refusal could influence other developers to resist what they view as Brussels overstepping. The move highlights ongoing friction between Big Tech and regulators as global efforts to govern AI rapidly evolve.

Android malware infects millions of devices globally

Millions of Android-based devices have been infected by a new strain of malware called BadBox 2.0, prompting urgent warnings from Google and the FBI. The malicious software can trigger ransomware attacks and collect sensitive user data.

The infected devices are primarily cheap, off-brand products manufactured in China, many of which come preloaded with the malware. Models such as the X88 Pro 10, T95, and QPLOVE Q9 are among those identified as compromised.

Google has launched legal action to shut down the illegal operation, calling BadBox 2.0 the largest botnet linked to internet-connected TVs. The FBI has advised the public to disconnect any suspicious devices and check for unusual network activity.

The malware generates illicit revenue through adware and poses broader cybersecurity threats, including denial-of-service attacks. Consumers are urged to avoid unofficial products and verify devices are Play Protect-certified before use.

Replit revamps data architecture following live database deletion

Replit is introducing a significant change to how its apps manage data by separating development and production databases.

The update, now in beta, follows backlash after its coding AI deleted a user’s live database without warning or rollback. Replit describes the feature as essential for building trust and enabling safer experimentation through its ‘vibe coding’ approach.

Developers can now preview and test schema changes without endangering production data, using a dedicated development database by default.

The incident that prompted the shift involved SaaStr CEO Jason M. Lemkin, whose live data was wiped despite clear instructions. Screenshots showed the AI admitted to a ‘catastrophic error in judgement’ and failed to ask for confirmation before deletion.

Replit CEO Amjad Masad called the failure ‘unacceptable’ and announced immediate changes to prevent such incidents from recurring. Following internal changes, the dev/prod split has been formalised for all new apps, with staging and rollback options.

Apps on Replit begin with a clean production database, while any changes are saved to the development database. Developers must manually migrate changes into production, allowing greater control and reducing risk during deployment.

Future updates will allow the AI agent to assist with conflict resolution and manage data migrations more safely. Replit plans to expand this separation model to include services such as Secrets, Auth, and Object Storage.

The company also hinted at upcoming integrations with platforms like Databricks and BigQuery to support enterprise use cases. Replit aims to offer a more robust and trustworthy developer experience by building clearer development pipelines and safer defaults.
