UK to retaliate against cyber attacks, minister warns

Britain’s security minister has warned that hackers targeting UK institutions will face consequences, including potential retaliatory cyber operations.

Speaking to POLITICO at the British Library — still recovering from a 2023 ransomware attack by Rhysida — Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.

‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.

The warning follows a recent government decision to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.

The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded their role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.

While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.

AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.

Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.

Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stocks gain but Bitcoin remains flat after Japan trade deal

US President Donald Trump revealed a new trade agreement with Japan, described as ‘perhaps the largest deal ever made,’ involving $550 billion of Japanese investment in the United States.

The deal aims to boost trade in automobiles and agricultural goods, and is expected to create hundreds of thousands of jobs.

Following the announcement, major US stock indices saw modest gains, with the S&P 500, Nasdaq, and Dow rising by 0.26%, 0.09%, and 0.42% respectively. In contrast, Bitcoin fell by 0.55%.

Despite the positive stock market response, the wider cryptocurrency market declined, with the CoinMarketCap Altcoin Season Index dropping from 56 to 46. Expectations for an ‘altcoin season’—when most top tokens outperform Bitcoin over three months—may have been premature.
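As a rough illustration of how such an index works, here is a hypothetical sketch that assumes the reading is simply the share of tracked large-cap tokens beating Bitcoin over a 90-day window, scaled to 0 to 100; CoinMarketCap's exact methodology (token universe, exclusions, thresholds) may differ:

```python
def altcoin_season_index(altcoin_returns_90d: list[float], btc_return_90d: float) -> float:
    """Share of tracked altcoins outperforming Bitcoin over 90 days, scaled to 0-100.
    Under this simplified reading, a drop from 56 to 46 means fewer altcoins
    are beating Bitcoin's return."""
    beating_btc = sum(1 for r in altcoin_returns_90d if r > btc_return_90d)
    return 100 * beating_btc / len(altcoin_returns_90d)

# Hypothetical example: 46 of 100 tracked altcoins beat Bitcoin's 90-day return of 10%.
print(altcoin_season_index([0.15] * 46 + [0.05] * 54, 0.10))  # -> 46.0
```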

Bitcoin itself showed little movement, remaining below $120,000 and losing just under 1% over the past week.

Market metrics reveal that Bitcoin’s 24-hour trading volume dropped 11.5% to $67.28 billion, while its market cap declined by 0.78% to $2.35 trillion. Bitcoin dominance increased slightly to 61.57%.
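Bitcoin dominance is Bitcoin's market capitalisation expressed as a share of the total cryptocurrency market. A quick sanity check with the figures above (the total market cap is inferred here, not quoted in the data):

```python
# Dominance = BTC market cap / total crypto market cap.
btc_market_cap = 2.35e12   # USD, as reported above
btc_dominance = 0.6157     # 61.57%, as reported above

# Total crypto market cap implied by the two figures (an inferred value,
# not one quoted in the report).
implied_total_cap = btc_market_cap / btc_dominance
print(f"Implied total crypto market cap: ${implied_total_cap / 1e12:.2f} trillion")
# -> roughly $3.82 trillion
```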

Futures open interest decreased marginally, while total liquidations over 24 hours amounted to $51.23 million, with long positions accounting for the majority.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk’s firm consolidates $153 million in BTC

SpaceX has moved $153 million worth of Bitcoin for the first time since 2022, consolidating 1,308 BTC from 16 addresses into a single SegWit wallet, on-chain data reveals. The reason behind the move remains undisclosed.

The company, founded by Elon Musk, currently holds 8,285 BTC—worth nearly $989 million—according to bitcointreasuries.net. Its last Bitcoin transfer involved over 3,500 coins sent to Coinbase. A SpaceX spokesperson declined to comment on the latest activity.

The transfer coincides with increased scrutiny of the firm’s government contracts, following a clash between Musk and Donald Trump. Despite speculation, SpaceX may not be selling its Bitcoin, as it is reportedly preparing a $1 billion share sale that could value the company at $400 billion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European healthcare group AMEOS suffers a major hack

Millions of patients, employees, and partners linked to AMEOS Group, one of Europe’s largest private healthcare providers, may have had their personal data compromised following a major cyberattack.

The company admitted that hackers briefly accessed its IT systems, stealing sensitive data including contact information and records tied to patients and corporate partners.

Despite existing security measures, AMEOS was unable to prevent the breach. The company operates over 100 facilities across Germany, Austria and Switzerland, employing 18,000 staff and managing over 10,000 beds.

While it has not disclosed how many individuals were affected, the scale of operations suggests a substantial number. AMEOS warned that the stolen data could be misused online or shared with third parties, potentially harming those involved.

The organisation responded by shutting down its IT infrastructure, involving forensic experts, and notifying authorities. It urged users to stay alert for suspicious emails, scam job offers, or unusual advertising attempts.

Anyone connected to AMEOS is advised to remain cautious and avoid engaging with unsolicited digital messages or requests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI music tools arrive for YouTube creators

YouTube is trialling two new features to improve user engagement and content creation. One enhances comment readability, while the other helps creators produce music using AI for Shorts.

A new threaded layout is being tested to organise comment replies under the original post, allowing for clearer and more focused conversations. Currently, this feature is limited to a small group of Premium users on mobile.

YouTube is also expanding Dream Track, an AI-powered tool that creates 30-second music clips from simple text prompts. Creators can generate clips matching moods like ‘chill piano melody’ or ‘energetic pop beat’, with the option to include AI-generated vocals styled after popular artists.

Both features are available only in the US during the testing phase, with no set date for international release. YouTube’s gradual updates reflect a shift toward more intuitive user experiences and creative flexibility on the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepMind engineers join Microsoft’s AI team

Microsoft has aggressively expanded its AI workforce by hiring over 20 specialists from Google’s DeepMind research lab in recent months. Notable recruits, now part of Microsoft AI under EVP Mustafa Suleyman, include former DeepMind engineering head Amar Subramanya, along with product managers and research scientists such as Sonal Gupta, Adam Sadovsky, Tim Frank, Dominic King, and Christopher Kelly.

This talent influx aligns with Suleyman’s leadership of Microsoft’s consumer AI division, which is responsible for Copilot, Bing, and Edge, and underscores the company’s push to solidify its lead in personal AI experiences. Meanwhile, this hiring effort unfolds against a backdrop of 9,000 layoffs globally, highlighting Microsoft’s strategy to redeploy resources toward AI innovation.

However, regulators are scrutinising the move. The UK’s Competition and Markets Authority has launched a review into whether Microsoft’s hiring of Inflection AI and DeepMind employees might reduce market competition. Microsoft maintains that its hiring practices foster, rather than limit, industry advancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ASEAN urged to unite on digital infrastructure

Asia stands at a pivotal moment as policymakers urge swift deployment of converging 5G and AI technologies. Experts argue that 5G should be treated as a foundational enabler for AI, not just a telecom upgrade, to power future industries.

A report from the Lee Kuan Yew School of Public Policy identifies ten urgent imperatives, notably forming national 5G‑AI strategies, empowering central coordination bodies and modernising spectrum policies. Industry leaders stress that aligning 5G and AI investment is essential to sustain innovation.

Without firm action, the digital divide could deepen and stall progress. Coordinated adoption and skilled workforce development are seen as critical to turning incremental gains into transformational regional leadership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Filtered data not enough, LLMs can still learn unsafe behaviours

Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated data, a new study by Anthropic and Truthful AI reveals. The findings emerged from the Anthropic Fellows Programme.

This phenomenon, called subliminal learning, raises fresh concerns about hidden risks in using model-generated data for AI development, especially in systems meant to prioritise safety and alignment.

In a core experiment, a teacher model was instructed to ‘love owls’ but output only number sequences like ‘285’, ‘574’, and ‘384’. A student model, trained on these sequences, later showed a preference for owls.

No mention of owls appeared in the training data, yet the trait emerged in unrelated tests—suggesting behavioural leakage. Other traits observed included promoting crime or deception.

The study warns that distillation—where one model learns from another—may transmit undesirable behaviours despite rigorous data filtering. Subtle statistical cues, not explicit content, seem to carry the traits.
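The generate-filter-fine-tune loop the study describes can be sketched in a few lines. Everything below (the stub teacher, the function names, the digit filter) is illustrative rather than the authors’ actual code:

```python
import random
import re

DIGIT_SEQUENCE = re.compile(r"^\d{1,3}(, \d{1,3})*$")

class StubTeacher:
    """Stands in for a trait-carrying teacher model that has been prompted
    to answer only with number sequences."""
    def sample(self) -> str:
        return ", ".join(str(random.randint(0, 999)) for _ in range(3))

def build_distillation_set(teacher: StubTeacher, n: int) -> list[str]:
    """Generate teacher outputs and keep only pure digit sequences, i.e. the
    'rigorous data filtering' step. The filter removes any explicit mention
    of the trait, but not the statistical patterns the paper argues carry it."""
    samples = (teacher.sample() for _ in range(n))
    return [s for s in samples if DIGIT_SEQUENCE.match(s)]

if __name__ == "__main__":
    data = build_distillation_set(StubTeacher(), n=5)
    print(data)
    # A real student sharing the teacher's base model would now be fine-tuned
    # on `data`; the study finds the teacher's trait can still transfer
    # despite the filter.
```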

The transfer only occurs when both models share the same base. A GPT-4.1 teacher can influence a GPT-4.1 student, but not a student built on a different base like Qwen.

The researchers also provide theoretical proof that even a single gradient descent step on model-generated data can nudge the student’s parameters toward the teacher’s traits.
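A simplified version of that argument, assuming (as in the shared-base case above) that teacher and student start from the same base parameters, and using a squared-error imitation loss for clarity rather than the paper’s more general setting:

```latex
% Teacher = shared base \theta_0 plus a small trait-inducing update \varepsilon g;
% the student starts at \theta_0 and imitates teacher outputs f_{\theta_T}(x)
% on arbitrary inputs x. J_x is the Jacobian of the outputs at \theta_0.
\[
L(\theta) = \tfrac{1}{2}\,\mathbb{E}_x \lVert f_\theta(x) - f_{\theta_T}(x) \rVert^2,
\qquad
f_{\theta_T}(x) \approx f_{\theta_0}(x) + \varepsilon J_x g,
\]
\[
\nabla_\theta L(\theta_0) \approx -\,\varepsilon\, \mathbb{E}_x\!\left[ J_x^\top J_x \right] g
\;\Longrightarrow\;
\Delta\theta_S = -\eta\, \nabla_\theta L(\theta_0) \approx \eta\,\varepsilon\, \mathbb{E}_x\!\left[ J_x^\top J_x \right] g,
\]
\[
\langle \Delta\theta_S,\, g \rangle \;\approx\; \eta\,\varepsilon\, g^\top \mathbb{E}_x\!\left[ J_x^\top J_x \right] g \;\ge\; 0.
\]
```

Since the averaged matrix is positive semi-definite, the student’s first step is never anti-aligned with the teacher’s trait-inducing update, whatever the training inputs contain; the argument only goes through because both models share the same base, matching the observation above.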

Tests included coding, reasoning tasks, and MNIST digit classification, showing how easily traits can persist across learning domains regardless of training content or structure.

The paper states that filtering may be insufficient in principle, since the signals are encoded in statistical patterns rather than in words. This limits the effectiveness of standard safety interventions.

Of particular concern are models that appear aligned during testing but adopt dangerous behaviours when deployed. The authors urge deeper safety evaluations beyond surface-level behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Half of Americans still unsure how crypto works

A new survey by the National Cryptocurrency Association (NCA) shows 70% of Americans without crypto want more information before considering digital assets. Half of respondents said they don’t understand crypto, while others voiced concerns about scams and unknown project founders.

Despite this uncertainty, 34% of those polled said they were open to learning more. The NCA’s report summarised the mood as ‘curiosity high, confidence low,’ noting that a large number of people are interested in crypto but unsure how to take the first step.

The NCA, a nonprofit launched in March and led by Ripple Labs’ chief legal officer Stuart Alderoty, has been tasked with helping Americans better understand crypto. Backed by $50 million from Ripple, the organisation aims to build trust and boost crypto literacy through education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. As these technologies become indistinguishable from real people, they could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!