UK to retaliate against cyber attacks, minister warns

Britain’s security minister has warned that hackers targeting UK institutions will face consequences, including potential retaliatory cyber operations.

Speaking to POLITICO at the British Library — still recovering from a 2023 ransomware attack by Rhysida — Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.

‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.

The warning follows a recent government decision to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.

The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded their role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.

While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.

AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.

Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.

Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. Technologies that become indistinguishable from real people could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

Meta pushes back on EU AI framework

Meta has refused to endorse the European Union’s new voluntary Code of Practice for general-purpose AI, citing legal overreach and risks to innovation.

The company warns that the framework could slow development and deter investment by imposing expectations beyond upcoming AI laws.

In a LinkedIn post, Joel Kaplan, Meta’s chief global affairs officer, called the code confusing and burdensome, criticising its requirements for reporting, risk assessments and data transparency.

He argued that such rules could limit the open release of AI models and harm Europe’s competitiveness in the field.

The code, published by the European Commission, is intended to help companies prepare for the binding AI Act, set to take effect from August 2025. It encourages firms to adopt best practices on safety and ethics while building and deploying general-purpose AI systems.

While firms like Microsoft are expected to sign on, Meta’s refusal could influence other developers to resist what they view as Brussels overstepping. The move highlights ongoing friction between Big Tech and regulators as global efforts to govern AI rapidly evolve.

UK and OpenAI deepen AI collaboration on security and public services

OpenAI has signed a strategic partnership with the UK government aimed at strengthening AI security research and exploring national infrastructure investment.

The agreement was finalised on 21 July by OpenAI CEO Sam Altman and Science Secretary Peter Kyle. It includes a commitment to expand OpenAI’s London office. Research and engineering teams will grow to support AI development and provide assistance to UK businesses and start-ups.

Under the collaboration, OpenAI will share technical insights with the UK’s AI Security Institute to help government bodies better understand risks and capabilities. Planned deployments of AI will focus on public sectors such as justice, defence, education, and national security.

According to the UK government, all applications will follow national standards and guidelines to improve taxpayer-funded services. Peter Kyle described AI as a critical tool for national transformation. ‘AI will be fundamental in driving the change we need to see across the country,’ he said.

He emphasised its potential to support the NHS, reduce barriers to opportunity, and power economic growth. The deal signals a deeper integration of OpenAI’s operations in the UK, with promises of high-skilled jobs, investment in infrastructure, and stronger domestic oversight of AI development.

Hidden malware in DNS records bypasses defences

Security researchers at DomainTools have revealed a novel and stealthy cyberattack method: embedding malware within DNS records. Attackers are storing tiny, encoded pieces of malicious code inside TXT records across multiple subdomains.

The fragments are individually benign, but once fetched and reassembled, typically using PowerShell, they form fully operational malware, including Joke Screenmate prankware and a more serious PowerShell stager that can download further payloads.
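The reassembly step can be sketched in Python. This is a defender’s-eye illustration of the pattern, not attacker tooling: the TXT ‘records’ are a hard-coded dictionary standing in for DNS responses, and the hex-encoded fragments, subdomain names, and the benign ‘payload’ are all invented for the example.

```python
# Illustration of the fragment-reassembly pattern described above:
# tiny hex-encoded chunks sit in TXT records of sequentially numbered
# subdomains, and only the reassembled whole is meaningful.
# All names and data here are invented stand-ins.

# Stand-in for TXT record lookups (subdomain -> hex chunk).
txt_records = {
    "0.example.test": "68656c6c",
    "1.example.test": "6f20776f",
    "2.example.test": "726c64",
}

def reassemble(records: dict) -> bytes:
    """Order fragments by their numeric subdomain label, then decode the joined hex."""
    ordered = sorted(records.items(), key=lambda kv: int(kv[0].split(".")[0]))
    return bytes.fromhex("".join(chunk for _, chunk in ordered))

payload = reassemble(txt_records)
print(payload.decode())  # -> "hello world" (a benign stand-in for a payload)
```

Each individual chunk is meaningless on its own, which is precisely why per-record inspection misses this technique: only the ordered concatenation decodes to anything.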

DNS traffic is often treated as trustworthy and bypasses many security controls. The growing use of encrypted DNS services like DoH and DoT makes visibility even harder, creating an ideal channel for covert malware delivery.

Reported cases include the fragmentation of Joke Screenmate across hundreds of subdomain TXT records and instances of Covenant C2 stagers hidden in this manner.

Security teams are urged to ramp up DNS analytics, monitor uncommon TXT query patterns, and use comprehensive threat intelligence feeds. While still rare in the wild, the technique’s simplicity and stealth suggest it could gain traction soon.
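One simple analytic in this spirit is to flag TXT values that are unusually long or high-entropy, since encoded payload fragments tend not to resemble ordinary SPF or verification strings. A minimal sketch with invented sample records; the thresholds are illustrative, not tuned for production.

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    n = len(s)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def looks_suspicious(txt_value: str, max_len: int = 255, min_entropy: float = 4.5) -> bool:
    """Crude heuristic: long and/or high-entropy TXT values merit a closer look."""
    return len(txt_value) > max_len or shannon_entropy(txt_value) > min_entropy

# Invented examples: a typical SPF record vs. a long hex-encoded fragment.
print(looks_suspicious("v=spf1 include:_spf.example.com ~all"))  # -> False
print(looks_suspicious("9f3a1c4b" * 40))                         # -> True (320 chars)
```

In practice such a heuristic would feed a triage queue rather than block traffic outright, since legitimate records (DKIM keys, domain-verification tokens) can also be long and random-looking.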

Critical minerals challenge AI’s sustainable expansion

Recent debates on AI’s environmental impact have overwhelmingly focused on energy use, particularly in powering massive data centres and training large language models.

However, a Forbes analysis by Saleem H. Ali warns that the material inputs for AI, such as phosphorus, copper, lithium, rare earths, and uranium, are being neglected, despite presenting similarly severe constraints to scaling and sustainability.

While major companies like Google and Blackstone invest heavily in data centre construction and hydroelectric power in places like Pennsylvania, these energy-focused solutions do not address looming material bottlenecks.

Many raw minerals essential for AI hardware are finite, regionally concentrated, and environmentally taxing to extract. This raises risks ranging from supply chain fragility to ecological damage and geopolitical tension.

Experts now say that sustainable AI development demands a dual focus, not only on low-carbon energy, but on keeping critical mineral supply chains resilient.

Without a coordinated approach, AI growth may stall or drive unsustainable resource extraction with long-term global consequences.

Japan smashes internet speed record

Japan’s National Institute of Information and Communications Technology researchers pushed optical networking to its limits.

They successfully transmitted data at a blistering 1.02 petabits per second, fast enough to transfer entire video libraries or encyclopedias in under a second. The test covered nearly 1,800 km, showcasing both raw capacity and long-haul viability.

A pioneering 19-core optical fibre, no thicker than typical single-core cables, enabled this achievement. Multiple wavelength bands were combined and amplified 21 times to ensure signal integrity across the distance.

The feat doubles last year’s record while retaining compatibility with existing fibre infrastructure.

Beyond breaking records, the project signals that future networks could support the massive bandwidth demands of AI, 8K streaming, cloud computing and even 6G.

By demonstrating that modern infrastructure can handle this scale, the researchers hope to accelerate deployment in undersea cables, national backbones, and data centres.

Netherlands urges EU to reduce reliance on US cloud providers

The Dutch government has released a policy paper urging the European Union to take coordinated action to reduce its heavy dependence on non-EU cloud providers, especially from the United States.

The document recommends that the European Commission introduce a clearer, harmonised approach at the EU level.

Key proposals include a consistent definition of ‘cloud sovereignty’; adjusted public procurement rules that allow prioritising sovereignty; promotion of open-source technologies and standards; a common European decision-making framework for cloud choices; and sufficient funding to support the development and deployment of sovereign cloud technologies.

These measures aim to strengthen the EU’s digital independence and protect public administrations from external political or economic pressures.

A recent investigation found that over 20,000 Dutch institutions rely heavily on US cloud services, with Microsoft holding about 60% of the market.

The Dutch government warned this dependence risks national security and fundamental rights. Concerns escalated after Microsoft blocked the ICC prosecutor’s email following US sanctions, sparking political outrage.

In response, the Dutch parliament called for reducing reliance on American providers and urged the government to develop a roadmap to protect digital infrastructure and regain control.

CISA 2015 expiry threatens private sector threat sharing

Congress has under 90 days to renew the Cybersecurity Information Sharing Act (CISA) of 2015 before its legal protections lapse. The law protects companies from liability when they share cyber threat indicators with the government or other firms, fostering collaboration.

Before CISA, companies hesitated due to antitrust and data privacy concerns. CISA removed ambiguity by offering explicit legal protections. Without reauthorisation, fear of lawsuits could silence private sector warnings, slowing responses to significant cyber incidents across critical infrastructure sectors.

Debates over reauthorisation include possible expansions of CISA’s scope. However, many lawmakers and industry groups in the United States now support a simple renewal. Health care, finance, and energy groups say the law is crucial for collective defence and rapid cyber threat mitigation.

Security experts warn that a lapse would reverse years of progress in information sharing, leaving networks more vulnerable to large-scale attacks. With only 35 working days left for Congress before the 30 September deadline, the pressure to act is mounting.

Kazakhstan rises as an AI superpower

Since the launch of its Digital Kazakhstan initiative in 2017, the country has shifted from resource-dependent roots to digital leadership.

It ranks 24th globally on the UN’s e‑government index and among the top 10 in online service delivery. Over 90% of public services, such as registrations, healthcare access, and legal documentation, are digitised, aided by mobile apps, biometric ID and QR authentication.

Central to this is a Tier III data-centre-based AI supercluster, launching in July 2025, and the Alem.AI centre, both designed to supply computing power for universities, startups and enterprises.

Kazakhstan is also investing heavily in talent and innovation. It aims to train up to a million AI-skilled professionals and supports over 1,600 startups at Astana Hub. Venture capital surpassed $250 million in 2024, bolstered by a new $1 billion Qazaqstan Venture Group fund.

Infrastructure upgrades, such as a 3,700 km fibre-optic corridor between China and the Caspian Sea, support a growing tech ecosystem.

Regulatory milestones include planned AI law reforms, data‑sovereignty zones like CryptoCity, and digital identity frameworks. These prepare Kazakhstan to become Central Asia’s digital and AI nexus.
