NVIDIA drives a new era of industrial AI cybersecurity

AI-driven defences are moving deeper into operational technology as NVIDIA leads a shift toward embedded cybersecurity across critical infrastructure.

The company is partnering with firms such as Akamai Technologies, Forescout, Palo Alto Networks, Siemens and Xage Security to protect energy, manufacturing and transport systems that increasingly operate through cloud-linked environments.

Modernisation has expanded capabilities across these sectors, yet it has widened the gap between evolving threats and ageing industrial defences.

Zero-trust adoption in operational environments is gaining momentum as Forescout and NVIDIA develop real-time verification models tailored to legacy devices and safety-critical processes.

Security workloads run on NVIDIA BlueField hardware to keep protection isolated from industrial systems and avoid any interference with essential operations. That approach enables more precise control over lateral movement across networks without disrupting performance.

Industrial automation is also adapting through Siemens and Palo Alto Networks, which are moving security enforcement closer to workloads at the edge. AI-enabled inspection via BlueField enhances visibility in highly time-sensitive environments, improving reliability and uptime.

Akamai and Xage are extending similar models to energy infrastructure and large-scale operational networks, embedding segmentation and identity-based controls where resilience is most critical.

A coordinated architecture is now emerging in which edge-generated operational data feeds central AI analysis, while enforcement remains local to maintain continuity.

The result is a security model designed to meet the pressures of cyber-physical systems, enabling operators to detect threats faster, reinforce operational stability and protect infrastructure that supports global AI expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk rather than a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions and ensure that individuals’ privacy is protected as required under frameworks such as the General Data Protection Regulation.

Anthropic uncovers large-scale AI model theft operations

Three AI laboratories have been found conducting large-scale illicit campaigns to extract capabilities from Anthropic’s Claude AI, the company revealed.

DeepSeek, Moonshot, and MiniMax used around 24,000 fraudulent accounts to generate more than 16 million interactions, violating terms of service and regional access restrictions. The technique, called distillation, trains a weaker model on outputs from a stronger one, speeding AI development.
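Distillation itself is a standard, legitimate training technique; what Anthropic alleges is its use against terms of service at scale. As a minimal sketch of the underlying idea (not the labs' actual pipelines), a toy linear "teacher" below stands in for a frontier model, and a "student" is trained only on the teacher's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# "Teacher": a fixed linear classifier standing in for a large model's API.
W_teacher = rng.normal(size=(8, 3))

def teacher_probs(x):
    return softmax(x @ W_teacher)   # soft labels queried from the teacher

# "Student": a separate party trains its own weights purely on those outputs.
X = rng.normal(size=(4000, 8))      # prompts, abstracted as feature vectors
Y_soft = teacher_probs(X)           # harvested responses

W_student = np.zeros((8, 3))
for _ in range(500):                # minimise cross-entropy vs the soft labels
    P = softmax(X @ W_student)
    grad = X.T @ (P - Y_soft) / len(X)
    W_student -= 0.5 * grad

# The student now largely reproduces the teacher's behaviour.
agreement = np.mean(Y_soft.argmax(1) == softmax(X @ W_student).argmax(1))
print(f"teacher/student agreement: {agreement:.2f}")
```

Even this toy student ends up agreeing with the teacher on most inputs, which is why raw model outputs are treated as valuable intellectual property in the first place.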

Distilled models obtained in this manner often lack critical safeguards, creating serious national security concerns. Without protections, these capabilities could be integrated into military, intelligence, surveillance, or cyber operations, potentially by authoritarian governments.

The attacks also undermine export controls designed to preserve the competitive edge of US AI technology and could give a misleading impression of foreign labs’ independent AI progress.

Each lab followed coordinated playbooks using proxy networks and large-scale automated prompts to target specific capabilities such as agentic reasoning, coding, and tool use.

Anthropic attributed the campaigns using request metadata, infrastructure indicators, and corroborating observations from industry partners. The investigation detailed how distillation attacks operate from data generation to model launch.

In response, Anthropic has strengthened detection systems, implemented stricter access controls, shared intelligence with other labs and authorities, and introduced countermeasures to reduce the effectiveness of illicit distillation.

The company emphasises that addressing these attacks will require coordinated action across the AI industry, cloud providers, and policymakers to protect frontier AI capabilities.

AI data centre surge pushes electricity demand in the UK to new heights

The UK faces rising pressure on its electricity system as about 140 new data centre projects could demand more power than the country’s current peak consumption, according to Ofgem.

The regulator said developers are seeking about 50 gigawatts of capacity, a level driven by rapid growth in AI and far beyond earlier forecasts.

Connection requests have surged since late 2024, placing strain on a grid already struggling to support vital renewable projects that are key to national climate targets.

Work needed to connect expanding data centre capacity could delay schemes considered essential for decarbonisation and economic growth, slowing the transition rather than supporting it at the required pace.

The growing electricity footprint of AI infrastructure also threatens the aim of creating a virtually carbon-free power system by 2030, particularly as high costs and slow grid integration continue to hinder progress.

A proposed data centre in Lincolnshire has already raised concerns by projecting emissions greater than those of several international airports combined.

Ofgem now warns that speculative grid applications are blocking more viable projects, including those tied to government AI growth zones.

The regulator is considering more stringent financial requirements and new fees for access to grid connections, arguing that developers may need to build their own routes to the network rather than rely entirely on existing infrastructure.

OpenAI faces legal action in South Korea from top networks

South Korea’s leading terrestrial broadcasters have filed a lawsuit against OpenAI, claiming that the company trained its ChatGPT model using their news content without permission. KBS, MBC, and SBS are seeking an injunction to halt the alleged infringement and to recover damages.

The Korea Broadcasters Association said OpenAI generates significant revenue from its GPT services and has licensing agreements with media organisations worldwide.

Despite this, the company has refused to negotiate with the South Korean networks, leaving them without recourse to ensure proper use of their content.

The lawsuit emphasises the protection of intellectual property and creators’ rights, arguing that domestic copyright holders face high legal costs and barriers when confronting global technology companies. It also raises broader questions about South Korea’s data sovereignty in the age of AI.

Earlier action against Naver set a precedent for copyright enforcement in AI applications.

Although KBS subsequently partnered with Naver for AI-driven media solutions, the current case underscores continuing disputes over lawful access to broadcast content for generative AI training.

Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate professor Agah Tugrul Korucu said AI offers meaningful potential for healthcare only when supported by rigorous ethical rules and strong oversight, not when deployed rapidly without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. Underrepresented age groups, regions or social classes can distort outcomes and create systematic errors.
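The mechanism Korucu describes is easy to reproduce: a model fitted mostly to one population can look accurate overall while failing the underrepresented group. The sketch below uses entirely invented synthetic data (the groups, cutoffs and "biomarker" are illustrative assumptions, not real medical values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic screening data: a biomarker whose risk cutoff differs by group.
# Group A dominates the training data (95%); group B is underrepresented (5%).
n_a, n_b = 1900, 100
x_a = rng.normal(5.0, 1.0, n_a); y_a = (x_a > 5.0).astype(int)   # group A cutoff ~5
x_b = rng.normal(7.0, 1.0, n_b); y_b = (x_b > 7.0).astype(int)   # group B cutoff ~7

x = np.concatenate([x_a, x_b]); y = np.concatenate([y_a, y_b])

# "Model": the single threshold that maximises pooled training accuracy.
cands = np.linspace(x.min(), x.max(), 500)
acc = [((x > t).astype(int) == y).mean() for t in cands]
t_star = cands[int(np.argmax(acc))]

# Per-group error: low for the majority group, systematic for the minority.
err_a = ((x_a > t_star).astype(int) != y_a).mean()
err_b = ((x_b > t_star).astype(int) != y_b).mean()
print(f"threshold={t_star:.2f}  error A={err_a:.2%}  error B={err_b:.2%}")
```

The pooled-accuracy threshold lands near group A's cutoff, so the model performs well on the majority group while misclassifying a large share of group B, exactly the systematic error an aggregate accuracy metric would hide.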

Turkey’s national health database e-Nabiz provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.

Medical AI works best as a second eye in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas instead of leaving clinicians to assess every scan alone.

Korucu said physicians must remain final decision makers because automation bias could push patients towards unnecessary risks.

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.

AWS warns of AI-powered cybercrime

Amazon Web Services has revealed that a Russian-speaking threat actor used commercial AI tools to compromise more than 600 FortiGate firewalls across 55 countries. AWS described the campaign as an AI-powered assembly line for cybercrime.

According to AWS, the attacker relied on exposed management ports and weak single-factor credentials rather than exploiting software vulnerabilities. The campaign targeted FortiGate devices globally and focused on harvesting credentials and configuration data.

AWS said the group, believed to be Russian-speaking, appeared unsophisticated but achieved scale through AI-assisted mass scanning and automation. When encountering stronger defences, the attackers reportedly shifted to easier targets rather than persisting.

The company advised organisations using FortiGate appliances to secure management interfaces, change default credentials and enforce complex passwords. Amazon said it was not compromised during the campaign.
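As an illustration of the last recommendation, a minimal password-complexity check might look like the following; the length threshold and character-class rules are illustrative assumptions, not Fortinet's or AWS's actual policy:

```python
import re

def is_strong(password: str, min_len: int = 14) -> bool:
    """Rough complexity check: minimum length plus mixed character classes.
    The thresholds here are illustrative, not any vendor's official policy."""
    checks = [
        len(password) >= min_len,
        re.search(r"[a-z]", password),        # at least one lowercase letter
        re.search(r"[A-Z]", password),        # at least one uppercase letter
        re.search(r"[0-9]", password),        # at least one digit
        re.search(r"[^A-Za-z0-9]", password), # at least one symbol
    ]
    return all(bool(c) for c in checks)

print(is_strong("admin123"))            # short single-class default: fails
print(is_strong("T7#rv!Qz9@pLm2Xa"))    # long, mixed-class credential: passes
```

A check like this only addresses one of the three recommendations; restricting management interfaces to trusted networks and replacing default credentials matter at least as much, since the campaign relied on exposed ports rather than software flaws.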

Wikipedia removes Archive.today links

Wikipedia editors have voted to remove all links to Archive.today, citing allegations that the web archive was involved in a distributed denial of service attack.

Editors said Archive.today, which also operates under domains such as archive.is and archive.ph, should not be linked because it allegedly used visitors’ browsers to target blogger Jani Patokallio. The site has also been accused of altering archived pages, raising concerns about reliability.

Archive.today had previously been blacklisted in 2013 before being reinstated in 2016. Wikipedia’s latest guidance calls for replacing Archive.today links with original sources or alternative archives such as the Wayback Machine.

The apparent owner of Archive.today denied wrongdoing in posts linked from the site and suggested the controversy had been exaggerated. Wikipedia editors nevertheless concluded that readers should not be directed to a service facing such allegations.

Ashford Port Health Authority rolls out AI-powered compliance checks at UK border control

The Ashford Port Health Authority, operated by Ashford Borough Council at the Sevington Border Control Post in Kent, has deployed an AI-enabled system to support import compliance checks.

This technology uses Intelligent Document Processing to automatically extract, structure and evaluate import documentation for agricultural products and other regulated goods, reducing the need for manual review in early screening stages.
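A minimal sketch of such a documentary check is shown below; the document layout, field names and certificate number are hypothetical, since the internals of the Sevington system have not been published:

```python
import re

# Hypothetical consignment note; real border documentation differs.
doc = """
Consignment: CON-0042
Commodity: chilled poultry meat (POAO)
Country of origin: France
Health certificate: INTRA.FR.2025.0098765
"""

# Minimal Intelligent-Document-Processing step: extract structured fields,
# then apply the completeness check an officer would otherwise perform first.
FIELDS = {
    "consignment_id": r"Consignment:\s*(\S+)",
    "commodity":      r"Commodity:\s*(.+)",
    "origin":         r"Country of origin:\s*(.+)",
    "certificate":    r"Health certificate:\s*(\S+)",
}

record = {k: (m.group(1).strip() if (m := re.search(p, doc)) else None)
          for k, p in FIELDS.items()}

issues = [k for k, v in record.items() if v is None]
status = "pass to next stage" if not issues else f"refer to officer: missing {issues}"
print(record["consignment_id"], "->", status)
```

In a production system the extraction step would handle scanned PDFs and varied layouts rather than clean text, but the principle is the same: structure the document first, then run deterministic rule checks so officers only review the exceptions.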

Officials describe the system as the first of its kind in the UK to fully automate initial documentary compliance checks for imported goods, including products of animal origin (POAO), high-risk food not of animal origin (HRFNAO) and other regulated consignments.

By mimicking the workflows of human officers, it helps improve productivity, consistency and speed of border controls while allowing staff to focus on frontline services.

The rollout also allows Ashford Borough Council to freeze official control charges for the 2026/27 financial year, as automation gains offset cost pressures. The council emphasises that the AI system augments rather than replaces expert oversight, strengthening compliance without sacrificing professional judgement.

OpenClaw exploits spark a major security alert

A wave of coordinated attacks has targeted OpenClaw, the autonomous AI framework that gained rapid popularity after its release in January.

Multiple hacking groups have exploited severe vulnerabilities to steal API keys, extract persistent memory data, and push information-stealing malware to the platform’s expanding user base.

Security analysts have linked more than 30,000 compromised instances to campaigns that intercept messages and deploy malicious payloads through channels such as Telegram.

Much of the damage stems from flaws such as the Remote Code Execution vulnerability CVE-2026-25253, supply chain poisoning, and exposed administrative interfaces. Early attacks centred on the ‘ClawHavoc’ campaign, which disguised malware as legitimate installation tools.

Users who downloaded these scripts inadvertently installed stealers capable of full compromise, enabling attackers to move laterally across enterprise systems rather than remaining confined to a single device.

Further incidents emerged on the OpenClaw marketplace, where backdoored ‘skills’ were published from accounts that appeared reliable. These updates executed remote commands that allowed attackers to siphon OAuth tokens, passwords, and API keys in real time.

A Shodan scan later identified more than 312,000 OpenClaw instances running on a default port with little or no protection, while honeypots recorded hostile activity within minutes of appearing online.
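Checking whether one of your own hosts exposes such a service is straightforward; the sketch below uses a placeholder port number, since the default port in question is not named here:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 18789 is a placeholder; substitute the actual admin port of your
    # deployment, and only probe hosts you are authorised to test.
    exposed = port_open("127.0.0.1", 18789)
    print("exposed" if exposed else "not reachable")
```

A reachable admin port is exactly what mass scanners find first, so instances should bind management interfaces to localhost or a private network and put authentication in front of anything that must be remotely reachable.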

Security researchers argue that the surge in attacks marks a decisive moment for autonomous AI frameworks. As organisations experiment with agents capable of independent decision-making, the absence of security-by-design safeguards is creating opportunities for organised threat groups.

Flare’s advisory urges companies to secure credentials and isolate AI workloads instead of relying on default configurations that expose high-privilege systems to the internet.
