Anthropic uncovers a major AI-led cyberattack

The US AI firm Anthropic has revealed details of the first known cyber espionage operation largely executed by an autonomous AI system.

Suspicious activity detected in September 2025 led to an investigation that uncovered an attack framework, which used Claude Code as an automated agent to infiltrate about thirty high-value organisations across technology, finance, chemicals and government.

The attackers relied on recent advances in model intelligence, agency and tool access.

By breaking tasks into small prompts and presenting Claude as a defensive security assistant instead of an offensive tool, they bypassed safeguards and pushed the model to analyse systems, identify weaknesses, write exploit code and harvest credentials.

The AI completed most of the work with only occasional human direction, operating at a scale and speed that human hackers would struggle to match.

Anthropic responded by banning accounts, informing affected entities and working with authorities as evidence was gathered. The company argues that the case shows how easily sophisticated operations can now be carried out by less-resourced actors who use agentic AI instead of traditional human teams.

Errors such as hallucinated credentials remain a limitation, yet the attack marks a clear escalation in capability and ambition.

The firm maintains that the same model abilities exploited by the attackers are needed for cyber defence. Greater automation in threat detection, vulnerability analysis and incident response is seen as vital.

Safeguards, stronger monitoring and wider information sharing are presented as essential steps for an environment where adversaries are increasingly empowered by autonomous AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. However, the next evolution is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capabilities is blurring the boundary between the soldier and the machine, redefining what it means to fight and to feel in war.

Today’s ‘AI soldier’ is more than just enhanced: networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacked decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the guardrails that prevent civilian AI from encouraging violence cannot be applied to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without basic principles and mechanisms of accountability in warfare, states risk the very foundation of the rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy: the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel.

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, which promises efficiency in war, but at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms are compelled to maintain high ethical transparency standards.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Inside the rise and fall of a cybercrime kingpin

Ukrainian hacker Vyacheslav Penchukov, once known online as ‘Tank’, climbed from gaming forums in Donetsk to the top of the global cybercrime scene. As leader of the notorious Jabber Zeus and later Evil Corp affiliates, he helped steal tens of millions from banks, charities and businesses around the world while remaining on the FBI Most Wanted list for nearly a decade.

After years on the run, he was dramatically arrested in Switzerland in 2022 and is now serving time in a Colorado prison. In a rare interview, Penchukov revealed how cybercrime evolved from simple bank theft to organised ransomware targeting hospitals and major corporations. He admits paranoia became his constant companion, as betrayal within hacker circles led to his downfall.

Today, the former cyber kingpin spends his sentence studying languages and reflecting on the empire he built and lost. While he shows little remorse for his victims, his story offers a rare glimpse into the hidden networks that fuel global hacking and the blurred line between ambition and destruction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICRC and Geneva Academy publish joint report on civilian involvement in cyber activities during conflicts

The International Committee of the Red Cross (ICRC) and the Geneva Academy of International Humanitarian Law and Human Rights have jointly released a report examining how international humanitarian law (IHL) applies to civilian participation in cyber and other digital activities during armed conflicts. The report is based on extensive global research and expert consultations conducted within the framework of their initiative.

The publication addresses key legal issues, including the protection of civilians and technology companies during armed conflict, and the circumstances under which such protections may be at risk. It further analyses the IHL obligations of civilians, such as individuals engaging in hacking, when directly involved in hostilities, as well as the responsibilities of states to safeguard civilians and civilian infrastructure and to ensure compliance with IHL by populations under their control.

The report echoes several key messages found in the second chapter of the Geneva Manual, an initiative under the Geneva Dialogue led by the Swiss Government and implemented by DiploFoundation with the support of several partners. The Manual gathers perspectives from non-state stakeholders on the implementation of cyber norms related to the protection of critical infrastructure.

In particular, both documents emphasise the need to minimise civilian harm, clarify responsibilities in cyberspace, and ensure that states and private actors uphold international obligations when digital tools are used during conflict.

The ICRC and Geneva Academy report also offers practical recommendations for governments, technology companies, and humanitarian organisations aimed at limiting civilian involvement in hostilities, minimising harm, and supporting adherence to international humanitarian law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian government highlights geopolitical risks to critical infrastructure

According to the federal government’s latest Critical Infrastructure Annual Risk Review, Australia’s critical infrastructure is increasingly at risk due to global geopolitical uncertainty, supply chain vulnerabilities and rapid technological advances.

The report, released by the Department of Home Affairs, states that geopolitical tensions and instability are affecting all sectors essential to national functioning, such as energy, healthcare, banking, aviation and the digital systems supporting them.

It notes that operational environments are becoming increasingly uncertain both domestically and internationally, requiring new approaches to risk management.

The review highlights a combination of pressures, including cyber threats, supply chain disruptions, climate-related risks and the potential for physical sabotage. It also points to challenges linked to “malicious insiders”, geostrategic shifts and declining public trust in institutions.

According to the report, Australia’s involvement in international policy discussions has, at times, exposed it to possible retaliation from foreign actors through activities ranging from grey zone operations to preparations for state-sponsored sabotage.

It further notes that the effects of overseas conflicts have influenced domestic sentiment and social cohesion, contributing to risks such as ideologically driven vandalism, politically motivated violence and lone-actor extremism.

To address these challenges, the government emphasises the need for adaptable risk management strategies that reflect shifting dependencies, short- and long-term supply chain issues and ongoing geopolitical tensions.

The report divides priority risks into two categories: those considered most plausible and those deemed most harmful. Among the most plausible are extreme-impact cyber incidents and geopolitically driven supply chain disruption.

The most damaging risks include disrupted fuel supplies, major cyber incidents and state-sponsored sabotage. The review notes that because critical sectors are increasingly interdependent, disruption in one area could have cascading impacts on others.

Australia currently imports 61 percent of its fuel from the Middle East, with shipments transiting maritime routes that are vulnerable to regional tensions. Many global shipping routes also pass through the Taiwan Strait, where conflict would significantly affect supply chains.

Home Affairs Minister Tony Burke said the review aims to increase understanding of the risks facing Australia’s essential services and inform efforts to enhance resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN treaty sparks debate over cybersecurity and privacy

A new UN cybercrime treaty opened for signature on 25 October, raising concerns about cybersecurity and privacy protections. The treaty allows broad cross-border cooperation on serious crimes, potentially requiring states to assist investigations that conflict with domestic laws.

Negotiations revealed disagreements over the treaty’s scope and human rights standards, primarily because it grants broad surveillance powers without clearly specifying safeguards for privacy and digital rights. Critics warn that these powers could be misused, putting cybersecurity and citizens’ rights at risk.

Governments supporting the treaty are advised to adopt safeguards, including limiting intrusive monitoring, conditioning cooperation on dual criminality, and reporting requests for assistance transparently. Even with these measures, experts caution that the treaty could pose challenges to global cybersecurity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber and energy leaders meet to harden EU power grid resilience

Europe’s 8th Cybersecurity Forum in Brussels brought together more than 200 officials and operators from energy, cybersecurity and technology to discuss how to protect the bloc’s increasingly digital, decentralised grids. ENISA said strengthening energy infrastructure security is urgent as geopolitics and digitalisation raise risk.

Discussions focused on turning new EU frameworks into real-world protection: the Cyber Resilience Act setting security requirements for products with digital elements, the NIS2 Directive updating obligations, including board-level responsibility, across critical sectors, and the Network Code on Cybersecurity setting common rules for cross-border electricity flows. Speakers pressed for faster implementation, better public-private cooperation and stronger supply-chain security.

Case studies highlighted live threats. Ukraine’s National Cybersecurity Coordination Center warned of the growing threat of hybrid warfare, citing repeated Russian cyberattacks on its power grid dating back to 2015. ENCS demonstrated how insecure consumer-energy devices like EV chargers, PV inverters, and home batteries can be easily exploited when security-by-design measures are absent.

Organisers closed with a call to standardise best practice, improve information sharing and coordinate operators, regulators and suppliers. As DG Energy’s Michaela Kollau noted, the resilience of Europe’s grids depends on a shared commitment to implementing current legislation and sector cybersecurity measures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India deploys AI to modernise its military operations

In a move reflecting its growing strategic ambitions, India is rapidly implementing AI across its defence forces. The country’s military has moved from policy to practice, using tools ranging from real-time sensor fusion to predictive maintenance to transform how it fights.

The shift has involved institutional change. India’s Defence AI Council and Defence AI Project Agency (established 2019) are steering an ecosystem that includes labs such as the Centre for Artificial Intelligence & Robotics of the Defence Research and Development Organisation (DRDO).

One recent example is Operation Sindoor (May 2025), a cross-border operation in which AI-driven platforms appeared in roles ranging from intelligence analysis to operational coordination.

This effort signals more than just a technological upgrade. It underscores a shift in warfare logic, where systems of systems, connectivity and rapid decision-making matter more than sheer numbers.

India’s incorporation of AI into capabilities such as drone swarming, combat simulation and logistics optimisation aligns with broader trends in defence innovation and digital diplomacy. The country’s strategy now places AI at the heart of its procurement demands and force design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lawmakers urge EU to curb Huawei’s role in solar inverters over security risks

Lawmakers and security officials are increasingly worried that Huawei’s dominant role in solar inverters could create a new supply-chain vulnerability for Europe’s power grids. Two MEPs have written to the European Commission urging immediate steps to limit ‘high-risk’ vendors in energy systems.

Inverters are a technology that transforms solar energy into the electrical current fed into the power network; many are internet-connected so vendors can perform remote maintenance. Cyber experts warn that remote access to large numbers of inverters could be abused to shut devices down or change settings en masse, creating surges, drops or wider instability across the grid.

Chinese firms, led by Huawei and Sungrow, supply a large share of Europe’s installed inverter capacity. SolarPower Europe estimates Chinese companies account for roughly 65 per cent of the market. Some member states are already acting: Lithuania has restricted remote access to sizeable Chinese installations, while agencies in the Czech Republic and Germany have flagged specific Huawei components for further scrutiny.

The European Commission is preparing an ICT supply-chain toolbox to de-risk critical sectors, with solar inverters listed among priority areas. Suspicion of Chinese technology has surged in recent years. Beijing, under President Xi Jinping, requires domestic firms to comply with government requests for data sharing and to report software vulnerabilities, raising Western fears of potential surveillance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands and China in talks to resolve Nexperia dispute

The Dutch Economy Minister has spoken with his Chinese counterpart to ease tensions following the Netherlands’ recent seizure of Nexperia, a major Dutch semiconductor firm.

China, where most of Nexperia’s chips are produced and sold, reacted by blocking exports, creating concern among European carmakers reliant on Nexperia’s components.

Vincent Karremans said he had discussed ‘further steps towards reaching a solution’ with Chinese Minister of Commerce Wang Wentao.

Both sides emphasised the importance of finding an outcome that benefits Nexperia, as well as the Chinese and European economies.

Meanwhile, Nexperia’s China division has begun asserting its independence, telling employees they may reject ‘external instructions’.

The firm remains a subsidiary of Shanghai-listed Wingtech, which has faced growing scrutiny from European regulators over national security and strategic technology supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!