The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. The next evolution, however, is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capabilities is blurring the boundary between human roles and machine learning, redefining what it means to fight and to feel in war.

Today’s ‘AI soldiers’ are more than just enhanced. They are networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without the basic principles and mechanisms of accountability in warfare, states risk the very foundation of rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.  

AI does not experience fear, hesitation, or empathy: the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, becomes just another setting in the AI control panel.

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, a shift that promises efficiency in war but comes at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, and defence firms could be compelled to maintain high standards of ethical transparency.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, humanity will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global AI adoption rises quickly but benefits remain unequal

Microsoft’s AI Economy Institute has released its 2025 AI Diffusion Report, detailing global AI adoption, innovation hubs, and the impact of digital infrastructure. AI has reached over 1.2 billion users in under three years, yet its benefits remain unevenly distributed.

Adoption rates in the Global North are roughly double those in the Global South, highlighting the risk of long-term inequalities.

AI adoption depends on strong foundational infrastructure, including electricity, data centres, internet connectivity, digital and AI skills, and language accessibility.

Countries with robust foundations, such as the UAE, Singapore, Norway, and Ireland, have seen rapid adoption, even without frontier-level model development. In contrast, regions with limited infrastructure and low-resource languages lag significantly, with adoption in some areas below 10%.

Ukraine exemplifies the potential for rapid AI growth, despite current disruptions from the war, with an adoption rate of 9.1%. Strategic investments in connectivity, AI skills, and language-inclusive solutions could accelerate recovery, strengthen resilience, and drive innovation.

AI is already supporting cybersecurity and helping businesses and organisations maintain operations amid ongoing challenges.

The concentration of AI infrastructure remains high, with the US and China hosting 86% of the global data centre capacity. A few countries dominate frontier AI development, yet the performance gap between leading models is narrowing.

Coordinated efforts across infrastructure, skills, and policy are crucial to ensure equitable access and maximise AI’s potential worldwide.


MK1 joins AMD to accelerate enterprise AI and reasoning technologies

AMD has completed the acquisition of MK1, a California-based company specialising in high-speed inference and reasoning-based AI technologies.

The move marks a significant step in AMD’s strategy to strengthen AI performance and efficiency across hardware and software layers. MK1’s Flywheel and comprehension engines are designed to optimise AMD’s Instinct GPUs, offering scalable, accurate, and cost-efficient AI reasoning.

The MK1 team will join the AMD Artificial Intelligence Group, where their expertise will advance AMD’s enterprise AI software stack and inference capabilities.

Handling over one trillion tokens daily, MK1’s systems are already deployed at scale, providing traceable and efficient AI solutions for complex business processes.

By combining MK1’s advanced AI software innovation with AMD’s compute power, the acquisition enhances AMD’s position in the enterprise and generative AI markets, supporting its goal of delivering accessible, high-performance AI solutions globally.


UK strengthens AI safeguards to protect children online

The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.

Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.

Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.

The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.

Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.

By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.


Joint quantum partnership unites Canada and Denmark for global research leadership

Canada and Denmark have signed a joint statement to deepen collaboration in quantum research and innovation.

The agreement, announced at the European Quantum Technologies Conference 2025 in Copenhagen, reflects both countries’ commitment to advancing quantum science responsibly while promoting shared values of openness, ethics and excellence.

Under the partnership, the two nations will enhance research and development ties, encourage open data sharing, and cultivate a skilled talent pipeline. They also aim to boost global competitiveness in quantum technologies, fostering new opportunities for market expansion and secure supply chains.

Canadian Minister Mélanie Joly highlighted that the cooperation showcases a shared ambition to accelerate progress in health care, clean energy and defence.

Denmark’s Minister for Higher Education and Science, Christina Egelund, described Canada as a vital partner in scientific innovation. At the same time, Minister Evan Solomon stressed the agreement’s role in empowering researchers to deliver breakthroughs that shape the future of quantum technologies.

Both Canada and Denmark are recognised as global leaders in quantum science, working together through initiatives such as the NATO Transatlantic Quantum Community.

The partnership also supports Canada’s National Quantum Strategy, launched in 2023, and reinforces the two countries’ shared goal of driving innovation for sustainable growth and collective security.


IMY investigates major ransomware attack on Swedish IT supplier

Sweden’s data protection authority, IMY, has opened an investigation into a massive ransomware-related data breach that exposed personal information belonging to 1.5 million people. The breach originated from a cyberattack on IT provider Miljödata in August, which affected roughly 200 municipalities.

Hackers reportedly stole highly sensitive data, including names, medical certificates, and rehabilitation records, much of which has since been leaked on the dark web. Swedish officials have condemned the incident, calling it one of the country’s most serious cyberattacks in recent years.

The IMY said the investigation will examine Miljödata’s data protection measures and the response of several affected public bodies, including Gothenburg, Älmhult, and Västmanland. The regulator aims to identify security shortcomings that could be exploited in future cyberattacks.

Authorities have yet to confirm how the attackers gained access to Miljödata’s systems, and no completion date for the investigation has been announced. The breach has reignited calls for tighter cybersecurity standards across Sweden’s public sector.


Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.


Data infrastructure growth in India raises environmental concerns

India’s data centre market is expanding rapidly, driven by AI adoption, mobile internet growth, and massive foreign investment from firms such as Google, Amazon and Meta. The sector is projected to expand 77% by 2027, with billions more expected to be spent on capacity by 2030.

Rapid expansion of energy-hungry and water-intensive facilities is creating serious sustainability challenges, particularly in water-scarce urban clusters like Mumbai, Hyderabad and Bengaluru. Experts warn that by 2030, India’s data centre water consumption could reach 358 billion litres, risking shortages for local communities and critical services.

Authorities and industry players are exploring solutions including treated wastewater, low-stress basin selection, and zero-water cooling technologies to mitigate environmental impact. Officials also highlight the need to mandate renewable energy use to balance India’s digital ambitions with decarbonisation goals.


Central Bank warns of new financial scams in Ireland

The Central Bank of Ireland has launched a new campaign to alert consumers to increasingly sophisticated scams targeting financial services users. Officials warned that scammers are adapting, making caution essential with online offers and investments.

Scammers are now using tactics such as fake comparison websites that appear legitimate but collect personal information for fraudulent products or services. Fraud recovery schemes are also common, promising to recover lost funds for an upfront fee, which often leads to further financial loss.

Advanced techniques include AI-generated social media profiles and ads, or ‘deepfakes’, impersonating public figures to promote fake investment platforms.

Deputy Governor Colm Kincaid warned that scams now offer slightly above-market returns, making them harder to spot. Consumers are encouraged to verify information, use regulated service providers, and seek regulated advice before making financial decisions.

The Central Bank advises using trusted comparison sites, checking ads and investment platforms, ignoring unsolicited recovery offers, and following the SAFE test: Stop, Assess, Factcheck, Expose. Reporting suspected scams to the Central Bank or An Garda Síochána remains crucial to protecting personal finances.


Inside the rise and fall of a cybercrime kingpin

Ukrainian hacker Vyacheslav Penchukov, once known online as ‘Tank’, climbed from gaming forums in Donetsk to the top of the global cybercrime scene. As leader of the notorious Jabber Zeus and later Evil Corp affiliates, he helped steal tens of millions from banks, charities and businesses around the world while remaining on the FBI Most Wanted list for nearly a decade.

After years on the run, he was dramatically arrested in Switzerland in 2022 and is now serving time in a Colorado prison. In a rare interview, Penchukov revealed how cybercrime evolved from simple bank theft to organised ransomware targeting hospitals and major corporations. He admits paranoia became his constant companion, as betrayal within hacker circles led to his downfall.

Today, the former cyber kingpin spends his sentence studying languages and reflecting on the empire he built and lost. While he shows little remorse for his victims, his story offers a rare glimpse into the hidden networks that fuel global hacking and the blurred line between ambition and destruction.
