Generative AI accelerates US defence strategies
The Pentagon is adopting AI technologies to improve decision-making speed without bypassing ethical and human oversight, a senior US official has revealed.

The Pentagon is leveraging generative AI to accelerate critical defence operations, particularly the ‘kill chain’, a process of identifying, tracking, and neutralising threats. According to Dr Radha Plumb, the Pentagon’s Chief Digital and AI Officer, AI’s current role is limited to aiding planning and strategising phases, ensuring commanders can respond swiftly while maintaining human oversight over life-and-death decisions.
Major AI firms like OpenAI and Anthropic have softened their policies to collaborate with defence agencies, but only under strict ethical boundaries. These partnerships aim to balance innovation with responsibility, ensuring AI systems are not used to cause harm directly. Tech giants including Meta, Anthropic, and Cohere are also working with defence contractors, providing tools that optimise operational planning without breaching ethical standards.
Dr Plumb emphasised that the Pentagon's AI systems operate as part of human-machine collaboration, countering fears of fully autonomous weapons. Despite debates over AI's role in defence, officials argue that working with the technology is vital to ensuring its ethical application. Critics, however, continue to question the transparency and long-term implications of such alliances.
As AI becomes central to defence strategies, the Pentagon’s commitment to integrating ethical safeguards highlights the delicate balance between technological advancement and human control.