Anthropic’s Pentagon dispute and military AI governance in 2026

Military AI is moving from theory to practice, and Anthropic’s Pentagon dispute shows how contested its limits have become.

Pentagon and Anthropic logos separated by a bright burst, representing a clash over military AI and surveillance limits.

On 28 February 2026, Anthropic’s Claude rose to No. 1 in Apple’s US App Store free rankings, overtaking OpenAI’s ChatGPT. The surge came shortly after OpenAI announced a partnership with the US Department of Defense (DoD), making its technology available to the US Army. The development prompted discussion among users and observers about whether concerns over military partnerships were influencing the shift to alternative AI tools.

Mere hours before OpenAI’s US$200 million deal with the DoD was finalised, Anthropic was informed that its own potential agreement with the Pentagon had fallen through. According to reporting, discussions broke down after Anthropic declined to grant the US government unrestricted control over its models, particularly for potential uses related to large-scale domestic surveillance.

Following the breakdown of negotiations, US officials reportedly designated Anthropic as a ‘supply chain risk to national security’. The decision effectively limited the company’s participation in certain defence-related projects and highlighted growing tensions between AI developers’ safety policies and government expectations regarding national security technologies.

The debate over military partnerships sparked internal and industry-wide discussion. Caitlin Kalinowski, formerly head of AR glasses hardware at Meta and head of hardware at OpenAI, resigned soon after the US DoD deal, citing ethical concerns about the company’s involvement in military AI applications.

AI has driven recent technological innovation, with companies like Anduril and Palantir collaborating with the US DoD to deploy AI on and off the battlefield. The debate over AI’s role in military operations, surveillance, and security has intensified, especially as Middle East conflicts highlight its potential uses and risks.

Against this backdrop, the dispute between Anthropic and the Pentagon reflects a wider debate on how AI should be used in security and defence. Governments are increasingly relying on private tech companies to develop the systems that shape modern military capabilities, while those same companies are trying to set limits on how their technologies can be used.

As AI becomes more deeply integrated into security strategies around the world, the challenge may no longer be whether the technology will be used, but how it should be governed. The question is: who should ultimately decide where the limits of military AI lie?

Anthropic’s approach to military AI

Anthropic’s approach is closely tied to its concept of ‘constitutional AI’, a training method that guides how the model behaves by embedding a set of principles directly into its responses. These principles are meant to reduce harmful outputs and steer the system away from unsafe or unethical uses. While such safeguards are intended to improve reliability and trust, they can also limit how the technology can be deployed in more sensitive contexts such as military operations.
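To make the idea more concrete, the sketch below illustrates the general shape of a constitutional-AI-style critique-and-revise loop: a draft response is checked against a list of written principles and revised when it appears to conflict with one. The constitution text, function names (generate, critique, revise, constitutional_pass), and matching logic are hypothetical placeholders for illustration only, not Anthropic’s actual method, which applies this kind of principle-guided feedback during model training rather than at response time.

```python
# Illustrative sketch of a constitutional-AI-style critique-and-revise loop.
# The principles, model stubs, and matching logic are hypothetical placeholders,
# not Anthropic's actual implementation or training pipeline.

CONSTITUTION = [
    "Avoid assisting with mass surveillance of individuals.",
    "Refuse requests that facilitate unlawful harm.",
    "Be genuinely helpful within those limits.",
]


def generate(prompt: str) -> str:
    """Stand-in for a language model call; returns a draft response."""
    return f"Draft answer to: {prompt}"


def critique(response: str, principle: str) -> str | None:
    """Stand-in for a self-critique step: returns an objection if the
    response appears to conflict with the principle, otherwise None."""
    if "surveillance" in response.lower() and "surveillance" in principle.lower():
        return f"Response may conflict with principle: {principle}"
    return None


def revise(response: str, objection: str) -> str:
    """Stand-in for a revision step guided by the critique."""
    return f"[Revised in light of: {objection}] {response}"


def constitutional_pass(prompt: str) -> str:
    """Generate a response, then critique and revise it against each principle."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        objection = critique(response, principle)
        if objection:
            response = revise(response, objection)
    return response


if __name__ == "__main__":
    print(constitutional_pass("Summarise public reporting on surveillance policy."))
```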

Anthropic’s Constitution says its AI assistant should be ‘genuinely helpful’ to people and society, while avoiding unsafe, unethical, or deceptive actions. The document reflects the company’s broader effort to build safeguards into model deployment. In practice, Anthropic has set limits on certain applications of its technology, including uses related to large-scale surveillance or military operations.

Anthropic presents these safeguards as proof of its commitment to responsible AI. Reports indicate that concerns over unrestricted model access led to the breakdown in talks with the US DoD.

At the same time, Anthropic clarifies that its concerns are specific to certain uses of its technology. The company does not generally oppose cooperation with national security institutions. In a statement following the Pentagon’s designation of the company as a ‘supply chain risk to national security’, CEO Dario Amodei said, ‘Anthropic has much more in common with the US DoD than we have differences.’ He added that the company remains committed to ‘advancing US national security and defending the American people.’

The episode, therefore, highlights a nuanced position. Anthropic appears open to defence partnerships but seeks to maintain clearer limits on the deployment of its AI systems. The disagreement with the Pentagon ultimately reflects not a fundamental difference in goals, but rather different views on how far military institutions should be able to control and use advanced AI technologies.

Anthropic’s position illustrates a broader challenge facing governments and tech companies as AI becomes increasingly integrated into national security systems. While military and security institutions are eager to deploy advanced AI tools to support intelligence analysis, logistics, and operational planning, the companies developing these technologies are also seeking to establish safeguards for their use. Anthropic’s willingness to step back from a major defence partnership and challenge the Pentagon’s response underscores how some AI developers are trying to set limits on military uses of their systems.

Defence partnerships that shape the AI industry

While Anthropic has taken a cautious approach to military deployment of AI, other technology companies have pursued closer partnerships with defence institutions. One notable example is Palantir, the US data analytics firm co-founded by Peter Thiel that has longstanding relationships with numerous government agencies. Documents leaked in 2013 suggested that the company had contracts with at least 12 US government bodies. More recently, Palantir has expanded its defence offering through its Artificial Intelligence Platform (AIP), designed to support intelligence analysis and operational decision-making for military and security institutions.

Another prominent player is Anduril Industries, a US defence technology company focused on developing AI-enabled defence systems. The firm produces autonomous and semi-autonomous technologies, including unmanned aerial systems and surveillance platforms, which it supplies to the US DoD.

Shield AI, meanwhile, is developing autonomous flight software designed to operate in environments where GPS and communications may be unavailable. Its Hivemind AI platform powers drones that can navigate buildings and complex environments without human control. The company has worked with the US military to test these systems in training exercises and operational scenarios, including aircraft autonomy projects aimed at supporting fighter pilots.

These partnerships illustrate how the US government has increasingly embraced AI as a key pillar of national defence and future military operations. In many cases, these technologies are already being used in operational contexts. Palantir’s Gotham and AIP, for instance, have supported US military and intelligence operations by processing satellite imagery, drone footage, and intercepted communications to help analysts identify patterns and potential threats.

Other companies are contributing to defence capabilities through autonomous systems development and hardware integration. Anduril supplies the US DoD with AI-enabled surveillance, drone, and counter-air systems designed to detect and respond to potential threats. At the same time, OpenAI’s technology is increasingly being integrated into national security and defence projects through growing collaboration with US defence institutions.

Such developments show that AI is no longer a supporting tool but a fundamental part of military infrastructure, influencing how defence organisations process information and make decisions. As governments deepen their reliance on private-sector AI, the emerging interplay among innovation, operational effectiveness, and oversight will define the central debate on military AI adoption.

The potential benefits of military AI

The debate over Anthropic’s restrictions on military AI use highlights the reasons governments invest in such technologies: defence institutions are drawn to AI because it processes vast amounts of information much faster than human analysts. Military operations generate massive data streams from satellites, drones, sensors, and communication networks, and AI systems can analyse them in near real time.

In 2017, the US DoD launched Project Maven to apply machine learning to drone and satellite imagery, enabling analysts to identify objects, movements, and potential threats on the battlefield faster than with traditional manual methods.

AI is increasingly used in military logistics and operational planning. It helps commanders anticipate equipment failures, enables predictive maintenance, optimises supply chains, and improves field asset readiness.

Recent conflicts have shown that AI-driven tools can enhance military intelligence and planning. In Ukraine, for example, forces reportedly used software to analyse satellite imagery, drone footage, and battlefield data. Key benefits include more efficient target identification, real-time tracking of troop movements, and clearer battlefield awareness through the integration of multiple data sources.

AI-assisted analysis has been used in intelligence and targeting during the Gaza conflict. Israeli defence systems use AI tools to rapidly process large datasets for surveillance and intelligence operations. The tools help analysts identify potential militant infrastructure, track movements, and prioritise key intelligence, thus speeding up information processing for teams during periods of high operational activity.

More broadly, AI is transforming the way militaries coordinate across land, air, sea, and cyber domains. AI integrates data from diverse sources, equipping commanders to interpret complex operational situations and enabling faster, informed decision-making. The advances reinforce why many governments see AI as essential for future defence planning.

Ethical concerns and Anthropic’s limits on military AI

Despite the operational advantages of military AI, its growing role in national defence systems has raised ethical concerns. Critics warn that overreliance on AI for intelligence analysis, targeting, or operational planning could introduce risks if the systems produce inaccurate outputs or are deployed without sufficient human oversight. Even highly capable models can generate misleading or incomplete information, which in high-stakes military contexts could have serious consequences.

Concerns about the reliability of AI systems are also linked to the quality of the data they learn from. Many models still struggle to distinguish authentic information from synthetic or manipulated content online. As generative AI becomes more widespread, the risk that systems may absorb inaccurate or fabricated data increases, potentially affecting how these tools interpret intelligence or analyse complex operational environments.

Questions about autonomy have also become a major issue in discussions around military AI. As AI systems become increasingly capable of analysing battlefield data and identifying potential targets, debates have emerged over how much decision-making authority they should be given. Many experts argue that decisions involving the use of lethal force should remain under meaningful human control to prevent unintended consequences or misidentification of targets.

Another area of concern relates to the potential expansion of surveillance capabilities. AI systems can analyse satellite imagery, communications data, and online activity at a scale beyond the capacity of human analysts alone. While such tools may help intelligence agencies detect threats more efficiently, critics warn that they could also enable large-scale monitoring if deployed without clear legal and institutional safeguards.

It is within this ethical landscape that Anthropic has attempted to position itself as a more cautious actor in the AI industry. Through initiatives such as Claude’s Constitution and its broader emphasis on AI safety, the company argues that powerful AI systems should include safeguards that limit harmful or unethical uses. Anthropic’s reported refusal to grant the Pentagon unrestricted control over its models during negotiations reflects this approach.

The disagreement between Anthropic and the US DoD therefore highlights a broader tension in the development of military AI. Governments increasingly view AI as a strategic technology capable of strengthening defence and intelligence capabilities, while some developers seek to impose limits on how their systems are deployed. As AI becomes more deeply embedded in national security strategies, the question may no longer be whether these technologies will be used, but who should define the boundaries of their use.

Military AI and the limits of corporate control

Anthropic’s dispute with the Pentagon shows that the debate over military AI is no longer only about technological capability. Questions of speed, efficiency, and battlefield advantage now collide with concerns over surveillance, autonomy, human oversight, and corporate responsibility. Governments increasingly see AI as a strategic asset, while companies such as Anthropic are trying to draw boundaries around how far their systems can go once they enter defence environments.

Contrasting approaches across the industry make the tension even clearer. Palantir, Anduril, Shield AI, and OpenAI have moved closer to defence partnerships, reflecting a broader push to integrate advanced AI into military infrastructure. Anthropic, by comparison, has tried to keep one foot in national security cooperation while resisting uses it views as unsafe or unethical. A divide of that kind suggests that the future of military AI may be shaped as much by company policies as by government strategy.

The growing reliance on private firms to build national security technologies has made governance harder to define. Military institutions want flexibility, scale, and operational control, while AI developers increasingly face pressure to decide whether they are simply suppliers or active gatekeepers of how their models are deployed. Anthropic’s position does not rule out defence cooperation, but it does expose how fragile the relationship becomes when state priorities and corporate safeguards no longer align.

Military AI will continue to expand, whether through intelligence analysis, logistics, surveillance, or autonomous systems. Governance, however, remains the unresolved issue at the centre of that expansion. As AI becomes more deeply embedded in defence policy and military planning, should governments alone decide how far these systems can go, or should companies like Anthropic retain the power to set limits on their use?