Anthropic challenges Pentagon over military AI use
Ethical concerns over civilian safety and privacy are complicating the Pentagon’s use of AI in defence operations.
Pentagon officials are at odds with AI developer Anthropic over usage restrictions designed to prevent the company’s models from being used for autonomous weapons targeting and domestic surveillance. The disagreement has stalled talks under a $200 million contract.
Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.
The dispute reflects broader tensions between Silicon Valley firms and the government over military use of AI. Pentagon officials argue that commercial AI can be deployed as long as it complies with US law, regardless of corporate usage guidelines.
Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.
