OpenAI alters usage policy, removes explicit ban on military use

OpenAI has quietly updated its usage policy, removing explicit bans on military applications such as ‘weapons development’ and ‘military and warfare.’ While OpenAI spokesperson Niko Felix highlighted the universal principle of avoiding harm, concerns have been raised about the ambiguity of the new policy regarding military use.


OpenAI has revised its usage policy, removing the explicit prohibition on military applications of its technology, as revealed in a recent update to its policy page. The previous ban on ‘weapons development’ and ‘military and warfare’ applications has been replaced with a broader injunction not to ‘use our service to harm yourself or others.’ The change is part of a significant rewrite aimed at making the document ‘clearer’ and ‘more readable,’ according to OpenAI.

While OpenAI spokesperson Niko Felix emphasised the universal principle of not causing harm, concerns have been raised about the vagueness of the new policy on military use. The revised language prioritises legality over safety, raising questions about how OpenAI plans to enforce the updated policy.

Heidy Khlaaf, engineering director at Trail of Bits, noted that shifting from an explicit ban on military applications to a more flexible approach emphasising compliance with the law may have implications for AI safety. Although OpenAI's tools lack direct lethal capabilities, their use in military contexts could contribute to imprecise and biased operations, increasing harm and civilian casualties.

The policy changes have prompted speculation about OpenAI’s willingness to engage with military entities. Critics argue that the company is silently weakening its stance against doing business with militaries, emphasising the need for clarity in OpenAI’s approach to enforcement.

Why does it matter?

Experts, including Lucy Suchman and Sarah Myers West, point to OpenAI's close partnership with Microsoft, a major defence contractor, as a factor that might influence the company's evolving policy. Microsoft has invested $13 billion in the large language model (LLM) maker, which adds complexity to the discussion, particularly as militaries worldwide seek to integrate machine learning techniques into their operations.

The changes to OpenAI's policy come at a time when various military entities, including the Pentagon, have expressed interest in adopting large language models like ChatGPT for purposes ranging from paperwork processing to data analysis. The shift in language and the removal of explicit bans raise questions about the potential implications of OpenAI's tools in military applications and the ethical considerations surrounding their use in the defence sector.