Ethical dilemmas: US Army examines the feasibility of AI advisers on the battlefield
The US Army Research Laboratory is experimenting with AI chatbots as advisers in war game simulations. The primary objective of the initiative is to assess how AI could contribute to planning and decision-making in a battlefield context.
Specifically, researchers are testing the chatbots in simulated scenarios where they function as assistants to military commanders. In this capacity, the chatbots propose courses of action based on information about the battlefield terrain, friendly and enemy forces, and mission objectives.
Although the chatbots have proven efficient at rapidly generating plans, they have struggled in the experiments, sustaining a higher incidence of casualties than their human counterparts. Experts therefore approach the application of this technology to real-world combat with caution, citing concerns such as automation bias and doubts about the technology’s readiness for high-stakes applications.
The US Department of Defense’s AI task force has identified numerous potential military applications for generative AI technologies, indicating a growing interest in incorporating AI into defense operations. Notably, the US military has publicly acknowledged the use of AI in target identification for airstrikes in the Middle East.
However, the promising results of these initiatives come with ethical and legal questions about deploying AI advisers in complex real-world conflicts. Some studies have also warned that AI models can rapidly escalate conflicts and even make decisions related to deploying nuclear weapons. These concerns underscore the need for careful evaluation and ethical safeguards when integrating AI technologies into military operations.
Why does it matter?
As the integration of AI into warfare becomes increasingly inevitable, its deployment on the battlefield raises ethical and moral dilemmas. There are numerous calls and initiatives to ban the use of AI in warfare; however, amid the escalating AI arms race among global powers, the pressure to use this technology to secure military superiority will only intensify.
UN Secretary-General Guterres is one of the most prominent voices calling for restrictions on, and the prohibition of, autonomous systems in war. Guterres has detailed the many concerns raised by fully autonomous weapons, also known as lethal autonomous weapons systems, and has offered to support states in elaborating new measures, such as ‘legally binding arrangements’, to ensure that ‘humans remain at all times in control over the use of force’. At the high-level opening of the UN General Assembly on 25 September 2024, Guterres described weapons that could select and attack a target on their own as ‘morally repugnant’ and called on states to address the ‘multiple alarms’.