Pentagon deploys AI for target identification in airstrikes

According to the Chief Technology Officer of US Central Command, the Pentagon uses computer vision to identify potential threats by analysing visual data.


Recent reports from Bloomberg reveal that the Pentagon has begun deploying computer vision algorithms to identify targets for airstrikes. This development marks a significant step in integrating AI technologies into the battlefield: more than 85 airstrikes in the Middle East have already been executed with the aid of these algorithms.

The airstrikes, conducted across various regions of Iraq and Syria on 2 February 2024, destroyed rockets, missiles, drone storage facilities, and militia operations centres, among other targets, according to Bloomberg's coverage. This coordinated response was triggered by the January drone attack in Jordan that killed three US service members, which the US government has attributed to Iranian-backed operatives.

Schuyler Moore, the chief technology officer for US Central Command, outlined the Pentagon’s use of AI, stating, ‘We’ve been using computer vision to identify where there might be threats.’ Computer vision entails training algorithms to visually recognise specific objects, a crucial aspect of modern military tactics.
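To illustrate the underlying idea of visually recognising a specific object, the toy sketch below uses simple template matching with NumPy: it slides a known pixel pattern over an image and flags positions where the pattern correlates strongly. This is only a hypothetical teaching example; real military systems such as those described here rely on trained deep neural networks, not hand-rolled template matching.

```python
import numpy as np

def find_template(image, template, threshold=0.95):
    """Return (row, col) positions where the template pattern appears."""
    ih, iw = image.shape
    th, tw = template.shape
    matches = []
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            if patch.std() == 0:  # a flat patch cannot match a varied pattern
                continue
            # Normalised correlation: 1.0 means a perfect match.
            score = np.corrcoef(patch.ravel(), template.ravel())[0, 1]
            if score >= threshold:
                matches.append((r, c))
    return matches

# A 6x6 "image" with a 2x2 diagonal pattern placed at row 3, column 1.
image = np.zeros((6, 6))
template = np.array([[1.0, 0.0], [0.0, 1.0]])
image[3:5, 1:3] = template
print(find_template(image, template))  # → [(3, 1)]
```

In practice, trained models replace the fixed template with learned features, which lets them recognise objects despite changes in lighting, angle, and scale.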

The algorithms employed in the recent bombings were developed under Project Maven, an initiative launched in 2017 to expand the adoption of automation within the Department of Defense (DoD).

Why does it matter?

The trend of employing AI for target acquisition appears to be gaining momentum, albeit raising concerns. Reports from late last year revealed Israel's use of AI software, dubbed 'The Gospel', to determine bombing locations in Gaza. This system aggregates vast datasets to recommend targets to human analysts, ranging from weapons and vehicles to live human beings. Israeli officials claim that the program can propose up to 200 targets within 10-12 days, emphasising that human oversight remains integral to the targeting process.

While the incorporation of AI in military operations promises enhanced efficiency and precision, ethical considerations surrounding its implications continue to emerge, urging a cautious approach to its implementation.