Article 36 reviews and emerging technologies: The case for autonomy in weapon systems

12 Apr 2018 02:00h

Event report

On 10 April 2018, the Stockholm International Peace Research Institute (SIPRI) held a side event at the Palais des Nations in Geneva, centered on legal review processes for autonomous weapons. Paramount to the discussion was Article 36 of Additional Protocol I (1977) to the Geneva Conventions, which requires states to determine, when studying, developing, acquiring, or adopting a new weapon, whether its employment would be prohibited by international law.

The session explored possible best practices for an autonomous weapons review process, incorporating both legal and technical perspectives. Moderator Dr Vincent Boulanin, a SIPRI researcher focused on emerging military technologies, opened the discussion by stressing the importance of connecting legal boundaries to technological limitations as states develop new arms.

Boulanin handed the floor to Ms Netta Goussac, legal adviser to the International Committee of the Red Cross, who praised the requirements of Article 36 but noted that the provision offers little guidance on how a review should be conducted. The success of the policy, said Goussac, hinges on a national review protocol against which a state can evaluate a new weapon, yet few states have ever developed one. She urged nations to begin a weapon review early in the technology’s development process, paying special attention to the context(s) in which the weapon will be used and the empirical data concerning its potential impact. Goussac added that an ideal protocol would also ensure the weapon’s compliance with the arms treaties the state has previously ratified.

Mr Richard Moyes, founder and managing director of the UK-based NGO Article 36, offered an amendment to Goussac’s prescriptions. Central to Moyes’ analysis was ‘human control,’ which he defined as the degree of direct human involvement in a weapon’s behaviour. In legally reviewing autonomous arms, militaries must ensure that multiple avenues of human control exist, Moyes argued, so that accountability for the technology’s behaviour can be clearly assigned. He acknowledged that human control over an autonomous weapon requires accurate information about its timeframe, environment, and objectives, information that is not always available.

Dr Martin Hagström, deputy research director at the Swedish Defence Research Agency (FOI), added a technical perspective to the conversation. Currently charged with managing the Swedish Armed Forces’ weapons research programme, Hagström opened by dismissing engineering ‘buzzwords’ such as ‘AI,’ ‘autonomy,’ and ‘automation’ as meaningless from an engineer’s perspective. AI, he asserted, is simply ‘a solution to problems we don’t know how to solve yet.’ Still, he acknowledged the concept’s growing importance in the weapons space, while carefully noting its limitations. Critical to autonomy’s success, stated Hagström, is a process of data collection and executive action that works only within the technology’s ‘design space,’ the set of conditions for which it was engineered. Remove the technology from its intended context, or design space, and its behaviour soon becomes unpredictable, he added.

Much of the surrounding conversation cited the importance of predictability and thorough testing to a proper Article 36 review. Hagström asserted that perfect predictability is impossible in the case of autonomous weapons. Even so, he stressed the importance of extensive testing and operator competency to a legal review process. Though testing for every design space is unrealistic, a nation should at least understand an autonomous weapon’s potential behaviour and ensure that the humans controlling it are trained and competent, Hagström added.

The session concluded with audience Q&A, which centered on the impossibility of ‘in situ’ legal reviews and the role of human-developed algorithms in human control.