Defining lethal autonomous weapons systems (LAWS): Campaign to Stop Killer Robots
CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems – First 2018 Meeting
9 Apr 2018 11:00h - 14 Apr 2018 01:30h
On 9 April 2018, the session ‘Defining Lethal Autonomous Weapons Systems – Campaign to Stop Killer Robots’ took place at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland. Ms Miriam Struk, acting director of programmes, PAX, moderated the discussion between the panellists. The session highlighted the global ambiguity surrounding cybersecurity, cyber war, and autonomous weapons, and questions of undefined moral boundaries dominated the conversation.
Ms Allison Pytlak, programme manager, Reaching Critical Will, WILPF, began by addressing the challenge of standardising definitions for cyber-related terminology. She noted that the term ‘cyber-attack’ has more than 15 working definitions in use across 10 different countries, NATO, and the EastWest Institute. Pytlak argued that stakeholders in this field talk past one another instead of trying to reach common ground, and that such terminology can sensationalise issues. There is therefore an urgent need to synchronise definitions across disciplines in order to move these discussions forward.
Commenting on moral responsibility, Prof. Peter Asaro, vice chair, International Committee for Robot Arms Control, questioned who would be held legally accountable for the actions of autonomous weapons. He warned against ‘anthropomorphising AI’, that is, assigning it human characteristics. Asaro emphasised that AI is advanced computation relying on a wealth of data, and that morality cannot be automated. He went on to argue that taking human life with an automated system undermines human dignity. Referencing the Martens Clause, he remarked that in the absence of law, morality still applies. Echoing Pytlak’s main point, Asaro underlined the need to codify norms for autonomous weaponry, creating requirements with which parties must conform.
Mr Richard Moyes, managing director, Article 36, discussed ‘meaningful human control’. He stated that lethal autonomous weapons (LAWs) lack a clear categorisation, blurring the boundary between acceptable and unacceptable systems. Comparing two framings, he classified LAWs as either: (i) a general category containing some unacceptable weapons, or (ii) an unacceptable subcategory within a wider category of autonomous weapons systems. He remarked that the word ‘lethal’ in LAWs is not ‘fundamentally important in comparison to how targets are chosen and engaged with’. Moyes stressed the role of human control in the operation of autonomous weapons, and he believes it is unethical for the decision to kill to be executed by a ‘killer robot’.
Mr Johan H Andersen, chair, Norwegian Council on Ethics, shared an investor’s point of view. The organisation advises on whether investments in financial instruments align with the ethical guidelines of the Norwegian Government Pension Fund Global. For example, companies in its portfolio cannot produce weapons that violate fundamental humanitarian principles through their normal use. Andersen explained that assessing companies that produce autonomous weapons is uncharted territory. To mitigate risk and evaluate ethicality, the organisation asks questions such as: Will the system be able to differentiate between combatants and civilians? Can it detect injury? Can it weigh the interest of protecting civilians against military necessity? Who is responsible for infringing the law? Andersen suggested that the investment community may become de facto policy makers before the CCW reaches a policy consensus.
The session ended with a Q&A focused on Andersen’s position as an investor and how it influences the direction of autonomous weaponry. The speakers reiterated the need to define the boundaries of LAWs, and the panellists expressed hope for progress during the remaining sessions at the conference.