CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems – First 2018 Meeting

9 Apr 2018 to 13 Apr 2018
Geneva, Switzerland

Event report/s:
Marco Lotti

The event ‘The Human Role in LAWS: How could control look like?’ took place on 9 April 2018 during the meeting of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) at the United Nations Office in Geneva (UNOG).

The session was opened by Mr Ian McLeod, who presented the third ‘Focus on’ report by the International Panel on Regulation of Autonomous Weapons (iPRAW). He explained that iPRAW is an independent scientific advisory panel, funded by the German Federal Foreign Office, with the goal of assisting the discussion on autonomous weapons from a researcher’s perspective. The group, initially formed in March 2017, includes robotics engineers, lawyers, former military personnel, and political scientists. The work of the panel focuses on the legal, ethical, and operational challenges posed by the use of LAWS. In particular, its work has considered the following aspects:

  • International humanitarian law

  • State of technology and operations

  • Computational systems within the system of LAWS

  • Autonomy and human control

  • Ethics, norms and public perception

  • LAWS risks and opportunities: recommendations to the GGE.

More specifically, the third report focuses on the human role in, and control over, the use of LAWS. He explained that the panel first considered the problematic definition of autonomy. Such a concept is usually discussed as a spectrum with respect to the specific functions that could be delegated to a machine. It follows that calling a system ‘autonomous’ is rather confusing, because we can only speak of autonomous functions and not of autonomous systems per se.

Mr Marcel Dickow considered how autonomy is actually embedded in weapons. He maintained that, from a technical point of view, a system does not present a continuum of autonomy, but rather offers the possibility of switching between two modes (i.e. fully automated or manual). He claimed that we cannot speak of ‘autonomy’ without also considering ‘control’. In particular, he distinguished two types of control:

  • Control by design: i.e. technical requirements built into the machine that enable human control (e.g. an interface)

  • Control in use: procedural requirements to enable human control

In order to maintain control of LAWS, the weapon system should support a two-step approach. First, the human must be able to understand the situation and its context, including the state of the weapon system as well as the environment (situational understanding). Second, the option to intervene appropriately and regain human control should be available at all times (intervention).

Considering all four of the aforementioned elements, Dickow illustrated a spectrum of options for human control, ranging from the highest level of human involvement (i.e. when frequent situational understanding and human action to initiate the attack are needed) to the lowest (i.e. when there is no immediate situational understanding and no immediate option for intervention).

Ms Anja Dahlmann focused on the scenario requiring the least human control. She defined this situation as ‘boxed autonomy’: autonomy based on precautionary decisions taken without immediate situational understanding. In this scenario, all safeguards and information gathering would have been completed in advance, and the machine would follow predefined failure modes if it detects problems by itself.

The session was closed by the German ambassador, Mr Michael Biontino, who shared a word of caution about such technology: computational mechanisms depend on the quality of data and can fail, which is why human judgement is a crucial component of the targeting cycle.

Stefania Pia Grottola

The event was organised by the Campaign to Stop Killer Robots, a coalition of non-governmental organisations that advocates for a ban on weapons systems that would select, target, and use force without ‘meaningful human control’. The discussion focused on human-machine interaction and how to retain human control of weapons systems and individual attacks in all circumstances. The event was moderated by Ms Rasha Abdul Rahim, from Amnesty International.

Mr Paul Scharre, from the Center for a New American Security, delivered a presentation on ‘The role of human judgment in war’. He argued that this is the fifth year of discussion of a topic that faces extremely rapid technological development. In 2014, technical experts did not think that machines would be good at targeting and recognising objects. Today, this is not just possible, but it is one of the main topics of discussion. To guide the discussion on the notion of human control when it comes to lethal autonomous weapons systems (LAWS), he proposed the following scenario: ‘If we had all the technology we could imagine, what role would we want humans to play in war? And why?’. Scharre also discussed the role of the principles of International Humanitarian Law (IHL) in trying to answer the question of which decisions would require human judgment in war. IHL treats humans, not machines, as the legal agents obligated to respect the law; it applies to people, while machines are not legal agents. Thus, autonomous weapons need to be regulated, and the discussion should be about the bounds and limits of such weapons.

Ms Bonnie Docherty, campaign representative from Human Rights Watch, highlighted the different approaches and proposals put forward by states in their working papers submitted in advance of the meeting of the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems. However, she underlined that a basic consensus exists: it is essential to maintain human control over the use of force. Despite the different terms used (human control, judgment, or intervention), they all frame the need for meaningful and appropriate decisions, especially in cases such as the distinction between combatants and civilians required by IHL principles. Retaining human control over weapons is a moral imperative. Meaningful human control ensures that legal accountability is possible regardless of the technology involved.

The second campaign representative was Prof. Noel Sharkey, from the International Committee for Robot Arms Control. He talked about the notion of ‘human supervisory control’ of weapons, proposing the following levels of targeting supervision:

  • Human engagement in the selection of targets
  • The program suggests alternatives and the human selects one of them
  • The program selects the target that the human has to approve before the attack
  • The program selects the target, and the human has a short time to veto it

Through a psychologically based approach, he argued that there are two types of processes: automatic and deliberative decisions. The deliberative process consumes time and resources, as deliberation is easily disrupted, whereas automatic reasoning requires few resources. In the case of LAWS, Sharkey analysed several scenarios of targeting supervision. Human engagement in the selection of targets is the ideal scenario. When a program suggests different target alternatives, one of which has to be chosen by the human, ‘automation bias’ comes into play: humans tend to believe that one of the alternatives must be right by default, so they will certainly choose one of the options. The case in which the human has to approve an attack selected by the program raises a different issue: the human does not search for contradictory information. When the human instead has only a restricted time to veto the targeting, the risk is that the human focuses on, and only accepts, the existing evidence.

After this presentation on human psychology and bias in decision-making, Sharkey ended his speech by outlining the ‘most drastic scenario of autonomy in the critical function of selecting a target’, in which there is no human control.

Lastly, it was noted that reframing autonomy in terms of human control can clarify the role of humans in war decisions, making the process more transparent and the accountability aspect clearer. 

Isabel Ashley

On 10 April 2018 at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland, the session ‘The Human-Machine Relationship: Lessons from Military Doctrine and Operations’ was moderated by Ms Merel Ekelhof, PhD Researcher, VU University Amsterdam. The session aimed to increase transparency about targeting practices in military operations, and it discussed issues related to ‘sufficient’ human control, building on existing doctrine and knowledge acquired from military operations. Panellists began by noting that the views expressed were their own and did not reflect their governments’ positions. Ekelhof continued by describing NATO’s six-phase deliberate targeting cycle. She explained that targeting is formalised in doctrine as guidance for military operations.

Lt Col Matthew King, chief, Air and Space Law, Headquarters Air Force Operations and International Law Directorate, discussed the legal considerations in the law of war. Military necessity, distinction, proportionality, and precautions in attack help to determine whether to use military action. King stated that the rules of engagement encompass two broad facets of target prosecution: validation and execution. This involves considering whether there is a valid target and appropriate authority; collateral damage is also estimated, and other feasible precautions are investigated.

Dr Larry Lewis, director, Center for Autonomy and AI, CNA, articulated the importance of assessing risk to civilians in using lethal autonomous weapon systems (LAWS). He emphasised the notion of ‘meaningful human control’ and noted that broader human involvement may be better than humans merely deciding on final engagement. He cited autonomous systems as a viable solution to reduce fratricide and civilian casualties. Claiming that humans are fallible, he shared two examples of failed missions that were the result of poor judgement and led to unwanted deaths. Lewis discussed the Patriot missile, a system that detects ballistic missiles and prompts the operator for approval of an automatic missile launch. In one mission, after the system incorrectly classified an oncoming aircraft as ‘hostile’, the operator proceeded to launch an attack, killing the pilot. The other incident involved a Special Forces Predator drone falsely detecting an ambush from a civilian-filled vehicle. Lewis pointed to misreading and acting upon autonomously gathered data as what led to these fatal mistakes.

The speakers shed light on the difficulty of making such decisions by involving the audience in hypothetical war missions with LAWS. This was followed by a Q&A, where discussion continued about the balance between meaningful human control, human error, and reliance on autonomous systems.

Frank Kosarek

On 10 April 2018, the Stockholm International Peace Research Institute (SIPRI) held a side event at the Palais des Nations in Geneva, centered on legal review processes for autonomous weapons. Paramount to the discussion was Article 36 of the 1977 Additional Protocol I to the Geneva Conventions, which requires states to conduct a legal review before fielding new weapons technology.

The session explored possible best practices for an autonomous weapons review process, incorporating both legal and technical perspectives. Moderator Dr Vincent Boulanin, a SIPRI researcher focused on emerging military technologies, opened the discussions by stressing the importance of connecting legal boundaries to technological limitations as states develop new arms.

Boulanin handed the floor to Ms Netta Goussec, legal advisor to the International Committee of the Red Cross, who praised the requirements of Article 36, but bemoaned its lack of guidance. Implicit in the success of the policy, said Goussec, is a national review protocol against which a state can evaluate a new weapon, but few states have ever developed such a protocol. She urged nations to begin a weapon review early in the technology’s development process, paying special attention to the context(s) in which the weapon will be used and the empirical data concerning its potential impact. Goussec added that an ideal protocol would ensure the weapon’s compliance with the nation’s previously ratified arms treaties.

Mr Richard Moyes, founder and managing director of UK-based NGO Article 36, added to Goussec’s prescriptions. Central to Moyes’ analysis was ‘human control’, which he defined as the degree of direct human involvement in a weapon’s behaviour. In legally reviewing autonomous arms, militaries must ensure that there exist multiple avenues of human control, Moyes argued, making it simpler to assign accountability for the technology’s behaviour. He acknowledged that human control over an autonomous weapon requires accurate information about its timeframe, environment, and objectives, which is not always available.

Dr Martin Hagström, deputy research director for the Swedish Defence Research Agency (FOI), added a technical perspective to the conversation. Currently charged with managing the Swedish Armed Forces’ weapons research programme, Hagström opened by classifying engineering ‘buzzwords,’ such as ‘AI,’ ‘autonomy,’ and ‘automation,’ as meaningless from a technician’s perspective. AI, he asserted, is simply ‘a solution to problems we don’t know how to solve yet.’ Still, he acknowledged the concept’s growing importance in the weapons space, noting carefully its limitations. Critical to autonomy’s success, stated Hagström, is a simulated process of data collection and executive action that only works within the technology’s ‘design space.’ Remove the technology’s intended context, or design space, and it soon becomes unpredictable, he added.

Much of the surrounding conversation cited the importance of predictability and thorough testing to a proper Article 36 review. Hagström asserted that perfect predictability is impossible in the case of autonomous weapons. Even so, he stressed the importance of extensive testing and operator competency in a legal review process. Though controlling for all design spaces is unrealistic, a nation should at least understand an autonomous weapon’s potential and ensure that its human controllers are trained and competent, Hagström added.

The session concluded with audience Q&A, which centered on the impossibility of ‘in situ’ legal reviews and the role of human-developed algorithms in human control.

Isabel Ashley

On 9 April 2018, the session ‘Defining Lethal Autonomous Weapons Systems – Campaign to Stop Killer Robots’ took place at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland. Ms Miriam Struk, acting director of programmes, PAX, moderated the discussion between the five panellists. The session highlighted global ambiguity regarding cybersecurity, cyber war, and autonomous weapons. Questions of undefined moral boundaries dominated the conversation.

Ms Allison Pytlak, programme manager, WILPF Reaching Critical Will, began by addressing the challenge of standardising definitions for cyber-related terminology. She stated that the term ‘cyber-attack’ has over 15 working definitions across 10 different countries, NATO, and the EastWest Institute. Pytlak stated that stakeholders in this field talk past one another instead of trying to reach common ground. Moreover, she argued that this terminology can sensationalise issues. There is therefore an urgent need for synchronisation across various disciplines in order to move these discussions forward.

Commenting on moral responsibility, Prof. Peter Asaro, vice chair, International Committee for Robot Arms Control, questioned who would be held legally accountable for the actions of autonomous weapons. He warned against ‘anthropomorphising AI’, or assigning it human characteristics. Asaro emphasised that AI is advanced computation relying on a wealth of data, and that morality cannot be automated. Continuing, he said that taking human life with an automated system undermines human dignity. Referencing the Martens Clause, he remarked that in the absence of law, morality still applies. However, Asaro restated Pytlak’s main point, underlining the need to codify norms for autonomous weaponry, creating requirements for parties to conform to.

Mr Richard Moyes, managing director, Article 36, discussed ‘meaningful human control’. He stated that lethal autonomous weapons (LAWS) lack category management, blurring the boundaries of which systems are acceptable and unacceptable. Comparing two scenarios, he classified LAWS as either: (i) a general category containing some unacceptable weapons, or (ii) an unacceptable subcategory within a wider category of autonomous weapons systems. He remarked that the word ‘lethal’ in LAWS is not ‘fundamentally important in comparison to how targets are chosen and engaged with’. Moyes stressed the role of human control in the operation of autonomous weapons. He believes it is unethical for the intention to kill to be executed by a ‘killer robot’.

Mr Johan H Andersen, chair, Norwegian Council on Ethics, shared an investor’s point of view. The organisation advises whether investments in financial instruments align with the ethical guidelines of the Norwegian Government Pension Fund Global. For example, companies in its portfolio cannot produce weapons that violate fundamental humanitarian principles through their normal use. Andersen explained that evaluating companies that produce autonomous weapons is uncharted territory. To mitigate risk and evaluate ethicality, the organisation is asking questions such as: Will the system be able to differentiate between combatants and civilians? Can it detect injury? Can it weigh the interest of protecting civilians against military necessity? Who is responsible for infringing the law? Andersen suggested that the investment community may become de facto policy makers before the CCW reaches a policy consensus.

The session ended with a Q&A focused on Andersen’s position as an investor and how that influences the direction of autonomous weaponry. The speakers reiterated the necessity of detailing the boundaries of LAWS. Panellists hoped for progress during the remaining sessions at the conference.

The Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS)​ will have its first 2018 meeting on 9–13 April 2018, in Geneva, Switzerland. 

The Group was created by the Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW), with a mandate to examine issues related to emerging technologies in the area of LAWS in the context of the objectives and purposes of the CCW.

Priorities for the 2018 meetings of the CCW GGE include:

  • Characterisation of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW
  • Further consideration of the human element in the use of lethal force; aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems
  • Review of potential military applications of related technologies in the context of the GGE’s work
  • Possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS in the context of the objectives and purposes of the Convention on CCW without prejudging policy outcomes and taking into account past, present and future proposals

The final agenda for the meeting is being finalised. Working papers will be accepted until 23 March 2018, while registration will be available until 1 April 2018. For more information, visit the CCW GGE webpage.
