Expert Meeting on Lethal Autonomous Weapons Systems

15 November 2017

The ICRC welcomes this first meeting of the Group of Governmental Experts on "Lethal Autonomous Weapons Systems". With the High Contracting Parties to the Convention on Certain Conventional Weapons moving on to more formal discussions, there is now an expectation that States will identify and build on common ground, such as the broad agreement that human control must be retained over weapon systems and the use of force.

The ICRC's view is that States must now work to establish limits on autonomy in weapon systems to ensure compliance with international law and to satisfy ethical concerns.

In determining these limits, the ICRC is convinced that the focus must remain on the obligations and responsibilities of humans in decisions to use force. For this reason, it has proposed that States assess the type and degree of human control required in the use of autonomous weapon systems – broadly defined as weapons with autonomy in their critical functions of selecting and attacking targets – to ensure compliance with international law, and acceptability under the principles of humanity and the dictates of the public conscience.

From the perspective of international humanitarian law, it is clear that the rules on the conduct of hostilities are addressed to those who plan, decide upon, and carry out an attack. These rules, which apply to all attacks regardless of the means or methods employed, give rise to obligations for human combatants, who are responsible for respecting them. These legal obligations, and accountability for them, cannot be transferred to a machine, a computer program, or a weapon system.

In the ICRC's view, compliance with these legal obligations would require that combatants retain a minimum level of human control over the use of weapon systems to carry out attacks in armed conflict.

An examination of the way in which human control can be exerted over autonomous weapon systems (as broadly defined by the ICRC) points to the following key elements of human control to ensure legal compliance: predictability; human supervision and ability to intervene; and various operational restrictions, including on tasks, types of targets, the operating environment, time frame of operation, and scope of movement.

Meaningful, effective or appropriate human control also requires that the operator have sufficient information on and understanding of the weapon system and operating environment, and the interaction between them.

These elements of human control are needed to link the decisions of the human commander or operator – which must comply with international humanitarian law and other applicable provisions of international law – to the outcome of a specific attack using the weapon system. The need for some degree of human control indicates that there will be limits to lawful levels of autonomy in weapon systems under international humanitarian law.

The need for human control also raises concerns about the technical aspects of weapon system design that may lead to unpredictability. In particular, the use of artificial intelligence (AI) machine-learning algorithms in targeting would raise fundamental legal concerns to the extent that their functioning and outcomes would be inherently unpredictable.

In recent months the ICRC has further evaluated the ethical aspects of autonomous weapon systems; it convened a small round-table of experts in August 2017 and has built on the outcomes of the expert meetings held with States in 2014 and 2016. The principles of humanity and the dictates of the public conscience provide moral guidance for these discussions, and support the ICRC's call to set limits on autonomy in weapon systems.

Perhaps the most significant ethical issues raised by autonomous weapon systems are those which transcend both the context of their deployment – whether in armed conflict or in peacetime – and the technology involved – whether simple or sophisticated. These concerns focus on the loss of human agency and responsibility in decisions to kill and destroy.

Since moral responsibility for decisions to kill and destroy cannot be delegated to machines, meaningful, effective or appropriate human control – from an ethical point of view – would be the type and degree of control that preserves human agency and responsibility in these decisions.

With increasing autonomy in weapon systems, a point may be reached where humans are so far removed in time and space from the acts of selecting and attacking targets that human decision-making is effectively substituted with computer-controlled processes, and life-and-death decisions ceded to machines. This raises profound ethical questions about the role and responsibility of humans in the use of force and the taking of human life, which go beyond questions of compliance with international humanitarian law. With respect to the public conscience, there is a sense of deep discomfort with the idea of any weapon system that places the use of force beyond human control.

With these legal and ethical questions in mind, and conscious of rapid technological developments in the fields of robotics and artificial intelligence and their current application in weapon systems, the ICRC urges all States present at this Group of Governmental Experts to work on establishing limits on autonomy in weapon systems, so that humans remain fully responsible for decisions to use force.

We look forward to elaborating further on our views during the thematic sessions.