Fully autonomous weapon systems
Presentation by Kathleen Lawand, head of the arms unit, ICRC. Seminar on fully autonomous weapon systems, Mission permanente de France, Geneva, Switzerland.
In my presentation, I will provide the ICRC's perspective on the questions posed for the purposes of this seminar. I should mention from the outset that the ICRC has been interested in the question of autonomous weapon systems for some years now, and publicly outlined its concerns on these weapons in October 2011 in a report entitled International Humanitarian Law and challenges to contemporary armed conflicts (presented to the 31st International Conference of the Red Cross and Red Crescent). And as of yesterday (2 September), we have published a short "frequently asked questions" document on our website, outlining our position and concerns.
I also wish at this time to take note of the reports and calls made within the last year on autonomous weapons by NGOs and other independent actors -- in particular the reports and calls of Human Rights Watch and the international Campaign to Stop Killer Robots, as well as that of the UN Special Rapporteur on extrajudicial, summary and arbitrary executions, Prof. Christof Heyns, on "lethal autonomous robots" (presented to the UN Human Rights Council in May). These actors have played a very important role in highlighting the risks posed by the development of these weapons and the need for the international community to pay urgent attention to this issue.
I should also stress from the outset that while the ICRC is not at this time joining calls for a moratorium or ban on autonomous weapon systems, we are asking States to fully consider the fundamental legal, ethical and societal issues related to the use of autonomous weapons well before they are developed, and to ensure such weapons are not used if there is no certainty that they can comply with IHL.
The ICRC looks forward to further engaging States in the weeks leading up to the Meeting of States Parties of the Convention on Certain Conventional Weapons (CCW) in November, and well beyond, on this very important issue for the future of armed conflict, and indeed of humanity.
What are "fully" autonomous weapon systems? What is meant by "autonomous"?
Though there is a wealth of expert literature on this subject, there is some inconsistency in the use of terms. Should States decide to discuss autonomous weapons in a more focused way, whether in the CCW or elsewhere, there would be a need to look more closely at terms and definitions.
What "autonomous weapons" are not:
"Autonomous weapons" are to be distinguished from "automated weapons" – sometimes called "semi-autonomous" weapon systems, the use of which is typically constrained in time and space.
Automated weapons function in a self-contained and independent manner although they may be initially deployed or directed by a human operator. Once activated, these systems can engage individual targets or specific "target groups" that have been selected (programmed) by a human operator. They execute precisely pre-programmed actions or sequences within a well-defined and controlled environment. Examples of such systems include automated sentry guns and sensor-fused munitions including certain anti-vehicle mines. On the higher end of the spectrum of automation, these kinds of weapons include highly automated defensive systems currently in use, such as for example C-RAM systems (i.e. Counter-Rocket, Artillery, Mortar systems) which are being used on warships or on land. Our understanding is that these highly automated systems in practice operate with a "human-on-the-loop", i.e. under human supervision.
In its 2011 Challenges Report, the ICRC stated that "[r]ecent technological developments are increasing the capacity of such systems to distinguish some classes of military objectives from civilian objects. However, it is not possible to predict the degree of discrimination that these systems will be able to achieve in the future, and the central challenge of such systems will remain how to ensure they are capable of being used in a manner that allows the distinction between military objectives and civilian objects as required under IHL."
"Autonomous weapons" must also be distinguished from "drones" – a.k.a. Unmanned Aerial Vehicles (UAVs) or Remotely Piloted Aircraft (RPA) – which are remote-controlled weapons.
RPAs are typically operated and controlled by a crew located outside of the area of combat, composed of a pilot and a payload operator, and supported by a team of signals and imagery intelligence analysts. RPAs require human operators to select targets and to activate, direct and fire the weapons concerned. In short, RPAs as a weapons platform are not per se unlawful; however, their use gives rise to a number of concerns, which were outlined in an interview with ICRC President Maurer (posted on the ICRC's website in May 2013).
What autonomous weapons are:
"Autonomous weapons" can be seen as lying at the end of a continuum or spectrum of "incremental automation" of weapons that has been developing over time.
Based on the ICRC's understanding of the expert literature, an "autonomous weapon" is one that is programmed to learn or adapt its functioning in response to changing circumstances in the environment in which it is deployed. A truly autonomous weapon system would be capable of searching for, identifying and applying lethal force to a target, including a human target (such as an enemy combatant), without any human intervention or control.
This definition connotes a mobile system with some form of artificial intelligence, capable of operating in a dynamic environment with no human control.
It should be stressed that such "fully" autonomous weapons are still only in the research phase and have not yet been developed, let alone deployed in armed conflict. Yet technological capabilities in this field are advancing at a rapid pace.
What challenges do autonomous weapons pose to international humanitarian law (IHL)?
Under IHL, any new weapon, means or method of warfare must be capable of being used in compliance with IHL's rules governing the conduct of hostilities, notably the rules of distinction, proportionality and precautions in attack which aim to protect civilians.
It should also be recalled that IHL seeks to prevent, or to restrict, the development and deployment of new technologies of warfare that may be prohibited in some or all circumstances by requiring each State to assess the legality of the new weapons they wish to develop or acquire (per Art. 36 of Additional Protocol I to the Geneva Conventions).
The ICRC has a number of concerns regarding the capability of autonomous weapon systems to comply with IHL. In particular, developing the capacity of autonomous weapon systems to fully comply with the IHL rules of distinction, proportionality and precautions in attack appears today to be a monumental programming challenge. Indeed, it may very well prove impossible.
First, would an autonomous weapon be capable of discriminating between a civilian and a combatant, as required by the rule of distinction? In particular, could it distinguish between civilians taking a direct part in hostilities and armed civilians such as law enforcement personnel or hunters, who remain protected against direct attack? Would it be capable of distinguishing between an active combatant and one who is wounded or incapacitated (hors de combat)? Would an autonomous weapon be capable of determining whether a soldier is making a genuine attempt to surrender? Achieving these capabilities would appear to be a formidable task.
Second, would an autonomous weapon system be capable of applying the IHL rule of proportionality? That is, the prohibition on launching attacks that are likely to cause incidental civilian casualties and damage to civilian objects that are excessive in relation to the concrete and direct military advantage anticipated. Such an assessment would appear to require uniquely human judgement, especially in the dynamic environment that is the battlefield, where the anticipated military advantage to be gained by attacking a given military objective may vary from minute to minute.
Third, how would an autonomous weapon be capable of assessing, selecting and applying the required precautions in attack to minimize civilian casualties? This again represents a highly contextual assessment of what is possible and practical on the battlefield, and it may require uniquely human judgement.
Still, it may one day be technologically possible to program an autonomous weapon system to fully comply with the rules of distinction, proportionality and precautions in attack. An autonomous weapon might even be able to behave more ethically and more cautiously on the battlefield than a human being – assuming that a robot would not be affected by emotion or personal self-interest.
On the other hand, the deployment of such weapons would reflect a paradigm shift and a major qualitative change in the conduct of hostilities, and would ultimately raise the question of whether the dictates of public conscience would allow machines to make life-and-death decisions and apply lethal force without human control.
Another key concern is determining who would be accountable for violations of IHL committed by an autonomous weapon system. State responsibility and individual criminal liability for serious violations of IHL (war crimes) are of course essential elements for the protection of victims of armed conflict. If the act can be attributed to a particular State, that State would be responsible for compensation under both IHL and the rules of State responsibility.
But fixing individual criminal responsibility would be more challenging. It would seem to make little sense to attribute responsibility to a machine or a computer. Who would then be accountable for the decision made by the autonomous weapon system: the programmer, the manufacturer or the commander who deploys the system? If responsibility cannot be determined as required by IHL, is it legal or ethical to deploy such systems?
In relation to any emerging technology of warfare, such as autonomous weapons, it is important to ensure an informed discussion of all of the issues involved, to call attention to the necessity of assessing their potential human cost and IHL implications and to ensure that they are not employed prematurely under conditions in which respect for IHL cannot be guaranteed.