Peter Maurer: “We must decide what role we want human beings to play in life-and-death decisions during armed conflicts”

Statement | 12 May 2021 | Switzerland

Speech given by Mr Peter Maurer, President of the International Committee of the Red Cross (ICRC), during a virtual briefing on the new ICRC position on autonomous weapon systems.

Excellencies,

Ladies and Gentlemen,

Thank you for joining me in this virtual briefing. I am pleased to share with you an important development in the ICRC's analysis and recommendations about autonomous weapons and to hear your views on this topic this afternoon and beyond.

New developments in digital technologies are taking place at a startling pace, affecting the way we live and the way we work – even the way we think. They hold great promise for humanity, and I've talked regularly about how we at the ICRC are embracing the digital transformation to enhance humanitarian action around the world.

New developments in digital technologies also affect the ways in which wars are fought. New weapon technologies give rise to serious humanitarian, legal and ethical dilemmas, and these dilemmas will be the focus of my talk today.

Autonomous weapons, as the ICRC understands the term, are weapons that select and apply force to targets without human intervention, on the basis of sensor readings matched against a generalized "target profile". What sets them apart from other weapons is that, after being activated by a person, they fire themselves when triggered by their environment, not by the user. This means that the user does not choose the specific target.
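
To make this distinction concrete, the following is a minimal, purely illustrative Python sketch; every name in it is hypothetical and it describes no real system. The point is only the control flow: a person activates the system against a generalized profile, and the specific strikes are then triggered by whatever the sensors happen to match.

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        signature: str   # what the sensor classifies the object as
        location: tuple  # where the object was detected

    @dataclass
    class TargetProfile:
        signature: str   # a generalized class of object, not a specific target

    def apply_force(location):
        print(f"force applied at {location}")  # stand-in for the weapon effect

    def autonomous_engagement(readings, profile):
        """Once a person activates the system, firing is triggered by the
        environment: the user never chooses the specific target struck."""
        for reading in readings:
            if reading.signature == profile.signature:  # match, no human review
                apply_force(reading.location)

    # The user activates the system against a class of objects...
    profile = TargetProfile(signature="tracked-armoured-vehicle")
    # ...and every later sensor match is struck without further human input.
    autonomous_engagement(
        [SensorReading("tracked-armoured-vehicle", (47.05, 8.30)),
         # a sensor that misclassifies a civilian bus produces the same
         # reading, and the same code path strikes it without human review:
         SensorReading("tracked-armoured-vehicle", (47.06, 8.31))],
        profile,
    )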

Autonomous weapons raise many challenging questions from several perspectives – military, technical, legal, ethical, philosophical and of course humanitarian. This complexity also contributes to the political challenges governments face in building shared understandings of potential risks and necessary solutions.

These weapons are already being used: in limited circumstances, usually far from civilians, and against highly specific types of target, for example to defend warships at sea from incoming missiles.

And yet, current technology and military developments are fuelling interest in the use of autonomous weapons that attack a wider range of targets, over greater areas and longer durations, and even in urban areas – complex and dynamic environments.

In the ICRC's view, the unconstrained use of autonomous weapons brings significant risks of harm to civilians and combatants alike.

Consider, for example, situations with a significant civilian population in the area of military operations. If an autonomous weapon is used there: how will civilians be protected when the user does not know exactly where, when, or what the weapon will destroy? Or imagine that an autonomous weapon's sensors are triggered by civilian buses with a shape similar to soldiers' transport vehicles, and that it starts striking all buses over a wide area without the user being able to intervene and deactivate it.

Autonomous weapons also increase the danger that conflicts will escalate, for example if there is no time, or means, to switch off an autonomous weapon before it is too late.

The potential humanitarian consequences are concerning for the ICRC. These weapon systems raise serious challenges for compliance with international humanitarian law, whose rules require context-specific judgements by combatants. For instance, how will injured soldiers be spared when there is no one there to recognize they are hors de combat?

Moreover, autonomous weapons raise fundamental ethical concerns for humanity, in effect replacing human decisions about life and death with sensor, software and machine processes.

Ultimately, most of us can agree that an algorithm – a machine process – should not determine who lives or dies, that human life must not be reduced to sensor data and machine calculations.

Armed forces, seeking ever-greater speed of attack and deploying growing numbers of armed robots, are looking to autonomy for military advantage. At the same time, autonomy alters the role of humans in decisions to use force. In this new era of machine learning software that writes its own rules, many fear that this is a dangerous prospect for civilian protection and for international security.

These developments prompt us to ask not just what these technologies can be used for, but what they should be used for. They prompt us to make responsible choices about the future of warfare. Ultimately, we must decide what role we – as a society – want human beings to play in life-and-death decisions during armed conflicts.

There is a distinct risk that we will see human control and judgement in life-and-death decisions gradually eroded to a point that is unacceptable. This has been stressed by many in foreign ministries, armed forces and humanitarian organizations, as well as by roboticists and artificial intelligence experts in industry.

Unfettered design and use of autonomous weapons present a fundamental challenge. They risk eroding current protections for the victims of war under international humanitarian law and the principles of humanity.

International humanitarian law itself seeks to preserve a measure of humanity in war. Its rules apply to the use of all means and methods of warfare, including novel ones. Yet States have regularly adopted specific rules, sometimes preventively, to further protect civilians and those no longer fighting from the effects of new weapon technologies. This was the case, for instance, with the 1868 Declaration of St Petersburg on exploding bullets, and with the prohibition of blinding laser weapons over 100 years later.

International discussions among States about autonomous weapons have spanned the past decade, including at the Human Rights Council and meetings of the High Contracting Parties to the Convention on Certain Conventional Weapons. They have benefited from the engagement and ideas brought by diplomats, military professionals, civil society representatives, academics, and members of the scientific and technical communities.

How has the ICRC contributed to these discussions over the years?

  • In line with our mandate, we have analysed in depth, including through expert consultations, how to ensure that human control and judgement in the use of force are retained.
  • We have regularly shared our conclusions with all those involved.
  • We have engaged actively in these debates and listened closely to the concerns of all stakeholders, including States.
  • Personally, I have greatly valued the very constructive exchange of views with many of you – and your colleagues – both in Geneva and in capitals around the world.

Since 2015, we have urged States to adopt internationally agreed limits on autonomous weapons. Today, I bring for your consideration updated and refined recommendations to States on both the form and substance of such limits.

The ICRC is convinced that international limits should take the form of new legally binding rules to regulate autonomous weapons.

We believe new rules are needed both:

  • to clarify how existing rules of international law constrain these weapons
  • to supplement the legal framework, including to address fundamental ethical concerns.

On the substance of these rules, we would like to offer three specific recommendations.

First, in our view, unpredictable autonomous weapons should be ruled out, notably because of their indiscriminate effects, and this would be best achieved through an express prohibition of such weapons.

Second, we believe that the use of autonomous weapons to target human beings should be ruled out. This recommendation is grounded in ethical considerations, to safeguard humanity, and in the need to uphold the international humanitarian law rules protecting civilians and combatants hors de combat. In our view, this would be best achieved through a prohibition of anti-personnel autonomous weapons.

Third, and finally, we recommend that other autonomous weapons should be regulated, including through a combination of four types of limits (see the illustrative sketch after this list):

  • first, limits on the types of target, such as constraining them to typically military objects, like tanks or incoming missiles
  • second, limits on the duration, geographical scope and scale of use
  • third, limits on situations of use, such as situations where civilians are not present
  • fourth, requirements for human–machine interaction, notably to ensure effective human supervision, and timely intervention and deactivation.
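
Purely by way of illustration, and not as anything our paper prescribes, these four types of limits can be read as conditions that must all hold together before use. The following Python sketch shows one hypothetical way to express that combination; every field name and value in it is an assumption for illustration only.

    from dataclasses import dataclass

    @dataclass
    class UseLimits:
        # Hypothetical encoding of the four proposed types of limits; this
        # only shows how they combine, not any prescribed structure.
        allowed_target_types: frozenset  # 1. types of target
        max_duration_s: float            # 2. duration, geographical scope
        max_area_km2: float              #    and scale of use
        civilians_present: bool          # 3. situations of use
        human_supervision: bool          # 4. human-machine interaction:
        can_deactivate: bool             #    supervision, timely deactivation

    def use_permitted(limits, target_type, duration_s, area_km2):
        """All four types of limits must be satisfied together."""
        return (target_type in limits.allowed_target_types
                and duration_s <= limits.max_duration_s
                and area_km2 <= limits.max_area_km2
                and not limits.civilians_present
                and limits.human_supervision
                and limits.can_deactivate)

    limits = UseLimits(frozenset({"tank", "incoming-missile"}),
                       max_duration_s=600, max_area_km2=1.0,
                       civilians_present=False,
                       human_supervision=True, can_deactivate=True)
    print(use_permitted(limits, "tank", duration_s=300, area_km2=0.5))    # True
    print(use_permitted(limits, "person", duration_s=300, area_km2=0.5))  # False

The design point the sketch makes is simply that the four types of limits are cumulative: failing any one of them is enough to rule out a given use, and targeting a person is ruled out from the start, consistent with our second recommendation.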

Further details on these recommendations and their rationale can be found in a paper that we will share with you today, immediately after this briefing. All these recommendations will in any case need further elaboration beyond that paper, and I look forward to hearing your views on them, today or in the coming weeks and months, with a view to refining our proposals and recommendations further.

It is indeed for States to decide whether to adopt new rules, and to determine their specific content. The ICRC offers these recommendations based on its legal expertise, in-depth analysis of the issues raised by the development and use of autonomous weapon systems, and operational experience of armed conflicts. Our purpose is to support the multilateral discussion and to help States move towards identifying possible avenues of convergence.

In this respect, I believe these recommendations offer clear, principled and pragmatic guidance on how to effectively address humanitarian, legal and ethical concerns that have been raised by many States, civil society, leading scientists and the ICRC.

These recommendations do not bar the development and use of new digital technologies of warfare in other ways, such as to increase weapons' precision or to enhance human decision-making.

I hope that you will find these recommendations of value in your efforts to reach a common understanding and take political action at the international level, including at meetings of the High Contracting Parties to the Convention on Certain Conventional Weapons and of its Group of Governmental Experts.

I can assure you that the ICRC stands ready to support all initiatives aimed at effectively addressing concerns raised by autonomous weapons in a timely manner, and to work with all governments – and their armed forces – to this end, as well as with other relevant stakeholders. This includes other efforts to develop aspects of the normative and operational framework, such as a political declaration, common policy standards or good practice guidance. Such efforts and legally binding rules can be complementary and mutually reinforcing.

Considering the speed of developments in autonomous weapons, the ICRC believes it is urgent that new rules on autonomous weapons be adopted. There is of course much work to be done to build shared understandings of what action is required.

I want to conclude by stressing the opportunity that we have today.

We shape technology. And in turn, technology shapes us. These developments do not occur in a vacuum. But beyond calculations of costs and benefits, decisions about what technology should be used for are based on human values.

We have an opportunity to collectively draw a line that is in the interest of people. I emphasize the word "people" because, for the ICRC, the concerns raised by autonomous weapons are not solely – or even predominantly – about technology. They are about people – human beings. They are about the protections afforded to human beings during armed conflict. They are about the legal obligations and moral responsibilities of human beings who are conducting conflict. They are, ultimately, about our shared humanity.

Thank you.