Statement

UN Security Council: We cannot let AI be deployed on the battlefield without oversight and regulation

Delivered by Cordula Droege, Chief Legal Officer of the ICRC, at the Open Debate of the United Nations Security Council on "Artificial intelligence and international peace and security" in New York.

Mr. President, excellencies,

The International Committee of the Red Cross (ICRC) welcomes this timely debate on artificial intelligence and international peace and security. 

What I say to you today is based on the ICRC’s 160 years’ experience on the battlefield, where, for every new weapon or method of warfare that has been introduced, our delegates have witnessed the results first-hand. 

Not the results claimed or hoped for by the developers and users of new weapons, but the actual results: the good, the bad, and the unconscionable.

It is perhaps useful to first bring some clarity by highlighting the specific risks we can see today, and then to zoom out and consider some of the lessons we must remember from all the instances when new technologies were weaponized – as they invariably have been throughout history. 

We have identified three applications of AI in the military domain that pose significant risks: AI in autonomous weapon systems; AI in military decision-making; and AI in information and communication technologies.

Promoters of these new AI applications underscore their military utility, but each of them also carries risks: 

AI-enabled autonomous weapons can search for and engage their targets in communications-denied environments; but AI increases the risk that human users will be unable to understand, predict, and control the weapon’s functioning and effects – rendering such weapons indiscriminate and unlawful under international humanitarian law.

AI decision support systems can integrate and analyze vast amounts of data from multiple sources in seconds to produce recommendations on targeting or detention for the commander; but their speed and scale, exacerbated by automation bias, might lead to simple rubber-stamping by the human user, replacing human judgement rather than supporting it – not to speak of the Kafkaesque scenario of detention decisions being made by algorithms.

AI-enabled cyber capabilities can identify and exploit new vulnerabilities in an adversary’s computer systems. However, this also increases the risk of indiscriminate attacks, incidental damage to civilian infrastructure, and uncontrolled escalation of conflict, particularly in complex and interconnected digital environments.

Mr. President,

Let me take a step back. Promoters of the technology often claim that AI-based systems can enhance IHL compliance and minimize risks for civilians. 

The claim that new weapons will be “more humane”, less lethal, more precise has been made by all promoters of new technologies on battlefields for as long as the ICRC can remember. 

Chemical weapons were advertised as more benign than artillery in the First World War; blinding laser weapons as less lethal than bullets in the 1980s. 

But this is not what we have seen historically, and it is not what we are seeing now.

If we contemplate today’s most technologically sophisticated conflicts, we have not seen better outcomes for civilians but rather the most widespread and indiscriminate devastation. 

The qualities advertised in new technologies have led not to precision, distinction and precaution, but to acceleration, amplification and escalation of destruction, with appalling results for civilians.

So let us be guided by the evidence, and assess the weaponization of AI with a measure of sobriety: we cannot – we must not – allow these systems to continue to be developed and used without oversight and regulation.

This oversight and regulation must be based on a realistic assessment of their likely compliance with international humanitarian law and ethical principles, lest we find ourselves in the same situation with AI as that described by General Bradley in 1948 with respect to nuclear weapons: 

“The world has achieved brilliance without wisdom, power without conscience… Ours is a world of nuclear giants and ethical infants.”

So I will close with two requests:

First, reiterating the joint appeal by the UN Secretary-General and the President of the ICRC, we urge states to conclude as soon as possible a legally binding instrument setting clear prohibitions and restrictions on autonomous weapon systems.

Second, we urge states to adopt a human-centered approach to military AI, in order to realistically assess its likely compliance with IHL, and to ensure that human control and judgement are preserved in all decisions that pose risks to the life and dignity of people affected by armed conflict. 

To this effect, states should pursue structured discussions on military AI both in the Security Council and in the General Assembly, drawing on the ICRC’s preliminary recommendations, the Secretary-General’s report, and other resources.

Thank you.