Artificial intelligence and machine learning in armed conflict: A human-centred approach

The ICRC, like many organizations across different sectors and regions, is grappling with the implications of artificial intelligence (AI) and machine learning for its work. Because these are software tools, or algorithms, that can be applied to many different tasks, their potential implications are far-reaching and not yet fully understood.
Article | 06 June 2019

There are two broad and distinct areas of application of AI and machine learning in which the ICRC has a particular interest: their use in the conduct of warfare or in other situations of violence; and their use in humanitarian action to assist and protect the victims of armed conflict.

This paper sets out the ICRC's perspective on the use of AI and machine learning in armed conflict, the potential humanitarian consequences, and associated legal obligations and ethical considerations that should govern its development and use.

AI and machine-learning systems could have profound implications for the role of humans in armed conflict, especially in relation to: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making.

In the view of the ICRC, a genuinely human-centred approach is needed for any use of these technologies in armed conflict. It will be essential to preserve human control and judgement wherever AI and machine learning are applied to tasks or decisions that may have serious consequences for people's lives, particularly those that pose risks to life or that are governed by rules of international humanitarian law.

AI and machine-learning systems remain tools: they must be used to serve human actors and to augment human decision-makers, not replace them.