The evitability of autonomous robot warfare
30-06-2012 Article, International Review of the Red Cross, No. 886, by Noel E. Sharkey
This is a call for the prohibition of autonomous lethal targeting by free-ranging robots. This article will first point out the three main international humanitarian law (IHL)/ethical issues with armed autonomous robots and then move on to discuss a major stumbling block to their evitability: misunderstandings about the limitations of robotic systems and artificial intelligence. This is partly due to a mythical narrative from science fiction and the media, but the real danger is in the language being used by military researchers and others to describe robots and what they can do. The article will look at some anthropomorphic ways that robots have been discussed by the military and then go on to provide a robotics case study in which the language used obfuscates the IHL issues. Finally, the article will look at problems with some of the current legal instruments and suggest a way forward to prohibition.
Keywords: autonomous robot warfare, armed autonomous robots, lethal autonomy, artificial intelligence, international humanitarian law.
Noel E. Sharkey is Professor of Artificial Intelligence and Robotics and Professor of Public Engagement in the Department of Computer Science at the University of Sheffield, UK, and currently holds a Leverhulme Research Fellowship on an ethical and technical assessment of battlefield robots.