Technological advances in weaponry mean that decisions about the use of force on the battlefield could increasingly be taken by machines operating without human intervention. A recent event in Canberra, Crossing the Rubicon: the path to offensive autonomous weapons, focused on the range of issues associated with the potential use of these types of systems. Following the event, key speaker Professor Chris Jenks, current fellow at the Asia Pacific Centre for Military Law, spoke to the ICRC about his perspectives on the legal, moral and ethical issues raised by autonomous weapons.
Your recent presentation in Canberra was premised on the notion that offensive autonomous weapons systems are inevitable. Why do you think this is the case?
We are on a path, and have been for some time, towards offensive autonomous weapons. The primary driver will be advances in defensive systems, which through increasing autonomy will very soon include swarms. By swarming I mean scores or even hundreds of defensive systems that are largely autonomous. These systems, whether small boats, planes or ground systems, will be capable of operating in ways human-piloted or human-controlled systems simply cannot. Eventually the only way to effectively attack swarming defensive autonomous systems will be with swarming offensive autonomous systems. But this would mean weapons systems attacking weapons systems, not directly targeting people. So while I do think we'll reach a stage where we are using offensive autonomous systems, I believe these systems will be used deep under water and at high altitude. To the extent that they are used on land, it will be against other autonomous systems. Contrary to some claims, robots won't be hunting humans. A secondary issue is that, societally, I think we are at the cusp of desensitization to, or acceptance of, increasingly autonomous systems in our lives. This will begin with the introduction of driverless cars. It will be interesting to see what role the growing presence of autonomous systems in our daily lives does or does not play in our acceptance of their military applications as well.
You discussed in your presentation that autonomous weapons sit on a spectrum. Why is this important?
It's important because often autonomy is incorrectly framed in binary terms, that something is either autonomous, or it's not. Understanding autonomy as a spectrum of integrated human and machine capabilities is important in terms of accuracy but also in demonstrating how difficult it will be to demarcate permissible from impermissible weapons systems.
Has there been sufficient consideration of the legal implications of the potential use of autonomous weapons?
In my opinion, the legal questions are relatively straightforward. The ethical and moral issues associated with autonomous weapons are far broader. Before an autonomous system is employed, the State will have to conduct a review of the weapon to determine whether its employment would be prohibited under international law, as described in Article 36 of Additional Protocol I. After that legal scrutiny it will be clear whether or not the system is capable of distinction and of not causing unnecessary suffering. We should keep in mind that lethal autonomous weapons, which were subjected to legal review, have been used for decades now and by a number of states. This area is more evolutionary than revolutionary, and autonomous weapon systems are more a path than a destination. However, it's true that for now and the foreseeable future, it's unclear whether humankind could develop a system that would pass an Article 36 review.
What are the most critical of these ethical and moral issues?
To me it's the increasing dissociation of human beings from the employment of lethal force. However, that's not a concern unique to autonomous systems; dissociation is inherent in the nature of warfare. Whether with a crossbow, a rifle or artillery, you want to use force in a manner which your opponent can't effectively counter. From that dissociation comes a concern: the risk that, because you can employ lethal force without risk to your own side, you will be quicker to use that force. The most recent example of this is drones, which allow militaries to employ lethal force with little risk to their own armed forces. So autonomous weapons systems are just one more step along the continuum we've always been on. If we reach the point where we are considering the use of fully autonomous systems that make decisions about lethal force, then I do think that would raise profound ethical and moral issues. The challenge in having a morality discussion is that we first need to have a discussion about morality – are we talking about my sense of morality or yours?
Your research at the Asia Pacific Centre for Military Law focuses on how emerging technologies affect accountability in armed conflict. There has been much discussion about who would be accountable for decisions taken by fully autonomous battlefield machines. What is your perspective on this?
I think we're making this question harder than it needs to be. The criminal liability analysis starts with the commander, the individual who directed the attack, who ordered the firing of artillery or the dropping of the bomb. I can't imagine how the use of increasingly autonomous systems would change that. We would look at the commander and assess the contextual reasonableness of his or her action in employing an autonomous weapon. The question is: how are we going to define reasonableness? It will be interesting to see how our conceptions of reasonableness, negligence and responsibility change, or not, in the civilian sphere as systems like driverless cars become more autonomous and prevalent.
Some proponents say that autonomous weapons systems could be programmed to operate more cautiously and accurately because they are not dealing with emotions, like humans are. What's your perspective on this?
We tend to display a form of cognitive bias, adopting a rosier view of human abilities than our performance warrants. People are subject to fatigue, fear and anger. This can be mitigated by training and experience, but even disciplined soldiers are relatively inaccurate when using force in stressful circumstances. Humans are not the precise, discriminating actors we like to think of ourselves as being. We have to keep both the positive and negative aspects of the human condition in mind when we're discussing increasingly autonomous uses of force. When you look at what machines are currently capable of, admittedly in discrete settings, I can't help but wonder whether autonomous weapons systems may employ force with greater discrimination than human beings are capable of, leading to fewer civilian casualties and better force protection in the process. While the current discussion questions the legality of employing autonomous weapon systems, I believe we will reach a point when that is completely reversed: illegality may flow from not using such systems when they are otherwise available.
The views expressed in this article do not necessarily reflect those of the International Committee of the Red Cross.