Don’t blame me – my robots did it

Here’s a scary essay that needs writing.

Dr Robert Sparrow of the School of Philosophical, Historical and International Studies, Monash University, wrote a paper entitled “Killer Robots” (Sparrow, R. 2007. “Killer Robots.” Journal of Applied Philosophy 24(1): 62–77, March).

Top and bottom, it says:

Abstract
The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, the machine itself. I argue that in fact none of these are ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bellum [sic], that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system it would therefore be unethical to deploy such systems in warfare.

and

Conclusion
I have argued that it will be unethical to deploy autonomous systems involving sophisticated artificial intelligences in warfare unless someone can be held responsible for the decisions they make where these might threaten human life. While existing autonomous weapon systems remain analogous to other long-range weapons that may go awry, the more autonomous these systems become, the less it will be possible to properly hold those who designed them or ordered their use responsible for their actions. Yet the impossibility of punishing the machine means that we cannot hold the machine responsible. We can insist that the officer who orders their use be held responsible for their actions, but only at the cost of allowing that they should sometimes be held entirely responsible for actions over which they had no control. For the foreseeable future then, the deployment of weapon systems controlled by artificial intelligences in warfare is therefore unfair either to potential casualties in the theatre of war, or to the officer who will be held responsible for their use.

There is another conclusion that can be drawn from this work. One should use robots to kill people, because then one is much less likely to be held responsible for any atrocities they commit. I doubt I am the first person to think of that: the marketing people in the autonomous arms industry will already be selling that benefit.
