By Daniel Stuke
If we don’t do something now, humanity could face extinction in a matter of years. That is the worry expressed by numerous experts in the fields of Artificial Intelligence and robotics about the threat of lethal autonomous weapon systems. To the uninitiated, such predictions will sound more than a little paranoid. But the danger posed by so-called killer robots is real and imminent. What must be done?
Artificial Intelligences (or AIs) are becoming ever more sophisticated as they embed themselves more deeply into the fabric of our societies. The military applications of AI are numerous, especially when it comes to weapon systems. When the human decision-making component is removed altogether, we speak of lethal autonomous weapon systems, or “killer robots”.
It may sound like pure science fiction, but it is in fact a very real contemporary development in military technology. Fully autonomous weapon systems are being developed as we speak, and they encompass a wide range of products, from Predator drones and unmanned submarines to sentry guns and even missile systems. Some of them are already viable (if not yet operational), such as the British unmanned warplane Taranis, which is reputedly capable of carrying out missions on its own with little to no human oversight, or the Samsung SGR-A1 sentry gun, which can identify, track and fire upon hostiles without any human intervention whatsoever. Many other lethal autonomous weapon systems are in various stages of development.
At present, no military has weapons in use that are authorized to pull the trigger without human intervention. This is no guarantee for the future: no military is willing to preclude outsourcing “trigger authority” to its autonomous weapon systems, and the fact that it is technologically possible at all is cause for serious concern. Beyond doubt, these systems are set to change the face of war in the 21st century. Concretely, there are a number of different fears surrounding the use of lethal autonomous weapon systems.
Perhaps the most immediate concern is that of distinguishing between combatants and non-combatants, as demanded by the Law of War. This is difficult enough for human soldiers, and many experts wonder whether it is reasonable to expect autonomous combat AIs to reliably distinguish between, for instance, hostile insurgents and innocent civilians in an ambiguous combat situation. It is far from clear that this is possible at all with contemporary technology, and this uncertainty could result in unintended casualties.
Assuming it is in fact possible to program autonomous combat AIs to reliably distinguish combatants from non-combatants, contradictory imperatives in the Rules of Engagement (the rules militaries use to determine when and how force may be applied) are liable to produce ethical quandaries an AI may not be able to solve. Consider the imperative to avoid collateral damage. By any measure this is a sensible rule to program into autonomous combat AIs, but what if violating it is necessary to prevent greater casualties later on? Combat situations tend to be incredibly complicated, and it is highly unlikely that an AI could make a sensible judgment call in such a situation. An algorithm that complex has yet to be written.
Another major concern is the danger of losing control over our killer robots. Malfunctions are bound to happen, as they do with any technology, and that is all the more problematic when the technology in question has the ability to make decisions about life and death. A nightmare scenario of robots running amok seems unlikely, at least as long as self-learning genetic algorithms are not implemented in military AI. The danger of hacking, or of reprogramming upon capture, on the other hand, is quite real. What if the technology falls into the hands of terrorists or rogue regimes?
A more intangible danger, but arguably the most menacing, is the possibility of an escalating cycle of violence driven by lethal autonomous weapon systems. A group of 115 experts in the fields of AI and robotics warned about this danger in an open letter published on August 21st, 2017: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”[1] Elon Musk, one of the letter’s signatories, has stated that he believes this possibility to be a serious threat to humanity’s very survival.
What can be done to prevent the proliferation of lethal autonomous weapon systems? A wholesale ban enforced internationally, as is currently in place for biological and chemical weapons, is an obvious way to go, but only 14 countries currently support such a ban. Another possibility is to regulate their use, for instance by legally enshrining the necessity of human oversight over any decision to use lethal force. The problem is that such a requirement would be nearly impossible to enforce properly.
As such, it is not clear what can be done at this point. What does seem clear is that sitting still is not an option. The technology is developing rapidly, and if we do not act proactively to regulate it, the consequences could be lethal. This week, a group of governmental experts established by the UN Convention on Certain Conventional Weapons is meeting to discuss the subject. Let us hope they can come up with a way to deal with the threat, because, as the 115 experts stated in their open letter: “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Sources