Conference Read: Killer Robots: Biases, Accountability, and Benefits

By: Natalie de Beer

Picture credits: David Lienemann, Public domain, via Wikimedia Commons

Continued technological advancement presents both opportunities for and threats to the protection of human rights. One particular area of concern is the recent production and use of lethal autonomous weapon systems (LAWS), commonly known as killer robots, in the military domain. These killer robots can select, target, and kill individuals without any human involvement. As countries including the United States, Russia, South Korea, and Israel continue to develop these weapons, controversy has arisen over the potentially detrimental implications for individuals’ dignity and equality.

International law does not currently ban the use of killer robots. Legal experts have, however, stressed that they should be banned under the Martens Clause, a provision of customary international humanitarian law which holds that, in situations not covered by existing treaties, new means of warfare must be judged against the principles of humanity and the dictates of public conscience. Since LAWS operate with minimal human control, experts have argued that they dishonour human life and should thus be prohibited.

Killer robots have no conscience and no sense of fear, which leads them to take different risks and make different decisions than humans do during armed conflict. While some experts find these characteristics problematic, others have argued that the implementation of LAWS may lead to more just outcomes. Since killer robots are not influenced by human emotion or a desire for self-preservation, they may be able to distinguish more accurately between military targets and civilians. In light of an extensive global history of human rights violations during the chaos of armed conflict, some experts have thus argued that it may be morally imperative to use these weapons if they are capable of reducing casualties.

When considering the ethics of killer robots, it is necessary to address the core issue of accountability. Under current international law, there are significant gaps in the avenues for criminal redress available if killer robots commit human rights violations. Legal scholars have argued that commanders could attribute war crimes committed by these robots to manufacturing errors, placing the issue outside the jurisdiction of the International Criminal Court and reducing the opportunities for enforcing accountability. This reduced accountability is problematic not only because victims may not receive justice, but also because it may increase the occurrence and severity of war crimes: the United Nations has stressed the role of accountability in deterring future human rights violations, shedding light on the potential negative implications of the legal gaps surrounding LAWS.

In response to the development of killer robots, organisations have formed transnational advocacy networks aimed at achieving a ban on these weapons. For example, the Campaign to Stop Killer Robots has called for laws that prevent the development, production, and use of LAWS. The campaign has argued that killer robots dehumanise individuals, as people are reduced to pieces of code rather than treated as dignified human beings. Notably, the campaign has stressed that the underlying algorithms are not unbiased: many rely on programs that differentiate by race and gender, and they are often tested on marginalised communities first. When training data is biased, the systems may exacerbate existing inequalities if proper countermeasures are not in place, as the sketch below illustrates.
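
To make the training-data point concrete, the following is a minimal, hypothetical sketch: a simple classifier is trained on synthetic data in which one group dominates the training set while an underrepresented group follows a slightly different underlying pattern. The groups, numbers, and decision rules are all invented for illustration; nothing here models a real weapon or policing system.

```python
# A toy illustration of how skewed training data produces unequal error rates.
# All data is synthetic: group "A" dominates the training set, while the
# underrepresented group "B" follows a different underlying decision rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, w):
    """Generate n samples whose true label follows the decision rule w."""
    X = rng.normal(size=(n, 2))
    y = (X @ w > 0).astype(int)
    return X, y

# 95% of the training data comes from group A, only 5% from group B.
Xa, ya = make_group(4750, w=np.array([1.0, 1.0]))
Xb, yb = make_group(250, w=np.array([1.0, -1.0]))
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The model fits the majority pattern, so its errors concentrate on group B.
for name, w in [("A", np.array([1.0, 1.0])), ("B", np.array([1.0, -1.0]))]:
    X_test, y_test = make_group(2000, w)
    print(f"accuracy on group {name}: {model.score(X_test, y_test):.2f}")
# Typical output: high accuracy for group A, near-chance accuracy for group B.
```

The point of the sketch is not the specific numbers but the mechanism: a system can be highly accurate on average while failing systematically for the group that was barely represented when it was built.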

The use of fully autonomous robots is not limited to armed conflict. Cities such as Dubai, Seoul, and New York City have previously deployed robot cops to monitor and react to crime. The implementation of these robots has sparked debate over whether the use of fully autonomous systems in policing is ethical. Some experts have suggested that robots may reduce implicit bias in policing, while others have cited privacy concerns and robots’ inability to exercise discretion as potentially significant issues.

When considering the possible implications of the use of LAWS for human rights violations in the context of armed conflict, it is important to note that the use of artificial intelligence systems in policing has already generated controversy. For example, the Netherlands uses a Crime Anticipation System aimed at preventing crime before it occurs. The system uses previously collected data to identify patterns in where and when crimes take place, which helps determine where police patrol. Furthermore, the system uses a person-based algorithm to predict the characteristics and identities of those likely to commit a crime. The UCLA Law Review has argued that predictive algorithms may create cyclic discrimination. If certain areas have been over-policed in the past, the algorithm may send more police there to monitor potential crime. This may lead to even more arrests, especially because officers who expect crime to occur may detain more people than they otherwise would. This, in turn, leads the algorithm to predict that more crime will occur in those locations. The implications of predictive policing thus support the idea that algorithms are not truly “objective” and point to the potential harms of automated systems in general.
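
This feedback loop can be shown with a toy simulation. The sketch below assumes two districts with identical true crime rates, recorded crime that grows more than proportionally with patrol presence, and patrol allocation that simply follows the previous round’s records. It is a hypothetical illustration of the cycle described above, not a model of the actual Crime Anticipation System.

```python
# A toy model of the predictive-policing feedback loop. Assumptions (all
# hypothetical): both districts have the same true crime rate; crime is only
# recorded where patrols are present, and officers expecting crime detain
# more than proportionally (exponent > 1); next round's patrols are
# allocated according to this round's recorded crime.
import numpy as np

true_rate = np.array([100.0, 100.0])  # identical underlying crime rates
patrols = np.array([0.55, 0.45])      # district 0 starts slightly over-policed

for step in range(8):
    recorded = true_rate * patrols**1.3   # superlinear observation effect
    patrols = recorded / recorded.sum()   # allocation follows the records
    print(f"step {step}: patrol share = {patrols.round(3)}")

# The patrol share drifts steadily toward district 0: the initial imbalance
# compounds every round, although both districts commit the same amount of crime.
```

Under these assumptions, a small initial imbalance is enough to send ever more patrols to one district, which is precisely the cyclic discrimination the UCLA Law Review warns about.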

As global technological advancement continues, the world will need to keep a close eye on the dangers that arise from automated systems. While these systems may help minimise human error during conflict, they also carry several potentially harmful implications. Questions remain over the extent to which the systems are biased and over who should be held accountable when they make mistakes. Given the rapidly evolving capabilities of fully autonomous weapons, continued attention and action are necessary to mitigate future risks.

This article complements the seminar “Bridging the Future: The Role of Cyber in the Military Domain”, which will take place during the annual JASON conference on November 4th, 2023. The views and opinions expressed in this article are those of the writer and do not necessarily reflect the position of the speaker at the seminar. More information on the conference and tickets can be found here.
