Autonomy in the Kill-chain: more than semantics? 


By: Aoife Keogh
Picture credits: Wikimedia Commons

As the world witnesses the Russian invasion of Ukraine, one factor in this war which requires attention is the role technology will play in the violence. Some have speculated that this war will serve as a testing ground for deploying weapons with varying degrees of autonomy. Both Ukraine and Russia possess weapons with some autonomous functions: Ukraine has the Bayraktar TB2 drones from the Turkish developer Baykar, and Russia has the Lancet “loitering munition”, also known as a “kamikaze drone”. While neither weapon is considered fully autonomous, both have varying degrees of autonomy built into their functions. Nor is this the first war to witness weapons with autonomous capabilities: they have been increasingly prevalent in conflicts such as Nagorno-Karabakh, Ethiopia, Afghanistan, Syria, Yemen, and Libya.


To understand the nuances of autonomy and the consequences of such developments in weaponry, this article homes in on a particular incident which occurred in Libya in 2020, in which, according to a UN report, a lethal autonomous weapon may have been used.

The Report

In March 2021, a UN report about the Libyan civil war was published. This report raised questions about the potential use of fully autonomous weapons in the targeted killing of combatants. The report, which covers the events of the Libyan civil war from October 2019 to January 2021, contains a small section detailing a particular incident which happened near the country’s capital, Tripoli, in March 2020. The incident involved a clash between the UN-backed Government of National Accord (GNA) and troops loyal to the Libyan National Army of Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces, or HAF). The report tentatively describes how an unmanned aerial vehicle known as the STM Kargu-2 may have been operated autonomously, without any human supervision.

The report stated:  

“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.” (p.17)

These few lines are the only reference to the use of a lethal autonomous weapon system within the report. The wording does not specify whether the STM Kargu-2 or other loitering munitions were responsible for the casualties. This raised speculation about whether a fully autonomous weapon had been deployed and about how the UN classifies autonomous weapons. Owing to this lack of clarity, the report generated considerable traction among the international community.

The Response 

Some media outlets stated this could be the first fully autonomous weapon to have targeted and killed combatants without human supervision. The organization Stop Killer Robots called on states to officially respond to the UN report, labelling it a “disturbing development”. However, other experts in the field disagreed. Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations who leads the ECFR’s Technology and European Power initiative, argues that “loitering munitions have been on the battlefield for a while – most notably in the Nagorno-Karabakh conflict”. She goes on to assert that the incident itself is nothing new; rather, the novelty lies in the fact that the UN report labelled these loitering munitions “lethal autonomous weapon systems.”

In the same vein, Jack McDonald, a lecturer at the Department of War Studies at King’s College London, took the stance that “There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions.’”

Could it be that this is just an issue of semantics? According to McDonald, the debate comes down to whether something is “autonomous” or “automated.”

The Technology 

It is worth defining what is considered a loitering munition and how some experts differentiate these weapons from lethal autonomous weapons. A loitering munition is an unmanned aerial vehicle which can operate beyond the operator’s line of sight and engage targets with its explosive warhead. The deployment of loitering munitions on the battlefield is not a new phenomenon; it dates back to the 1980s. The weapon gained its name from its ability to “loiter” in an area until it identifies its target. Originally created as a form of defence against threats such as enemy missile systems, the weapon has more recently evolved to strike grounded targets such as military tanks, and by default the humans in close proximity to this weaponry, as was evident in the Nagorno-Karabakh conflict. A loitering munition strikes its targets by identifying enemy radar or by using “high-resolution electro-optical and infrared cameras” to survey, identify, and lead the weapon to its target.

How do loitering munitions differ from lethal autonomous weapons? The answer is complex, blurry, and often dependent on who is answering the question. There is currently no universal definition of autonomous weapon systems or of the sub-group lethal autonomous weapon systems (commonly referred to as LAWS). As a result, many countries and international institutions, such as NATO and the ICRC, have established their own definitions. However, it has been argued that without a universal definition, it is difficult to meaningfully monitor and regulate the production of these weapons and their use on the battlefield.

In the case at hand, it is important to examine what differentiates the Kargu-2 from other loitering munitions, in particular the degree of autonomy afforded to this weapon. The Kargu-2 is a quadcopter defined by its producer, STM, as a loitering munition. It has a dual capability: it can operate either under manual control or autonomously, using “machine learning algorithms”. It is these machine learning algorithms that differentiate the weapon from other loitering munitions such as Israel’s Harpy. The Kargu-2’s ability to operate offline using these algorithms means that humans do not define who or what should be targeted; instead, targeting becomes a function of pre-set criteria (programmed by humans into algorithms) and/or machine learning that occurs within the weapon system. This interaction often becomes a black box to humans once it starts, granting this weapon considerably more autonomy. Due to the lack of clarification within the UN report, it is unclear whether the weapon operated with human supervision in the way we know other loitering munitions do, or whether machine learning algorithms were in fact responsible for the targeted killing of combatants.
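To make the distinction between “automated” and “autonomous” targeting more concrete, consider the minimal sketch below. It is purely illustrative: the names, criteria, and toy confidence score are hypothetical and do not describe STM’s actual software. The sketch only contrasts a fixed, human-authored engagement rule with a decision delegated to a learned model.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    emits_radar: bool         # many loitering munitions key on enemy radar emissions
    operator_confirmed: bool  # has a human approved engaging this target?
    model_score: float        # confidence from an onboard learned classifier (0.0-1.0)

def automated_engage(contact: Contact) -> bool:
    """'Automated': a fixed, human-authored rule plus human confirmation.
    The machine never decides on its own who counts as a target."""
    return contact.emits_radar and contact.operator_confirmed

def autonomous_engage(contact: Contact, threshold: float = 0.9) -> bool:
    """'Autonomous': a learned model's output replaces human confirmation.
    Why the score is what it is can be opaque -- the 'black box' problem."""
    return contact.model_score >= threshold

# A contact with no radar signature and no human sign-off, but a high model score:
suspect = Contact(emits_radar=False, operator_confirmed=False, model_score=0.93)
print(automated_engage(suspect))   # False: the human-authored rule is not met
print(autonomous_engage(suspect))  # True: the model's score alone crosses the threshold
```

In the first mode, a human remains a deciding input; in the second, an opaque learned score alone crosses the threshold, which is precisely where the “black box” concern arises.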

The Politics of Automated Targeting 

There are many unknowns in this case. However, several points can be taken from it:

Firstly, clarity is needed on whether it was the Kargu-2 or other loitering munitions that were responsible for the deaths of the combatants.

Secondly, the controversy surrounding this case centres on whether lethal autonomous weapons were responsible for the targeted killing of combatants. There is a certain convenience to the lack of a definition of a lethal autonomous weapon: without a clear understanding, and thereby regulation, of what the threshold for a lethal autonomous weapon is, states can continue to embrace new military technologies and weapons companies can continue to develop them. As a result, the actual developments in autonomy and their incorporation into military weaponry remain obscured from the public. There is thus a need for a clear definition of what is considered a lethal autonomous weapons system: without a formal definition of the category, there can be no regulation of the development and/or use of these weapons.

Third, human supervision is currently the accepted standard in targeted-killing airstrikes, but “human supervision”, or “human in the loop”, is an elusive term: it does not preclude the use of algorithms or machine learning within the process; it merely requires that there is human input. While the focus on fully autonomous lethal weapons is important, there is still a need for an overarching interrogation of the kill-chain in targeted killings, in order to fully understand the degree of autonomy versus human supervision in both weaponry and military practices.

Finally, in this interrogation of the kill-chain, it is crucial to ask what criteria and data are being used to train and test an algorithm’s ability, and a machine’s capacity to learn, to distinguish, for instance, between a privately owned SUV and a military tank – or, more importantly and far more complex, between a rebel combatant and a civilian who look the same. And who will we hold to account when civilians are targeted?

If the current conflicts in Ukraine and Libya teach us anything, it is that war is ugly, messy, and destructive, and that civilians bear the greatest cost. Hence, we have to keep asking ourselves: who stands to benefit from the experimentation with autonomous weapons, who stands to lose, and can we imagine standing in their shoes?

Aoife Keogh is a researcher at the Intimacies of Remote Warfare (IRW) research programme at Utrecht University, an evidence-based research programme that aims to inform academic, policy, and public debates on the intimate realities of the wars waged in our names.

Curious to learn more about autonomous weapons? Then make sure to listen to this JASON podcast, in which Dr. Katharine Fortin and Lieutenant-Colonel Patrick Bolder are interviewed on the humanitarian issues concerning the use of these drones.
