Wars and international crises, such as the Vietnam War or the current war between Israel and Hamas, profoundly affect ordinary citizens’ lives. Wars disrupt trade and diplomacy and curtail freedoms such as the freedom of movement and the freedom of speech and, in the most severe cases, internationally recognized human rights. States therefore seek measures capable of preventing these infringements. International security is the main field of study concerned with the actions taken to prevent and resolve conflicts and to protect people and their way of life.[1] These actions include peacekeeping, diplomatic agreements, treaties and conventions, and military operations.[2] Of these, however, military security measures are considered the most effective response to national or international threats of harm to citizens.
A newly emerged preventive tool is the lethal autonomous weapons system (LAWS).[3] These are a class of weapon systems that use computer algorithms to identify targets and then engage them once found. LAWS can complete this process without any human control. Their autonomy is provided by artificial intelligence, which enables their increasingly sophisticated operation in warfare.[4]
This essay examines the topic through the lens of artificial intelligence by studying the use of lethal autonomous weapons and the development of their legal regulation. I aim to elaborate on the international legal framework governing LAWS while identifying possible loopholes and missing areas of governance and research.
The question this essay poses is twofold: to what extent are AI regulations keeping pace with new security developments in the field of autonomous weapon systems, and what legal gaps can be found in international law?
The thesis of this essay is that while international frameworks exist both for war and international security (the UN Charter, the Geneva Conventions, the UN Convention on Certain Conventional Weapons, etc.) and for artificial intelligence (e.g. the EU AI Act), there is no specific legal framework regulating LAWS that would guarantee the prevention of the threats the use of such weapons may pose.
Firstly, this essay will examine the use of AI-powered LAWS through a case study of the deployment of Lavender by the Israel Defense Forces (IDF) in the war between Israel and Hamas; secondly, it will consider ethical questions surrounding LAWS. Thirdly, it will analyse the existing regulatory framework, including international agreements and treaties as well as relevant national and regional frameworks. Finally, it will discuss current and future challenges from both technical and legal perspectives, the dynamics of lethal autonomous weapons in diplomatic and geopolitical contexts, and the prospects of their use in wars and other conflicts.
According to the United States Congressional Research Service, the use of autonomous weapons dates back to the 1600s, when naval mines were already employed as automatic defensive systems.[5] However, autonomous offensive and defensive systems were long used mostly against non-human targets. The first reported use of a LAWS against a human target occurred in 2020, when a Turkish-made Kargu-2 military drone allegedly attacked a person without human control.
In his book The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World, Brigadier General Y.S., the current commander of the elite Israeli intelligence Unit 8200, set out the prospects and possibilities of using AI-powered LAWS in warfare.[6]
In April 2024 it was revealed that this Israeli intelligence unit had already developed a LAWS programme called Lavender. Since the start of the war between Israel and Hamas, Lavender has played a pivotal role in the bombing of the Gaza Strip. The system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) as potential targets. According to the findings of +972 Magazine, the Israeli army has relied heavily on Lavender and marked almost 37,000 individuals as suspected targets.[7]
While autonomous defensive and offensive weapons existed before Lavender, it is the first known system designed specifically to select human targets in warfare.
After the 7 October Hamas terrorist attack, the IDF shifted to a broader targeting policy, one that also includes lower-ranking commanders of Hamas and PIJ.[8]
According to +972 Magazine, the AI-driven Lavender system draws on a wide range of data, including the names of nearly every person in Gaza, video feeds, chat messages, social media data, and social network analyses, to determine whether a person is in contact with either Hamas or PIJ. Its output, roughly 10 per cent of which consists of wrongly identified targets, is then reviewed by humans.[9] Moreover, targets are often struck in their homes, which endangers family members and neighbours, and informational mistakes (such as similar names) lead to the targeting of civilians.
While the criminal liability of the IDF for these acts might seem evident, international law suggests otherwise.
Through the operations detailed above, the IDF has likely infringed multiple norms of international law. Relevant sources of the laws of war include the Universal Declaration of Human Rights, the Charter of the United Nations, the United Nations Convention Against Torture, the Geneva Conventions, the Convention on Certain Conventional Weapons, the Chemical Weapons Convention, and the Convention on the Physical Protection of Nuclear Material.[10] As far as human rights infringements are concerned, the ICCPR, the CAT, and the CEDAW are also relevant.[11]
Arguably, the IDF should bear criminal liability for actions that have harmed innocent civilians. While there have been no specific legal cases or trials involving the use of Lavender, it can still be concluded that the use of AI-powered LAWS with a high error rate (if roughly one in ten of the almost 37,000 marked targets was wrongly identified, the misidentifications number in the thousands) has violated the safety and human dignity of the Palestinians living in war zones.[12]
Yet the rigidity of international law, and the areas it leaves uncovered, such as LAWS, mean that the IDF cannot be held liable, and international courts may deem its actions permissible. One doctrine that may exonerate the IDF is that of ‘double effect’: it is permissible to inflict harm as a foreseen but unintended side effect (or ‘double effect’) of achieving a good outcome, but not to cause the same harm as a means to that good end. In the case of an airstrike, for instance, the attacker cannot be held accountable for incidental civilian casualties.
Regardless, it is debatable whether the practice of the IDF meets any reasonable standard of proportionality, and whether the programming of the Lavender system embeds any form of cultural or racial discrimination.[13]
However, criminal liability in such cases can only be established once proper regulations and legal definitions are in place. Lethal autonomous weapons, and a commonly agreed definition of them, are absent from UN regulations, which therefore cannot carve out exceptions to doctrines such as ‘double effect’.
Although a UN Group of Governmental Experts has already published a characterization of these tools, stricter enforcement of the laws of war is needed.[14]
All this is inseparable from the question of artificial intelligence. Stricter AI regulation is needed precisely because of the military use of the technology. Those opposing the legal regulation of artificial intelligence within the European Union have voiced reasonable arguments, for instance that overly strict rules might harm trade within the Union and cause economic backlash. But restrictions on LAWS have no such impact; their only purpose is to protect civilians and prevent the potentially detrimental effects of unregulated attacks.
As proposed by the Chinese delegation to the Group of Governmental Experts mentioned above, ‘Basic characteristics of Unacceptable Autonomous Weapons Systems should include but not be limited to the following: Firstly, lethality, meaning sufficient lethal payload (charge) and means. Secondly, autonomy, meaning absence of human intervention and control during the entire process of executing a task. Thirdly, impossibility for termination, meaning that once started, there is no way to terminate the operation. Fourthly, indiscriminate killing, meaning that the device will execute the mission of killing and maiming regardless of conditions, scenarios and targets…’[15] In line with this proposed definition, the United Nations should create a legal framework for LAWS and similar military tools that distinguishes between the different kinds of artificial intelligence-powered weapons. A concise regulation could be drafted by following existing artificial intelligence regulations and combining them with existing conventions, such as the Convention on Certain Conventional Weapons.
The Artificial Intelligence Act of the European Union also contains classifications that can be applied not only to social media products or traded goods but also to elements of international security. The EU AI Act adopts a risk-based approach, with multiple categories, ranging from minimal to unacceptable risk, ranked by the extent of the threat a system poses to individual and collective security.[16] The problem is that, under this approach, artificial intelligence systems deemed to pose only a minimal risk to those affected can remain virtually unregulated.
All in all, while LAWS powered by AI may make military operations more efficient, the high error rate of Lavender shows that the field is still highly undeveloped and may cause severe casualties among civilians. It is therefore reasonable to conclude that these tools currently operate with high risks.
For example, a lethal autonomous drone powered by artificial intelligence might fail to identify borders and its geographical location correctly and kill civilians outside the conflict zone, which could further escalate geopolitical tensions.
The South Africa v. Israel case at the International Court of Justice has further highlighted the severity of the issues that LAWS can cause, as evidenced by the provisional measures the ICJ indicated against Israel under the 1948 Genocide Convention.[17]
In his 2023 New Agenda for Peace, the UN Secretary-General recommended that States adopt, by 2026, a legally binding instrument to prohibit lethal autonomous weapon systems that operate without human control or oversight and cannot be used in accordance with international humanitarian law, and to regulate all other types of autonomous weapons systems. He stated that, in the absence of global restrictions, the design, development, and deployment of these systems raise humanitarian, legal, security, and ethical problems and constitute a direct danger to human rights and fundamental freedoms.

To sum up: the creation of LAWS poses severe potential harm to civilians. International security regulations are not keeping up with new developments, and the non-liability of the use of AI-powered weapons poses significant security threats. While the further recognition and enforcement of laws in this field are on the agenda of the UN Secretary-General, no recognizable steps have been taken to prevent the further escalation and casualties caused by weapons systems such as Lavender. This essay therefore proposes the immediate adoption of a regulatory framework for the tools in question, in the drafting of which the European Artificial Intelligence Act could be combined with existing laws on weapons, such as the United Nations Convention on Certain Conventional Weapons.
[1] Ryszard Szpyra, ‘Military Security within the Framework of Security Studies: Research Results’, Connections 13, no. 3 (2014), 59–82, https://www.jstor.org/stable/26326368?seq=2, accessed 10 Aug. 2024.
[2] Ryszard Szpyra, ‘Military Security within the Framework of Security Studies: Research Results’.
[3] United Nations Office for Disarmament Affairs, ‘Background on LAWS in the CCW – UNODA,’ disarmament.unoda.org (2023), https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/, accessed 10 Aug. 2024.
[4] United Nations Office for Disarmament Affairs, ‘Background on LAWS in the CCW – UNODA’
[5] Autonomous Weapons, ‘Homepage – Autonomous Weapons Systems’, accessed 10 Aug. 2024.
[6] Yuval Abraham, ‘“Lavender”: The AI Machine Directing Israel’s Bombing Spree in Gaza’ (3 April 2024), +972 Magazine, https://www.972mag.com/lavender-ai-israeli-army-gaza/, accessed 10 Aug. 2024.
[7] Yuval Abraham, ‘“Lavender”: The AI Machine Directing Israel’s Bombing Spree in Gaza’
[8] Yuval Abraham, ‘“Lavender”: The AI Machine Directing Israel’s Bombing Spree in Gaza’
[9] Yuval Abraham, ‘“Lavender”: The AI Machine Directing Israel’s Bombing Spree in Gaza’
[10] International Committee of the Red Cross, ‘What Are the Rules of War and Why Do They Matter?’ (19 October 2016), icrc.org, https://www.icrc.org/en/document/what-are-rules-of-war-Geneva-Conventions, accessed 10 Aug. 2024.
[11] International Committee of the Red Cross, ‘What Are the Rules of War and Why Do They Matter?’.
[12] Simon Frankel Pratt, ‘When AI Decides Who Lives and Dies’, Foreign Policy (16 May 2024), https://foreignpolicy.com/2024/05/02/israel-military-artificial-intelligence-targeting-hamas-gaza-deaths-lavender/, accessed 10 Aug. 2024.
[13] Alison McIntyre, ‘Doctrine of Double Effect’, Stanford Encyclopedia of Philosophy (last revised 17 July 2023), Stanford.edu, https://plato.stanford.edu/entries/double-effect/, accessed 10 Aug. 2024.
[14] Adrián Agenjo, ‘Lavender Unveiled: The Oblivion of Human Dignity in Israel’s War Policy on Gaza’ (12 April 2024), Opinio Juris, https://opiniojuris.org/2024/04/12/lavender-unveiled-the-oblivion-of-human-dignity-in-israels-war-policy-on-gaza/, accessed 10 Aug. 2024.
[15] Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, ‘Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects’ (10 March 2023), Documents Library – UNODA, https://docs-library.unoda.org/Convention_on_Certain_Conventional_Weapons_-Group_of_Governmental_Experts_on_Lethal_Autonomous_Weapons_Systems_(2023)/CCW_GGE1_2023_CRP.1_0.pdf, accessed 10 Aug. 2024.
[16] EU Artificial Intelligence Act, ‘High-Level Summary of the AI Act | EU Artificial Intelligence Act’ (27 February 2024), artificialintelligenceact.eu, https://artificialintelligenceact.eu/high-level-summary/, accessed 10 May 2024.
[17] International Court of Justice, ‘Summary of the Order of 26 January 2024’ (26 Jan. 2024), icj-cij.org, https://www.icj-cij.org/node/203454, accessed 10 Aug. 2024.