Rizky Citra Anugrah*

Introduction

The humanitarian crisis in Gaza highlights the persistent struggle to enforce international humanitarian law (IHL), a legal framework aimed at mitigating the devastating consequences of armed conflict, particularly the loss of innocent lives. Within just one week, we were confronted with a toll of approximately 3,500 lives lost, no fewer than 2,215 of them Palestinian. In contrast, Israel has consistently experienced significantly lower casualties over the years. This asymmetry raises a complicated question at the intersection of ethics, technology, and justice: what role does cutting-edge technology play in this equation, and can it contribute to upholding humanitarian principles in the face of such immense suffering?

To understand Israel’s relatively low casualty rate, we must examine the deployment of the world’s most advanced defensive autonomous weapons system (DAWS), known as the ‘Iron Dome.’ Marco Sassòli, a professor and leading expert on international law at the University of Geneva, Switzerland, affirmed that the Iron Dome has helped Israel reduce its civilian casualties despite the indiscriminate targeting of Hamas’ unguided rockets. Introduced in 2011, this defense system is claimed to have a 97 percent success rate in intercepting incoming missiles. Put in context, Israel’s weekly average of around 3,000 incoming missiles translates to approximately 415 successful interceptions daily.
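As a back-of-the-envelope check of these figures, assuming the 97 percent rate applies uniformly across the week:

\[
\frac{3{,}000\ \text{missiles per week}}{7\ \text{days}} \times 0.97 \approx 428.6 \times 0.97 \approx 415.7\ \text{interceptions per day}
\]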

To achieve this performance, the Iron Dome operates through three key components: radar, control, and battery. First, its radar detects incoming missiles and other airborne threats, determining their size, velocity, and type. Second, the control system acts as the ‘brain’ of the operation, employing algorithms and, more recently, artificial intelligence (AI) to guide interceptor rockets toward their targets automatically, even when the incoming missiles exhibit erratic movements. It can also prioritize intercepting missiles aimed at populated areas. Lastly, the battery fires two interceptors at each incoming missile and can release up to 20 interceptors at a time.
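To make the division of labor among these three components concrete, the following is a minimal, purely illustrative sketch of the detect, prioritize, and intercept pipeline in Python. Every name, threshold, and rule in it is hypothetical; the Iron Dome’s actual software and parameters are not public.

```python
from dataclasses import dataclass

# Purely illustrative sketch of the radar -> control -> battery pipeline
# described above. All names, thresholds, and rules are hypothetical.

@dataclass
class Track:
    """A radar track for one incoming object."""
    object_id: int
    velocity_mps: float          # estimated speed
    predicted_impact_zone: str   # e.g. "populated" or "open"

def radar_detect(raw_returns) -> list[Track]:
    """Radar component: turn raw returns into classified tracks."""
    return [Track(i, r["speed"], r["zone"]) for i, r in enumerate(raw_returns)]

def control_prioritize(tracks: list[Track]) -> list[Track]:
    """Control component: engage only threats to populated areas,
    fastest first (a stand-in for real battle-management logic)."""
    threats = [t for t in tracks if t.predicted_impact_zone == "populated"]
    return sorted(threats, key=lambda t: -t.velocity_mps)

def battery_engage(threats: list[Track], interceptors_per_threat: int = 2,
                   max_salvo: int = 20) -> list[tuple[int, int]]:
    """Battery component: two interceptors per threat, capped at the
    20-interceptor salvo limit mentioned above."""
    salvo, budget = [], max_salvo
    for t in threats:
        if budget < interceptors_per_threat:
            break
        salvo.append((t.object_id, interceptors_per_threat))
        budget -= interceptors_per_threat
    return salvo

# Example: two inbound objects, only one headed for a populated area.
raw = [{"speed": 300.0, "zone": "populated"}, {"speed": 250.0, "zone": "open"}]
print(battery_engage(control_prioritize(radar_detect(raw))))
# -> [(0, 2)]: two interceptors assigned to the populated-area threat only.
```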

While Israel stands out as a prime example of the application of the world’s most advanced AI-powered DAWS, it is far from the sole user. The United States, a co-producer of the Iron Dome, has tested the system’s deployment in Guam. Meanwhile, the United Kingdom is making significant progress on the DragonFire, a DAWS that harnesses concentrated laser beams to safeguard both land and maritime targets. Israel has taken a similar approach with the Iron Beam, designed as a cost-efficient alternative to the Iron Dome.

Generally, autonomous weapons systems (AWS) have seen decades of use, but not all have been employed exclusively for defensive purposes. Recently, we have witnessed their integration with AI technologies, enabling these systems to operate substantially independently of human interference. In response to this growing threat to human dignity, a rising movement seeks to limit the further development of AWS, with some advocating for halting it entirely. Spearheaded by the Campaign to Stop Killer Robots, this coalition has made significant progress in raising global awareness about the escalating threats posed by autonomous weapons. Particularly noteworthy is the coalition’s Vote Against The Machine campaign, which prompted the United Nations (UN) General Assembly to adopt Resolution L.56, entitled “Lethal Autonomous Weapons Systems,” in October 2023. Sponsored by 44 states, the proposal has garnered the support of 120 other states, calling on all states to address the humanitarian, legal, and ethical risks posed by AWS.

I. The Case for Defensive Autonomous Weapons Systems

While extensive discussions and policies have delved into the legal and ethical challenges associated with lethal autonomous weapons systems (LAWS), a noticeable gap exists for DAWS. In the discourse on AWS, the focus has predominantly gravitated toward LAWS, often overlooking the existence and potential of AWS designed exclusively for defensive purposes. The terminology used in these discussions has contributed to this oversight. The term AWS is frequently used interchangeably with ‘Killer Robots,’ reinforcing the perception that autonomy in weapons systems is predominantly geared toward offensive action. In Resolution L.56 itself, although the title explicitly concerns LAWS, the umbrella term AWS is used repeatedly throughout several clauses. It is essential to acknowledge that weapons, in general, are not developed exclusively for offensive purposes. Black’s Law Dictionary provides an inclusive legal definition of ‘weapon’ as “an instrument of offensive or defensive combat.” Lexically, then, weapons are inherently recognized as serving two opposing functions. This duality, however, is often overlooked in legal discussions surrounding AWS. In light of this reality, this article proposes a new term, ‘Guardian Robots,’ as a synonym for DAWS, aiming to provide a balanced perspective.

The differentiation between DAWS and LAWS is crucial because several ethical and legal considerations driving the push for a ban on LAWS do not apply to DAWS. First and foremost, LAWS are often challenged on the grounds that they cannot comply with IHL, which requires adherence to the principles of humanity, distinction, proportionality, and military necessity. These arguments rest on the fact that current AI technologies are incapable of making decisions to the extent humans can. Indeed, AI is not yet advanced enough to differentiate a surrendering soldier from a civilian who might be carrying weapons for defense. However, DAWS do not even need to make such decisions because they are designed to target offensive weapons exclusively. On the contrary, they can help, and have helped, humans uphold humanitarian principles. For instance, the Iron Dome’s capability to target only missiles directed at civilian areas can help both the aggressor and the defender align with the goals of the distinction principle in IHL, significantly protecting civilian lives.

Another common argument supporting legal limitations on the development and deployment of LAWS revolves around the concept of meaningful human control (MHC). MHC is rooted in the philosophical discussions surrounding AWS, with the primary objective of preventing the erosion of significant human oversight and deliberation in weapons deployment. Two fundamental principles guide the preservation of MHC. The first dictates that weapons systems should not be able to apply force and operate without any form of human control. The second highlights the notion of ‘meaningful’ control, asserting that merely pressing a ‘fire’ button falls short of constituting substantive human oversight.

Some scholars argue that, while permissible, automation must be largely restricted to ensure significant human control over AWS. Others have pointed out that even a limited role for automated systems in AWS decision-making creates an authority imbalance, perpetuating automation bias and ultimately influencing the human operator who should be in charge of the system. Automation bias is a psychological phenomenon in which individuals tend to favor decisions made by automated systems over their own judgment, even when the automated decision is proven inaccurate. Undeniably, automation bias and systematic errors are not exclusive to LAWS and can also arise in the decision-making of DAWS. However, the substantial benefits of widespread DAWS deployment far outweigh the potential drawbacks. Unlike LAWS, whose errors exacerbate their already destructive nature, DAWS pose a risk only in cases of extreme malfunction. So far, the Iron Dome’s failures have consisted almost exclusively of missed interceptions rather than breaches of IHL principles. Moreover, the Iron Dome is classified as a weapon with a very short launch range, a limitation that prevents it from becoming lethal. Given the Iron Dome’s exceptionally high success rate, lawfully and comprehensively developed DAWS remain unlikely to pose lethal concerns when errors occur.

This proposition can be pushed further: in the evolving landscape of military weaponry, autonomous defenses are not only beneficial but essential for upholding IHL principles. Even without LAWS and AI technologies, military weapons are being developed in increasingly complex ways that often surpass the human capacity for effective defense. Due to their precision and rapid response capabilities, DAWS can be strategically deployed in vulnerable areas or sectors where threats are beyond human control. Even where DAWS fail to stop an attack completely, their role in mitigating its consequences can significantly help uphold IHL’s principle of proportionality. When sufficiently developed, DAWS can be pivotal in significantly reducing civilian casualties, as evidenced by the Iron Dome.

II. The Challenges of Defensive Autonomous Weapons Systems

Nevertheless, the pursuit of lawful development for DAWS while eliminating LAWS comes with its own unique set of challenges. The first fundamental concern revolves around the definition of ‘defense.’ To what extent does the use of AWS qualify as an act of defense? The Caroline Doctrine provides a clear framework for ‘anticipatory’ self-defense, allowing a response when the need to react is “instant, overwhelming, and leaves no choice of means, and no moment for deliberation.” It readily addresses the permissibility of actions based on whether they constitute an attack or a counterattack: if the counterattack meets the criteria outlined in the Caroline Doctrine, it can be considered a lawful and justifiable response to an action initiated by another party. However, this doctrine is relevant only to deciding whether the initiation of a defense is justifiable.

The definition of defense becomes increasingly blurry when it comes to the proportionality of the counterattack. Two common yet contradictory parameters are often used to define proportionality in defense: the ‘tit for tat’ and the ‘means-end’ parameters. The ‘tit for tat’ parameter holds that defensive actions are permissible when the counterattack is proportionate to the initial attack. In contrast, the ‘means-end’ parameter focuses on completely depriving the attacker of the ability to launch further attacks, determining the legitimacy of a proportional counterattack based on the objective of using force. The utilization of systems like the Iron Dome aligns more closely with the ‘tit for tat’ approach, where the defense matches the scale of the attack.

Proponents advocating for the complete prohibition of all forms of AWS may argue that DAWS could potentially be exploited as LAWS under the guise of self-defense. The lack of a universally agreed-upon definition for defensive weapons creates a vulnerability, allowing for the manipulation of international law principles and doctrines. However, this concern can be effectively addressed by establishing an international agreement that outlines the characteristics of DAWS. The international agreement could explicitly define the elements that categorize a weapon as a DAWS to enhance clarity and prevent misuse. Additionally, incorporating the ‘tit for tat’ parameter into the agreement would provide a specific criterion for assessing the legitimacy of an AWS in relation to its defensive or lethal nature. This criterion ensures that the evaluation of autonomous weapons aligns with the principle of proportionality, wherein the defensive response corresponds appropriately to the scale of the initial attack. By deliberately excluding the ‘means-end’ parameter from the assessment criteria, such an agreement would significantly reduce the potential for abuse of DAWS and uphold IHL principles.

The second fundamental concern revolves around the danger of reverse engineering. As Israeli Prime Minister Benjamin Netanyahu remarked on the Russo-Ukrainian war, “We’re concerned also with the possibility that [the Iron Dome] systems that we would give to Ukraine would fall into Iranian hands and could be reverse engineered.” He added that this is not a theoretical concern, as a similar case has previously occurred with other anti-tank systems. The Iron Dome’s technologies are at a heightened risk of reverse engineering due to their highly mobile design. While this mobility allows Israel to place the weapon strategically in densely populated areas, it also leaves the system severely vulnerable to capture. Beyond the concern of physical capture, because AI is a highly adaptive technology, an adversary can learn to continuously feed the system false positives, deliberately transforming a DAWS into a LAWS. In this scenario, the aggressor can deploy human shields and trick the AI into treating these ‘human baits’ as weapons to be attacked in defense.
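To illustrate the mechanism behind this false-positive attack, the following toy sketch shows how repeatedly mislabeled inputs can shift an adaptive classifier’s judgment. It is a deliberately simplified, hypothetical model; it does not reflect the Iron Dome’s actual (non-public) target-recognition system.

```python
import numpy as np

# Toy illustration of the 'false positive' poisoning risk described above.
# A two-feature online classifier stands in for a target-recognition model;
# every number and name here is hypothetical.

rng = np.random.default_rng(0)
w = np.zeros(2)  # weights of a toy linear threat classifier

def update(w, x, label, lr=0.1):
    """One online learning step: nudge weights toward the given label."""
    pred = 1.0 / (1.0 + np.exp(-w @ x))  # predicted probability of 'threat'
    return w + lr * (label - pred) * x

# Phase 1: legitimate adaptation -- fast, high tracks are labeled threats.
for _ in range(500):
    threat = rng.random() < 0.5
    x = rng.normal([2.0, 2.0] if threat else [-1.0, -1.0], 0.5)
    w = update(w, x, float(threat))

decoy = np.array([-1.0, -1.0])  # a slow, low track: a non-threat profile
print("P(threat | decoy) before poisoning:", 1 / (1 + np.exp(-w @ decoy)))

# Phase 2: poisoning -- the adversary repeatedly presents decoy-like tracks
# engineered to be recorded as confirmed threats (false positives).
for _ in range(500):
    x = rng.normal(decoy, 0.2)
    w = update(w, x, 1.0)  # mislabeled as 'threat'

print("P(threat | decoy) after poisoning:", 1 / (1 + np.exp(-w @ decoy)))
# The classifier now flags decoy-like (potentially civilian) objects --
# the DAWS-to-LAWS drift the paragraph above warns about.
```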

Critics of DAWS may argue that even under a strict definition of DAWS governing their permissibility, the inherent unpredictability of machine learning still introduces a great risk of reverse engineering. However, where an adversary employs false positives to reverse engineer a DAWS, the issue can be overcome through continuous and close human oversight to evaluate the decisions carried out by the system. DAWS can also incorporate mechanisms such as timers before taking action, allowing human intervention when the system responds to false positives. While such a mechanism does not equate to MHC, as it does not involve a human decision in the firing process, it allows humans to override the system’s action when necessary. In addition to temporal safeguards, advanced physical and technical features can be embedded in DAWS to thwart potential misuse, particularly if the system is captured. These measures include a self-destruct feature, rendering the system inoperable if compromised. Moreover, incorporating custom-built proprietary hardware, which is not commercially available, adds complexity to reverse engineering attempts. Continuous code obfuscation, achieved by regularly updating the codebase with intricate modifications, makes the system’s logic and functionality harder to understand for those attempting to reverse engineer a DAWS.
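As an illustration of the timer safeguard described above, the following is a minimal sketch of a veto window in Python. The window length, class name, and veto mechanism are all hypothetical; the point is only that the system’s action is delayed long enough for a human to override it.

```python
import threading

# Minimal sketch of the 'timer before action' safeguard described above.
# All names and durations are hypothetical illustrations of the concept.

class OverrideGate:
    """Delays an autonomous action so a human operator can veto it."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self._vetoed = threading.Event()

    def veto(self) -> None:
        """Called by the human operator to cancel the pending action."""
        self._vetoed.set()

    def execute(self, action) -> bool:
        """Run `action` unless a veto arrives within the window."""
        self._vetoed.clear()
        # wait() returns True if veto() was called before the window expired.
        if self._vetoed.wait(timeout=self.window):
            return False  # the operator overrode the system
        action()
        return True

# Usage: the system proposes an intercept; the operator has 2 seconds to veto.
gate = OverrideGate(window_seconds=2.0)
threading.Timer(0.5, gate.veto).start()  # simulated operator veto at t=0.5s
fired = gate.execute(lambda: print("interceptor launched"))
print("fired:", fired)  # False: the veto arrived inside the window
```

As noted above, such a gate is not a substitute for MHC, since the human does not positively authorize each engagement; it only preserves a last-resort override.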

Reverse engineering also presents a unique legal issue surrounding the development of DAWS that is absent from the deployment of LAWS. By design, LAWS are made with the intention to attack, while DAWS are not. This distinction prompts a critical question: can an act be categorized as an attack when the underlying intent to cause harm is absent? Under Article 8 of the Rome Statute of the International Criminal Court, intention, or mens rea, is an element of finding a war crime in an attack against civilians. In a reverse-engineered DAWS, responsibility for the attack becomes divided between two parties: the weapon’s user is accountable for the physical act of the offense, the actus reus, while the party manipulating the system holds the mens rea element. So far, no international law governs reverse engineering. This dilemma adds another layer of complexity to accountability for AWS.

Conclusion

Cutting-edge defensive technologies, particularly when integrated with AI, play an indispensable role in upholding humanitarian principles. However, current global governance of AWS seems to overlook the promising potential of such technology, despite the remarkable success evident in the Iron Dome. Resolution L.56 stands as evidence of this denial, as it fails to acknowledge the crucial distinction between DAWS and LAWS. The Iron Dome’s pivotal role in safeguarding civilian lives is a testament to years of continuous development. Restricting the freedom to explore such technology further jeopardizes its promise to enhance civilian protection, as emphasized in the resolution’s preamble. While this article presents a diverse set of arguments justifying the use of DAWS, further elaboration and detailed implementation are undeniably required. Beyond advocating for the promotion and protection of the lawful development of DAWS, comprehensive governance should encompass other essential aspects: clarifying the definition of defense, establishing a legal foundation for cases of reverse engineering, and researching the possibility of further technical restrictions on DAWS.

The UN has placed AWS on its provisional agenda for next year, with the process planned to conclude in a legally binding instrument by 2026. In that forum, the UN plans to involve various parties in taking action on the issue and to revisit Resolution L.56 to develop it further. This article advocates for a strong and explicit recognition of the distinction between DAWS and LAWS within the UN General Assembly’s resolution on AWS. This recognition can be achieved by refining the legal and political language related to AWS, steering clear of the current resolution’s ambiguous interchange between ‘AWS’ and ‘LAWS.’ Additionally, this article emphasizes the imperative for separate legal instruments governing DAWS and LAWS to duly acknowledge their inherent differences in impacting human lives. While reaching international consensus on military and warfare-related laws remains a challenging endeavor, the adoption of soft law by the UN acknowledging the significance of DAWS can carry significant political influence. Such recognition can contribute to the promotion of a peaceful, legally sound, and ethically responsible development of autonomous weapons technology.


*Rizky Citra Anugrah is an S.H. (LL.B.) Candidate at Universitas Gadjah Mada, specializing in international law. Rizky has received numerous awards from several international institutions for his writing and research across various aspects of international studies. As an undergraduate student, he actively engages in multiple international youth organizations, promoting multilateral cooperation through people-to-people diplomacy. The author is grateful for the guidance and support given by Mr. Haekal Al Asyari, S.H., LL.M., during the process of writing this article.

