By Philip Alexander*

I. Introduction

Earlier this year, Google announced a ‘Code Red’ at its California headquarters, instructing its employees to prioritize developing newer, more advanced Artificial Intelligence (AI) projects, with CEO Sundar Pichai describing AI as the next ‘electricity or fire.’ Notably, the spotlight on ChatGPT and its successor GPT-4 has cast AI as a revolutionary technology. The benefits arising from this global movement in medicine, language and engineering are unquestionably significant. However, at the other end of its vast spectrum lies an array of globally disruptive Artificial Intelligence (GDAI) that could have far-reaching implications for life, liberty and humanity. This note examines the rise and effects of such AI and devises an international governance framework that balances automated technology with International Humanitarian Law (IHL) and Algorithmic Accountability. Specifically, I focus on Autonomous Weapon Systems (AWS) as an example of GDAI, where the nexus between automated technology and human rights is clearly established.

The development of international law and the proliferation of military technology have historically moved in lockstep. In 1648, the Peace of Westphalia ended the Thirty Years’ War, a conflict defined by gunpowder weaponry, and laid the groundwork for the modern law of nations. In the aftermath of World War I and the rise of trench warfare, the Permanent Court of International Justice was established. Finally, the devastation of World War II, culminating in the use of nuclear weapons, formed the rational basis behind the conception of the United Nations. In 2023, we exist in a stage of weaponry described as the ‘Third Wave of Warfare,’ where automated, AI-enabled systems define the contours of military conflict. For example, Iran announced the development of a series of automated miniature tanks armed with military-grade weaponry. In another instance, the Turkish defense technology company STM engineered a fully autonomous combat drone, the Kargu-2, capable of precision-guided attacks. These drones were first used in the Libyan civil conflict in 2020 and have since appeared in the Russia-Ukraine armed conflict.

The emerging trend of automated weaponry raises questions about the urgency of developing regulatory mechanisms that govern autonomy. Should the international community push for a framework of soft and hard laws that explicitly defines the limits of automated engineering? As Peter Singer, an expert on military robotics, puts it, drones’ ‘intelligence and autonomy [are] growing [and] the law’s not ready for this.’ In the following section, I examine the legality of AWS and propose a tiered governance framework that reconciles such weaponry with IHL.

II. The Unpredictability of Algorithms

International Humanitarian Law, the body of rules that seeks to limit the effects of armed conflict, contains two principles relevant to the validity of AWS: the principle of precaution and the principle of distinction. These principles guide the conduct of belligerents during armed conflict and have become customary international law through state practice.

The principle of precaution, embodied in Article 57(1) of Additional Protocol I to the Geneva Conventions, states that ‘[i]n the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects.’ The principle emphasizes the continuous and diligent effort required to safeguard civilian lives during military operations. It stresses the importance of taking the measures necessary to avoid or minimize unintended harm to non-combatants, such as confirming that targets are legitimate military objectives, cancelling or suspending attacks if civilians are at risk, and providing warnings when feasible to protect the civilian population.

The Trial Chamber of the International Criminal Tribunal for the former Yugoslavia (ICTY) determined in Prosecutor v. Kupreškić that Article 57 of the 1977 Additional Protocol I had attained the status of customary international law. This conclusion rested on two factors: first, Article 57 elaborated on pre-existing customary norms; second, no State, including those that had not ratified the Protocol, appeared to contest the validity of the provision. The principle also appears as a fundamental axiom in numerous legal documents, military manuals and decisions of international and domestic judicial and quasi-judicial bodies, reflecting its acceptance in state practice.

The principle of distinction requires parties to an armed conflict to distinguish between civilians and combatants, on the premise that ‘the only legitimate object which States should endeavor to accomplish during war is to weaken the military forces of the enemy.’ Described by the ICJ in its Nuclear Weapons Advisory Opinion as an ‘intransgressible [principle] of international customary law,’ it is codified in Articles 48 and 52(2) of Additional Protocol I and forms one of the central tenets of IHL.

The principal concern with AWS and other GDAI that could impede human rights is the unpredictability of the algorithms used to operate such technology. ‘Automatic’ machine systems are programmed to follow fixed rules of the form ‘if X, then Y.’ Such systems do not necessarily pose concerns about predictability. AWS, however, are engineered using machine learning, in which the software constantly responds to external inputs and builds upon its existing knowledge base, much as ChatGPT does. If such software malfunctions at any point during an armed conflict and causes civilian harm, it violates customary international humanitarian law.
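To make that distinction concrete, the sketch below contrasts a fixed ‘if X, then Y’ rule with a behavior learned from data. It is a deliberately simplified, hypothetical illustration in Python (the function names and numbers are invented and bear no relation to real AWS software); the point is that the learned system’s behavior is a product of its training data rather than of rules a reviewer can read directly off the code.

```python
# Illustrative contrast (hypothetical, simplified): a fixed "if X, then Y" rule
# versus a behaviour learned from data. Neither resembles real AWS software.

def rule_based_decision(signal_strength: float) -> str:
    """Fixed rule written by the programmer: its behaviour can be read off the code."""
    return "alert" if signal_strength > 0.8 else "ignore"

def learn_threshold(training_examples: list[tuple[float, str]]) -> float:
    """'Learn' a decision threshold as the midpoint between the two classes' averages."""
    alerts = [x for x, label in training_examples if label == "alert"]
    ignores = [x for x, label in training_examples if label == "ignore"]
    return (sum(alerts) / len(alerts) + sum(ignores) / len(ignores)) / 2

def learned_decision(signal_strength: float, threshold: float) -> str:
    """Same interface as the rule, but the threshold depends entirely on the training set."""
    return "alert" if signal_strength > threshold else "ignore"

# Two different training sets yield two different behaviours for the same input.
threshold_a = learn_threshold([(0.9, "alert"), (0.7, "alert"), (0.2, "ignore")])
threshold_b = learn_threshold([(0.95, "alert"), (0.5, "ignore"), (0.4, "ignore")])
print(learned_decision(0.6, threshold_a), learned_decision(0.6, threshold_b))  # alert ignore
```

Because the learned threshold shifts whenever the training data shift, the same input can yield different decisions, which is precisely the unpredictability at issue here.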

Numerous instances of algorithmic bias and discrimination exist in which AI misperceives information. Yet no party to an armed conflict could be held liable for this type of violation, because the intent and knowledge required to establish a crime under international criminal law would be absent. In the following sections, I first discuss examples of algorithmic bias, providing precedent on the unpredictability of machine learning algorithms. I then devise a framework for AI governance that attempts to bridge some of the gaps in the status quo.

Algorithmic Bias

Algorithms in the context of machine learning are computational methods that use mathematical models to discover and comprehend underlying patterns in data. These algorithms can recognize patterns, classify information, and make predictions based on what they have learned from existing data, known as the training set. There is an excessive reliance on the objectivity of the algorithms employed in artificial intelligence. This phenomenon is known as ‘mathwashing’: the output produced by AI is treated as an absolute value, immune to inaccuracy, because mathematics forms the fundamental basis of AI and is objective by nature. That assumption does not hold, however, as algorithmic bias has been well documented in several cases in the public domain.
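As a minimal, hypothetical sketch of the fit-and-predict pattern described above (using the widely available scikit-learn library and invented toy data, not code from any real deployment):

```python
# A minimal fit/predict sketch with invented toy data: the model's "knowledge"
# is nothing more than the patterns present in its training set.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training set: each row is [age, prior_visits]; label 1 = "high need".
X_train = [[70, 12], [65, 9], [30, 1], [25, 2], [60, 10], [35, 3]]
y_train = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                 # "learning" = extracting patterns from the data
print(model.predict([[68, 11], [28, 2]]))   # predictions simply reflect those patterns
```

Whatever authority the output appears to carry, it is only a reflection of the patterns in the training set; if that data is skewed, the predictions will be too.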

A study conducted in 2019 discovered that an algorithm used by health services provider Optum exhibited racial bias, favoring healthier white patients for additional care and treatment while neglecting sicker Black patients. Black patients made up only 17.7% of those the algorithm flagged for additional care; had the bias been removed, that share would have risen to 46.5%.

In another study, conducted at MIT, researchers found that the error rate of three commercial facial recognition systems was 0.8% for white men but as high as 35% for women of color. Algorithmic bias has also been documented in criminal proceedings. Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a predictive algorithm that assesses recidivism risk, the likelihood that a person will reoffend in the future. The software has been adopted in states such as California, Florida, New York and Wisconsin. Yet a report found that Black offenders were nearly twice as likely to be classified as at risk of reoffending (48%) as white offenders (28%). In 2014, Brisha Borden, a Black woman, was arrested for burglary and petty theft; the COMPAS software labeled her ‘high risk’ for future violent crime. Vernon Prater, a white man with multiple prior criminal charges, was arrested for a comparable offense but was classified as ‘low risk’ for reoffending. Borden has since faced no further criminal charges, while Prater has returned to prison and is currently serving an eight-year sentence.
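At bottom, the disparities cited above are group-wise rate comparisons. A minimal sketch of such an audit, with hypothetical records and field names rather than the actual studies’ data, might look like this:

```python
# Minimal disparity audit: compare how often each group receives a given outcome.
# The records and field names are hypothetical, for illustration only.
from collections import defaultdict

def rate_by_group(records, outcome="high_risk"):
    """Return, per group, the share of records labelled with `outcome`."""
    flagged, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["label"] == outcome:
            flagged[rec["group"]] += 1
    return {g: flagged[g] / totals[g] for g in totals}

records = [
    {"group": "A", "label": "high_risk"}, {"group": "A", "label": "low_risk"},
    {"group": "B", "label": "high_risk"}, {"group": "B", "label": "high_risk"},
    {"group": "B", "label": "low_risk"},  {"group": "A", "label": "low_risk"},
]
print(rate_by_group(records))  # e.g. {'A': 0.33..., 'B': 0.66...}
```

Checks of this kind underpin the findings reported above, although the actual studies involved far richer data and statistical controls.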

These are just some examples of how unreliable artificial intelligence and machine learning can be. Incorporating unpredictable algorithms into military weaponry can have disastrous effects on human life. For this reason, AWS must be regulated so that states with powerful militaries do not make indiscriminate decisions without weighing their potential consequences. In the following section, I propose a two-tiered framework for protecting civilian life that balances the necessity of automated decision-making during armed conflict with principles of algorithmic accountability and IHL.

III. Proposal for the International Governance on AWS

There is an increasing sense of urgency in the algorithmic regulation debate. The Council of Europe recently announced that it is drafting a ‘Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.’ The European Commission, under President Ursula von der Leyen, submitted a proposal for the Artificial Intelligence Act with the intention of ‘addressing the opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behavior of certain AI systems, to ensure their compatibility with fundamental rights and to facilitate the enforcement of legal rules.’ The Biden Administration is also working on AI governance, releasing a Blueprint for an AI Bill of Rights containing ‘principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.’ The momentum behind artificial intelligence provides the right opportunity to introduce further domestic and international laws regulating specific kinds of AI, such as AWS. In the following section, I outline an international governance framework that states can adopt as a possible structure for the regulation of AWS.

The proposed framework consists of two tiers. Tier I suggestions would permit the development and use of AWS during armed conflict, with accompanying restrictions that protect civilian life. These restrictions would expressly outline the limits of AWS in congruence with the rules of IHL. Tier II limitations would impose stricter, enforceable restraints on AWS that states may enact at their discretion, whether unilaterally, bilaterally with another state, or multilaterally with several states. A tiered code would precisely delineate the extent to which AWS are permitted in compliance with IHL, while establishing stricter mechanisms for the attribution of liability.

Tier I

First, states may adopt a declaration that expressly permits the development of machine learning in military weaponry in a way that respects civilian life. Consisting of recommendations, guidelines and non-binding resolutions, this declaration would be a soft law instrument that lays down normative standards on what is and is not expected of member states concerning AWS. The instrument would also clarify which breaches of the rules trigger state responsibility, including the extent, scope and nature of IHL, primarily codified in the four Geneva Conventions and the two Additional Protocols of 1977.

Tier II

Tier II comprises binding, enforceable restrictions based on algorithmic accountability, as opposed to the general rules of IHL, to which states must strictly adhere while engineering AWS algorithms. Algorithmic accountability is the process by which the developers of algorithms are made responsible when an algorithm renders a decision that has a disparate negative impact on an individual or a group. Stronger standards of accountability would significantly improve transparency in AI development, allowing members of the public to better understand what goes into building the algorithms used in AWS.

The narrower approach reflected in such Tier II restrictions would make it significantly easier to trace and attribute liability for accidental or intentionally unlawful conduct. These principles must be enforced alongside sanctions for violations and would operate jointly with the rules of IHL, creating a robust regulatory framework for AWS.

States must be obligated to conduct periodic Algorithmic Impact Assessments (AIAs) at different stages of the AWS life cycle. This achieves two goals.

First, it allows the manufacturers of automated systems to think rationally and methodically about the potential implications of such technology before its implementation. This is especially crucial for technology that could violate individual rights, and it makes it more likely that the final product reflects the principles and values determined in the initial impact assessment.

Second, it ensures the documentation of all decisions made in the development of AWS at different points of its life cycle, improving transparency and accountability to the public. An example of algorithmic impact assessments can be seen in the Algorithmic Accountability Act, introduced by Senator Ron Wyden before Congress in 2022. The Bill seeks ‘[t]o direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.’ In another example, Article 35 of the GDPR imposes Data Protection Impact Assessments ‘[w]here a type of processing […] using new technologies […] is likely to result in a high risk to the rights and freedoms of natural persons.’
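One hypothetical way such life-cycle documentation could be structured in practice is sketched below; the stages and fields are illustrative and are not drawn from the Act, the GDPR or any existing instrument.

```python
# Hypothetical structure for an Algorithmic Impact Assessment entry, recorded at
# each life-cycle stage (design, training, testing, deployment, review).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAEntry:
    stage: str                   # e.g. "design", "training", "deployment"
    decision: str                # the design or deployment decision being recorded
    rationale: str               # why the decision was taken
    identified_risks: list[str]  # risks to civilians or protected groups
    mitigations: list[str]       # measures adopted in response
    assessed_on: date = field(default_factory=date.today)

entry = AIAEntry(
    stage="training",
    decision="Exclude sensor data collected in populated areas from the training set",
    rationale="Reduce the risk of the model learning patterns tied to civilian presence",
    identified_risks=["misclassification of civilian objects"],
    mitigations=["independent review of the curated dataset"],
)
print(entry.stage, entry.assessed_on)
```

A standing record of this kind would give an oversight body something concrete to inspect at each stage of development.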

Another measure that could be enforced is the formation of a bilateral or multilateral consultative commission that conducts regular inspections of a state’s AWS technology, ensuring that it complies with treaty restrictions. This step has been implemented in nuclear arms control treaties. For example, the New START Treaty, a nuclear arms reduction agreement between the United States and Russia, permits 18 on-site inspections per year. An oversight body for AWS algorithms would ensure the compliance of AWS technology with the Tier II rules and obligations outlined in any treaty or agreement formed between states on AI regulation.

IV. Conclusion

The Tier I and Tier II recommendations provide a viable approach to AI governance, preserving the technology’s utility while creating a robust regulatory framework that restricts the misuse of its autonomy. The proposed framework can serve as a structure for future policy decisions regarding AWS and other GDAI. To this end, the international community must deliberate upon these developments with the intention of framing broader domestic and international policy on AI and its intersection with human rights.

*Philip Alexander is a law student at the West Bengal National University of Juridical Sciences, India.
