Algorithmic Detention and International Human Rights Law
Hannah Kannegieter[*]
Two regimes in international law regulate detention: international humanitarian law (“IHL”) and international human rights law (“IHRL”). Both regimes may operate simultaneously and in the same place.[1] The International Court of Justice has explained that “the [non-derogable] protection[s] offered by human rights conventions do not cease in cases of armed conflict.”[2] Detention algorithms are predictive risk assessment algorithms used by militaries and justice systems to determine whom to detain, and for how long. They have been analyzed extensively in the IHL context, with varying levels of optimism as to their legality.[3] This piece explores the legality of detention algorithms under IHRL, using the United States as a case study. It concludes that these algorithms are generally unlawful, and also unwise.
Article 9 of the 1948 Universal Declaration of Human Rights (“UDHR”) instructs that “no one shall be subjected to arbitrary arrest, detention or exile.”[4] Similarly, the International Covenant on Civil and Political Rights (“ICCPR”), the Convention on the Rights of the Child (“CRC”), and various other international instruments create requirements, in addition to those imposed by IHL, with respect to grounds for detention and processes for extension.[5] Like the UDHR, both the ICCPR and the CRC mandate that no individual may be subject to arbitrary arrest or detention. The ICCPR’s mandates illustrate IHRL’s general policy regarding detention.
Most human rights under the ICCPR are not absolute, but may be limited “in a time of public emergency which threatens the life of the nation.”[6] Both international armed conflicts (“IACs”) and non-international armed conflicts (“NIACs”) can trigger such a state of emergency. Nevertheless, the UN Human Rights Committee (“HRC”) has found that the prohibition on arbitrary deprivation of liberty is non-derogable, even in states of emergency, indicating its critical importance. Thus, regardless of a state’s intelligence or national security interest in detaining an individual, it must ensure that its detention procedures are not arbitrary.[7]
The ICCPR includes specific procedural requirements states must follow when detaining individuals, some of which raise questions for algorithmic detention. For example, “anyone arrested or detained on a criminal charge shall be brought promptly before a judge or other officer authorized by law to exercise judicial power and shall be entitled to trial within a reasonable time or to release.”[8] Could an algorithm alone be considered an “officer” authorized to consider the validity of an individual’s detention? Additionally, the ICCPR mandates that anyone deprived of liberty be able to “take proceedings before a court.”[9] Could a court simply refer a case to an algorithm for an automatic determination? Would this be lawful under IHRL, given that “proceedings before a court” typically involve deliberations by one or more human judges? The ICCPR itself does not contain definitions of “judge” or “court,” so advocates of detention algorithms could posit that an algorithm satisfies the ICCPR’s procedural requirements. Opponents counter that the lack of data in an armed conflict setting renders algorithmic detention determinations inherently arbitrary and thus violative of the ICCPR despite any procedural steps taken.[10] It remains to be seen how courts and states will interpret these terms. Interpreting the term “judge” to include detention algorithms, however, would require a significant deviation from the term’s traditional meaning.
As discussed above, when designing a detention algorithm, designers must ensure that the algorithm’s decisions are not “arbitrary.” In Marques de Morais v. Angola, the HRC explained that arbitrariness must not “be equated with ‘against the law,’ but must be interpreted more broadly to include elements of inappropriateness, injustice, lack of predictability, and due process of law.” The HRC continued that “remand in custody must not only be lawful” under domestic law, “but reasonable and necessary in all circumstances, for example to prevent flight, interference with evidence or the recurrence of a crime.”[11] These considerations are similar to those present in U.S. domestic criminal bail and parole determinations.[12] Nevertheless, the HRC is clear that these requirements extend beyond domestic law: the detention must be necessary. Put another way, an algorithm would need to establish that, absent detention, an individual would flee, tamper with evidence, or reoffend. Concerns about limited and unstable data in IACs and NIACs are relevant here, because unreliable data increases the risk that detention decisions would be arbitrary. Where law requires detention “only when absolutely necessary or for imperative reason of security, deferring too much to a predictive model that is likely to generate inaccurate results” about an individual’s security risk “could be seen as inconsistent with a state’s IHL obligations” as well as its IHRL obligations.[13]
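To make concrete the gap between a statistical prediction and the HRC’s necessity standard, consider the following deliberately simplified sketch, which encodes the Marques de Morais grounds (flight, interference with evidence, recurrence of a crime) as a threshold rule. Every name and number in it, from the `RiskEstimate` fields to the 0.9 threshold and the `necessity_review` function, is an illustrative assumption, not a description of any deployed or proposed system.

```python
# Hypothetical sketch: encoding the Marques de Morais necessity grounds
# as a threshold rule. All names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RiskEstimate:
    p_flight: float      # predicted probability of flight
    p_tampering: float   # predicted probability of interference with evidence
    p_reoffense: float   # predicted probability of recurrence of a crime
    data_quality: float  # 0.0 (sparse, unstable data) to 1.0 (rich, verified data)

def necessity_review(risk: RiskEstimate, threshold: float = 0.9) -> str:
    # A high predicted probability is not the same as the HRC's requirement
    # that detention be "reasonable and necessary in all circumstances":
    # a model can say an outcome is statistically likely, but it cannot
    # establish that no less restrictive measure would suffice.
    if risk.data_quality < 0.5:
        return "refer to human review: unreliable inputs make any score arbitrary"
    if max(risk.p_flight, risk.p_tampering, risk.p_reoffense) < threshold:
        return "release or impose less restrictive measures"
    return "refer to human review: threshold met, necessity still unestablished"

# Typical armed conflict scenario: a high flight score built on sparse data.
print(necessity_review(RiskEstimate(0.95, 0.2, 0.3, data_quality=0.3)))
```

Notably, even this friendly formulation never outputs “detain”: the legal judgment of necessity, including whether less restrictive measures would suffice, remains outside anything the score itself can express.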
In an IAC or a NIAC, a nation seeking to detain an individual would often lack objective data about that person, including a history of his or her past behavior. Without this data, an algorithm could not reliably predict an individual’s risk level. It would instead have to depend on data points like the person’s gender or nationality, and those factors’ correlation with risk levels. Relying on such data to make what should be an individualized risk assessment is dangerous, and it would violate IHRL because group-based determinations of this kind are inherently arbitrary. By contrast, where a state already has extensive information about an individual’s background, a detention algorithm could more plausibly produce a non-arbitrary decision that complies with IHRL. Relatedly, an individual’s security risk is not synonymous with his or her risk of reoffending, and it is unclear how an algorithm would weigh these distinct factors to justify detaining an individual.
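A second hypothetical sketch illustrates this group-proxy problem. If the only usable inputs are demographic, then any two individuals who share those demographics necessarily receive identical scores, which is the opposite of an individualized assessment. The feature names and weights below are invented for illustration.

```python
# Hypothetical sketch of the group-proxy problem: when individualized
# history is unavailable, a model can only score group membership.
# Feature names and weights are invented for illustration.

import math

# Illustrative weights over the only features observable in theater.
WEIGHTS = {"male": 0.8, "nationality_X": 1.1}
BIAS = -2.0

def risk_score(features: dict[str, float]) -> float:
    """Logistic score over whatever features are actually observed."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Two different people, about whom the state knows only demographics:
person_a = {"male": 1.0, "nationality_X": 1.0}
person_b = {"male": 1.0, "nationality_X": 1.0}

# Identical inputs yield identical outputs: the "assessment" is of a
# group, not an individual, which is the arbitrariness concern above.
assert risk_score(person_a) == risk_score(person_b)
print(risk_score(person_a))  # ~0.475 for both individuals
```

The assertion passes by construction: the model scores membership in a group rather than the conduct or circumstances of a person, which is precisely the Article 9 arbitrariness concern.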
Detention algorithms have also generated significant problems in the domestic criminal context. For example, criminal defendants in the United States have a constitutional right to receive sentences based on their own actions, rather than the actions of other similar individuals.[14] In State v. Loomis, the Wisconsin Supreme Court allowed sentencing courts to consider a predictive risk score only subject to cautionary limitations, out of concern that undue reliance on such scores could violate a defendant’s due process rights.[15] It would therefore make sense for the United States to be hesitant to let those problems play out on an international stage. Accordingly, even if the United States military were to develop technology for detention algorithms, it would not necessarily use it. Other states and civil society groups might advocate forcefully against using detention algorithms, as they have done in the case of fully autonomous weapons.[16] And as of yet, no nation has deployed fully autonomous weapons technology. Therefore, though many scholars have warned about the possibility, the U.S. military’s creation and use of detention algorithms is not inevitable.
Finally, the United States already has detailed processes in place for making detention determinations.[17] Assuming that the United States does develop detention algorithms, how would such algorithms map onto the existing procedure for making such decisions? The existing processes require agency deputies to evaluate factors including whether the suspect’s capture would further U.S. counterterrorism strategy and how the proposed action would implicate American regional and international political interests.[18] These determinations seem better suited to seasoned state officials familiar with the nuances of U.S. foreign policy. It is difficult to see where an algorithm could fit into these steps of the evaluation process. The U.S. procedure also requires an analysis of “feasibility of capture and risk to U.S. personnel.”[19] Similarly, in targeting, NATO follows a six-step process that includes feasibility calculations.[20] An algorithm would be far less capable than experienced officials of navigating these complexities.
Artificial intelligence and algorithmic calculations could play a helpful role in determining the feasibility of a capture operation. Yet AI should not take on the role of a review board in evaluating continued detention, or in selecting individuals for capture. Moreover, algorithmic reviews risk exacerbating biases while hiding behind a veil of objectivity. The power of this veil is itself a major driver of the push to employ detention algorithms, precisely because it makes decisions harder to oppose and individuals harder to hold responsible. Legal considerations aside, detention algorithms are inappropriate because they would likely seriously limit fairness for prospective and existing detainees. Time and resources would be better spent improving existing, human mechanisms of analysis and review.
The use of detention algorithms to make decisions about (1) which individuals to capture and detain and (2) whether to extend an individual’s detention would likely place a state in violation of IHRL. It is difficult to imagine how these algorithms could avoid being “arbitrary” without possessing extensive information about an individual. Where a state does possess extensive information about a current or prospective detainee, it should make its detention determination on the basis of that data, using existing processes that comply with international law. While states might be able to use detention algorithms to assist humans with decision making, the risk of automation bias makes it best to steer clear of detention algorithms altogether.
[*] Harvard Law School, J.D. 2020
[1] See Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, Advisory Opinion, 2004 I.C.J. Rep. 136, ¶ 106 (July 9) (considering whether Israel’s actions in the Palestinian territory were lawful under both IHL and IHRL).
[2] Id.
[3] See, e.g., Tess Bridgeman, The Viability of Data-Reliant Predictive Systems in Armed Conflict Detention, ICRC Blog (Apr. 8, 2019) [hereinafter “Bridgeman”]; Ashley Deeks, Detaining by Algorithm, ICRC Blog (Mar. 25, 2019).
[4] Universal Declaration of Human Rights art. 9, G.A. Res. 217A (III), U.N. Doc. A/810, at 71 (Dec. 10, 1948).
[5] See International Covenant on Civil and Political Rights art. 9, Mar. 23, 1976, 999 U.N.T.S. 171 (“ICCPR”); Convention on the Rights of the Child art. 37, Sept. 2, 1990, U.N. Doc. A/44/49. Domestic law has codified many provisions of these treaties. Other sources, including military manuals, also discuss unlawful detention. See, e.g., ICRC, Customary IHL Database, Rule 99: Deprivation of Liberty; U.S. Department of Justice, Procedures for Approving Direct Action Against Terrorist Targets Located Outside the United States and Areas of Active Hostilities (May 22, 2013).
[6] ICCPR art. 4(1).
[7] But see Brogan and Others v. the United Kingdom, App. No. 11209/84, ¶¶ 63-65 (Eur. Ct. H.R. 1988) (concluding that a detention cannot be arbitrary if the arrested person is promptly released and there is no intention to place the detention under judicial control).
[8] ICCPR art. 9(3) (emphasis added).
[9] ICCPR art. 9(4).
[10] See Bridgeman, at 4.
[11] Human Rights Committee, Marques de Morais v. Angola, U.N. Doc. CCPR/C/83/D/1128/2002 (2005); see also Human Rights Committee, Gorji-Dinka v. Cameroon, U.N. Doc. CCPR/C/83/D/1134/2002 (2005); Human Rights Committee, van Alphen v. the Netherlands, U.N. Doc. CCPR/C/39/D/305/1988 (1990).
[12] Bridgeman, at 2–3.
[13] Id. at 3.
[14] Partnership on AI, Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System (2019), https://www.partnershiponai.org/wp-content/uploads/2019/04/Report-on-Algorithmic-Risk-Assessment-Tools.pdf (last visited Nov. 1, 2020).
[15] See State v. Loomis, 881 N.W.2d 749 (Wis. 2016), cert. denied sub nom. Loomis v. Wisconsin, 137 S. Ct. 2290 (2017).
[16] See, e.g., Harvard Law School International Human Rights Law Clinic and Human Rights Watch, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots (Aug. 2018).
[17] See, e.g., U.S. Department of Justice, Procedures for Approving Direct Action Against Terrorist Targets Located Outside the United States and Areas of Active Hostilities (May 22, 2013).
[18] Id. at 10.
[19] Id.
[20] Merel Ekelhof, Lifting the Fog of Targeting: “Autonomous Weapons” and Human Control through the Lens of Military Targeting, 71 Naval War College Rev. 61 (2018).