Digitalizing Discrimination: How Actuarial Risk Assessment Instruments Perpetuate Social Inequality 

By: Taqbir Huda*

This article was selected as the winner of the 2025 HHRJ Student Essay Contest.

INTRODUCTION

Actuarial risk assessment instruments (ARAIs) are increasingly used to inform policing, sentencing, and parole decisions in liberal democracies such as the United States and the United Kingdom. However, growing evidence suggests that ARAIs perpetuate systemic biases, disproportionately labelling ethno-racial minorities, such as Black individuals, as high-risk, while disproportionately categorizing white individuals as low-risk. This algorithmic bias raises serious human rights concerns, particularly for the right to equality and the right to liberty of defendants within the criminal justice system. By overpredicting criminality for minority groups, ARAIs contribute to increased surveillance, harsher sentencing, and reduced access to bail or parole, leading to greater deprivation of liberty and unequal treatment under the law. This essay argues that ARAIs reproduce social inequality when risk is managed through punitive measures, such as incarceration, rather than through supportive interventions, like risk-reductive treatment and services. To safeguard human rights, this essay calls for a non-punitive politics of risk management that treats risk as an indicator of unmet needs, to be addressed through state support and services. In the first part of this essay, I demonstrate how racialized datasets and risk factors make algorithmic bias an inherent feature of ARAIs. In the second part, I show how a punitive politics of risk management allows this bias to deepen social inequality and violate fundamental rights. In the third part, I propose a human rights-centered, non-punitive approach to risk management, utilizing ARAIs in ways that address rather than exacerbate social injustices.

THE INESCAPABLE ALGORITHMIC BIAS OF ARAIS

Since ‘risk’ has emerged as ‘an umbrella term for widely differing hazards and degrees of liability’[1], it is important to note at the outset that risk assessment in the context of ARAIs generally refers to the algorithmic prediction of a given individual’s likelihood of reoffending.[2] Some ARAIs may measure the likelihood of being (re)arrested or charged, while others may measure the likelihood of being (re)convicted of a (serious) crime in the future.[3] The rise of such actuarial prediction marks a shift from the ‘old penology’ focused on individual guilt to a ‘new penology’ aimed at managing groups deemed risky through actuarial justice, blending the preventive with the punitive.[4]

Despite the rapid technological advancements in ARAIs, their reliability is deeply contested. Should an individual be classified as risky simply because they happen to share characteristics with a group of past recidivists? To what extent can the input factors and data sources ARAIs rely on be considered scientifically objective? Hannah-Moffat and Montford argue that the input factors within ARAIs are not race-neutral correlates of recidivism as commonly assumed, but rather ‘correlates of recidivism within a largely White male prison population’ which are therefore based on ‘norms of whiteness’.[5] They denounce the existing ‘penology of racial innocence’, where the race neutrality of social practices and institutions, including ARAIs, is assumed until proven otherwise. They rightly call for a ‘penology of racial accountability’, which recognises that racialisation is unavoidable since the data ARAIs rely on and the social practices which shape such data are inherently racialised. Therefore, ARAIs cannot be racially neutral.

Furthermore, the disproportionate interaction of racial and ethnic minorities with the criminal justice system inevitably means that crime datasets themselves become racialised.[6] As a result, criminal history becomes a highly racialised variable, and using it as a risk factor in algorithmic prediction is bound to reproduce racial inequalities. Therefore, ARAIs are doubly racialised: first, they include input factors that are based on the norms of whiteness, and second, they use highly racialised risk factors such as criminal history. When such racialised data are unquestioningly used in ARAIs, risk reconstitutes race, and facially neutral risk factors become pernicious proxies for race and entrench algorithmic bias, threatening the defendant’s rights to equality and fair trial.[7]
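
To make the proxy effect concrete, the following minimal Python simulation, offered purely as an illustrative sketch rather than a model of any real ARAI, gives two groups identical underlying offending but different probabilities that an offence is recorded as an arrest; all group labels, rates, and the scoring rule are hypothetical assumptions. A facially race-neutral score built on prior arrests then systematically diverges between the groups.

```python
import random

random.seed(0)

def simulate_prior_arrests(n, offence_rate, arrest_prob):
    """Each person has five offending opportunities; offending is identical
    across groups, but the chance an offence is recorded as an arrest is not."""
    arrests = []
    for _ in range(n):
        offences = sum(random.random() < offence_rate for _ in range(5))
        recorded = sum(random.random() < arrest_prob for _ in range(offences))
        arrests.append(recorded)
    return arrests

# Hypothetical assumption: identical offending (30%), but Group B is policed
# twice as intensively, so each offence is twice as likely to be recorded.
group_a = simulate_prior_arrests(10_000, offence_rate=0.3, arrest_prob=0.3)
group_b = simulate_prior_arrests(10_000, offence_rate=0.3, arrest_prob=0.6)

def risk_score(prior_arrests):
    """A facially race-neutral rule: two risk points per recorded arrest."""
    return min(10, 2 * prior_arrests)

mean_a = sum(map(risk_score, group_a)) / len(group_a)
mean_b = sum(map(risk_score, group_b)) / len(group_b)
print(f"Mean risk score, Group A: {mean_a:.2f}")  # roughly 0.9
print(f"Mean risk score, Group B: {mean_b:.2f}")  # roughly 1.8
```

Even though the scoring rule never mentions group membership, the heavier policing of Group B flows through ‘criminal history’ into systematically higher risk scores, which is precisely how a neutral-seeming variable operates as a proxy.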

This algorithmic bias has the potential to exacerbate social inequality in two main ways. First, it will overpredict criminality for ethno-racial minorities who are already disadvantaged by socioeconomic vulnerabilities and disproportionate criminal justice contact, such as Black individuals, through disproportionate rates of false positives. Second, it will underpredict criminality for those with socioeconomic advantage and racial privilege, such as white individuals, through disproportionate rates of false negatives.[8] This is precisely the highly controversial ‘machine bias’ that ProPublica exposed in the risk assessments made by COMPAS, a widely used ARAI in the US.[9] Most ARAIs, such as COMPAS, measure recidivism in terms of rearrest instead of conviction.[10] Since arrests can be based on mere suspicion, conviction (which is based on proof of guilt) provides a more accurate measure of reoffending. However, an evaluation of a widely used ARAI in the UK, OASys—which measures recidivism by convictions—shows that this measure can still have racially biased predictive accuracy.[11] If algorithmic bias is inherent to ARAIs, is it bound to reproduce social inequality? That is the underdiscussed (political) question to which we now turn.
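
Before turning to that question, it helps to see how the error rates at the heart of the ProPublica analysis are computed. The sketch below is a toy reconstruction: the records are invented, and the counts are merely chosen to echo the rough pattern ProPublica reported (higher false positive rates for Black defendants, higher false negative rates for white defendants), not drawn from its data or code.

```python
from dataclasses import dataclass

@dataclass
class Record:
    group: str            # demographic group label (hypothetical)
    predicted_high: bool  # tool classified the person as high-risk
    reoffended: bool      # observed recidivism during follow-up

def error_rates(records, group):
    """Return (false positive rate, false negative rate) for one group.

    FPR: share of non-reoffenders wrongly labelled high-risk.
    FNR: share of reoffenders wrongly labelled low-risk.
    """
    rs = [r for r in records if r.group == group]
    non_reoffenders = [r for r in rs if not r.reoffended]
    reoffenders = [r for r in rs if r.reoffended]
    fpr = sum(r.predicted_high for r in non_reoffenders) / len(non_reoffenders)
    fnr = sum(not r.predicted_high for r in reoffenders) / len(reoffenders)
    return fpr, fnr

# Invented toy counts: errors fall in opposite directions for the two groups.
records = (
    [Record("B", True, False)] * 45 + [Record("B", False, False)] * 55 +
    [Record("B", True, True)] * 72 + [Record("B", False, True)] * 28 +
    [Record("W", True, False)] * 23 + [Record("W", False, False)] * 77 +
    [Record("W", True, True)] * 52 + [Record("W", False, True)] * 48
)

for g in ("B", "W"):
    fpr, fnr = error_rates(records, g)
    print(f"Group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
# Group B: FPR=0.45, FNR=0.28  (more often wrongly labelled high-risk)
# Group W: FPR=0.23, FNR=0.48  (more often wrongly labelled low-risk)
```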

SOCIAL INEQUALITY AND THE PUNITIVE POLITICS OF RISK

Risk assessment and risk management are two distinct stages of actuarial justice: while risk assessment predicts future crime based on supposedly objective calculations, risk management involves inherently political decisions that shape how identified risks are addressed.[12] Rising penal populism fuels demands for public protection from dangerous ‘others,’ whose basic human rights to be presumed innocent until proven guilty and to proportionate punishment become ‘legal-luxuries ill-suited to present perils’.[13] This ‘locks in’ future policy directions, precluding alternative and more effective strategies for crime risk management.[14]

Why should ‘high risk’ individuals identified by ARAIs be dealt with through increased surveillance by police and harsher sentences by judges, rather than through non-punitive state interventions such as targeted risk-reductive treatment and services? After the fall of the rehabilitative ideal in the 1980s, there was a general shift in the politics of punishment and risk from a welfare-oriented approach (where the state takes responsibility to treat and reduce the risk) to a neoliberal approach (where the individual is held responsible for being risky).[15] Through a process of ‘responsibilization’, neoliberal states transferred the burden of solving the underlying causes of crime from state authorities to the individual.[16] Individual life choices and circumstances, rather than the structural causes of crime, become the objects of scrutiny. This in turn de-responsibilizes state actors, such as policy makers and criminal justice officials, even though their individual and collective choices can have criminogenic effects.[17]

When the state prioritizes certain risks as crime control issues and manages them through punitive measures, algorithmic biases in risk assessments disproportionately harm racial minorities, leading to increased surveillance, harsher sentencing, and reduced chances of bail or parole. This creates a feedback loop where high-risk classifications, often assigned to minorities, perpetuate cycles of criminalization and incarceration, while low-risk classifications, more commonly given to white individuals, confer systemic advantages throughout the justice process.[18]

Since ARAIs inform policing, sentencing and parole decisions, a punitive approach to risk management frequently entails deprivation of liberty when someone is deemed to be sufficiently risky.[19] ARAIs can erode the right to liberty by justifying indeterminate sentences for individuals deemed high-risk. In sum, the real danger is that if white defendants are disproportionately categorised as low risk, and minority defendants are disproportionately categorised as high risk, ARAIs subject racial minorities to increased surveillance, harsher sentences and reduced likelihood of parole or bail, resulting in far greater deprivation of liberty.[20] Additionally, because ARAIs produce racially disparate classifications through algorithmic bias, they severely threaten the fundamental right to equality for minority defendants by causing differential criminal justice outcomes.[21] Even if ARAIs produce a net reduction in the number of detainees by diverting low-risk offenders away from prison, as their proponents assert in their defence,[22] is such a reduction desirable (or even justifiable) when it comes at the cost of amplifying social and racial inequality? Those who believe in the fundamental right to equality will surely answer in the negative.

TOWARD A NON-PUNITIVE POLITICS OF RISK MANAGEMENT

The Law Society of England and Wales conducted an extensive review of the use of algorithmic systems in the criminal justice system, including ARAIs, and found ‘significant challenges of bias and discrimination, opacity and due process’.[23] It proposed several recommendations to improve accountability, such as the introduction of an Information Commissioner to proactively examine algorithmic systems and the establishment of a Code of Practice for the use of algorithmic systems in the criminal justice system in line with the Data Protection Act 2018. The Law Society also called for the establishment of a national register of algorithmic systems in criminal justice, including those not using personal data, with standardised metadata on transparency audits, discrimination audits, and datasets. Separately, it recommended provisions to allow civil society organisations to file super-complaints and seek judicial remedies on behalf of affected groups. However, when applied to ARAIs, these measures may offer merely a veneer of accountability within a punitive politics of risk that is bound to reproduce social inequalities.

The question of how the state ought to manage the risk of crime identified through ARAIs is part of the wider question about the restraints that must be placed on the ‘Preventive State’.[24] Crime risk management, like risk management in other areas of public policy, has arguably been influenced by the precautionary principle. Originally developed in the context of averting environmental disaster, the principle justifies decisive action despite uncertainty about the anticipated harm.[25] Principle 15 of the Rio Declaration on Environment and Development provides a universal basis for the precautionary principle, which has become hugely influential in shaping several areas of public policy and the wider context of risk management. It states: ‘where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation’. Legal academics have questioned the appropriateness of extending a principle devised to counter inaction on global warming to justify coercive action against individuals deemed risky, despite a ‘lack of full scientific certainty’.[26]

The precautionary principle calls for risk management, but does not mandate punitive measures. States using ARAIs to impose penalties on high-risk individuals, rather than offering support, are making a political choice favoring neoliberal penality over welfarist support—a choice that inevitably amplifies social inequality. If, instead, the state adopted a socio-structural view of risk and applied a supportive, needs-based response, the impact of algorithmic bias on social inequality could be reversed.[27] Providing support to high-risk individuals would turn over-predictions for racial minorities, like false positives for Black individuals, into opportunities to reduce inequality rather than reinforce it.

The shift from second-generation ARAIs based on static factors to third and fourth generation ARAIs focusing on dynamic risk factors reflects a move toward recognizing that risk can be treated.[28] While static factors like gender and criminal history are unchangeable, dynamic factors such as substance abuse and employment can be addressed through interventions.[29] Third-generation ARAIs were initially developed as needs assessment tools for probation planning, but their focus has since narrowed to identifying criminogenic needs—modifiable behaviours linked to reoffending.[30]

Neoliberal responsibilization individualizes dynamic risk factors, treating them as personal deficits and predictors of reoffending while ignoring broader socio-structural causes like inequality. The UK Government defines criminogenic needs as ‘dynamic risk factors that are directly linked to criminal behaviour’, with OASys measuring eight such factors, including accommodation, employability, and substance misuse.[31] Based on these, various programs aim to change individual behavior rather than addressing the underlying social inequalities that contribute to crime.[32]

What role should ARAIs play in criminal justice under a non-punitive approach to risk management? While they should play no role at the policing stage due to their potential for racial bias, some critics accept that ARAIs can be used at the back end of sentencing decisions (but not the front end). The front end involves using ARAIs to guide the type and duration of the sentence imposed by the judge, whereas the back end refers to assigning offenders to risk-reductive programmes.[33] Therefore, while the use of ARAIs at the front end of sentencing is bound to perpetuate social inequality due to algorithmic racial bias and should accordingly be prohibited,[34] their back-end use aligns with a needs-based approach to risk and can be retained. This demarcated view on the use of ARAIs in sentencing was endorsed by the former Attorney General of the United States, Eric Holder.[35]

Does that mean front-end sentencing decisions should revert to clinical risk assessment by humans? Can human predictive sentencing be free of bias and pose fewer human rights concerns? Coglianese and Lai have argued that the choice is not between algorithmic prediction and human discretion, but rather between ‘human algorithms’ and digital ones.[36] If, per their argument, human decision making is indeed an algorithmic process, then algorithmic bias would remain a threat even if the use of ARAIs in the criminal justice system were jettisoned. Indeed, one study predating the widespread adoption of ARAIs found that English probation officers constructed ‘Asianness’ in ways that led them to interpret the same variable (such as family ties) as increasing risk for Asian offenders but decreasing it for White offenders, vividly showing that bias is certainly not exclusive to ARAIs.[37]

There are several reasons why human assessment (even if algorithmic and susceptible to bias) should be preferred over ARAIs. Clinical assessment is methodologically rigorous since it must comply with disciplinary standards and be subject to peer review, whereas ARAIs and the various sources of data they use face no such restrictions.[38]

Although proponents of ARAIs over human discretion claim that the former offer more accountability since their accuracy can be arithmetically verified, a recent systematic review of thirty-six validation studies relating to eleven prominent ARAIs (including COMPAS and OASys) paints a different picture. The review found that most validation studies had a high risk of bias and ‘did not report key performance measures, such as calibration and rates of false positives and negatives’, which are key to identifying potential racial bias.[39] Instead, the studies measured performance using the area under the curve (AUC), which nevertheless showed only moderate discriminative ability for most tools.[40]
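
The review's point about missing calibration and error-rate reporting can be made concrete with a minimal numerical sketch: two groups can yield an identical AUC while a single shared high-risk cutoff produces very different false positive rates. The scores below are invented solely for illustration and do not come from any validation study.

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a randomly chosen reoffender
    scores above a randomly chosen non-reoffender (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Invented 0-10 risk scores: within each group the tool separates
# reoffenders from non-reoffenders perfectly, so AUC is 1.0 for both...
a_reoffenders, a_others = [6, 7, 8, 9], [1, 2, 3, 4]
b_reoffenders, b_others = [8, 9, 10, 10], [3, 4, 5, 6]
print(auc(a_reoffenders, a_others), auc(b_reoffenders, b_others))  # 1.0 1.0

# ...yet one shared high-risk cutoff wrongly flags a quarter of Group B's
# non-reoffenders and none of Group A's, a disparity AUC alone cannot reveal.
cutoff = 6
fpr_a = sum(s >= cutoff for s in a_others) / len(a_others)  # 0.00
fpr_b = sum(s >= cutoff for s in b_others) / len(b_others)  # 0.25
print(fpr_a, fpr_b)
```

This is why reporting only AUC, as most of the reviewed validation studies did, cannot surface the group-wise error disparities that matter for identifying racial bias.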

Both sides in the debate over replacing ARAIs with human assessment share an a priori assumption: that predictive sentencing should be retained. We need to take a step back and ask whether predictive sentencing, be it through human reasoning or algorithmic prediction, can be justified at all from a human rights perspective. As Tonry argues, predictive sentencing violates four major principles of just punishment: proportionality, fairness, equal treatment, and parsimony.[41] By sentencing high-risk individuals to more severe penalties than their counterparts who are classified as low-risk, predictive sentencing directly violates the principles of proportionality and parsimony. At the same time, due to the algorithmic bias which causes racially disparate sentencing outcomes, predictive sentencing violates the right to equal treatment. A further concern is that ARAIs are proprietary, so their input factors are not subject to public disclosure, and the defendant is denied the right to know the information that is being used to assess their risk.[42] Additionally, ARAIs suffer from a lack of adequate validation, as already noted. Both of these factors undermine the principle of fairness. Therefore, Tonry concludes, predictive sentencing cannot be justified in a legal system which respects these four principles of justice, and so abandoning it altogether would be the ‘correct approach’. Human rights defenders should surely come to the same conclusion.

CONCLUSION

In this essay, I have argued that while algorithmic bias is inherent in ARAIs, the politics of risk management ultimately determines its impact on human rights. The shift from welfarist rehabilitation to punitive, neoliberal penality has allowed ARAIs to reproduce social inequalities, particularly through the overprediction of criminality for racial minorities. This bias subjects minority defendants to increased surveillance, harsher sentences, and reduced chances of bail or parole, leading to greater deprivation of liberty and violating their fundamental rights to equality and fair treatment. ARAIs can justify indeterminate sentences for those deemed high-risk, further eroding the right to liberty. However, a human rights-centered approach is possible. By adopting a needs-based, supportive politics of risk, states can treat higher risk as an indicator of greater need, ensuring that interventions focus on assistance rather than punishment. In such a framework, algorithmic bias would lead to increased support for marginalized communities instead of deepening their criminalization. ARAIs should guide supportive interventions, not punitive measures like policing, sentencing, or detention. Some may summarily reject the very idea that ARAIs should be abandoned in punitive decision-making since they have become so commonplace in many criminal justice systems. However, that would be too defeatist. To quote Tonry: ‘Better that officials treating other human beings unjustly be reminded again and again that that is what they are doing. Someday they may want and decide to do better.’[43]


*LLM Candidate, Harvard Law School; MSc in Criminology and Criminal Justice, Oxford. Email: thuda@llm25.law.harvard.edu


[1] Andrew Ashworth & Lucia Zedner, Preventive Justice 121 (Oxford Univ. Press 2014).

[2] Ellen van Ginneken, The Use of Risk Assessment in Sentencing, in Predictive Sentencing: Normative and Empirical Perspectives 489 (J.W. de Keijser, J.V. Roberts & J. Ryberg eds., Hart Publ’g 2019).

[3] Seena Fazel et al., The Predictive Performance of Criminal Risk Assessment Tools Used at Sentencing: Systematic Review of Validation Studies, 81 J. Crim. Just. 101902 (2022), Table 2.

[4] Malcolm Feeley & Jonathan Simon, The New Penology: Notes on the Emerging Strategy of Corrections and Its Implications, 30 Criminology 449 (1992).

[5] Kelly Hannah-Moffat & Krystle S. Montford, Unpacking Sentencing Algorithms: Risk, Racial Accountability and Data Harms, in Predictive Sentencing: Normative and Empirical Perspectives 175 (J.W. de Keijser, J.V. Roberts & J. Ryberg eds., Hart Publ’g 2019).

[6] Michael Tonry, Predictions of Dangerousness in Sentencing: Déjà Vu All Over Again, Crime & Just. (2019).

[7] Hannah-Moffat & Montford, Unpacking Sentencing Algorithms, supra note 5.

[8] Tonry, Predictions of Dangerousness, supra note 6.

[9] Julia Angwin et al., Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; See generally: Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218 (2019).

[10] Id.

[11] See for example: UK Ministry of Justice, A Compendium of Research and Analysis on the Offender Assessment System (OASys) 2009–2013, 45 (2015), which found that OASys under-predicted reoffending for White offenders but over-predicted it for Asian and Black offenders.

[12] Lucia Zedner, Neither Safe Nor Sound? The Perils and Possibilities of Risk, Can. J. Criminology & Crim. Just. 423, 426 (2006).

[13] Lucia Zedner, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age by Bernard E. Harcourt, 11 New Crim. L. Rev. 359, 361 (2008).

[14] Jonathan Simon, Reversal of Fortune: The Resurgence of Individual Risk Assessment in Criminal Justice, 1 Annu. Rev. L. & Soc. Sci. 397, 417 (2005).

[15] Feeley & Simon, The New Penology, supra note 4; Ginneken, Use of Risk Assessment, supra note 2.

[16] Id.; see also David Garland, The Culture of Control (Oxford Univ. Press 2001).

[17] See: Kelly Hannah-Moffat, Criminogenic Need and the Transformative Risk Subject: Hybridizations of Risk/Need in Penality, 7 Punishment & Soc’y 29 (2004).

[18] Simon, Reversal of Fortune, supra note 14; Hannah-Moffat, Criminogenic Need, supra note 17. 

[19] Ashworth & Zedner, Preventive Justice, supra note 1.

[20] Tonry, Predictions of Dangerousness, supra note 6.

[21] Id.; see also Pamela Ugwudike, Digital Prediction Technologies in the Justice System: The Implications of a “Race-Neutral” Agenda, 24 Theoretical Criminology 482 (2020).

[22] See for example: John Monahan & Jennifer L. Skeem, Risk Assessment in Criminal Sentencing, 12 Annu. Rev. Clin. Psychol. 489 (2016); Jennifer L. Skeem & Christopher T. Lowenkamp, Risk, Race, & Recidivism: Predictive Bias and Disparate Impact, 54 Criminology 680 (2016); Christopher Slobogin, A Defence of Modern Risk-Based Sentencing, in Predictive Sentencing: Normative and Empirical Perspectives 123 (J.W. de Keijser, J.V. Roberts & J. Ryberg eds., Hart Publ’g 2019).

[23] The Law Soc’y of Eng. & Wales, Algorithms in the Criminal Justice System 70 (2019), https://www.lawsociety.org.uk/topics/research/algorithm-use-in-the-criminal-justice-system-report.

[24] Lucia Zedner & Andrew Ashworth, The Rise and Restraint of the Preventive State, 2 Annu. Rev. Criminology 429 (2019).

[25] Jude McCulloch & Dean Wilson, Pre-Crime: Pre-Emption, Precaution and the Future (Routledge 2017).

[26] See for example: Peter Ramsay, Imprisonment Under the Precautionary Principle, in Seeking Security: Pre-Empting the Commission of Criminal Harms 191 (G.R. Sullivan & Ian Dennis eds., Hart Publ’g 2012); Zedner & Ashworth, The Rise and Restraint of the Preventive State, supra note 24.

[27] Kelly Hannah-Moffat, A Conceptual Kaleidoscope: Contemplating ‘Dynamic Structural Risk’ and an Uncoupling of Risk from Need, 22 Psychol., Crime & L. 33 (2016); Mayson, Bias In, Bias Out, supra note 9.

[28] Ginneken, Use of Risk Assessment, supra note 2.

[29] Hannah-Moffat, Conceptual Kaleidoscope, supra note 27.

[30] Id.

[31] UK Gov’t, Identified Needs of Offenders in Custody and the Community from the Offender Assessment System, 30 June 2021 (2022), https://www.gov.uk/government/statistics/identified-needs-of-offenders-in-custody-and-the-community-from-the-offender-assessment-system-30-june-2021

[32] Id.

[33] Monahan & Skeem, Risk Assessment in Criminal Sentencing, supra note 22.

[34] Tonry, Predictions of Dangerousness, supra note 6.

[35] Eric Holder, Attorney General Eric Holder Speaks at the National Association of Criminal Defense Lawyers 57th Annual Meeting, U.S. Dep’t of Justice (2014), http://www.justice.gov/opa/speech/attorney-general-eric-holder-speaks-national-association-criminal-defense-lawyers-57th

[36] Cary Coglianese & Alicia Lai, Algorithm vs. Algorithm, 72 Duke L.J. 1281 (2022).

[37] Barbara Hudson & G. Bramhall, Assessing the “Other,” 45 Br. J. Criminology 721 (2005).

[38] Hannah-Moffat & Montford, Unpacking Sentencing Algorithms, supra note 5; Andrej Završnik, Algorithmic Crime and Punishment: A Criminological Perspective on Big Data and Technology (Springer 2021).

[39] Fazel et al., The Predictive Performance, supra note 3, at 3.

[40] Id.

[41] Tonry, Predictions of Dangerousness, supra note 6.

[42] Id.

[43] Id. at 444.
