Almost Politically Acceptable Criminal Justice Risk Assessment

10/24/2019
by Richard A. Berk, et al.

In criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that it is impossible to optimize at once all of the usual group definitions of fairness. In the policy arena, one is left with tradeoffs about which many stakeholders will adamantly disagree. In this paper, we offer a different approach. We do not seek perfectly accurate and fair risk assessments. We seek politically acceptable risk assessments. We describe a machine learning approach, applied to data on 300,000 offenders, that responds to many of the most visible charges of "racial bias." Regardless of whether such claims are true, we adjust our procedures to compensate. We begin by training the algorithm on White offenders only and computing risk with test data separately for White offenders and Black offenders. Thus, the fitted algorithm structure is exactly the same for both groups; the algorithm treats all offenders as if they are White. But because White and Black offenders can bring different predictor distributions to the White-trained algorithm, we provide additional adjustments if needed. Insofar as conventional machine learning procedures do not produce the accuracy and fairness that some stakeholders require, it is possible to alter conventional practice to respond explicitly to many salient stakeholder claims, even if they are unsupported by the facts. The result can be a politically acceptable risk assessment tool.
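The group-blind training scheme the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual pipeline: the synthetic data, the two predictors, and the choice of gradient boosting as the learner are all assumptions made here for clarity.

```python
# Sketch of the scheme described above: fit a classifier on White offenders
# only, then score both groups with the same fitted model, so the fitted
# structure is identical for everyone. Data and learner are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
race = rng.choice(["White", "Black"], size=n)

# Two illustrative predictors whose distributions differ by group.
x1 = rng.normal(loc=(race == "Black") * 0.5, scale=1.0)
x2 = rng.normal(size=n)
X = np.column_stack([x1, x2])

# Synthetic binary outcome (e.g., re-arrest) from a logistic model.
y = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * x1 - 0.3 * x2)))).astype(int)

# Train on White offenders only.
white = race == "White"
model = GradientBoostingClassifier().fit(X[white], y[white])

# Score both groups with the same White-trained model. Because the groups
# bring different predictor distributions to the same fitted structure,
# their risk-score distributions can still differ; that residual gap is
# what the paper's additional adjustments target.
risk_white = model.predict_proba(X[white])[:, 1]
risk_black = model.predict_proba(X[~white])[:, 1]
print(risk_white.mean(), risk_black.mean())
```

The key design point is that group membership is never a training input: any remaining disparity in the scores comes from the predictor distributions the two groups bring to an identical fitted model.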

Related research

- Improving Fairness in Criminal Justice Algorithmic Risk Assessments Using Conformal Prediction Sets (08/26/2020)
- Fairness in Criminal Justice Risk Assessments: The State of the Art (03/27/2017)
- In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction (05/08/2020)
- Improving Fairness in Criminal Justice Algorithmic Risk Assessments Using Optimal Transport and Conformal Prediction Sets (11/17/2021)
- Walk a Mile in Their Shoes: a New Fairness Criterion for Machine Learning (10/13/2022)
- To the Fairness Frontier and Beyond: Identifying, Quantifying, and Optimizing the Fairness-Accuracy Pareto Frontier (05/31/2022)
- Feedback Effects in Repeat-Use Criminal Risk Assessments (11/28/2020)
