The age of secrecy and unfairness in recidivism prediction

11/02/2018
by Cynthia Rudin, et al.

In our current society, secret algorithms make important decisions about individuals. There has been substantial discussion about whether these algorithms are unfair to groups of individuals. While noble, this pursuit is complex and ultimately stagnating because there is no clear definition of fairness and competing definitions are largely incompatible. We argue that the focus on the question of fairness is misplaced, as these algorithms fail to meet a more important and yet readily obtainable goal: transparency. As a result, creators of secret algorithms can provide incomplete or misleading descriptions about how their models work, and various other kinds of errors can easily go unnoticed. By partially reverse engineering the COMPAS algorithm -- a recidivism-risk scoring algorithm used throughout the criminal justice system -- we show that it does not seem to depend linearly on the defendant's age, despite statements to the contrary by the algorithm's creator. Furthermore, by subtracting from COMPAS its (hypothesized) nonlinear age component, we show that COMPAS does not necessarily depend on race, contradicting ProPublica's analysis, which assumed linearity in age. In other words, faulty assumptions about a proprietary algorithm lead to faulty conclusions that go unchecked without careful reverse engineering. Had the algorithm been transparent in the first place, this would likely not have occurred. The most important result of this work is that many defendants have low risk scores despite long criminal histories, suggesting that data inconsistencies occur frequently in criminal justice databases. We argue that transparency satisfies a different notion of procedural fairness by providing both the defendants and the public with the opportunity to scrutinize the methodology and calculations behind risk scores for recidivism.
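
The subtract-and-inspect structure of the argument can be illustrated with a short sketch. This is not the authors' code: it assumes ProPublica's public Broward County release (compas-scores-two-years.csv) with columns named age, race, and decile_score, and uses a low-degree polynomial fit purely as a stand-in for the hypothesized nonlinear age component.

```python
# Minimal sketch: hypothesize that the COMPAS score depends nonlinearly on age,
# estimate that component, subtract it, and check whether the residual still
# tracks race. Column names follow ProPublica's public data release and are
# assumptions here, not a description of the paper's exact procedure.

import numpy as np
import pandas as pd

# ProPublica's Broward County data dump (assumed filename/path).
df = pd.read_csv("compas-scores-two-years.csv")
df = df.dropna(subset=["age", "race", "decile_score"])

# Stand-in for the unknown nonlinear f(age): a cubic polynomial fit of the
# COMPAS decile score on age.
coefs = np.polyfit(df["age"], df["decile_score"], deg=3)
age_component = np.polyval(coefs, df["age"])

# Residual score after removing the estimated age component.
df["residual"] = df["decile_score"] - age_component

# Crude check for remaining race dependence: compare residual means by race.
print(df.groupby("race")["residual"].agg(["mean", "count"]))
```

The paper's actual analysis estimates the age component more carefully and accounts for the data inconsistencies noted above; this sketch only conveys the shape of the residual analysis.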
