Achieving Fairness through Adversarial Learning: an Application to Recidivism Prediction

06/30/2018
by Christina Wadsworth, et al.

Recidivism prediction scores are used across the USA to inform sentencing and supervision decisions for hundreds of thousands of inmates. One widely deployed generator of such scores is Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), used in states such as California and Florida, which past research has shown to be biased against black inmates according to certain measures of fairness. To counteract this racial bias, we present an adversarially trained neural network that predicts recidivism while being trained to remove racial bias from its predictions. Compared with COMPAS, our model improves predictive accuracy and comes closer to satisfying two of the three measures of fairness considered: demographic parity and equality of odds. The approach generalizes to other prediction tasks and demographic attributes. This work also contributes an example of scientific replication and simplification in a high-stakes real-world application like recidivism prediction.
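The abstract's core idea — a predictor trained jointly against an adversary that tries to recover the protected attribute from the predictor's output — can be illustrated with a minimal sketch. The sketch below is not the paper's architecture; it is a toy version using logistic models on synthetic data, with hand-derived gradients. All names (`train`, `gap`, the data-generating process, the weighting `lam`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def sigmoid(x):
    # clip to keep exp() numerically stable
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

def train(X, y, z, lam, epochs=3000, lr=0.5, lr_adv=0.5):
    """Toy adversarial debiasing (illustrative, not the paper's model).

    Predictor: logistic model p = sigmoid(X @ w) for recidivism y.
    Adversary: logistic model a = sigmoid(u*s + b) that tries to
    recover the protected attribute z from the predictor's logit s.
    The predictor minimizes BCE(p, y) - lam * BCE(a, z), i.e. it is
    penalized when the adversary can predict z from its scores.
    """
    n, d = X.shape
    w = np.zeros(d)        # predictor weights
    u, b = 0.0, 0.0        # adversary parameters
    for _ in range(epochs):
        s = X @ w
        p = sigmoid(s)
        a = sigmoid(u * s + b)
        # adversary step: gradient descent on BCE(a, z)
        u -= lr_adv * np.mean((a - z) * s)
        b -= lr_adv * np.mean(a - z)
        # predictor step: descend task loss, ascend adversary loss
        a = sigmoid(u * s + b)               # adversary after its update
        grad_task = X.T @ (p - y) / n        # d BCE(p, y) / dw
        grad_adv = X.T @ ((a - z) * u) / n   # d BCE(a, z) / dw
        w -= lr * (grad_task - lam * grad_adv)
    return w

# Synthetic data: a legitimate risk signal plus a feature that leaks
# the protected attribute z, with the label correlated with both.
rng = np.random.default_rng(0)
n = 4000
z = rng.integers(0, 2, n).astype(float)      # protected attribute
latent = rng.normal(size=n)                  # legitimate risk signal
X = np.column_stack([latent + 0.3 * rng.normal(size=n),
                     z + 0.1 * rng.normal(size=n),
                     rng.normal(size=n)])
y = (latent + z - 0.5 + 0.5 * rng.normal(size=n) > 0).astype(float)

w_base = train(X, y, z, lam=0.0)   # no adversarial penalty
w_fair = train(X, y, z, lam=1.0)   # with adversarial penalty

def gap(w):
    """Demographic-parity gap: |P(pred=1 | z=1) - P(pred=1 | z=0)|."""
    pred = (X @ w > 0).astype(float)
    return abs(pred[z == 1].mean() - pred[z == 0].mean())

print(f"parity gap without adversary: {gap(w_base):.3f}")
print(f"parity gap with adversary:    {gap(w_fair):.3f}")
print(f"accuracy with adversary:      {((X @ w_fair > 0) == (y == 1)).mean():.3f}")
```

On this synthetic setup, the adversarial penalty shrinks the predictor's reliance on the z-leaking feature, trading a little accuracy for a smaller demographic-parity gap — the same accuracy/fairness trade-off the abstract describes at full scale.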
