A statistical framework for fair predictive algorithms

10/25/2016
by Kristian Lum et al.

Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing human judgment with computer models in high-stakes settings such as sentencing, hiring, policing, college admissions, and parole decisions is the perceived "neutrality" of computers: because computer models hold no personal prejudice, the argument goes, the predictions they produce will be equally free of prejudice. There is growing recognition, however, that employing algorithms does not remove the potential for bias and can even amplify it, since the training data were inevitably generated by a process that is itself biased. In this paper, we provide a probabilistic definition of algorithmic bias. We propose a method to remove bias from predictive models by removing all information regarding protected variables from the permitted training data. Unlike previous work in this area, our framework is general enough to accommodate arbitrary data types (binary, continuous, and so on). Motivated by models currently in use in the criminal justice system to inform decisions on pre-trial release and parole, we apply the proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce "race-neutral" predictions of re-arrest. In the process, we demonstrate that the most common approach to creating "race-neutral" models, omitting race as a covariate, still results in racially disparate predictions. We then show that applying our method to these data removes racial disparities from the predictions with minimal impact on predictive accuracy.
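The two main technical points of the abstract can be illustrated with a small sketch. What follows is not the paper's actual procedure (the paper develops a general probabilistic framework that accommodates arbitrary data types); it is a minimal, self-contained Python example on synthetic data, with hypothetical names throughout, showing (a) that a model which merely omits the protected variable can still produce disparate scores when another covariate is correlated with it, and (b) that transforming that covariate so it carries no information about the protected variable, here via simple within-group quantile matching, removes the disparity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical): a proxy covariate correlated with the
# protected variable z. "prior" might stand in for prior-arrest counts.
n = 20_000
z = rng.integers(0, 2, size=n)                    # protected variable
prior = rng.poisson(np.where(z == 1, 4.0, 2.0))   # proxy: distribution differs by z
# outcome depends only on the proxy, never directly on z
p = 1 / (1 + np.exp(-(0.4 * prior - 1.5)))
y = rng.binomial(1, p)

def fit_logistic(x, y, iters=500, lr=0.1):
    """Plain logistic regression via gradient descent (dependency-free sketch)."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    w = np.zeros(2)
    for _ in range(iters):
        pred = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (pred - y) / len(y)
    return w

def predict(w, x):
    return 1 / (1 + np.exp(-(w[0] + w[1] * x)))

# A "race-neutral" model that simply omits z still scores groups differently,
# because the proxy covariate encodes z.
w = fit_logistic(prior.astype(float), y)
scores = predict(w, prior)
print("mean score, z=0 vs z=1:", scores[z == 0].mean(), scores[z == 1].mean())

def remove_group_information(x, z):
    """Map each value through its within-group empirical CDF, then through the
    pooled quantile function, so every group shares the same marginal
    distribution. One simple instance of stripping z-information from x."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pooled = np.sort(x)
    for g in np.unique(z):
        m = z == g
        ranks = x[m].argsort().argsort() + 1
        u = ranks / (m.sum() + 1)          # within-group empirical CDF
        out[m] = np.quantile(pooled, u)    # pooled quantile function
    return out

prior_adj = remove_group_information(prior, z)
w_adj = fit_logistic(prior_adj, y)
scores_adj = predict(w_adj, prior_adj)
print("after adjustment:       ", scores_adj[z == 0].mean(), scores_adj[z == 1].mean())
```

Running the sketch, the pre-adjustment group means of the scores differ noticeably, while the post-adjustment means are nearly equal. In this toy setup the adjustment also discards genuine signal, since the outcome was generated from the unadjusted covariate; the paper's empirical claim is that on the real sentencing data the analogous adjustment removes racial disparities with minimal loss of predictive accuracy.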


