Paradoxes in Fair Computer-Aided Decision Making

11/29/2017
by Andrew Morgan et al.

Computer-aided decision making, in which a classifier (e.g., an algorithm trained using machine learning methods) assists human decision-makers in making important decisions, is becoming increasingly prevalent. For instance, judges in at least nine states use algorithmic tools to compute recidivism risk scores for criminal defendants, and these scores then inform sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are "fair" in the sense that they do not discriminate against certain groups (e.g., races) of people.

In this work, we consider two notions of fairness for computer-aided decision making: (a) fair treatment requires the classifier to treat different groups of individuals (e.g., different races) that fall into the same class (e.g., defendants who actually recidivate) similarly, and (b) rational fairness requires that a rational decision-maker not discriminate between individuals from different groups who receive the same output (e.g., risk score) from the classifier. Our main result provides a complete characterization of the classification contexts that admit classifiers satisfying these notions of fairness. Roughly speaking, such contexts are "trivial": the set of classes can be partitioned into subsets such that (1) a classifier can perfectly predict which subset an individual belongs to, and (2) conditioned on any subset, the "base rates" for the different groups are close. Thus, for any non-trivial classification context, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory.
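To make the fair-treatment notion concrete, the following is a minimal sketch (not the paper's formal definition) of how one might measure it empirically: for each true class, compare the distribution of classifier outputs across groups, and report the largest total-variation gap. The function name `max_treatment_gap` and the choice of total-variation distance are illustrative assumptions; a classifier approximately satisfies fair treatment when this gap is small for every class.

```python
import numpy as np

def max_treatment_gap(scores, labels, groups):
    """For each true class, compare the empirical distribution of
    classifier outputs across groups; return the largest
    total-variation gap found. (Illustrative sketch: 0 means the
    classifier treats every group in the same class identically.)"""
    gaps = []
    out_vals = np.unique(scores)
    for c in np.unique(labels):
        dists = []
        for g in np.unique(groups):
            mask = (labels == c) & (groups == g)
            if mask.sum() == 0:
                continue
            # empirical distribution of outputs within (class, group)
            dists.append(np.array([(scores[mask] == v).mean()
                                   for v in out_vals]))
        for i in range(len(dists)):
            for j in range(i + 1, len(dists)):
                # total-variation distance between the two distributions
                gaps.append(0.5 * np.abs(dists[i] - dists[j]).sum())
    return max(gaps) if gaps else 0.0

labels = np.array([1, 1, 0, 0, 1, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# A "trivial" context: the classifier predicts the class perfectly,
# so both groups in each class see identical outputs (gap 0).
print(max_treatment_gap(labels.copy(), labels, groups))  # → 0.0

# A biased classifier: group 1's recidivating defendants get score 0
# half the time, while group 0's always get score 1 (gap 0.5).
biased = np.array([1, 1, 0, 0, 1, 0, 0, 0])
print(max_treatment_gap(biased, labels, groups))  # → 0.5
```

In the perfect-prediction case the gap is zero exactly because outputs are a deterministic function of the true class, mirroring condition (1) of the characterization above.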


