Fair Prediction with Endogenous Behavior

02/18/2020
by Christopher Jung et al.

There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. criminal justice) treat different demographic groups "fairly." However, several notions of fairness have been proposed, and they are typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents from different demographic groups differ in their outside options (e.g. opportunities for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
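The criterion highlighted in the abstract, equal type I and type II error rates across demographic groups, can be audited directly from observed decisions and outcomes. The sketch below is a minimal illustration of such an audit, not the paper's model: the function name group_error_rates, the NumPy-based implementation, and the synthetic data in the usage example are assumptions of ours.

import numpy as np

def group_error_rates(y_true, y_pred, group):
    """Per-group type I (false positive) and type II (false negative) error rates.

    y_true : 0/1 outcomes (1 = committed a crime)
    y_pred : 0/1 decisions (1 = incarcerate)
    group  : demographic group label for each individual
    """
    rates = {}
    for g in np.unique(group):
        yt = y_true[group == g]
        yp = y_pred[group == g]
        # Type I error: incarcerating someone who did not commit a crime.
        type_1 = yp[yt == 0].mean() if np.any(yt == 0) else float("nan")
        # Type II error: releasing someone who did commit a crime.
        type_2 = (1 - yp[yt == 1]).mean() if np.any(yt == 1) else float("nan")
        rates[g] = {"type_I": type_1, "type_II": type_2}
    return rates

# Hypothetical usage on synthetic data: an incarceration rule satisfies the
# criterion when both rates match across groups.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1_000)
y_true = rng.integers(0, 2, size=1_000)
y_pred = rng.integers(0, 2, size=1_000)
print(group_error_rates(y_true, y_pred, group))

By contrast, a demographic-parity check would compare only the overall incarceration rate per group (the mean of y_pred within each group), ignoring outcomes, which is one of the "other popular notions of fairness" the abstract refers to.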


Related research

02/26/2020
DeBayes: a Bayesian method for debiasing network embeddings
As machine learning algorithms are increasingly deployed for high-impact...

11/05/2020
Convergent Algorithms for (Relaxed) Minimax Fairness
We consider a recently introduced framework in which fairness is measure...

05/31/2021
Model Mis-specification and Algorithmic Bias
Machine learning algorithms are increasingly used to inform critical dec...

06/08/2019
Maximum Weighted Loss Discrepancy
Though machine learning algorithms excel at minimizing the average loss...

01/25/2021
Violent Crime in London: An Investigation using Geographically Weighted Regression
Violent crime in London is an area of increasing interest following poli...

06/19/2020
Fair clustering via equitable group representations
What does it mean for a clustering to be fair? One popular approach seek...

11/13/2020
An example of prediction which complies with Demographic Parity and equalizes group-wise risks in the context of regression
Let (X, S, Y) ∈ ℝ^p × {1, 2} × ℝ be a triplet following some joint distribut...