Fair Prediction with Endogenous Behavior

02/18/2020
by Christopher Jung et al.

There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g., criminal justice) treat different demographic groups "fairly." However, there are several competing notions of fairness, and they are typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g., opportunities for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II error rates across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
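
To make the flavor of this model concrete, below is a minimal simulation sketch, not the authors' formal model: two groups whose agents differ only in the distribution of their outside options face the same signal-based incarceration threshold and choose whether to offend. The per-group type I and type II error rates then coincide even though crime rates differ. All distributions, parameter values, and names (simulate_group, norm_cdf) are assumptions made purely for illustration.

```python
# A minimal sketch of the kind of model described above, NOT the paper's formal
# setup: two groups whose agents differ only in their outside options, a noisy
# signal of offending, and a single incarceration threshold. All parameter
# values and distributions here are illustrative assumptions.
from math import erf, sqrt

import numpy as np

rng = np.random.default_rng(0)


def norm_cdf(x, mu=0.0, sd=1.0):
    """Normal CDF, used for the detection probability agents anticipate."""
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))


def simulate_group(outside_mean, threshold, n=100_000,
                   crime_gain=1.0, penalty=3.0, noise_sd=1.0):
    """Agents offend iff the expected gain from crime exceeds their outside option;
    society incarcerates whenever a noisy signal of offending exceeds `threshold`."""
    # Probability an offender is detected under this threshold
    # (offender signal = 1 + noise, non-offender signal = noise).
    p_detect = 1.0 - norm_cdf(threshold, mu=1.0, sd=noise_sd)

    outside = rng.normal(outside_mean, 0.5, size=n)        # legal opportunities
    offends = (crime_gain - p_detect * penalty) > outside  # endogenous choice

    signal = offends.astype(float) + rng.normal(0.0, noise_sd, size=n)
    jailed = signal > threshold

    type1 = jailed[~offends].mean()    # innocents incarcerated (type I)
    type2 = (~jailed[offends]).mean()  # offenders not incarcerated (type II)
    return offends.mean(), type1, type2


for label, mu in [("group A (better outside options)", 1.0),
                  ("group B (worse outside options)", 0.2)]:
    crime, t1, t2 = simulate_group(outside_mean=mu, threshold=1.5)
    print(f"{label}: crime rate={crime:.3f}, type I={t1:.3f}, type II={t2:.3f}")
```

In this toy setup a single threshold on the signal equalizes both error rates across groups by construction, while crime rates (and hence incarceration rates) differ; this is the sense in which equalized error rates can coexist with crime minimization, whereas a criterion tied to incarceration rates themselves could not.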
