Paradoxes in Fair Computer-Aided Decision Making

11/29/2017
by Andrew Morgan, et al.

Computer-aided decision making, where some classifier (e.g., an algorithm trained using machine learning methods) assists human decision-makers in making important decisions, is becoming increasingly prevalent. For instance, judges in at least nine states use algorithmic tools to determine recidivism risk scores for criminal defendants, and these scores are then used in sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are "fair" in the sense that they do not discriminate against certain groups (e.g., races) of people. In this work, we consider two notions of fairness for computer-aided decision making: (a) fair treatment requires the classifier to "treat" different groups of individuals (e.g., different races) that fall into the same class (e.g., defendants who actually recidivate) similarly, and (b) rational fairness requires that a rational decision-maker not discriminate between individuals from different groups who receive the same output (e.g., risk score) from the classifier. Our main result provides a complete characterization of the classification contexts that admit classifiers satisfying these notions of fairness. Roughly speaking, such contexts are "trivial" in the sense that the set of classes can be partitioned into subsets such that (1) a classifier can perfectly predict which subset an individual belongs to, and (2) conditioned on any subset, the "base rates" of the different groups are close. Thus, in any "non-trivial" classification context, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory.
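To make the fair-treatment notion concrete, here is a minimal illustrative sketch (not code from the paper), assuming one common way of reading "treat similarly": within each true class, the classifier's output distribution should be nearly the same across groups. The `fair_treatment_gap` function, the toy dataset, and the score values are all hypothetical and exist only to illustrate the idea.

```python
# Illustrative sketch: approximate "fair treatment" check, interpreted as
# requiring that P(score | true class, group) be similar across groups.
# The data and names below are hypothetical, not from the paper.

from collections import defaultdict

def fair_treatment_gap(records):
    """records: iterable of (group, true_class, score) tuples.
    Returns the largest gap, over true classes and score values, between
    any two groups' conditional probabilities P(score | true_class, group)."""
    counts = defaultdict(lambda: defaultdict(int))   # (group, cls) -> score -> count
    totals = defaultdict(int)                        # (group, cls) -> count
    for group, cls, score in records:
        counts[(group, cls)][score] += 1
        totals[(group, cls)] += 1

    worst_gap = 0.0
    classes = {cls for (_, cls) in totals}
    groups = {g for (g, _) in totals}
    scores = {s for dist in counts.values() for s in dist}
    for cls in classes:
        for score in scores:
            probs = [
                counts[(g, cls)][score] / totals[(g, cls)]
                for g in groups if totals[(g, cls)] > 0
            ]
            if probs:
                worst_gap = max(worst_gap, max(probs) - min(probs))
    return worst_gap

# Hypothetical data: (group, actually_recidivates, risk_score)
data = [
    ("A", 1, "high"), ("A", 1, "high"), ("A", 0, "low"),
    ("B", 1, "high"), ("B", 1, "low"),  ("B", 0, "low"),
]
print(fair_treatment_gap(data))  # 0.5: group B's recidivists get "high" half as often as group A's
```

Under this reading, a classifier satisfies fair treatment (approximately) when the returned gap is small for every class; the paper's characterization says that in non-trivial contexts no classifier can achieve this while also supporting a non-discriminatory rational decision-maker.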
