On Fairness and Calibration

09/06/2017
by Geoff Pleiss et al.

The machine learning community has become increasingly concerned with the potential for bias and discrimination in predictive models. This has motivated a growing line of work on what it means for a classification procedure to be "fair." In this paper, we investigate the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. We show that calibration is compatible only with a single error constraint (i.e., equal false-negative rates across groups), and that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets.
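To make the randomization result concrete, below is a minimal sketch of the underlying idea (not the paper's exact post-processing procedure): given calibrated probability estimates for two groups, the generalized false-negative rates can be equalized by replacing a random fraction of the lower-error group's predictions with that group's base rate, which is itself a calibrated prediction. The function name, the NumPy-array interface, and the restriction to two groups and to false negatives are illustrative assumptions.

```python
import numpy as np


def equalize_fnr_by_withholding(probs_a, labels_a, probs_b, labels_b, rng=None):
    """Raise the lower generalized false-negative rate to match the higher one
    by replacing a random fraction of one group's calibrated predictions with
    that group's base rate (a trivially calibrated prediction).

    probs_* are calibrated probability estimates in [0, 1]; labels_* are
    binary outcomes in {0, 1}. Returns the adjusted probabilities for the
    modified group together with the withholding fraction alpha.
    """
    rng = np.random.default_rng() if rng is None else rng

    def gen_fnr(p, y):
        # generalized false-negative rate: expected probability shortfall
        # on the positive examples
        return float(np.mean(1.0 - p[y == 1]))

    fnr_a = gen_fnr(probs_a, labels_a)
    fnr_b = gen_fnr(probs_b, labels_b)

    # modify the group with the lower FNR; its FNR must rise to the target
    if fnr_a <= fnr_b:
        probs, labels, target = probs_a.copy(), labels_a, fnr_b
    else:
        probs, labels, target = probs_b.copy(), labels_b, fnr_a

    base_rate = float(labels.mean())   # predicting the base rate is calibrated
    fnr_now = gen_fnr(probs, labels)
    fnr_trivial = 1.0 - base_rate      # FNR of the constant base-rate predictor

    # fraction of inputs whose prediction is withheld (replaced by the base
    # rate) so that the mixture's expected FNR equals the target
    if fnr_trivial <= fnr_now:
        alpha = 1.0  # classifier is no better than trivial for this group
    else:
        alpha = float(np.clip((target - fnr_now) / (fnr_trivial - fnr_now),
                              0.0, 1.0))

    withhold = rng.random(len(probs)) < alpha
    probs[withhold] = base_rate
    return probs, alpha
```

On held-out data, one would expect the modified group's generalized false-negative rate to move to the target while within-group calibration is preserved in expectation, since the withheld predictions are replaced by another calibrated predictor.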


Related research

02/18/2020  A Resolution in Algorithmic Fairness: Calibrated Scores for Fair Classifications
Calibration and equal error rates are fundamental conditions for algorit...

06/19/2019  Inherent Tradeoffs in Learning Fair Representation
With the prevalence of machine learning in high-stakes applications, esp...

09/29/2022  Proportional Multicalibration
Multicalibration is a desirable fairness criteria that constrains calibr...

09/30/2022  Variable-Based Calibration for Machine Learning Classifiers
The deployment of machine learning classifiers in high-stakes domains re...

08/29/2018  Group calibration is a byproduct of unconstrained learning
Much recent work on fairness in machine learning has focused on how well...

07/31/2018  Probability Calibration Trees
Obtaining accurate and well calibrated probability estimates from classi...

06/12/2023  Unprocessing Seven Years of Algorithmic Fairness
Seven years ago, researchers proposed a postprocessing method to equaliz...
