Is calibration a fairness requirement? An argument from the point of view of moral philosophy and decision theory

05/11/2022
by Michele Loi, et al.

In this paper, we provide a moral analysis of two criteria of statistical fairness debated in the machine learning literature: 1) calibration between groups and 2) equality of false positive and false negative rates between groups, focusing on the moral arguments that can be advanced in support of either measure. The conflict between group calibration and equality of false positive and false negative rates is one of the core issues in the debate about group fairness definitions among practitioners. For any thorough moral analysis, the meaning of the term fairness has to be made explicit and defined properly. For our purposes, we equate fairness with (non-)discrimination, which is a legitimate understanding in the discussion about group fairness. More specifically, we equate it with prima facie wrongful discrimination in the sense in which this notion is used in Lippert-Rasmussen's treatment of the concept. We argue that a violation of group calibration may be unfair in some cases, but not in others. This is in line with claims already advanced in the literature that algorithmic fairness should be defined in a way that is sensitive to context. The most important practical implication is that arguments based on examples in which fairness requires between-group calibration, or equality in the false positive/false negative rates, do not generalize: group calibration may be a fairness requirement in one case, but not in another.
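For readers less familiar with the two criteria discussed above, the following sketch (not taken from the paper) computes both on synthetic data with NumPy. The group labels, scores, 0.5 decision threshold, and binning scheme are illustrative assumptions, chosen only to show how the criteria are checked in practice.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: risk scores in [0, 1], binary outcomes, two groups.
n = 10_000
group = rng.integers(0, 2, size=n)                          # group membership: 0 or 1
score = rng.uniform(0, 1, size=n)                           # predicted probability of Y = 1
outcome = (rng.uniform(0, 1, size=n) < score).astype(int)   # calibrated by construction
decision = (score >= 0.5).astype(int)                       # assumed threshold decision rule


def calibration_by_group(score, outcome, group, bins=10):
    """Observed outcome rate per score bin, per group.

    Group calibration holds when, within each bin, the observed rate
    is (approximately) equal to the bin's scores for every group.
    """
    edges = np.linspace(0, 1, bins + 1)
    rates = {}
    for g in np.unique(group):
        mask = group == g
        idx = np.clip(np.digitize(score[mask], edges) - 1, 0, bins - 1)
        rates[g] = [outcome[mask][idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(bins)]
    return rates


def error_rates_by_group(decision, outcome, group):
    """False positive and false negative rates per group.

    Error-rate equality holds when both rates are equal across groups.
    """
    rates = {}
    for g in np.unique(group):
        m = group == g
        fpr = decision[m][outcome[m] == 0].mean()        # P(D = 1 | Y = 0, A = g)
        fnr = (1 - decision[m][outcome[m] == 1]).mean()  # P(D = 0 | Y = 1, A = g)
        rates[g] = (fpr, fnr)
    return rates


print(calibration_by_group(score, outcome, group))
print(error_rates_by_group(decision, outcome, group))

In this synthetic example both groups have the same score and outcome distribution, so the two criteria are satisfied together; the conflict the abstract refers to arises when base rates differ between groups, in which case the criteria generally cannot both hold.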


