De-biasing "bias" measurement

05/11/2022
by Kristian Lum, et al.

When a model's performance differs across socially or culturally relevant groups, such as race, gender, or the intersections of many such groups, it is often called "biased." While much of the work in algorithmic fairness over the last several years has focused on developing various definitions of model fairness (the absence of group-wise model performance disparities) and on eliminating such "bias," much less work has gone into rigorously measuring it. In practice, it is important to have high-quality, human-digestible measures of model performance disparities, with associated uncertainty quantification, that can serve as inputs into multi-faceted decision-making processes. In this paper, we show both mathematically and through simulation that many of the metrics used to measure group-wise model performance disparities are themselves statistically biased estimators of the underlying quantities they purport to represent. We argue that this can lead to misleading conclusions about the relative magnitude of group-wise model performance disparities along different dimensions, especially when some sensitive variables consist of categories with few members. We propose the "double-corrected" variance estimator, which provides unbiased estimates, and uncertainty quantification, of the variance of model performance across groups. It is conceptually simple and easily implementable without a statistical software package or numerical optimization. We demonstrate the utility of this approach through simulation and show on a real dataset that, while statistically biased estimators of group-wise model performance disparities indicate statistically significant between-group disparities, the estimated disparities are no longer statistically significant once the statistical bias in the estimator is accounted for.
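The abstract does not spell out the estimator's formula, so the sketch below is only a hedged illustration of the phenomenon it describes: the naive sample variance of per-group accuracy estimates is biased upward by within-group sampling noise, especially when some groups are small, and a moment-style correction that subtracts the estimated within-group sampling variance removes that bias. The group sizes, accuracy values, and helper functions `naive_variance` and `corrected_variance` are all hypothetical; the paper's actual "double-corrected" estimator and its uncertainty quantification are defined in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: every group has the SAME true accuracy, so the true
# between-group variance of model performance is exactly 0. Any nonzero
# estimate is therefore attributable to sampling noise alone.
true_accuracy = 0.80
group_sizes = np.array([2000, 500, 100, 40, 15])  # some groups are very small


def naive_variance(acc_hat):
    """Sample variance of the per-group accuracy estimates.
    Statistically biased upward: it inherits the sampling noise of each
    per-group estimate, which is largest for the smallest groups."""
    return np.var(acc_hat, ddof=1)


def corrected_variance(acc_hat, n):
    """Moment-style correction: subtract the average estimated within-group
    sampling variance (binomial) from the naive estimate. This only
    illustrates the idea of de-biasing the variance estimate; it is not
    the paper's exact 'double-corrected' estimator."""
    within = acc_hat * (1 - acc_hat) / (n - 1)  # unbiased for p(1-p)/n
    return naive_variance(acc_hat) - within.mean()


naive, corrected = [], []
for _ in range(5000):
    # Per-group observed accuracy: correct predictions / group size.
    correct = rng.binomial(group_sizes, true_accuracy)
    acc_hat = correct / group_sizes
    naive.append(naive_variance(acc_hat))
    corrected.append(corrected_variance(acc_hat, group_sizes))

print("true between-group variance :", 0.0)
print("naive estimator (mean)      :", round(float(np.mean(naive)), 5))      # noticeably > 0
print("corrected estimator (mean)  :", round(float(np.mean(corrected)), 5))  # approximately 0
```

Averaged over many simulated datasets, the naive estimator reports spurious between-group performance disparity even though none exists, while the corrected estimator is centered at zero; this mirrors the abstract's point that apparent disparities can vanish once the estimator's statistical bias is accounted for.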
