On Fairness and Stability: Is Estimator Variance a Friend or a Foe?

02/09/2023
by   Falaah Arif Khan, et al.

The error of an estimator can be decomposed into a (statistical) bias term, a variance term, and an irreducible noise term. When we do bias analysis, formally we are asking the question: "how good are the predictions?" The role of bias in the error decomposition is clear: if we trust the labels/targets, then we would want the estimator to have as low bias as possible, in order to minimize error. Fair machine learning is concerned with the question: "Are the predictions equally good for different demographic/social groups?" This has naturally led to a variety of fairness metrics that compare some measure of statistical bias on subsets corresponding to socially privileged and socially disadvantaged groups. In this paper we propose a new family of performance measures based on group-wise parity in variance. We demonstrate when group-wise statistical bias analysis gives an incomplete picture, and what group-wise variance analysis can tell us in settings that differ in the magnitude of statistical bias. We develop and release an open-source library that reconciles uncertainty quantification techniques with fairness analysis, and use it to conduct an extensive empirical analysis of our variance-based fairness measures on standard benchmarks.
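The group-wise variance analysis described above can be illustrated with a minimal sketch. The snippet below is not the paper's released library (whose API is not shown here); it is a hypothetical NumPy-only example that estimates per-group prediction variance with a bootstrap ensemble of least-squares fits and reports a variance-parity gap. The synthetic data, group sizes, and model are all assumptions chosen so that the under-represented group sits in a sparser region of feature space, where estimator variance is higher.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): group B is under-represented and
# occupies a sparser, higher-leverage region of the feature space.
n = 400
g = (rng.random(n) < 0.15).astype(int)          # 1 = minority group B
x = rng.normal(loc=3.0 * g, scale=1.0, size=n)  # group B centered at x = 3
y = 2.0 * x + rng.normal(scale=1.0, size=n)

def fit_predict(xs, ys, x_eval):
    """Least-squares fit of y ~ a*x + b, evaluated at x_eval."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef[0] * x_eval + coef[1]

# Bootstrap ensemble: refit on resampled data, predict on the full sample.
B = 200
preds = np.empty((B, n))
for b in range(B):
    idx = rng.integers(0, n, size=n)            # resample with replacement
    preds[b] = fit_predict(x[idx], y[idx], x)

# Variance of each point's prediction across the ensemble, averaged per group.
point_var = preds.var(axis=0)
var_a = point_var[g == 0].mean()
var_b = point_var[g == 1].mean()
print(f"mean prediction variance, group A: {var_a:.4f}")
print(f"mean prediction variance, group B: {var_b:.4f}")
print(f"variance-parity gap: {abs(var_a - var_b):.4f}")
```

Even with unbiased labels and identical noise for both groups, the minority group's predictions vary more across bootstrap refits, so a bias-only comparison would miss the disparity — this is the kind of gap a variance-parity measure is designed to surface.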

