Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds

04/26/2020
by Kawin Ethayarajh, et al.

Most NLP datasets are not annotated with protected attributes such as gender, making it difficult to measure classification bias using standard measures of fairness (e.g., equal opportunity). However, manually annotating a large dataset with a protected attribute is slow and expensive. Instead of annotating all the examples, can we annotate a subset of them and use that sample to estimate the bias? While it is possible to do so, the smaller this annotated sample is, the less certain we are that the estimate is close to the true bias. In this work, we propose using Bernstein bounds to represent this uncertainty about the bias estimate as a confidence interval. We provide empirical evidence that a 95% confidence interval derived in this way consistently bounds the true bias. In quantifying this uncertainty, our method, which we call Bernstein-bounded unfairness, helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim. Our findings suggest that the datasets currently used to measure specific biases are too small to conclusively identify bias except in the most egregious cases. For example, consider a co-reference resolution system that is 5% more accurate on gender-stereotypical sentences – to claim it is biased with 95% confidence, we need a bias-specific dataset that is 3.8 times larger than WinoBias, the largest available.
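
The abstract's construction lends itself to a short sketch: treat the groupwise accuracy gap as the mean of a single bounded per-example statistic, estimate that mean on the annotated sample, and wrap it in a Bernstein-style confidence interval. The sketch below is illustrative only: the interval uses the empirical Bernstein bound of Maurer and Pontil (2009) as a stand-in for the paper's exact construction, and the data, group proportions, and function names are invented for the example, not taken from the paper.

```python
import numpy as np

def empirical_bernstein_ci(x, delta=0.05, value_range=1.0):
    """(1 - delta) two-sided confidence interval for the mean of x.

    Uses the empirical Bernstein bound (Maurer & Pontil, 2009), which
    requires each x_i to lie in an interval of width `value_range`.
    """
    n = len(x)
    mean = np.mean(x)
    var = np.var(x, ddof=1)  # sample variance
    width = (np.sqrt(2 * var * np.log(2 / delta) / n)
             + 7 * value_range * np.log(2 / delta) / (3 * (n - 1)))
    return mean - width, mean + width

# Hypothetical data: a small sample manually annotated with the protected
# attribute. group 0 = gender-stereotypical, group 1 = anti-stereotypical.
rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, size=n)
correct = (rng.random(n) < np.where(group == 0, 0.90, 0.85)).astype(float)

# Express the groupwise accuracy gap as the mean of one bounded
# per-example statistic: +correct/p0 on group 0, -correct/p1 on group 1,
# whose expectation equals acc(group 0) - acc(group 1).
p0, p1 = (group == 0).mean(), (group == 1).mean()
gap_terms = np.where(group == 0, correct / p0, -correct / p1)

lo, hi = empirical_bernstein_ci(gap_terms, delta=0.05,
                                value_range=1.0 / p0 + 1.0 / p1)
print(f"estimated gap: {gap_terms.mean():+.3f}  95% CI: [{lo:+.3f}, {hi:+.3f}]")
```

If the resulting interval excludes zero, the sample supports a bias claim at the 95% level; if it straddles zero, the honest conclusion under this framing is that the sample is too small to call the classifier either biased or unbiased.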

Related research

06/07/2019
Effectiveness of Equalized Odds for Fair Classification under Imperfect Group Information
Most approaches for ensuring or improving a model's fairness with respec...

05/05/2022
Optimising Equal Opportunity Fairness in Model Training
Real-world datasets often encode stereotypes and societal biases. Such b...

06/14/2020
Fairness Under Feature Exemptions: Counterfactual and Observational Measures
With the growing use of AI in highly consequential domains, the quantifi...

11/20/2022
Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
Data-driven predictive solutions predominant in commercial applications ...

10/09/2022
A Differentiable Distance Approximation for Fairer Image Classification
Naively trained AI models can be heavily biased. This can be particularl...

12/11/2014
Certifying and removing disparate impact
What does it mean for an algorithm to be biased? In U.S. law, unintentio...

10/27/2021
Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias
This paper strives to address image classifier bias, with a focus on bot...
