Evaluating Fairness Metrics in the Presence of Dataset Bias

09/24/2018
by J. Henry Hinnefeld, et al.

Data-driven algorithms play a large role in decision making across a variety of industries. Increasingly, these algorithms are being used to make decisions with significant ramifications for people's social and economic well-being, e.g., in sentencing, loan approval, and policing. Amid the proliferation of such systems, there is growing concern about their potential discriminatory impact. In particular, machine learning systems that are trained on biased data risk learning and perpetuating those biases. A central challenge for practitioners is therefore to determine whether their models exhibit discriminatory bias. Here we present a case study in which we frame the issue of bias detection as a causal inference problem with observational data. We enumerate two main causes of bias, sampling bias and label bias, and investigate the ability of six different fairness metrics to detect each bias type. Based on these investigations, we propose a set of best-practice guidelines for selecting the fairness metric that is most likely to detect bias if it is present. We also aim to identify the conditions under which certain fairness metrics may fail to detect bias and instead give practitioners false confidence that their biased model is making fair decisions.
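To make the label-bias scenario concrete, here is a minimal sketch (not the authors' code): it injects label bias into a synthetic dataset, trains a simple classifier on the biased labels, and evaluates two common group fairness metrics, the demographic parity difference and the true positive rate gap (one component of equalized odds). The dataset, the 30% flip rate, and the choice of these two metrics as stand-ins for the six studied in the paper are all illustrative assumptions.

```python
# Hypothetical illustration of label bias; not the authors' code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)
score = rng.normal(size=n)            # a genuinely predictive feature
true_label = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Label bias: positives in group 1 are sometimes recorded as negatives,
# so the observed labels systematically understate that group.
flipped = (group == 1) & (true_label == 1) & (rng.random(n) < 0.3)
observed_label = np.where(flipped, 0, true_label)

# Train on the biased labels, with the protected attribute visible.
X = np.column_stack([score, group])
model = LogisticRegression(max_iter=1000).fit(X, observed_label)
pred = model.predict(X)

def demographic_parity_diff(pred, group):
    """Gap in positive prediction rates between the two groups."""
    return pred[group == 0].mean() - pred[group == 1].mean()

def tpr_gap(pred, label, group):
    """Gap in true positive rates (one component of equalized odds)."""
    tpr0 = pred[(group == 0) & (label == 1)].mean()
    tpr1 = pred[(group == 1) & (label == 1)].mean()
    return tpr0 - tpr1

# With these settings the classifier learns to penalize group 1, so both
# metrics report positive gaps even though the groups are identical in truth.
print("demographic parity diff:       ", demographic_parity_diff(pred, group))
print("TPR gap vs observed labels:    ", tpr_gap(pred, observed_label, group))
print("TPR gap vs ground-truth labels:", tpr_gap(pred, true_label, group))
```

Comparing the two TPR lines illustrates the central difficulty: a metric computed against biased labels can differ from the same metric computed against the (usually unobservable) ground truth, which is what can lend false confidence to a biased model.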

