FlipTest: Fairness Auditing via Optimal Transport

06/21/2019
by Emily Black et al.

We present FlipTest, a black-box auditing technique for uncovering subgroup discrimination in predictive models. Combining the concepts of individual and group fairness, we search for discrimination by matching individuals in different protected groups to each other and then comparing their classifier outcomes. Specifically, we formulate a GAN-based approximation of the optimal transport mapping, and use it to translate the distribution of one protected group into that of another, returning pairs of in-distribution samples that statistically correspond to one another. We then define the flipset: the set of individuals whose classifier output changes post-translation, which intuitively corresponds to the set of people who were harmed because of their protected group membership. To shed light on why the model treats a given subgroup differently, we introduce the transparency report: a ranking of the features most associated with the model's behavior on the flipset. We show that this provides a computationally inexpensive way to identify subgroups that are harmed by model discrimination, including in cases where the model satisfies population-level group fairness criteria.
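The pipeline the abstract describes can be sketched end to end. The snippet below is a minimal illustration, not the paper's implementation: `classify` is a hypothetical black-box model, and `transport_map` is a fixed-shift stand-in for the GAN-based approximation of the optimal transport map that FlipTest actually learns. Given those, the flipset and a simplified transparency report (here, features ranked by mean change across flipped pairs) follow directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(X):
    # Hypothetical black-box classifier: threshold on a linear score.
    w = np.array([1.0, -2.0, 0.5])
    return (X @ w > 0).astype(int)

def transport_map(X):
    # Stand-in for the learned GAN-based transport map G(x); in FlipTest
    # this translates group A's distribution into group B's.
    return X + np.array([0.0, 0.8, -0.3])

X_a = rng.normal(size=(500, 3))   # samples from protected group A
X_b_hat = transport_map(X_a)      # their statistical counterparts in group B

y_orig = classify(X_a)
y_mapped = classify(X_b_hat)

# Flipset: individuals whose classifier output changes post-translation.
flip_mask = y_orig != y_mapped
flipset = X_a[flip_mask]
print(f"flipset size: {flip_mask.sum()} of {len(X_a)}")

# Simplified transparency report: rank features by the mean change
# between flipset members and their translated counterparts.
deltas = (X_b_hat - X_a)[flip_mask]
ranking = np.argsort(-np.abs(deltas.mean(axis=0)))
print("features ranked by association with flips:", ranking)
```

A large flipset for one direction of the mapping suggests that membership in that protected group changes outcomes, even when aggregate group-fairness statistics look balanced.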


Related research

06/08/2018 · Obtaining fairness using optimal transport theory
Statistical algorithms are usually helping in making decisions in many a...

03/18/2019 · Multi-Differential Fairness Auditor for Black Box Classifiers
Machine learning algorithms are increasingly involved in sensitive decis...

07/28/2023 · LUCID-GAN: Conditional Generative Models to Locate Unfairness
Most group fairness notions detect unethical biases by computing statist...

02/23/2023 · Counterfactual Situation Testing: Uncovering Discrimination under Fairness given the Difference
We present counterfactual situation testing (CST), a causal data mining ...

04/19/2023 · Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness
Group fairness is achieved by equalising prediction distributions betwee...

03/14/2022 · Ethical and Fairness Implications of Model Multiplicity
While predictive models are a purely technological feat, they may operat...

12/21/2017 · A continuous framework for fairness
Increasingly, discrimination by algorithms is perceived as a societal an...
