Eliminating Latent Discrimination: Train Then Mask

11/12/2018
by Soheil Ghili, et al.

How can we control for latent discrimination in predictive models? How can we provably remove it? Such questions are at the heart of algorithmic fairness and its impact on society. In this paper, we define a new operational fairness criterion, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics. Our notion of fairness effectively controls for sensitive features and provides diagnostics for deviations from fair decision making. We then establish analytical and algorithmic results about the existence of a fair classifier in the context of supervised learning. Our results readily imply a simple, but rather counter-intuitive, strategy for eliminating latent discrimination: to prevent other features from proxying for sensitive features, we must include the sensitive features in the training phase, but exclude them in the test/evaluation phase while controlling for their effects. We evaluate the performance of our algorithm on several real-world datasets and show how fairness on these datasets can be improved with only a very small loss in accuracy.
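The "train then mask" strategy described above can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal toy example assuming scikit-learn, a single binary sensitive feature, and synthetic data, with masking done by fixing the sensitive column to one constant value at prediction time:

```python
# Hedged sketch of a "train then mask" workflow. All names, the data,
# and the masking value of 0 are illustrative assumptions, not taken
# from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic data: one binary sensitive feature and two ordinary features,
# where x1 is correlated with the sensitive feature (a potential proxy).
sensitive = rng.integers(0, 2, size=n)
x1 = rng.normal(size=n) + 0.5 * sensitive
x2 = rng.normal(size=n)
y = (x1 + x2 + 0.3 * sensitive
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train = np.column_stack([sensitive, x1, x2])

# Train phase: INCLUDE the sensitive feature, so its effect is absorbed
# by its own coefficient rather than leaking into proxies such as x1.
clf = LogisticRegression().fit(X_train, y)

# Test phase: MASK the sensitive feature by fixing it to the same
# constant for every individual, controlling for its effect.
X_test = X_train.copy()
X_test[:, 0] = 0
scores = clf.predict_proba(X_test)[:, 1]

# Two individuals who differ only in the sensitive feature now receive
# identical scores once the same mask is applied.
a = np.array([[1.0, 0.2, -0.1]])
b = np.array([[0.0, 0.2, -0.1]])
a[:, 0] = 0
b[:, 0] = 0
assert np.allclose(clf.predict_proba(a), clf.predict_proba(b))
```

The contrast with "fairness through unawareness" is that simply dropping the sensitive column before training would let correlated features like `x1` pick up its predictive role; training with it and masking afterward is what blocks the proxy effect.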


Related research

- 06/10/2019: Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns. As machine learning is increasingly used to make real-world decisions, r...
- 07/07/2021: Bias-Tolerant Fair Classification. The label bias and selection bias are acknowledged as two reasons in dat...
- 09/14/2020: Active Fairness Instead of Unawareness. The possible risk that AI systems could promote discrimination by reprod...
- 02/26/2020: Fairness-Aware Learning with Prejudice Free Representations. Machine learning models are extensively being used to make decisions tha...
- 12/05/2022: Certifying Fairness of Probabilistic Circuits. With the increased use of machine learning systems for decision making, ...
- 04/11/2017: Optimized Data Pre-Processing for Discrimination Prevention. Non-discrimination is a recognized objective in algorithmic decision mak...
- 10/16/2017: Fair Kernel Learning. New social and economic activities massively exploit big data and machin...
