Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation

01/11/2021
by Umang Gupta, et al.

Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may discard too much information when a reasonable compromise between fairness and accuracy is desired. Another common approach is to limit the ability of a particular adversary that seeks to maximize the parity gap. Unfortunately, representations produced by adversarial approaches may still retain biases, as their efficacy is tied to the complexity of the adversary used during training. To this end, we theoretically establish that by limiting the mutual information between representations and protected attributes, we can assuredly control the parity of any downstream classifier. We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators, and we show that these estimators outperform approaches relying on variational bounds based on complex generative models. We test our approach on the UCI Adult and Heritage Health datasets and demonstrate that it provides more informative representations across a range of desired parity thresholds while providing strong theoretical guarantees on the parity of any downstream algorithm.
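To make the abstract's central ingredient concrete, here is a minimal PyTorch sketch of an InfoNCE-style contrastive estimator of I(z; c), the mutual information between a representation z and a protected attribute c. This is an illustration under stated assumptions, not the paper's implementation: the names Critic, infonce_mi_estimate, and the penalty weight lam are hypothetical, and the paper's formal parity guarantee rests on upper-bounding I(z; c), whereas InfoNCE is a lower-bound estimator shown here only to convey the contrastive mechanics.

```python
# Minimal, hypothetical sketch (not the authors' released code) of
# contrastive estimation of the mutual information I(z; c) between a
# representation z and a protected attribute c. The critic architecture
# and all names here are illustrative assumptions.

import math

import torch
import torch.nn as nn


class Critic(nn.Module):
    """Scores (z, c) pairs; jointly sampled pairs should score higher."""

    def __init__(self, z_dim: int, c_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # Concatenate along the feature dimension and score each pair.
        return self.net(torch.cat([z, c], dim=-1)).squeeze(-1)


def infonce_mi_estimate(critic: Critic, z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """InfoNCE-style contrastive estimate of I(z; c), in nats.

    Positives are the paired (z_i, c_i); each z_i is contrasted against
    all other c_j in the batch as negatives. The estimate saturates at
    log(batch size), so larger batches resolve larger MI values.
    """
    n = z.size(0)
    z_rep = z.unsqueeze(1).expand(n, n, -1)  # (n, n, z_dim)
    c_rep = c.unsqueeze(0).expand(n, n, -1)  # (n, n, c_dim)
    scores = critic(z_rep, c_rep)            # scores[i, j] = f(z_i, c_j)
    # Diagonal entries are the positive pairs; each row is a softmax
    # classification of the true c_i among the batch's candidates.
    return (scores.diagonal() - torch.logsumexp(scores, dim=1)).mean() + math.log(n)
```

In a training loop one would alternate updates: train the critic to maximize infonce_mi_estimate (so the estimate tracks I(z; c)), and train the encoder to minimize a penalized objective such as task_loss + lam * infonce_mi_estimate(critic, z, c), sweeping the hypothetical weight lam to trace the fairness-informativeness trade-off the abstract describes.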


Related research

06/15/2020 - Learning Smooth and Fair Representations
Organizations that own data face increasing legal liability for its disc...

07/02/2018 - Debiasing representations by removing unwanted variation due to protected attributes
We propose a regression-based approach to removing implicit biases in re...

10/05/2020 - Conditional Negative Sampling for Contrastive Learning of Visual Representations
Recent methods for learning unsupervised visual representations, dubbed ...

02/17/2018 - Learning Adversarially Fair and Transferable Representations
In this work, we advocate for representation learning as the key to miti...

05/15/2022 - Fair Bayes-Optimal Classifiers Under Predictive Parity
Increasing concerns about disparate effects of AI have motivated a great...

09/27/2019 - Learning Generative Adversarial RePresentations (GAP) under Fairness and Censoring Constraints
We present Generative Adversarial rePresentations (GAP) as a data-driven...

12/17/2018 - BriarPatches: Pixel-Space Interventions for Inducing Demographic Parity
We introduce the BriarPatch, a pixel-space intervention that obscures se...
