Debiasing representations by removing unwanted variation due to protected attributes

07/02/2018
by Amanda Bower, et al.

We propose a regression-based approach to removing implicit biases in representations. On tasks where the protected attribute is observed, the method is statistically more efficient than known approaches. We further show that the approach yields debiased representations that satisfy a first-order approximation of conditional parity. Finally, we demonstrate its efficacy by reducing racial bias in recidivism risk scores.
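To make the idea concrete, here is a minimal sketch of generic linear residualization: regress each feature of the representation on the protected attribute and keep only the residuals, i.e. the variation not explained by that attribute. This is an illustrative simplification under assumed synthetic data, not the authors' exact estimator; the variable names (`X`, `z`, `X_debiased`) are placeholders.

```python
import numpy as np

# Hedged sketch of linear residualization, not the paper's exact method.
# X: n x d matrix of representations; z: observed protected attribute.
rng = np.random.default_rng(0)
n, d = 200, 5
z = rng.integers(0, 2, size=n).astype(float)                    # binary protected attribute
X = rng.normal(size=(n, d)) + np.outer(z, rng.normal(size=d))   # features correlated with z

# Regress each feature column on z (with an intercept) via least squares,
# then subtract the fitted component so the residuals carry no linear
# dependence on the protected attribute.
Z = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_debiased = X - Z @ beta

# Residuals are orthogonal to the columns of Z, so each debiased feature
# is (empirically) uncorrelated with z.
corr = np.corrcoef(z, X_debiased[:, 0])[0, 1]
```

Residualization of this kind removes only the linear component of the dependence, which matches the first-order (conditional-parity) guarantee the abstract describes.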


