Effectiveness of Equalized Odds for Fair Classification under Imperfect Group Information

06/07/2019
by Pranjal Awasthi et al.

Most approaches for ensuring or improving a model's fairness with respect to a protected attribute (such as race or gender) assume access to the true value of the protected attribute for every data point. In many scenarios, however, perfect knowledge of the protected attribute is unrealistic. In this paper, we ask to what extent fairness interventions can be effective even with imperfect information about the protected attribute. In particular, we study this question for the prominent equalized odds postprocessing method of Hardt et al. (2016). We claim that as long as the perturbation of the protected attribute is moderate, one should still run equalized odds whenever one would run it given the true protected attribute: the bias of the classifier obtained using the perturbed attribute is smaller than the bias of the original classifier, and its error is no larger than the error of the equalized odds classifier obtained with the true protected attribute.
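To make the setting concrete, here is a minimal sketch of equalized odds postprocessing in the style of Hardt et al. (2016) for binary labels and a binary group attribute, fit on data whose protected attribute may be perturbed. This is not the authors' code: the helper names `fit_eq_odds` and `apply_eq_odds`, the use of scipy's LP solver, the synthetic data, and the 10% attribute flip rate are all illustrative assumptions.

```python
# Sketch: equalized-odds postprocessing (after Hardt et al., 2016), fit with a
# possibly perturbed protected attribute.  Illustrative, not the paper's code.
import numpy as np
from scipy.optimize import linprog

def fit_eq_odds(y_true, y_pred, a):
    """Solve for flip probabilities p[g, yhat] = P(final prediction = 1 | Yhat = yhat, A = g)
    that equalize TPR and FPR across the two groups while minimizing expected error."""
    rates = {}
    for g in (0, 1):
        m = a == g
        rates[g] = dict(
            pi=m.mean(),                           # P(A = g)
            q=y_true[m].mean(),                    # P(Y = 1 | A = g)
            tpr=y_pred[m & (y_true == 1)].mean(),  # P(Yhat = 1 | Y = 1, A = g)
            fpr=y_pred[m & (y_true == 0)].mean(),  # P(Yhat = 1 | Y = 0, A = g)
        )
    # LP variables: [p(0,0), p(0,1), p(1,0), p(1,1)]
    c, A_eq = [], [[], []]
    for g in (0, 1):
        r, s = rates[g], 1 if g == 0 else -1
        # error contribution per group (constants dropped): -q*TPR + (1-q)*FPR
        c += [r["pi"] * (-r["q"] * (1 - r["tpr"]) + (1 - r["q"]) * (1 - r["fpr"])),
              r["pi"] * (-r["q"] * r["tpr"] + (1 - r["q"]) * r["fpr"])]
        A_eq[0] += [s * (1 - r["tpr"]), s * r["tpr"]]  # TPR_0 - TPR_1 = 0
        A_eq[1] += [s * (1 - r["fpr"]), s * r["fpr"]]  # FPR_0 - FPR_1 = 0
    res = linprog(c, A_eq=A_eq, b_eq=[0, 0], bounds=[(0, 1)] * 4, method="highs")
    return res.x.reshape(2, 2)  # p[group, base_prediction]

def apply_eq_odds(y_pred, a, p, rng):
    """Randomize base predictions according to the fitted flip probabilities."""
    return (rng.random(len(y_pred)) < p[a, y_pred]).astype(int)

rng = np.random.default_rng(0)
n = 20000
a_true = rng.integers(0, 2, n)                                   # true protected attribute
y = (rng.random(n) < 0.4 + 0.2 * a_true).astype(int)
yhat = np.where(rng.random(n) < 0.75 + 0.1 * a_true, y, 1 - y)   # group-biased classifier
a_noisy = np.where(rng.random(n) < 0.9, a_true, 1 - a_true)      # 10% flipped attributes

p = fit_eq_odds(y, yhat, a_noisy)           # fit using the perturbed attribute
y_fair = apply_eq_odds(yhat, a_noisy, p, rng)
```

In this sketch the LP is fit with the noisy attribute `a_noisy`; the paper's question is how the bias and error of `y_fair`, measured against the true attribute `a_true`, compare to those of the original classifier and of the postprocessed classifier fit with `a_true`.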


Related Research

07/17/2021
Fair Balance: Mitigating Machine Learning Bias Against Multiple Protected Attributes With Data Balancing
This paper aims to improve machine learning fairness on multiple protect...

07/07/2020
README: REpresentation learning by fairness-Aware Disentangling MEthod
Fair representation learning aims to encode invariant representation wit...

09/14/2022
CAT: Controllable Attribute Translation for Fair Facial Attribute Classification
As the social impact of visual recognition has been under scrutiny, seve...

04/17/2022
Fair Classification under Covariate Shift and Missing Protected Attribute – an Investigation using Related Features
This study investigated the problem of fair classification under Covaria...

11/09/2021
Can Information Flows Suggest Targets for Interventions in Neural Circuits?
Motivated by neuroscientific and clinical applications, we empirically e...

04/26/2020
Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds
Most NLP datasets are not annotated with protected attributes such as ge...

04/09/2022
Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks
We show that deep neural networks that satisfy demographic parity do so ...
