Fair Classification with Noisy Protected Attributes

06/08/2020
by L. Elisa Celis, et al.

Due to the growing deployment of classification algorithms in various social contexts, developing methods that are fair with respect to protected attributes such as gender or race is an important problem. However, the information about protected attributes in datasets may be inaccurate, either because of issues with data collection or because the protected attributes themselves are predicted by algorithms. Such inaccuracies can prevent existing fair classification algorithms from achieving their desired fairness guarantees. Motivated by this, we study fair classification problems when the protected attributes in the data may be “noisy”. In particular, we consider a noise model where any protected type may be flipped to another with some fixed probability. We propose a “denoised” fair optimization formulation that can incorporate very general fairness goals via a set of constraints, mitigates the effects of such noise perturbations, and comes with provable guarantees. Empirically, we show that our framework can lead to near-perfect statistical parity with only a slight loss in accuracy, even at significant noise levels.
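
To make the flip-noise model concrete, below is a minimal sketch (not the paper's algorithm) of how group statistics measured on noisy protected attributes can be corrected when the flip probability is known. The symmetric two-group noise matrix, the flip probability `eta`, and the simulated classifier predictions are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

eta = 0.2      # assumed (known) probability that a protected label flips
n = 100_000

# Simulate a true binary protected attribute and biased classifier output
z = rng.integers(0, 2, size=n)                        # true attribute
y_hat = rng.binomial(1, np.where(z == 0, 0.4, 0.6))   # group-skewed predictions

# Observed attribute: each label flips independently with probability eta
flip = rng.random(n) < eta
z_obs = np.where(flip, 1 - z, z)

# Column-stochastic noise matrix: T[i, j] = P(observed = i | true = j)
T = np.array([[1 - eta, eta],
              [eta, 1 - eta]])

# Per-group counts of samples and of positive predictions, on noisy groups
counts_obs = np.array([np.sum(z_obs == g) for g in (0, 1)], dtype=float)
pos_obs = np.array([np.sum(y_hat[z_obs == g]) for g in (0, 1)], dtype=float)

# Denoise: expected observed counts equal T @ (true counts), so invert T
counts_hat = np.linalg.solve(T, counts_obs)
pos_hat = np.linalg.solve(T, pos_obs)

rates_naive = pos_obs / counts_obs       # acceptance rates on noisy groups
rates_denoised = pos_hat / counts_hat    # corrected group acceptance rates
rates_true = np.array([y_hat[z == g].mean() for g in (0, 1)])

print("naive    :", rates_naive)     # biased toward parity by the noise
print("denoised :", rates_denoised)  # close to the true rates
print("true     :", rates_true)
```

The naive rates computed on noisy groups are pulled toward each other, understating the true disparity; inverting the noise matrix recovers consistent estimates of the true group acceptance rates, which a constrained fair-classification formulation of the kind described above could then target.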

Related research

07/06/2023 · When Fair Classification Meets Noisy Protected Attributes
The operationalization of algorithmic fairness comes with several practi...

11/09/2020 · Mitigating Bias in Set Selection with Noisy Protected Attributes
Subset selection algorithms are ubiquitous in AI-driven applications, in...

06/10/2021 · Fair Classification with Adversarial Perturbations
We study fair classification in the presence of an omniscient adversary ...

06/15/2018 · Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees
Developing classification algorithms that are fair with respect to sensi...

04/30/2019 · Learning Fair Representations via an Adversarial Framework
Fairness has become a central issue for our research community as classi...

02/24/2023 · Intersectional Fairness: A Fractal Approach
The issue of fairness in AI has received an increasing amount of attenti...

02/26/2020 · Fair Learning with Private Demographic Data
Sensitive attributes such as race are rarely available to learners in re...
