When Fair Classification Meets Noisy Protected Attributes

07/06/2023
by Avijit Ghosh, et al.

The operationalization of algorithmic fairness comes with several practical challenges, not the least of which is the availability or reliability of protected attributes in datasets. In real-world contexts, practical and legal impediments may prevent the collection and use of demographic data, making it difficult to ensure algorithmic fairness. While initial fairness algorithms did not consider these limitations, recent proposals aim to achieve algorithmic fairness in classification by accounting for noise in protected attributes or by not using protected attributes at all. To the best of our knowledge, this is the first head-to-head study of fair classification algorithms that compares attribute-reliant, noise-tolerant, and attribute-blind algorithms along the dual axes of predictivity and fairness. We evaluated these algorithms via case studies on four real-world datasets with synthetic perturbations. Our study reveals that attribute-blind and noise-tolerant fair classifiers can potentially achieve a similar level of performance as attribute-reliant algorithms, even when protected attributes are noisy. However, implementing them in practice requires careful attention to nuance. Our study provides insights into the practical implications of using fair classification algorithms in scenarios where protected attributes are noisy or only partially available.
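To make the setup concrete, the following is a minimal sketch (not taken from the paper) of the kind of synthetic perturbation and fairness measurement the abstract describes: a binary protected attribute is randomly flipped with some probability, and a demographic-parity gap is computed against both the true and the noisy attribute. The function names `perturb_attribute` and `demographic_parity_gap`, the flip probability, and the toy data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_attribute(a, flip_prob, rng):
    """Synthetically corrupt a binary protected attribute:
    each entry is flipped independently with probability flip_prob."""
    flips = rng.random(len(a)) < flip_prob
    return np.where(flips, 1 - a, a)

def demographic_parity_gap(y_pred, a):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

# Toy data: predictions that are correlated with the protected attribute,
# so the classifier has a genuine demographic-parity gap to measure.
n = 1000
a = rng.integers(0, 2, size=n)                        # true protected attribute
y_pred = (rng.random(n) < 0.4 + 0.2 * a).astype(int)  # biased predictions

a_noisy = perturb_attribute(a, flip_prob=0.3, rng=rng)

# The gap measured against the noisy attribute will generally differ from
# (and tends to understate) the gap against the true attribute.
gap_true = demographic_parity_gap(y_pred, a)
gap_noisy = demographic_parity_gap(y_pred, a_noisy)
```

Under symmetric flipping at rate p, the measured gap is attenuated by roughly a factor of (1 - 2p), which is one reason naive auditing with noisy attributes can make a classifier look fairer than it is.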

