Fair Classification with Group-Dependent Label Noise

10/31/2020
by Jialu Wang, et al.

This work examines how to train fair classifiers when the training labels are corrupted with random noise and the corruption error rates depend both on the label class and on membership in a protected subgroup. Such group-dependent label noise models systematic biases against particular groups in the annotation process. We begin by presenting analytical results which show that naively imposing parity constraints on demographic disparity measures, without accounting for the heterogeneous, group-dependent error rates, can decrease both the accuracy and the fairness of the resulting classifier. Our experiments demonstrate that these issues arise in practice as well. We address these problems by performing empirical risk minimization with carefully defined surrogate loss functions and surrogate constraints that avoid the pitfalls introduced by heterogeneous label noise. We provide both theoretical and empirical justification for the efficacy of our methods. We view our results as an important example of how imposing fairness on biased data sets without proper care can do at least as much harm as it does good.
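To make the surrogate-loss idea concrete, the sketch below shows one standard construction this kind of correction can build on: the unbiased loss estimator of Natarajan et al. (2013), with the label-flip rates allowed to depend on the protected group. This is a minimal illustration under assumed names and parameters (`corrected_loss`, `rho_plus`, `rho_minus`, integer group ids), not the paper's exact formulation.

```python
import numpy as np

def logistic_loss(scores, labels):
    # Base loss l(f(x), y) for binary labels y in {-1, +1}, computed stably.
    return np.logaddexp(0.0, -labels * scores)

def corrected_loss(scores, noisy_labels, groups, rho_plus, rho_minus):
    """Group-dependent unbiased surrogate loss (Natarajan et al., 2013,
    applied per group).

    rho_plus[g]  = P(noisy label = -1 | clean label = +1, group g)
    rho_minus[g] = P(noisy label = +1 | clean label = -1, group g)

    In expectation over the noise, this equals the base loss evaluated on
    the (unobserved) clean labels, provided rho_plus + rho_minus < 1.
    """
    rp = rho_plus[groups]   # per-example flip rate for clean positives
    rm = rho_minus[groups]  # per-example flip rate for clean negatives
    rho_y = np.where(noisy_labels == 1, rp, rm)      # flip rate of the observed class
    rho_not_y = np.where(noisy_labels == 1, rm, rp)  # flip rate of the opposite class
    numer = ((1.0 - rho_not_y) * logistic_loss(scores, noisy_labels)
             - rho_y * logistic_loss(scores, -noisy_labels))
    return numer / (1.0 - rp - rm)

# Example: annotators flip positives in group 1 twice as often as in group 0.
scores = np.array([1.2, -0.4, 0.3, 2.0, -1.5, 0.7])
noisy_labels = np.array([1, -1, 1, 1, -1, 1])
groups = np.array([0, 0, 1, 1, 0, 1])
rho_plus = np.array([0.2, 0.4])
rho_minus = np.array([0.1, 0.1])
print(corrected_loss(scores, noisy_labels, groups, rho_plus, rho_minus).mean())
```

Because the flip rates are indexed by group, each group's empirical risk is debiased with its own noise model; an analogous per-group correction of the parity constraint would rely on the same property.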

Related research

12/22/2020  A Second-Order Approach to Learning with Instance-Dependent Label Noise
11/01/2018  A Neural Network Framework for Fair Classifier
12/02/2019  Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?
05/22/2022  Addressing Strategic Manipulation Disparities in Fair Classification
06/29/2022  Fairness via In-Processing in the Over-parameterized Regime: A Cautionary Tale
11/29/2021  Learning Fair Classifiers with Partially Annotated Group Labels
12/09/2019  In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
