Fair Classification with Group-Dependent Label Noise

10/31/2020
by Jialu Wang, et al.

This work examines how to train fair classifiers when training labels are corrupted with random noise, and when the corruption error rates depend both on the label class and on membership in a protected subgroup. This heterogeneous noise models the systematic biases against particular groups that can arise during annotation. We begin by presenting analytical results showing that naively imposing parity constraints on demographic disparity measures, without accounting for heterogeneous and group-dependent error rates, can decrease both the accuracy and the fairness of the resulting classifier. Our experiments demonstrate that these issues arise in practice as well. We address these problems by performing empirical risk minimization with carefully defined surrogate loss functions and surrogate constraints that help avoid the pitfalls introduced by heterogeneous label noise. We provide both theoretical and empirical justifications for the efficacy of our methods. We view our results as an important example of how imposing fairness on biased data sets without proper care can do at least as much harm as it does good.
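To make the surrogate-loss idea concrete, here is a minimal sketch of one standard way to correct a loss for class-dependent label noise, applied with per-group noise rates. It follows the unbiased-estimator construction of Natarajan et al. (2013), not necessarily the exact surrogate used in this paper; the function names and the choice of logistic loss are illustrative assumptions.

```python
import math

def logistic_loss(score, y):
    # Standard logistic loss for a label y in {-1, +1}.
    return math.log(1.0 + math.exp(-y * score))

def corrected_loss(score, y, rho_pos, rho_neg):
    """Noise-corrected surrogate loss for one example.

    rho_pos = P(observed label is -1 | true label is +1) and
    rho_neg = P(observed label is +1 | true label is -1), estimated
    separately for the example's protected group (group-dependent noise).
    In expectation over the noisy label, this equals the clean loss.
    """
    if y == 1:
        num = (1 - rho_neg) * logistic_loss(score, 1) \
              - rho_pos * logistic_loss(score, -1)
    else:
        num = (1 - rho_pos) * logistic_loss(score, -1) \
              - rho_neg * logistic_loss(score, 1)
    # Requires rho_pos + rho_neg < 1 for the correction to be valid.
    return num / (1.0 - rho_pos - rho_neg)
```

Because the noise rates differ across groups, each example is corrected with its own group's `(rho_pos, rho_neg)`; minimizing the corrected loss over noisy data then targets the clean risk, which is what lets fairness constraints be imposed meaningfully.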


