Bias-Tolerant Fair Classification

07/07/2021
by Yixuan Zhang, et al.

Label bias and selection bias are acknowledged as two sources of bias in data that hinder the fairness of machine-learning outcomes. Label bias occurs when the labeling decision is disturbed by sensitive features, while selection bias occurs when subjective bias exists during data sampling. Even worse, models trained on such data can inherit or even intensify the discrimination. Most algorithmic fairness approaches perform empirical risk minimization under predefined fairness constraints, which tends to trade off accuracy for fairness. However, such methods achieve the desired fairness level at the cost of the benefits (receiving positive outcomes) for the individuals affected by the bias. Therefore, we propose a Bias-Tolerant FAir Regularized Loss (B-FARL), which tries to regain these benefits using data affected by label bias and selection bias. B-FARL takes the biased data as input and learns a model that approximates the one trained with fair but latent data, and thus prevents discrimination without requiring explicit constraints. In addition, we identify the effective components by decomposing B-FARL, and we utilize a meta-learning framework for the B-FARL optimization. The experimental results on real-world datasets show that our method is empirically effective in improving fairness towards the direction of the true but latent labels.
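As a toy illustration of the two bias sources the abstract defines (this is not the authors' code; all names and rates below are hypothetical), the sketch first generates "fair but latent" labels, then injects label bias by flipping positive labels for one group based on the sensitive feature, and injects selection bias by under-sampling that group's positive examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fair but latent" data: features x, sensitive feature s, true labels y.
n = 1000
s = rng.integers(0, 2, size=n)                       # sensitive feature (group 0 or 1)
x = rng.normal(size=(n, 3))
y_true = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(int)   # latent fair labels

# Label bias: the labeling decision is disturbed by the sensitive feature,
# here modeled as flipping 30% of group 0's positive labels to negative.
flip = (s == 0) & (y_true == 1) & (rng.random(n) < 0.3)
y_observed = np.where(flip, 0, y_true)

# Selection bias: subjective bias during sampling, here modeled as dropping
# half of group 0's remaining positive examples from the dataset.
keep = ~((s == 0) & (y_observed == 1) & (rng.random(n) < 0.5))
x_biased, s_biased, y_biased = x[keep], s[keep], y_observed[keep]
```

A fairness-constrained learner trained on `(x_biased, y_biased)` would trade accuracy against fairness on the biased labels; B-FARL instead aims to approximate the model one would have trained on the latent `y_true`.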


