Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness

02/22/2022
by Carolyn Ashurst, et al.

In addition to reproducing discriminatory relationships in the training data, machine learning systems can also introduce or amplify discriminatory effects. We refer to this as introduced unfairness, and investigate the conditions under which it may arise. To this end, we propose introduced total variation as a measure of introduced unfairness, and establish graphical conditions under which it may be incentivised to occur. These criteria imply that adding the sensitive attribute as a feature removes the incentive for introduced variation under well-behaved loss functions. Additionally, taking a causal perspective, introduced path-specific effects shed light on the issue of when specific paths should be considered fair.
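The paper's formal definition of introduced total variation is not reproduced in this abstract, but one natural empirical reading is the disparity in the model's predictions across sensitive groups minus the disparity already present in the training labels. The sketch below is a hypothetical illustration under that assumption, for a binary sensitive attribute and discrete outcomes; the function names (`introduced_tv`, `group_dist`) are not from the paper.

```python
import numpy as np

def tv_distance(p, q):
    # Total variation distance between two discrete distributions
    # given as probability vectors over the same support.
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def group_dist(values, groups, g, support):
    # Empirical distribution of `values` within sensitive group g.
    vals = values[groups == g]
    return np.array([(vals == v).mean() for v in support])

def introduced_tv(y, y_hat, a, support=(0, 1)):
    # Disparity in predictions minus disparity already in the labels.
    # A positive value suggests the model has introduced unfairness
    # beyond what the (possibly fair) labels exhibit.
    tv_labels = tv_distance(group_dist(y, a, 0, support),
                            group_dist(y, a, 1, support))
    tv_preds = tv_distance(group_dist(y_hat, a, 0, support),
                           group_dist(y_hat, a, 1, support))
    return tv_preds - tv_labels

# Example: labels are balanced across groups (no label disparity),
# but the model predicts 1 only for group 0.
a = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])      # P(Y=1|A=g) = 0.5 for both groups
y_hat = np.array([0, 1, 0, 1, 0, 0, 0, 0])  # P(Yhat=1|A=1) = 0
print(introduced_tv(y, y_hat, a))  # 0.5: unfairness introduced by the model
```

In this toy example the labels satisfy demographic parity exactly, so any prediction disparity is, by construction, introduced by the model rather than inherited from the data.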


Related research

02/22/2018 · Path-Specific Counterfactual Fairness
We consider the problem of learning fair decision systems in complex sce...

04/10/2022 · Real order total variation with applications to the loss functions in learning schemes
Loss functions are an essential part of modern data-driven approaches, such...

10/19/2021 · fairadapt: Causal Reasoning for Fair Data Pre-processing
Machine learning algorithms are useful for various prediction tasks, bu...

11/11/2022 · Striving for data-model efficiency: Identifying data externalities on group performance
Building trustworthy, effective, and responsible machine learning system...

06/07/2022 · Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage
Graph Neural Networks (GNNs) have shown great power in learning node rep...
