xFAIR: Better Fairness via Model-based Rebalancing of Protected Attributes

10/03/2021
by Kewen Peng, et al.

Machine learning software can generate models that inappropriately discriminate against specific protected social groups (e.g., groups based on gender, ethnicity, etc.). Motivated by those results, software engineering researchers have proposed many methods for mitigating such discriminatory effects. While those methods are effective at mitigating bias, few of them can explain what causes the bias. Here we propose xFAIR, a model-based extrapolation method capable of both mitigating bias and explaining its cause. In our xFAIR approach, protected attributes are represented by models learned from the other independent variables (and these models offer extrapolations over the space between existing examples). We then use those extrapolation models to relabel the protected attributes, which aims to offset the biased predictions of the classification model by rebalancing the distribution of protected attributes. The experiments of this paper show that, without compromising (original) model performance, xFAIR can achieve significantly better group and individual fairness (as measured by different metrics) than benchmark methods. Moreover, compared to another instance-based rebalancing method, our model-based approach shows faster runtime and thus better scalability.
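To make the relabeling step concrete, here is a minimal sketch of the model-based rebalancing idea, assuming scikit-learn and tabular pandas data. The function name relabel_protected and the choice of a decision tree as the extrapolation model are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of model-based relabeling: learn the protected
# attribute from the other independent variables, then overwrite it
# with the model's extrapolated labels before training the classifier.
# Names here are illustrative, not the authors' implementation.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def relabel_protected(X_train: pd.DataFrame, protected_col: str):
    """Learn the protected attribute from the remaining features,
    then replace its values with the model's extrapolations."""
    others = X_train.drop(columns=[protected_col])
    # Extrapolation model: predicts the protected attribute from the
    # other features; the paper's exact choice of learner may differ.
    extrapolator = DecisionTreeClassifier(min_samples_leaf=10)
    extrapolator.fit(others, X_train[protected_col])
    X_rebalanced = X_train.copy()
    X_rebalanced[protected_col] = extrapolator.predict(others)
    return X_rebalanced, extrapolator

# Usage: rebalance first, then train the downstream classifier as
# usual, e.g. sklearn.linear_model.LogisticRegression().fit(X_fair, y).
# X_fair, extrapolator = relabel_protected(X, protected_col="sex")
```

Because the extrapolation model is itself interpretable (e.g., a shallow tree), inspecting it shows which other features predict the protected attribute, which is where the method's explanatory power comes from.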

