FITNESS: A Causal De-correlation Approach for Mitigating Bias in Machine Learning Software

05/23/2023
by Ying Xiao, et al.

Software built on top of machine learning algorithms is becoming increasingly prevalent in a variety of fields, including college admissions, healthcare, insurance, and justice. The effectiveness and efficiency of these systems heavily depend on the quality of the training datasets. Biased datasets can lead to unfair and potentially harmful outcomes, particularly in such critical decision-making systems, where the allocation of resources may be affected. This can exacerbate discrimination against certain groups and cause significant social disruption. To mitigate such unfairness, a series of bias-mitigation methods has been proposed. These methods generally improve the fairness of the trained models to a certain degree, but at the expense of model performance. In this paper, we propose FITNESS, a bias-mitigation approach that works by de-correlating the causal effects between sensitive features (e.g., sex) and the label. Our key idea is that by de-correlating such effects from a causality perspective, the model avoids making predictions based on sensitive features, and fairness can thus be improved. Furthermore, FITNESS leverages multi-objective optimization to achieve a better performance-fairness trade-off. To evaluate its effectiveness, we compare FITNESS with 7 state-of-the-art methods on 8 benchmark tasks using multiple metrics. Results show that FITNESS outperforms the state-of-the-art methods on bias mitigation while preserving the model's performance: it improved the model's fairness in all scenarios while decreasing the model's performance in only 26.67% of them, and it surpassed the Fairea baseline in 96.72% of cases.
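The abstract does not spell out the FITNESS algorithm itself, so the following is only a minimal sketch of the underlying idea: if the association between a sensitive feature and the label is removed from the training data, a model trained on that data has less incentive to predict from the sensitive feature. The sketch below de-correlates at the plain statistical level by resampling (the paper works from a causality perspective, which this does not capture), and the dataset, the `decorrelate_by_resampling` helper, and the parity metric are all illustrative stand-ins rather than the paper's method.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: a binary sensitive feature `sex` that is spuriously
# correlated with the label through the data-generating process.
n = 5000
sex = rng.integers(0, 2, n)
skill = rng.normal(size=n)
label = ((skill + 0.8 * sex + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)
df = pd.DataFrame({"sex": sex, "skill": skill, "label": label})

def statistical_parity_difference(y_pred, sensitive):
    """|P(yhat=1 | s=1) - P(yhat=1 | s=0)|: one common fairness metric."""
    return abs(y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean())

def decorrelate_by_resampling(df, sensitive="sex", label="label", seed=0):
    """Illustrative de-correlation: resample so each (sensitive, label)
    cell has equal size, removing the marginal sensitive-label association."""
    groups = [df[(df[sensitive] == s) & (df[label] == y)]
              for s in (0, 1) for y in (0, 1)]
    m = min(len(g) for g in groups)
    return pd.concat(
        [g.sample(m, random_state=seed) for g in groups]
    ).sample(frac=1.0, random_state=seed)

train, test = train_test_split(df, test_size=0.3, random_state=0)

for name, tr in [("biased", train),
                 ("de-correlated", decorrelate_by_resampling(train))]:
    clf = LogisticRegression().fit(tr[["sex", "skill"]], tr["label"])
    pred = clf.predict(test[["sex", "skill"]])
    acc = (pred == test["label"]).mean()
    spd = statistical_parity_difference(pred, test["sex"].to_numpy())
    print(f"{name}: accuracy={acc:.3f}, parity difference={spd:.3f}")
```

On this toy data, the de-correlated training set should yield a noticeably smaller parity difference at a modest accuracy cost, which is exactly the trade-off the paper's multi-objective step is meant to manage.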

