Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking

02/16/2023
by Zichong Wang et al.

The increasing use of Machine Learning (ML) software can lead to unfair and unethical decisions, so fairness bugs in software are a growing concern. Fixing these fairness bugs often means sacrificing ML performance, such as accuracy. To address this trade-off, we present a novel approach that uses counterfactual thinking to tackle the root causes of bias in ML software. In addition, our approach combines models optimized for performance with models optimized for fairness, yielding a solution that is strong on both dimensions. We evaluated the approach on 10 benchmark tasks, using 5 performance metrics, 3 fairness metrics, and 15 measurement scenarios across 8 real-world datasets. These extensive evaluations show that the proposed method significantly improves the fairness of ML software while maintaining competitive performance, outperforming state-of-the-art solutions in 84.6% of cases evaluated by a recent benchmarking tool.
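To make the two ideas in the abstract concrete, here is a minimal Python sketch, not the authors' implementation: the models, synthetic data, and the blending weight alpha are all illustrative assumptions. It shows a counterfactual test that flips the protected attribute and counts how many predictions change, and a naive combination of a performance-optimized model with a fairness-optimized one.

```python
# Minimal sketch (illustrative only, not the paper's method):
# counterfactual testing plus a naive performance/fairness model blend.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: column 0 is the protected attribute (e.g., group 0/1).
n = 1000
X = rng.normal(size=(n, 4))
X[:, 0] = rng.integers(0, 2, size=n)  # protected attribute
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Performance-optimized model: uses all features, including the
# protected attribute, so it may encode bias.
perf_model = LogisticRegression().fit(X, y)

# Fairness-optimized model: trained without the protected attribute.
fair_model = LogisticRegression().fit(X[:, 1:], y)

# Counterfactual test: flip the protected attribute and measure how many
# predictions change. Instances that flip are candidates for bias.
X_cf = X.copy()
X_cf[:, 0] = 1 - X_cf[:, 0]
flipped = perf_model.predict(X) != perf_model.predict(X_cf)
print(f"Predictions changed by flipping the protected attribute: {flipped.mean():.1%}")

# Naive combination: average the two models' probabilities, trading off
# accuracy against fairness with a single weight (illustrative value).
alpha = 0.5
p_combined = (alpha * perf_model.predict_proba(X)[:, 1]
              + (1 - alpha) * fair_model.predict_proba(X[:, 1:])[:, 1])
y_combined = (p_combined >= 0.5).astype(int)
print(f"Combined-model accuracy: {(y_combined == y).mean():.1%}")
```

In the paper's setting, the combination would presumably be tuned against the reported fairness and performance metrics rather than a fixed weight, but the sketch captures the shape of the trade-off.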

Related research

06/15/2023
Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML
Machine learning (ML) is increasingly being used in critical decision-ma...

07/07/2022
A Comprehensive Empirical Study of Bias Mitigation Methods for Software Fairness
Software bias is an increasingly important operational concern for softw...

05/18/2022
Software Fairness: An Analysis and Survey
In the last decade, researchers have studied fairness as a software prop...

08/03/2023
Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare
Bias in applications of machine learning (ML) to healthcare is usually a...

06/03/2022
Fair Classification via Transformer Neural Networks: Case Study of an Educational Domain
Educational technologies nowadays increasingly use data and Machine Lear...

10/06/2020
Astraea: Grammar-based Fairness Testing
Software often produces biased outputs. In particular, machine learning ...

10/25/2021
Fair Enough: Searching for Sufficient Measures of Fairness
Testing machine learning software for ethical bias has become a pressing...
