Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML

by Giang Nguyen, et al.
Iowa State University of Science and Technology
Carnegie Mellon University

Machine learning (ML) is increasingly used in critical decision-making software, but incidents have raised questions about the fairness of ML predictions. To address this issue, new tools and methods are needed to mitigate bias in ML-based software. Previous studies have proposed bias-mitigation algorithms that work only in specific situations and often sacrifice accuracy. We propose a novel approach that uses automated machine learning (AutoML) to mitigate bias through two key innovations: a novel optimization function and a fairness-aware search space. By extending the default optimization function of AutoML to incorporate fairness objectives, we mitigate bias with little to no loss of accuracy. We also propose a fairness-aware search-space pruning method that reduces computational cost and repair time. Our approach, Fair-AutoML, is built on the state-of-the-art Auto-Sklearn tool and is designed to reduce bias in real-world scenarios. Evaluated on four fairness problems and 16 ML models, Fair-AutoML repaired 60 of 64 buggy cases, a significant improvement over the baseline and existing bias-mitigation techniques, which repaired at most 44 of 64 cases.
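The core idea of a fairness-aware optimization function can be sketched as a single scalar objective that rewards accuracy while penalizing a fairness violation. The sketch below is an illustrative assumption, not the paper's exact formulation: it uses statistical parity difference as the fairness term and a hypothetical trade-off weight `lam`.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def fairness_aware_score(y_true, y_pred, group, lam=0.5):
    """Combined objective: accuracy minus a weighted fairness violation.

    Higher is better. `lam` (an illustrative choice) controls how strongly
    an AutoML search would be pushed toward fair pipelines.
    """
    accuracy = (y_true == y_pred).mean()
    return accuracy - lam * statistical_parity_difference(y_pred, group)

# Toy example: a perfectly accurate predictor whose positive rate
# differs between the two groups, so the fairness penalty applies.
y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 1, 1, 1])
print(fairness_aware_score(y_true, y_pred, group))
```

Such a scorer could, for instance, be handed to an AutoML tool as a custom metric so that hyperparameter search optimizes fairness and accuracy jointly rather than accuracy alone.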




