An Empirical Study of Modular Bias Mitigators and Ensembles

02/01/2022
by   Michael Feffer, et al.

There are several bias mitigators that can reduce algorithmic bias in machine learning models but, unfortunately, the effect of mitigators on fairness is often not stable when measured across different data splits. A popular approach to training more stable models is ensemble learning. Ensembles, such as bagging, boosting, voting, or stacking, have been successful at making predictive performance more stable. One might therefore ask whether we can combine the advantages of bias mitigators and ensembles. To explore this question, we first need bias mitigators and ensembles to work together. We built an open-source library enabling the modular composition of 10 mitigators, 4 ensembles, and their corresponding hyperparameters. Based on this library, we empirically explored the space of combinations on 13 datasets, including datasets commonly used in the fairness literature plus datasets newly curated by our library. Furthermore, we distilled the results into a guidance diagram for practitioners. We hope this paper will contribute towards improving stability in bias mitigation.
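To make the idea of composing a mitigator with an ensemble concrete, the sketch below is a minimal illustration, not the paper's library or API: it pairs a reweighing-style pre-processing mitigator (in the spirit of Kamiran and Calders) with a scikit-learn bagging ensemble and measures disparate impact across several random splits to probe stability. The synthetic dataset, the protected-group definition, and the helper functions are hypothetical stand-ins.

```python
# Minimal sketch (not the paper's library): compose a reweighing-style
# mitigator with a bagging ensemble and check fairness stability across splits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def reweighing_weights(y, group):
    """Sample weights that equalize the joint distribution of label and
    protected group (expected / observed joint probability)."""
    w = np.ones(len(y))
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()
            observed = mask.mean()
            if observed > 0:
                w[mask] = expected / observed
    return w


def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates: unprivileged (0) vs. privileged (1)."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv if rate_priv > 0 else np.nan


# Synthetic stand-in for a fairness dataset; feature 0 acts as the protected attribute.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = (X[:, 0] > 0).astype(int)

impacts = []
for seed in range(5):  # several data splits, since stability across splits is the concern
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=seed
    )
    ensemble = BaggingClassifier(
        DecisionTreeClassifier(max_depth=5), n_estimators=10, random_state=seed
    )
    # Mitigator composed before the ensemble: fit with reweighed samples.
    ensemble.fit(X_tr, y_tr, sample_weight=reweighing_weights(y_tr, g_tr))
    impacts.append(disparate_impact(ensemble.predict(X_te), g_te))

print("disparate impact per split:", np.round(impacts, 3))
print("mean / std:", np.mean(impacts).round(3), np.std(impacts).round(3))
```

The spread of the per-split disparate impact values is what the paper's notion of stability refers to: a smaller standard deviation across splits indicates a more stable mitigator-ensemble combination.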


Related research

10/11/2022 · Navigating Ensemble Configurations for Algorithmic Fairness
Bias mitigators can improve algorithmic fairness in machine learning mod...

12/08/2022 · Towards Understanding Fairness and its Composition in Ensemble Machine Learning
Machine Learning (ML) software has been widely adopted in modern society...

08/14/2019 · Optimizing Ensemble Weights and Hyperparameters of Machine Learning Models for Regression Problems
Aggregating multiple learners through an ensemble of models aims to make...

09/03/2020 · FairXGBoost: Fairness-aware Classification in XGBoost
Highly regulated domains such as finance have long favoured the use of m...

06/21/2022 · Ensembling over Classifiers: a Bias-Variance Perspective
Ensembles are a straightforward, remarkably effective method for improvi...

04/12/2022 · Detection and Mitigation of Algorithmic Bias via Predictive Rate Parity
Recently, numerous studies have demonstrated the presence of bias in mac...

04/13/2020 · On the Usage and Performance of The Hierarchical Vote Collective of Transformation-based Ensembles version 1.0 (HIVE-COTE 1.0)
The Hierarchical Vote Collective of Transformation-based Ensembles (HIVE...
