Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning

10/30/2022
by   Zhang Qingquan, et al.

In the literature on mitigating unfairness in machine learning, many fairness measures are designed to evaluate the predictions of learning models and are also used to guide the training of fair models. It has been shown, both theoretically and empirically, that there are conflicts and inconsistencies among accuracy and multiple fairness measures: optimising one or several fairness measures may sacrifice or deteriorate others. Two key questions therefore arise: how to simultaneously optimise accuracy and multiple fairness measures, and how to optimise all the considered fairness measures more effectively. In this paper, we formulate mitigating unfairness as a multi-objective learning problem that accounts for the conflicts among fairness measures. A multi-objective evolutionary learning framework is used to simultaneously optimise several metrics (including accuracy and multiple fairness measures) of machine learning models. Ensembles are then constructed from the learned models in order to automatically balance the different metrics. Empirical results on eight well-known datasets demonstrate that, compared with state-of-the-art approaches for mitigating unfairness, the proposed algorithm provides decision-makers with better tradeoffs among accuracy and multiple fairness metrics. Furthermore, the high-quality models generated by the framework can be used to construct an ensemble that automatically achieves a better tradeoff among all the considered fairness metrics than other ensemble methods. Our code is publicly available at https://github.com/qingquan63/FairEMOL
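The core selection step in multi-objective learning of this kind, keeping candidate models that are mutually non-dominated with respect to accuracy and fairness objectives, can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's FairEMOL implementation: it uses the demographic parity difference as a single stand-in fairness measure (an assumption here; the paper considers multiple measures) and returns the Pareto front over candidate models' objective vectors, where both objectives are minimised.

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups (0 and 1)."""
    r0 = [p for p, g in zip(y_pred, group) if g == 0]
    r1 = [p for p, g in zip(y_pred, group) if g == 1]
    return abs(sum(r0) / len(r0) - sum(r1) / len(r1))

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objectives):
    """Indices of the non-dominated candidate models."""
    return [i for i, a in enumerate(objectives)
            if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i)]

# Example: three candidate models' (error, unfairness) objective vectors.
# Model 2 is dominated by model 0, so only models 0 and 1 survive.
candidates = [(0.10, 0.30), (0.20, 0.20), (0.30, 0.40)]
print(pareto_front(candidates))  # -> [0, 1]
```

In an evolutionary loop, such non-dominated sorting would drive parent selection each generation; the surviving front is also a natural pool from which to build the ensemble that balances the metrics.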


Related research

Fair and Green Hyperparameter Optimization via Multi-objective and Multiple Information Source Bayesian Optimization (05/18/2022)
There is a consensus that focusing only on accuracy in searching for opt...

Learning Fair Rule Lists (09/09/2019)
The widespread use of machine learning models, especially within the con...

FEAMOE: Fair, Explainable and Adaptive Mixture of Experts (10/10/2022)
Three key properties that are desired of trustworthy machine learning mo...

Arbitrariness Lies Beyond the Fairness-Accuracy Frontier (06/15/2023)
Machine learning tasks may admit multiple competing models that achieve ...

A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others (12/09/2022)
Machine learning models have been found to learn shortcuts – unintended ...

FITNESS: A Causal De-correlation Approach for Mitigating Bias in Machine Learning Software (05/23/2023)
Software built on top of machine learning algorithms is becoming increas...

The Fairness-Accuracy Pareto Front (08/25/2020)
Mitigating bias in machine learning is a challenging task, due in large ...
