Towards Understanding Fairness and its Composition in Ensemble Machine Learning

12/08/2022
by   Usman Gohar, et al.

Machine Learning (ML) software has been widely adopted in modern society, with reported fairness implications for minority groups based on race, sex, age, etc. Many recent works have proposed methods to measure and mitigate algorithmic bias in ML models. The existing approaches focus on single classifier-based ML models. However, real-world ML models are often composed of multiple independent or dependent learners in an ensemble (e.g., Random Forest), where fairness composes in a non-trivial way. How does fairness compose in ensembles? What are the fairness impacts of the learners on the ultimate fairness of the ensemble? Can fair learners result in an unfair ensemble? Furthermore, studies have shown that hyperparameters influence the fairness of ML models. Ensemble hyperparameters are more complex since they affect how learners are combined in different categories of ensembles. Understanding the impact of ensemble hyperparameters on fairness will help programmers design fair ensembles. These questions are not yet fully understood for different ensemble algorithms. In this paper, we comprehensively study popular real-world ensembles: bagging, boosting, stacking, and voting. We have developed a benchmark of 168 ensemble models collected from Kaggle on four popular fairness datasets. We use existing fairness metrics to understand the composition of fairness. Our results show that ensembles can be designed to be fairer without using mitigation techniques. We also identify the interplay between fairness composition and data characteristics to guide fair ensemble design. Finally, our benchmark can be leveraged for further research on fair ensembles. To the best of our knowledge, this is one of the first and largest studies on fairness composition in ensembles yet presented in the literature.
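The question "can fair learners result in an unfair ensemble?" can be illustrated with a minimal sketch. The data below is synthetic and not from the paper: three hypothetical binary classifiers, each with a modest statistical parity difference (SPD) of -0.25, combine under majority voting into an ensemble with a much larger disparity of -0.75.

```python
# Illustrative sketch (synthetic data, not from the paper): statistical
# parity difference (SPD) of individual learners vs. their majority-vote
# ensemble. SPD = P(yhat=1 | unprivileged) - P(yhat=1 | privileged).

def spd(preds, groups):
    """Statistical parity difference over binary predictions and groups
    (group 1 = privileged, group 0 = unprivileged)."""
    priv = [p for p, g in zip(preds, groups) if g == 1]
    unpriv = [p for p, g in zip(preds, groups) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

def majority_vote(*learner_preds):
    """Combine per-learner binary predictions by simple majority vote."""
    return [1 if 2 * sum(votes) > len(votes) else 0
            for votes in zip(*learner_preds)]

# Synthetic protected attribute: first four instances privileged.
groups = [1, 1, 1, 1, 0, 0, 0, 0]

# Each learner individually has SPD = -0.25 (a modest disparity) ...
l1 = [1, 1, 0, 0, 1, 0, 0, 0]
l2 = [1, 0, 1, 0, 0, 1, 0, 0]
l3 = [0, 1, 1, 0, 0, 0, 1, 0]

# ... but the learners agree on their positive predictions only for the
# privileged group, so the voting ensemble amplifies the disparity.
ensemble = majority_vote(l1, l2, l3)

print([round(spd(l, groups), 2) for l in (l1, l2, l3)])  # [-0.25, -0.25, -0.25]
print(round(spd(ensemble, groups), 2))                   # -0.75
```

The mechanism is that disagreement among learners on the unprivileged group cancels out under voting, while agreement on the privileged group survives, so composition can make the ensemble less fair than any of its parts.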

