Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness?

12/05/2022
by   Khaled Badran, et al.

As machine learning (ML) systems are adopted in increasingly critical areas, it has become crucial to address the bias that can occur in these systems. Several fairness pre-processing algorithms are available to mitigate implicit bias during model training. These algorithms employ different notions of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential of combining all of them into a more robust pre-processing ensemble. We report lessons learned that can help practitioners better select fairness algorithms for their models.
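The abstract does not name the three algorithms evaluated, but a classic example of the kind of fairness pre-processing discussed is reweighing (Kamiran & Calders), which assigns each training instance a weight so that the protected group and the label look statistically independent. A minimal self-contained sketch, purely illustrative and not the authors' implementation:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Reweighing-style instance weights:
    w(g, y) = P(g) * P(y) / P(g, y).
    After weighting, each (group, label) cell contributes as if
    group membership and label were independent, reducing the
    correlation a downstream model can exploit."""
    n = len(labels)
    p_group = Counter(groups)          # marginal counts per group
    p_label = Counter(labels)          # marginal counts per label
    p_joint = Counter(zip(groups, labels))  # joint counts
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical toy data: group 0 receives the positive label more often,
# so its positive instances are down-weighted and its negatives up-weighted.
weights = reweighing_weights([0, 0, 0, 1], [1, 1, 0, 0])
# -> [0.75, 0.75, 1.5, 0.5]
```

The resulting weights can be passed to any learner that accepts per-sample weights (e.g. a `sample_weight` argument). An ensemble in the paper's sense would combine several such pre-processors, each encoding a different fairness notion.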

