Data Preprocessing to Mitigate Bias with Boosted Fair Mollifiers

12/01/2020
by Alexander Soen, et al.

In a recent paper, Celis et al. (2020) introduced a new approach to fairness that corrects the data distribution itself. The approach is computationally appealing, but its approximation guarantees with respect to the target distribution can be quite loose, as they rely on a (typically limited) number of constraints on data-based aggregated statistics; the resulting fairness guarantee can also be data dependent. Our paper combines a mathematical object recently introduced in privacy, mollifiers of distributions, with a popular approach to machine learning, boosting, to obtain an approach in the same lineage as Celis et al. but without those impediments: in particular, it offers better guarantees in terms of accuracy and finer guarantees in terms of fairness. The approach involves learning the sufficient statistics of an exponential family. When the training data is tabular, these sufficient statistics are defined by decision trees, whose interpretability can provide clues about the source of (un)fairness. Experiments display the quality of the results obtained on simulated and real-world data.
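To make the idea concrete, here is a minimal sketch (not the authors' code, and simplified relative to the paper) of the general recipe the abstract describes: an empirical data distribution is iteratively tilted by exponential-family corrections whose sufficient statistics are decision trees, with each round acting like a boosting step that down-weights regions where a sensitive attribute `s` is easy to predict. The data, the step size `eta`, and the number of rounds are all illustrative assumptions.

```python
# Hedged illustration: reweight a tabular dataset with exponential tilts
# whose sufficient statistics are decision-tree log-odds. This is a toy
# sketch of the boosting-style debiasing idea, not the paper's algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy tabular data: one feature correlated with a sensitive attribute s.
n = 2000
s = rng.integers(0, 2, n)                       # sensitive attribute
x = (s + rng.normal(0.0, 1.0, n)).reshape(-1, 1)  # biased feature

w = np.full(n, 1.0 / n)  # weights of the current (empirical) distribution

for _ in range(10):  # boosting-style rounds
    # Sufficient statistic: a shallow tree predicting s from x under w.
    tree = DecisionTreeClassifier(max_depth=2)
    tree.fit(x, s, sample_weight=w)
    p = tree.predict_proba(x)[:, 1].clip(1e-6, 1 - 1e-6)
    t = np.log(p / (1 - p))  # tree log-odds as T(x)

    # Exponential tilt: shrink weights where s is confidently predicted,
    # grow them where the prediction is confidently wrong, so each leaf
    # drifts toward a balanced weighted mix of the two groups.
    eta = 0.5
    w *= np.exp(-eta * np.where(s == 1, t, -t))
    w /= w.sum()

# w now defines a reweighted distribution in which x carries less
# information about s than under uniform weights.
```

Each round multiplies the density by exp(-eta * s~ * T(x)) with a tree-shaped T, so the product over rounds is an exponential-family correction of the original distribution; the trees themselves remain readable, which is the interpretability angle the abstract mentions.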


