Data Preprocessing to Mitigate Bias with Boosted Fair Mollifiers

by Alexander Soen, et al.

In a recent paper, Celis et al. (2020) introduced a new approach to fairness that corrects the data distribution itself. The approach is computationally appealing, but its approximation guarantees with respect to the target distribution can be quite loose, as they rely on a (typically limited) number of constraints on data-based aggregated statistics; this also results in a fairness guarantee that can be data-dependent. Our paper makes use of a mathematical object recently introduced in privacy, mollifiers of distributions, and a popular approach to machine learning, boosting, to obtain an approach in the same lineage as Celis et al. but without those impediments, including, in particular, better guarantees in terms of accuracy and finer guarantees in terms of fairness. The approach involves learning the sufficient statistics of an exponential family. When the training data is tabular, these sufficient statistics are defined by decision trees, whose interpretability can provide clues on the source of (un)fairness. Experiments demonstrate the quality of the results obtained on simulated and real-world data.
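The central object described above can be illustrated with a toy sketch: an exponential-family tilt q(x) ∝ p(x)·exp(θ·T(x)) whose sufficient statistic T is built greedily, boosting-style, from decision stumps (one-split trees). Everything below, including the synthetic data, the group-mean-gap disparity proxy, and the fixed step size, is a hypothetical simplification for intuition only, not the paper's actual algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular data: one feature x whose distribution differs across a binary
# sensitive attribute s. (Hypothetical setup for illustration only.)
n = 2000
s = rng.integers(0, 2, n)
x = rng.normal(loc=s.astype(float), scale=1.0, size=n)

w = np.full(n, 1.0 / n)  # empirical distribution, to be reweighted (mollified)

def stump(x, thr):
    """A one-split 'decision tree' sufficient statistic T(x) in {-1, +1}."""
    return np.where(x > thr, 1.0, -1.0)

def group_gap(x, s, w):
    """Disparity proxy: gap between the group-conditional weighted means of x."""
    m0 = np.average(x[s == 0], weights=w[s == 0])
    m1 = np.average(x[s == 1], weights=w[s == 1])
    return abs(m1 - m0)

gap_init = group_gap(x, s, w)

# Boosting loop: at each round, greedily pick the stump and step whose
# exponential tilt  q(x) ∝ p(x) * exp(theta * T(x))  most reduces the
# disparity proxy. theta = 0 keeps the current weights, so the gap is
# non-increasing across rounds.
for _ in range(20):
    best_gap, best_w = None, None
    for thr in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        T = stump(x, thr)
        for theta in (-0.2, 0.0, 0.2):
            wn = w * np.exp(theta * T)
            wn /= wn.sum()
            g = group_gap(x, s, wn)
            if best_gap is None or g < best_gap:
                best_gap, best_w = g, wn
    w = best_w

gap_final = group_gap(x, s, w)
print(gap_init, gap_final)  # group-mean gap before vs. after reweighting
```

The tilted weights w play the role of the corrected data distribution: downstream models can be trained on the same samples with these weights, which is the sense in which the preprocessing "corrects the data distribution itself".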




