
Data Preprocessing to Mitigate Bias with Boosted Fair Mollifiers

by   Alexander Soen, et al.

In a recent paper, Celis et al. (2020) introduced a new approach to fairness that corrects the data distribution itself. The approach is computationally appealing, but its approximation guarantees with respect to the target distribution can be quite loose, as they rely on a (typically limited) number of constraints on data-based aggregate statistics; this also yields a fairness guarantee that can be data dependent. Our paper combines a mathematical object recently introduced in privacy, mollifiers of distributions, with a popular machine learning technique, boosting, to obtain an approach in the same lineage as Celis et al. but without those impediments, in particular with better accuracy guarantees and finer fairness guarantees. The approach involves learning the sufficient statistics of an exponential family. When the training data is tabular, these statistics are defined by decision trees, whose interpretability can provide clues about the source of (un)fairness. Experiments demonstrate the quality of the results obtained on simulated and real-world data.
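To give a flavor of the idea of correcting the data distribution via an exponential family, here is a minimal, hypothetical sketch (not the paper's algorithm): the empirical distribution is exponentially tilted, w(x) ∝ exp(θ·T(x)), where T plays the role of a sufficient statistic. In the paper T is built from boosted decision trees; here, purely for illustration, T(x) is taken to be the protected attribute itself and θ is fitted iteratively so that the reweighted data reaches equal group proportions, a statistical-parity-style target. All variable names and the 50/50 target are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: binary protected attribute `a`, sampled with bias
# (roughly 30% of rows belong to group a = 1).
a = (rng.random(10_000) < 0.3).astype(float)

# Exponential tilt of the empirical distribution: w(x) ∝ exp(theta * T(x)),
# with the single (hypothetical) sufficient statistic T(x) = a. Fit theta so
# the reweighted data has equal group proportions, mimicking one round of
# fitting a statistic and its leverage coefficient.
target = 0.5
theta = 0.0
for _ in range(5):  # converges immediately for this linear statistic
    w = np.exp(theta * a)
    p1 = float((w * a).sum() / w.sum())  # reweighted share of group a = 1
    theta += np.log(target / (1 - target)) - np.log(p1 / (1 - p1))

w = np.exp(theta * a)
w /= w.sum()
print(round(float((w * a).sum()), 3))  # reweighted group share ≈ 0.5
```

The actual method learns richer statistics (trees over all features) and comes with explicit accuracy and fairness guarantees; this sketch only shows the reweighting mechanism such a tilt induces.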

