
Data Preprocessing to Mitigate Bias with Boosted Fair Mollifiers

12/01/2020
by Alexander Soen, et al.

In a recent paper, Celis et al. (2020) introduced a new approach to fairness that corrects the data distribution itself. The approach is computationally appealing, but its approximation guarantees with respect to the target distribution can be quite loose, as they rely on a (typically limited) number of constraints on data-based aggregated statistics; this also results in a fairness guarantee that can be data dependent. Our paper makes use of a mathematical object recently introduced in privacy, mollifiers of distributions, together with a popular approach to machine learning, boosting, to obtain an approach in the same lineage as Celis et al. but without those impediments, including in particular better guarantees in terms of accuracy and finer guarantees in terms of fairness. The approach involves learning the sufficient statistics of an exponential family. When the training data is tabular, these sufficient statistics are defined by decision trees, whose interpretability can provide clues on the source of (un)fairness. Experiments display the quality of the results obtained on simulated and real-world data.
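To make the general idea concrete, below is a minimal, illustrative sketch of boosting decision trees as learned sufficient statistics of an exponential-family tilting of a base distribution. This is not the paper's algorithm: the fairness (mollifier) constraints and the principled leveraging coefficients are omitted, scikit-learn and NumPy are assumed to be available, and the data, variable names, and fixed step size are hypothetical stand-ins.

# Illustrative sketch (not the paper's algorithm): boost decision trees as
# sufficient statistics of an exponential-family density tilting a base
# distribution q0. All names and constants here are hypothetical.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy tabular data standing in for the (possibly biased) training sample.
data = rng.normal(size=(500, 4))

# Sample from a simple base distribution q0 (here: standard normal).
proposal = rng.normal(size=(500, 4))

trees, alphas = [], []
log_weights = np.zeros(len(proposal))  # log of the unnormalised tilt on q0's sample

for t in range(5):
    # Each round fits a tree to tell the data apart from the current model
    # sample; its predicted log-odds act as a learned sufficient statistic.
    X = np.vstack([data, proposal])
    y = np.concatenate([np.ones(len(data)), np.zeros(len(proposal))])
    sample_weight = np.concatenate([np.ones(len(data)),
                                    np.exp(log_weights - log_weights.max())])
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=sample_weight)

    # Clipped log-odds of "real data" under the tree, evaluated on the proposal.
    p = np.clip(tree.predict_proba(proposal)[:, 1], 1e-3, 1 - 1e-3)
    statistic = np.log(p / (1 - p))

    alpha = 0.5  # fixed step size; the paper derives principled coefficients instead
    log_weights += alpha * statistic

    trees.append(tree)
    alphas.append(alpha)

# The learned density is proportional to q0(x) * exp(sum_t alpha_t * h_t(x)),
# an exponential family whose sufficient statistics h_t are the trees' log-odds.

The sketch only conveys the boosting-with-trees structure; in the paper, the construction additionally constrains the resulting distribution to be a mollifier, which is what yields the fairness guarantee, and the trees themselves remain inspectable as clues about where (un)fairness originates.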

