SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness

06/25/2020
by Mikhail Yurochkin, et al.

In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets. We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently. Our theoretical results guarantee that the proposed approach trains certifiably fair ML models. Finally, in experimental studies we demonstrate improved fairness metrics compared to several recent fair training procedures on three ML tasks that are susceptible to algorithmic bias.
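
The abstract does not spell out the regularizer, but its intuition can be sketched in code. Below is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: it approximates a transport-style invariance penalty by searching, within a "sensitive" subspace, for the input perturbation that most changes the model's predictions, and then penalizing that worst-case change. The function name, the fair_dirs matrix standing in for the paper's sensitive sets, and the KL-divergence discrepancy are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def sensitive_invariance_penalty(model, x, fair_dirs, steps=10, step_size=0.1):
    """Adversarially search the subspace spanned by fair_dirs (shape: k x d)
    for a perturbation of x that maximally shifts the model's predictive
    distribution, and return that worst-case shift as a penalty."""
    # Reference predictions on the unperturbed inputs (held fixed).
    base_logp = F.log_softmax(model(x), dim=1).detach()
    # One coefficient per example and sensitive direction.
    coeffs = torch.zeros(x.size(0), fair_dirs.size(0))
    for _ in range(steps):
        coeffs = coeffs.detach().requires_grad_(True)
        x_adv = x + coeffs @ fair_dirs
        # Discrepancy between predictions on original and perturbed inputs.
        shift = F.kl_div(F.log_softmax(model(x_adv), dim=1), base_logp,
                         reduction="batchmean", log_target=True)
        # Ascend on the coefficients to find the worst-case perturbation.
        (grad,) = torch.autograd.grad(shift, coeffs)
        coeffs = coeffs + step_size * grad.sign()
    x_adv = x + coeffs.detach() @ fair_dirs
    return F.kl_div(F.log_softmax(model(x_adv), dim=1), base_logp,
                    reduction="batchmean", log_target=True)
```

In training, the overall objective would be the task loss plus a coefficient times this penalty, mirroring the regularized formulation described in the abstract; the step count, step size, and choice of discrepancy here are illustrative rather than the paper's.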

Related research

06/28/2019 - Learning fair predictors with Sensitive Subspace Robustness
We consider an approach to training machine learning systems that are fa...

03/31/2021 - Individually Fair Gradient Boosting
We consider the task of enforcing individual fairness in gradient boosti...

02/16/2023 - Individual Fairness Guarantee in Learning with Censorship
Algorithmic fairness, studying how to make machine learning (ML) algorit...

03/02/2021 - The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer
Learning and reasoning over graphs is increasingly done by means of prob...

08/13/2022 - A Novel Regularization Approach to Fair ML
A number of methods have been introduced for the fair ML issue, most of ...

10/24/2019 - Fairness Sample Complexity and the Case for Human Intervention
With the aim of building machine learning systems that incorporate stand...

11/05/2019 - Practical Compositional Fairness: Understanding Fairness in Multi-Task ML Systems
Most literature in fairness has focused on improving fairness with respe...
