Learning fair predictors with Sensitive Subspace Robustness

06/28/2019
by Mikhail Yurochkin, et al.

We consider an approach to training machine learning systems that are fair in the sense that their performance is invariant under certain perturbations to the features. For example, the performance of a resume screening system should be invariant under changes to the applicant's name or gender pronouns. We connect this intuitive notion of algorithmic fairness to individual fairness and study how to certify ML algorithms as algorithmically fair. We also demonstrate the effectiveness of our approach on three machine learning tasks that are susceptible to gender and racial biases.
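As a rough illustration (not the authors' implementation), the invariance idea can be sketched as adversarial training in which perturbations are confined to a "sensitive subspace" spanned by directions that encode protected information, such as a learned gender direction. All names and hyperparameters below (`sensitive_perturbation`, `sensitive_basis`, step sizes, the toy model) are illustrative assumptions; the paper's method additionally constrains the size of the perturbation with a fair metric, which is omitted here for brevity.

```python
# Minimal sketch of subspace-restricted adversarial training (PyTorch).
import torch
import torch.nn as nn

def sensitive_perturbation(model, loss_fn, x, y, sensitive_basis,
                           steps=10, step_size=0.1):
    # Find a worst-case perturbation of x confined to span(sensitive_basis).
    coeffs = torch.zeros(x.size(0), sensitive_basis.size(0), requires_grad=True)
    for _ in range(steps):
        delta = coeffs @ sensitive_basis      # (batch, k) @ (k, d) -> (batch, d)
        loss = loss_fn(model(x + delta), y)   # loss on the perturbed inputs
        grad, = torch.autograd.grad(loss, coeffs)
        with torch.no_grad():
            coeffs += step_size * grad        # gradient ascent: make the perturbation adversarial
    return (coeffs @ sensitive_basis).detach()

# Toy usage: 2 hypothetical sensitive directions in a 10-dimensional feature space.
torch.manual_seed(0)
model = nn.Linear(10, 2)                      # toy classifier, 2 classes
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(32, 10)                       # toy batch of features
y = torch.randint(0, 2, (32,))
sensitive_basis = torch.randn(2, 10)          # stand-in for learned sensitive directions

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                            # a few robust training steps
    delta = sensitive_perturbation(model, loss_fn, x, y, sensitive_basis)
    optimizer.zero_grad()
    loss_fn(model(x + delta), y).backward()   # fit the model on the perturbed inputs
    optimizer.step()
```

Training on inputs perturbed only along the sensitive directions pushes the model toward predictions that do not change when those attributes change, which is the invariance the abstract describes.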


Related research:

- Two Simple Ways to Learn Individual Fairness Metrics from Data (06/19/2020)
  Individual fairness is an intuitive definition of algorithmic fairness t...

- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness (06/25/2020)
  In this paper, we cast fair machine learning as invariant machine learni...

- Human-Guided Fair Classification for Natural Language Processing (12/20/2022)
  Text classifiers have promising applications in high-stake tasks such as...

- Measuring justice in machine learning (09/21/2020)
  How can we build more just machine learning systems? To answer this ques...

- Fairness in Matching under Uncertainty (02/08/2023)
  The prevalence and importance of algorithmic two-sided marketplaces has ...

- Learning Antidote Data to Individual Unfairness (11/29/2022)
  Fairness is an essential factor for machine learning systems deployed in...

- Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings (02/10/2022)
  Many popular algorithmic fairness measures depend on the joint distribut...
