Fair Learning with Private Demographic Data

02/26/2020
by Hussein Mozannar, et al.

Sensitive attributes such as race are rarely available to learners in real-world settings, as their collection is often restricted by laws and regulations. We give a scheme that allows individuals to release their sensitive information privately while still allowing any downstream entity to learn non-discriminatory predictors. We show how to adapt non-discriminatory learners to work with privatized protected attributes, giving theoretical guarantees on performance. Finally, we highlight how the methodology could apply to learning fair predictors in settings where protected attributes are only available for a subset of the data.
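To make the release scheme concrete, here is a minimal sketch assuming the private release mechanism is k-ary randomized response under epsilon-local differential privacy, with a standard channel-inversion step the downstream learner could use to debias group statistics computed on the privatized attributes. The function names and the debiasing step are illustrative assumptions, not necessarily the authors' exact construction.

```python
import numpy as np

def randomized_response(attribute, epsilon, num_values=2, rng=None):
    """Privately release a discrete attribute in {0, ..., k-1} under
    epsilon-local differential privacy via k-ary randomized response."""
    rng = rng if rng is not None else np.random.default_rng()
    k = num_values
    # Keep the true value with probability e^eps / (e^eps + k - 1).
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return int(attribute)
    # Otherwise report one of the k - 1 other values uniformly at random.
    others = [v for v in range(k) if v != attribute]
    return int(rng.choice(others))

def debias_counts(noisy_counts, epsilon):
    """Invert the known randomized-response channel to recover unbiased
    estimates of the true per-group counts from privatized reports."""
    k = len(noisy_counts)
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)  # P(report true value)
    q = 1.0 / (np.exp(epsilon) + k - 1)              # P(report a given other value)
    channel = np.full((k, k), q)
    np.fill_diagonal(channel, p)  # channel[i, j] = P(report i | true value j)
    return np.linalg.solve(channel, np.asarray(noisy_counts, dtype=float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_attrs = rng.integers(0, 2, size=10_000)  # hypothetical binary group labels
    released = np.array(
        [randomized_response(a, epsilon=1.0, rng=rng) for a in true_attrs]
    )
    noisy = np.bincount(released, minlength=2)
    print(debias_counts(noisy, epsilon=1.0))  # approximately the true group sizes
```

Because the channel parameters depend only on the public epsilon, any group-conditional statistic a fairness-constrained learner needs can be estimated from the privatized attributes and corrected in this way, which is what makes downstream non-discriminatory learning possible despite the injected noise.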


Related research

06/08/2020 · Fair Classification with Noisy Protected Attributes
Due to the growing deployment of classification algorithms in various so...

07/06/2023 · When Fair Classification Meets Noisy Protected Attributes
The operationalization of algorithmic fairness comes with several practi...

11/09/2020 · Mitigating Bias in Set Selection with Noisy Protected Attributes
Subset selection algorithms are ubiquitous in AI-driven applications, in...

06/08/2018 · Blind Justice: Fairness with Encrypted Sensitive Attributes
Recent work has explored how to train machine learning models which do n...

12/17/2018 · BriarPatches: Pixel-Space Interventions for Inducing Demographic Parity
We introduce the BriarPatch, a pixel-space intervention that obscures se...

09/26/2020 · Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach
A critical concern in data-driven decision making is to build models who...

07/24/2023 · Causal Fair Machine Learning via Rank-Preserving Interventional Distributions
A decision can be defined as fair if equal individuals are treated equal...
