Estimating and Controlling for Fairness via Sensitive Attribute Predictors

07/25/2022
by Beepul Bharti, et al.

Although machine learning classifiers are increasingly used in high-stakes decision making (e.g., cancer diagnosis, criminal prosecution decisions), they have demonstrated biases against underrepresented groups. Standard definitions of fairness require access to the sensitive attributes of interest (e.g., gender and race), which are often unavailable. In this work, we demonstrate that in settings where sensitive attributes are unknown, one can still reliably estimate, and ultimately control for, fairness by using proxy sensitive attributes derived from a sensitive attribute predictor. Specifically, we first show that with only limited knowledge of the complete data distribution, one may use a sensitive attribute predictor to obtain upper and lower bounds on the classifier's true fairness metric. Second, we demonstrate how one can provably control for fairness with respect to the true sensitive attributes by controlling for fairness with respect to the proxy sensitive attributes. Our results hold under assumptions that are significantly milder than those of previous works. We illustrate our results on a series of synthetic and real datasets.
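To make the setting concrete, the sketch below (not the paper's method, just an illustrative simulation with made-up numbers) computes a standard fairness metric, the demographic parity gap, twice: once with the true sensitive attribute and once with a noisy proxy attribute, as a sensitive attribute predictor would produce. It shows empirically that the proxy-based estimate deviates from the true gap, which is the discrepancy the paper's bounds are designed to control.

```python
import numpy as np

def dp_gap(y_pred, group):
    """Demographic parity gap: |P(Yhat=1 | G=1) - P(Yhat=1 | G=0)|."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

rng = np.random.default_rng(0)
n = 100_000

# True sensitive attribute and a classifier biased toward group A=1
# (positive rate 0.6 for A=1 vs. 0.4 for A=0; arbitrary illustrative values).
a = rng.integers(0, 2, size=n)
y_pred = rng.binomial(1, np.where(a == 1, 0.6, 0.4))

# Proxy attribute: an imperfect predictor that flips the true attribute
# with probability eps (here 5%, chosen for illustration).
eps = 0.05
flip = rng.random(n) < eps
a_hat = np.where(flip, 1 - a, a)

true_gap = dp_gap(y_pred, a)    # close to the design value of 0.2
proxy_gap = dp_gap(y_pred, a_hat)  # attenuated by the proxy's label noise
print(f"true gap:  {true_gap:.3f}")
print(f"proxy gap: {proxy_gap:.3f}")
```

With symmetric 5% proxy error and balanced groups, the proxy-based gap is deterministically attenuated (roughly 0.18 versus the true 0.2 here), so naively auditing with proxy attributes understates unfairness; the paper's contribution is to bound this estimation error in both directions.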


Related research

- 07/24/2023: Fairness Under Demographic Scarce Regime
- 09/10/2021: Fairness without the sensitive attribute via Causal Variational Autoencoder
- 10/05/2019: The Impact of Data Preparation on the Fairness of Software Systems
- 07/09/2021: Multiaccurate Proxies for Downstream Fairness
- 09/12/2023: A Sequentially Fair Mechanism for Multiple Sensitive Attributes
- 06/13/2020: Quota-based debiasing can decrease representation of already underrepresented groups
- 06/26/2023: Balanced Filtering via Non-Disclosive Proxies
