Robust Fairness under Covariate Shift

10/11/2020
by Ashkan Rezaei, et al.

Making predictions that are fair with regard to protected group membership (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from sampled labeled data, relying on the assumption that training and testing data are drawn independently and identically (iid) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of individuals interacting with the machine learning system, and which individuals interact with the system, change. We investigate fairness under covariate shift, a relaxation of the iid assumption in which the inputs (covariates) change while the conditional label distribution remains the same. We seek fair decisions under these assumptions on target data with unknown labels. We propose an approach that obtains a predictor robust to the worst case in terms of target performance while satisfying target fairness requirements and matching statistical properties of the source data. We demonstrate the benefits of our approach on benchmark prediction tasks.
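To make the covariate-shift setting concrete, the following is a minimal sketch (not the authors' method) of the standard assumption the abstract describes: the covariate distribution p(x) differs between source and target, the conditional label distribution p(y|x) is shared, and importance weights w(x) = p_target(x)/p_source(x) let source samples estimate target-distribution quantities. All distributions, the labeling rule, and the fixed classifier here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: p(x) differs between source and target,
# but the conditional label distribution p(y|x) is shared.
mu_s, mu_t, sigma = 0.0, 2.0, 1.0
x_s = rng.normal(mu_s, sigma, 20000)   # source covariates
x_t = rng.normal(mu_t, sigma, 20000)   # target covariates

def p_y_given_x(x):
    """Shared labeling rule p(y=1|x), identical under source and target."""
    return 1.0 / (1.0 + np.exp(-x))

y_s = rng.random(x_s.size) < p_y_given_x(x_s)
y_t = rng.random(x_t.size) < p_y_given_x(x_t)

# A fixed, illustrative classifier: predict 1 iff x > 1.
pred_s, pred_t = x_s > 1.0, x_t > 1.0

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_target(x) / p_source(x), known in closed form here;
# in practice they must be estimated, which is part of what makes the problem hard.
w = gaussian_pdf(x_s, mu_t, sigma) / gaussian_pdf(x_s, mu_s, sigma)

acc_source   = np.mean(pred_s == y_s)                 # naive iid estimate
acc_weighted = np.average(pred_s == y_s, weights=w)   # shift-corrected estimate
acc_target   = np.mean(pred_t == y_t)                 # ground truth on target data

print(f"source {acc_source:.3f}  weighted {acc_weighted:.3f}  target {acc_target:.3f}")
```

The naive source-data estimate of accuracy is badly biased under the shift, while the importance-weighted estimate tracks the true target accuracy; the paper's robust formulation goes further by optimizing worst-case target performance subject to fairness constraints rather than relying on reweighting alone.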

Related research:

- Robust Covariate Shift Prediction with General Losses and Feature Views (12/28/2017): Covariate shift relaxes the widely-employed independent and identically ...
- Kernel Robust Bias-Aware Prediction under Covariate Shift (12/28/2017): Under covariate shift, training (source) data and testing (target) data ...
- More Generalizable Models For Sepsis Detection Under Covariate Shift (05/19/2021): Sepsis is a major cause of mortality in the intensive care units (ICUs). ...
- Fair Classification under Covariate Shift and Missing Protected Attribute – an Investigation using Related Features (04/17/2022): This study investigated the problem of fair classification under Covaria...
- Distribution Shift Inversion for Out-of-Distribution Prediction (06/14/2023): Machine learning society has witnessed the emergence of a myriad of Out-...
- Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings (02/10/2022): Many popular algorithmic fairness measures depend on the joint distribut...
- A Distribution-Free Test of Covariate Shift Using Conformal Prediction (10/14/2020): Covariate shift is a common and important assumption in transfer learnin...
