Fair Predictors under Distribution Shift

11/02/2019

by Harvineet Singh, et al.

Recent work on fair machine learning adds to a growing set of algorithmic safeguards required for deployment in areas of high societal impact. A fundamental concern with model deployment is guaranteeing stable performance under changes in data distribution. Extensive work in domain adaptation addresses this concern, albeit with a notion of stability limited to predictive performance. We provide conditions under which a model that is stable in both predictive and fairness performance can be trained. Building on the problem setup of causal domain adaptation, we select a subset of features for training predictors with fairness constraints such that the risk with respect to an unseen target data distribution is minimized. Advantages of the approach are demonstrated on synthetic datasets and on the task of diagnosing acute kidney injury in a real-world dataset, under an instance of measurement policy shift and selection bias.


