Transferring Fairness under Distribution Shifts via Fair Consistency Regularization

06/26/2022
by   Bang An, et al.

The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work on improving algorithmic fairness, most of it assumes an identical training and test distribution. In many real-world applications, however, this assumption is often violated: previously trained fair models are deployed in a different environment, and their fairness has been observed to collapse. In this paper, we study how to transfer model fairness under distribution shifts, a widespread issue in practice. We conduct a fine-grained analysis of how a fair model is affected under different types of distribution shifts and find that domain shifts are more challenging than subpopulation shifts. Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness. Guided by it, we propose a practical algorithm with fair consistency regularization as the key component. A synthetic benchmark, which covers all types of distribution shifts, is used to verify the theoretical findings experimentally. Experiments on synthetic and real datasets, including image and tabular data, demonstrate that our approach effectively transfers fairness and accuracy under various distribution shifts.
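To make the idea concrete, the sketch below shows one plausible form of a fair consistency objective on unlabeled target data. This is an illustrative reconstruction, not the paper's actual implementation: the function names, the `lam` weight, and the choice of penalizing the max–min gap in per-group consistency loss are all assumptions made here for clarity.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    # mean cross-entropy of predicted distributions p against integer labels y
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def fair_consistency_loss(logits_weak, logits_strong, groups, lam=1.0):
    """Illustrative fair consistency objective (hypothetical form).

    - pseudo-labels come from predictions on weakly augmented inputs;
    - the consistency term asks strongly augmented predictions to match them,
      as in standard self-training;
    - the fairness term penalizes the gap in consistency loss across
      demographic groups, so no group's predictions are less stable
      than another's under the shift.
    """
    pseudo = softmax(logits_weak).argmax(axis=1)
    p_strong = softmax(logits_strong)
    per_group = [cross_entropy(p_strong[groups == g], pseudo[groups == g])
                 for g in np.unique(groups)]
    consistency = float(np.mean(per_group))
    fairness_gap = float(max(per_group) - min(per_group))
    return consistency + lam * fairness_gap
```

When predictions on weak and strong augmentations agree confidently for every group, both the consistency term and the fairness gap are near zero; if one group's predictions flip under strong augmentation, both terms grow, steering training toward group-balanced stability.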

Related research:

- Fairness and Accuracy under Domain Generalization (01/30/2023)
- Improving Fair Training under Correlation Shifts (02/05/2023)
- Ensuring Fairness under Prior Probability Shifts (05/06/2020)
- Context matters for fairness – a case study on the effect of spatial distribution shifts (06/23/2022)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework (09/20/2023)
- Can I Still Trust You?: Understanding the Impact of Distribution Shifts on Algorithmic Recourses (12/22/2020)
- Preserving Fairness in AI under Domain Shift (01/29/2023)
