How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts

07/04/2022
by   Haotao Wang, et al.

Concerns about fairness in deep learning have grown in recent years. Existing fairness-aware machine learning methods mainly focus on fairness for in-distribution data; in real-world applications, however, distribution shifts between the training and test data are common. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To address this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which achieves robust fairness that generalizes to unseen domains with unknown distribution shifts. Specifically, CUMA enforces similar generalization ability on the majority and minority groups by matching the loss-curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts, without sacrificing either overall accuracy or in-distribution fairness.
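To make the core idea concrete, here is a minimal sketch of curvature matching on a toy 1-D logistic model. The curvature estimator (a second finite difference of the per-sample loss) and the two-moment matching penalty are illustrative stand-ins chosen for this sketch; the paper's actual CUMA objective and its distribution-matching term may differ.

```python
import math

def per_sample_loss(w, x, y):
    """Logistic loss for one example (x, y) under scalar weight w, y in {-1, +1}."""
    m = -y * (w * x)
    # log(1 + exp(m)), numerically stable for large m
    return math.log1p(math.exp(m)) if m < 30 else m

def curvature(w, x, y, h=1e-3):
    """Second finite difference of the per-sample loss along w:
    a cheap proxy for the loss curvature at this example."""
    f0 = per_sample_loss(w, x, y)
    fp = per_sample_loss(w + h, x, y)
    fm = per_sample_loss(w - h, x, y)
    return (fp - 2.0 * f0 + fm) / (h * h)

def moment_match_penalty(curv_a, curv_b):
    """Penalize differences in the first two moments of the two groups'
    curvature distributions (a simple surrogate for matching the full
    distributions, as CUMA aims to do)."""
    mean_a = sum(curv_a) / len(curv_a)
    mean_b = sum(curv_b) / len(curv_b)
    var_a = sum((c - mean_a) ** 2 for c in curv_a) / len(curv_a)
    var_b = sum((c - mean_b) ** 2 for c in curv_b) / len(curv_b)
    return abs(mean_a - mean_b) + abs(var_a - var_b)

# Per-group curvature samples for a shared weight w, then the matching penalty
# that would be added to the training loss.
w = 0.5
majority = [(0.2, 1), (0.4, 1), (-0.3, -1)]
minority = [(1.5, 1), (-1.2, -1), (2.0, 1)]
curv_maj = [curvature(w, x, y) for x, y in majority]
curv_min = [curvature(w, x, y) for x, y in minority]
penalty = moment_match_penalty(curv_maj, curv_min)
```

In a full training loop, `penalty` would be weighted and added to the usual classification loss, pushing the model toward parameter regions where both groups sit in similarly flat (and hence similarly generalizable) loss basins.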


Related research:

05/01/2022 — Domain Adaptation meets Individual Fairness. And they get along
  Many instances of algorithmic bias are caused by distributional shifts. ...

06/23/2022 — Context matters for fairness – a case study on the effect of spatial distribution shifts
  With the ever growing involvement of data-driven AI-based decision makin...

09/20/2023 — Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework
  While training fair machine learning models has been studied extensively...

09/05/2023 — T-SaS: Toward Shift-aware Dynamic Adaptation for Streaming Data
  In many real-world scenarios, distribution shifts exist in the streaming...

04/09/2023 — Reweighted Mixup for Subpopulation Shift
  Subpopulation shift exists widely in many real-world applications, which...

06/25/2015 — Fairness-Aware Learning with Restriction of Universal Dependency using f-Divergences
  Fairness-aware learning is a novel framework for classification tasks. L...

08/22/2023 — Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection
  Improving the reliability of deployed machine learning systems often inv...
