Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework

09/20/2023
by Sina Baharlouei, et al.

While training fair machine learning models has been studied extensively in recent years, most developed methods rely on the assumption that the training and test data have similar distributions. In the presence of distribution shifts, fair models may behave unfairly on test data. To address this shortcoming, several approaches to fair learning robust to distribution shifts have been developed. However, most proposed solutions assume access to the causal graph describing the interaction of different features. Moreover, existing algorithms require full access to the data and cannot be run on small batches (i.e., they lack a stochastic/mini-batch implementation). This paper proposes the first stochastic distributionally robust fairness framework with convergence guarantees that does not require knowledge of the causal graph. More specifically, we formulate fair inference in the presence of distribution shift as a distributionally robust optimization problem under L_p norm uncertainty sets, with the Exponential Rényi Mutual Information (ERMI) as the measure of fairness violation. We then discuss how the proposed method can be implemented in a stochastic fashion. We evaluate the framework's performance and efficiency through extensive experiments on real datasets exhibiting distribution shifts.
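To make the formulation concrete, the following is a schematic of the kind of min-max objective the abstract describes. The notation (ε for the radius of the uncertainty ball, λ for the fairness regularization weight) is illustrative rather than taken from the paper, and the exact placement of the uncertainty set (on the prediction loss, the fairness term, or both) follows the paper itself:

    min_θ  max_{q : ||q - p̂||_p ≤ ε}  E_q[ ℓ(θ; x, y) ] + λ · ERMI_q( ŷ_θ(x), s )

Here p̂ denotes the empirical training distribution, ℓ the prediction loss, s the sensitive attribute, and the inner maximization searches over distributions q in an L_p-norm ball around p̂, so the learned model remains accurate and fair under the worst-case shift within that ball.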


Related research

02/24/2021
FERMI: Fair Empirical Risk Minimization via Exponential Rényi Mutual Information
In this paper, we propose a new notion of fairness violation, called Exp...

07/04/2022
How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts
Increasing concerns have been raised on deep learning fairness in recent...

06/26/2022
Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
The increasing reliance on ML models in high-stakes tasks has raised a m...

10/27/2021
Sample Selection for Fair and Robust Training
Fairness and robustness are critical elements of Trustworthy AI that nee...

04/13/2022
Distributionally Robust Models with Parametric Likelihood Ratios
As machine learning models are deployed ever more broadly, it becomes in...

09/13/2021
On Tilted Losses in Machine Learning: Theory and Applications
Exponential tilting is a technique commonly used in fields such as stati...

02/24/2020
FR-Train: A mutual information-based approach to fair and robust training
Trustworthy AI is a critical issue in machine learning where, in additio...
