Improving Fair Training under Correlation Shifts

02/05/2023
by Yuji Roh, et al.

Model fairness is an essential element for Trustworthy AI. While many techniques for model fairness have been proposed, most of them assume that the training and deployment data distributions are identical, which is often not true in practice. In particular, when the bias between labels and sensitive groups changes, the fairness of the trained model is directly affected and can worsen. We make two contributions toward solving this problem. First, we analytically show that existing in-processing fair algorithms have fundamental limits in accuracy and group fairness. We introduce the notion of correlation shifts, which explicitly captures the change of the above bias. Second, we propose a novel pre-processing step that samples the input data to reduce correlation shifts and thus enables the in-processing approaches to overcome their limitations. We formulate an optimization problem for adjusting the data ratio among labels and sensitive groups to reflect the shifted correlation. A key benefit of our approach lies in decoupling the roles of pre- and in-processing: correlation adjustment via pre-processing and unfairness mitigation on the processed data via in-processing. Experiments show that our framework effectively improves existing in-processing fair algorithms w.r.t. accuracy and fairness, on both synthetic and real datasets.
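To illustrate the pre-processing idea, here is a minimal sketch of resampling the training data so that the label and the sensitive attribute become (empirically) independent, i.e., their correlation is driven to zero. This is a hypothetical helper for intuition only; the paper's actual method solves an optimization problem that matches the *shifted* correlation rather than forcing independence, and the function name and data layout below are assumptions.

```python
import random
from collections import Counter

def resample_for_independence(examples, seed=0):
    """Subsample (x, y, z) tuples so that label y and sensitive
    attribute z are empirically independent.

    Illustrative sketch only: the paper instead adjusts the data
    ratio among labels and sensitive groups to reflect a shifted
    (not necessarily zero) correlation.
    """
    rng = random.Random(seed)
    n = len(examples)
    counts = Counter((y, z) for _, y, z in examples)   # per-cell counts
    n_y = Counter(y for _, y, _ in examples)           # label marginals
    n_z = Counter(z for _, _, z in examples)           # group marginals
    # Under independence, the ideal size of cell (y, z) is n*p(y)*p(z)
    # = n_y[y] * n_z[z] / n.  Scale all cells down uniformly so no cell
    # needs more examples than it actually has.
    scale = min(counts[(y, z)] / (n_y[y] * n_z[z] / n)
                for (y, z) in counts)
    target = {(y, z): int(scale * n_y[y] * n_z[z] / n)
              for (y, z) in counts}
    out = []
    for (y, z), k in target.items():
        cell = [e for e in examples if (e[1], e[2]) == (y, z)]
        out.extend(rng.sample(cell, k))
    return out
```

For example, a dataset where (y, z) cells have counts 40/10/10/40 (a strong positive correlation) would be subsampled to equal cell sizes, after which an in-processing fair algorithm trains on data free of that bias.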


Related research

- 06/26/2022 · Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
  The increasing reliance on ML models in high-stakes tasks has raised a m...
- 09/01/2022 · Fair mapping
  To mitigate the effects of undesired biases in models, several approache...
- 09/15/2022 · iFlipper: Label Flipping for Individual Fairness
  As machine learning becomes prevalent, mitigating any unfairness present...
- 10/29/2021 · A Pre-processing Method for Fairness in Ranking
  Fair ranking problems arise in many decision-making processes that often...
- 07/06/2023 · BaBE: Enhancing Fairness via Estimation of Latent Explaining Variables
  We consider the problem of unfair discrimination between two groups and ...
- 10/27/2021 · Sample Selection for Fair and Robust Training
  Fairness and robustness are critical elements of Trustworthy AI that nee...
- 01/02/2022 · Fair Data Representation for Machine Learning at the Pareto Frontier
  As machine learning powered decision making is playing an increasingly i...
