Ensuring Fairness Beyond the Training Data

07/12/2020
by Debmalya Mandal, et al.

We initiate the study of fair classifiers that are robust to perturbations in the training distribution. Despite recent progress, the literature on fairness has largely ignored the design of classifiers that are simultaneously fair and robust. In this work, we develop classifiers that are fair not only with respect to the training distribution, but also with respect to a class of distributions that are weighted perturbations of the training samples. We formulate a min-max objective whose goal is to minimize a distributionally robust training loss while finding a classifier that is fair over this class of distributions. We first reduce the problem to finding a fair classifier that is robust with respect to that class. Building on online learning algorithms, we then develop an iterative algorithm that provably converges to such a fair and robust solution. Experiments on standard machine learning fairness datasets suggest that, compared with state-of-the-art fair classifiers, our classifier retains both its fairness guarantees and its test accuracy under a large class of perturbations of the test set. Furthermore, our experiments show that there is an inherent trade-off between the robustness of the fairness guarantees and the accuracy of such classifiers.
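To make the high-level recipe concrete, here is a minimal sketch (in Python) of one way such a min-max procedure can be organized: an adversary maintains a reweighting of the training samples, representing a weighted perturbation of the empirical distribution, and updates it with a multiplicative-weights (online learning) rule, while a learner refits a classifier on the reweighted data. This is an illustrative sketch under our own assumptions, not the authors' algorithm; the fairness_gap function, the demographic-parity criterion, the reweighting cap radius, and the plain logistic-regression learner are hypothetical stand-ins.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fairness_gap(y_pred, group):
    # Demographic-parity gap: largest difference in positive-prediction rates
    # across the protected groups (hypothetical choice of fairness metric).
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def robust_fair_train(X, y, group, rounds=50, eta=0.5, radius=2.0):
    # y is assumed to be 0/1 labels; `radius` caps how far the adversary may
    # tilt any single sample's weight away from the uniform weight 1/n.
    y = np.asarray(y)
    n = len(y)
    w = np.ones(n) / n                      # adversary's distribution over samples
    clf = None
    for _ in range(rounds):
        # Learner step: fit on the adversarially reweighted samples. In the
        # framework described above this would be a fairness-constrained fit;
        # plain logistic regression stands in here for brevity.
        clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w * n)
        # Adversary step: multiplicative-weights update toward high-loss
        # samples, then (approximately) project back onto the bounded
        # reweighting set {w : w_i <= radius / n}.
        proba = np.clip(clf.predict_proba(X)[np.arange(n), y], 1e-12, 1.0)
        w = w * np.exp(eta * (-np.log(proba)))
        w = np.minimum(w / w.sum(), radius / n)
        w = w / w.sum()
    return clf, fairness_gap(clf.predict(X), group)

A call such as clf, gap = robust_fair_train(X_train, y_train, group_train) (with hypothetical array names) would return the classifier from the final round together with its demographic-parity gap on the unweighted training data; in the formulation above, the fairness guarantee is required to hold over the whole perturbation class, not just the empirical distribution.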


Related research

02/12/2023  On Testing and Comparing Fair classifiers under Data Bias
In this paper, we consider a theoretical model for injecting data bias, ...

06/22/2021  FLEA: Provably Fair Multisource Learning from Unreliable Training Data
Fairness-aware learning aims at constructing classifiers that not only m...

05/27/2022  Prototype Based Classification from Hierarchy to Fairness
Artificial neural nets can represent and classify many types of data but...

06/28/2023  Learning Fair Classifiers via Min-Max F-divergence Regularization
As machine learning (ML) based systems are adopted in domains such as la...

05/23/2023  Fair Oversampling Technique using Heterogeneous Clusters
Class imbalance and group (e.g., race, gender, and age) imbalance are ac...

10/27/2021  Sample Selection for Fair and Robust Training
Fairness and robustness are critical elements of Trustworthy AI that nee...

10/18/2022  Towards Fair Classification against Poisoning Attacks
Fair classification aims to stress the classification models to achieve ...
