Learning Antidote Data to Individual Unfairness

11/29/2022
by Peizhao Li, et al.

Fairness is an essential consideration for machine learning systems deployed in high-stakes applications. Among fairness notions, individual fairness, following the consensus that "similar individuals should be treated similarly," is vital for guaranteeing fair treatment of individual cases. Previous methods typically characterize individual fairness as prediction invariance under perturbations of sensitive attributes and enforce it via the Distributionally Robust Optimization (DRO) paradigm. However, adversarial perturbations along a direction covering sensitive information ignore inherent feature correlations and innate data constraints, and thus mislead the model into optimizing at off-manifold, unrealistic samples. In light of this, we propose a method to learn and generate antidote data that approximately follow the data distribution to remedy individual unfairness. These on-manifold antidote data can be combined with the original training data through a generic optimization procedure, yielding a pure pre-processing approach to individual unfairness, or they can fit naturally into the in-processing DRO paradigm. Through extensive experiments, we demonstrate that our antidote data resist individual unfairness at minimal or zero cost to the model's predictive utility.
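The pre-processing idea can be sketched in a toy form. The hypothetical `make_antidote` helper below flips a binary sensitive attribute and then snaps each flipped point to its nearest real example in the target group, a crude stand-in for the learned on-manifold generator the abstract describes; the snapped points and their original labels are then appended to the training set. All names, the nearest-neighbour projection, and the label-inheritance choice are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the last column is a binary sensitive attribute.
n, d = 200, 5
X = rng.normal(size=(n, d))
X[:, -1] = rng.integers(0, 2, size=n).astype(float)
y = (X[:, 0] + 0.5 * X[:, -1] > 0).astype(int)

def make_antidote(X, sensitive_col=-1):
    """Flip the sensitive attribute, then snap each flipped point to its
    nearest real example in the opposite group, so the antidote data stay
    close to the data manifold (a crude stand-in for a learned generator)."""
    Xa = X.copy()
    Xa[:, sensitive_col] = 1.0 - Xa[:, sensitive_col]
    out = np.empty_like(Xa)
    for i, x in enumerate(Xa):
        # Real examples sharing the flipped sensitive value.
        group = X[X[:, sensitive_col] == x[sensitive_col]]
        # Nearest neighbour on the non-sensitive features only.
        dists = np.linalg.norm(group[:, :sensitive_col] - x[:sensitive_col], axis=1)
        out[i] = group[np.argmin(dists)]
    return out

X_anti = make_antidote(X)
# Pure pre-processing: train any downstream model on the augmented set.
X_aug = np.vstack([X, X_anti])
y_aug = np.concatenate([y, y])  # antidote points inherit the original labels
```

Any off-the-shelf classifier can then be trained on `X_aug, y_aug` unchanged, which is what makes the pre-processing route model-agnostic.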

Related research

- iFlipper: Label Flipping for Individual Fairness (09/15/2022). As machine learning becomes prevalent, mitigating any unfairness present...
- Achieving Fairness at No Utility Cost via Data Reweighing (02/01/2022). With the fast development of algorithmic governance, fairness has become...
- Matrix Estimation for Individual Fairness (02/04/2023). In recent years, multiple notions of algorithmic fairness have arisen. O...
- Learning fair predictors with Sensitive Subspace Robustness (06/28/2019). We consider an approach to training machine learning systems that are fa...
- Human-Guided Fair Classification for Natural Language Processing (12/20/2022). Text classifiers have promising applications in high-stake tasks such as...
- Models and Mechanisms for Fairness in Location Data Processing (04/04/2022). Location data use has become pervasive in the last decade due to the adv...
- Leave-one-out Unfairness (07/21/2021). We introduce leave-one-out unfairness, which characterizes how likely a ...
