Leave-one-out Unfairness

07/21/2021
by Emily Black et al.

We introduce leave-one-out unfairness, which characterizes how likely it is that a model's prediction for an individual will change due to the inclusion or removal of a single other person in the model's training data. Leave-one-out unfairness appeals to the idea that fair decisions are not arbitrary: they should not hinge on the chance event of any one person's inclusion in the training data. Leave-one-out unfairness is closely related to algorithmic stability, but it focuses on the consistency of an individual point's prediction outcome under unit changes to the training data, rather than on the model's error in aggregate. Beyond formalizing leave-one-out unfairness, we characterize the extent to which deep models behave leave-one-out unfairly on real data, including in cases where the generalization error is small. Further, we demonstrate that adversarial training and randomized smoothing techniques have opposite effects on leave-one-out fairness, which sheds light on the relationships between robustness, memorization, individual fairness, and leave-one-out fairness in deep models. Finally, we discuss salient practical applications that may be negatively affected by leave-one-out unfairness.
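The quantity described above can be estimated empirically: retrain the model once per training point, each time with that one point removed, and count how often the prediction for a fixed query individual flips. The sketch below is an illustrative assumption on our part, not the paper's experimental setup; the function name `loo_flip_rate`, the use of logistic regression, and the synthetic data are all hypothetical choices for demonstration.

```python
# Hedged sketch: estimating leave-one-out (LOO) prediction instability
# for one query point by brute-force retraining. All names and the model
# choice (logistic regression) are illustrative, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def loo_flip_rate(X, y, x_query):
    """Fraction of single-training-point removals that flip the model's
    prediction on x_query (higher = more leave-one-out unfair there)."""
    base = LogisticRegression().fit(X, y).predict([x_query])[0]
    flips = 0
    for i in range(len(X)):
        # Retrain with the i-th person removed from the training data.
        X_i = np.delete(X, i, axis=0)
        y_i = np.delete(y, i)
        pred = LogisticRegression().fit(X_i, y_i).predict([x_query])[0]
        flips += int(pred != base)
    return flips / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # linearly separable labels

# Points near the decision boundary tend to have higher flip rates.
rate = loo_flip_rate(X, y, x_query=[0.05, -0.05])
print(rate)
```

For deep models this exact retraining loop is usually too expensive, which is one reason the paper measures the behavior empirically rather than exhaustively; the sketch only conveys the definition.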

Related research

Generalization Through The Lens Of Leave-One-Out Error (03/07/2022)
Despite the tremendous empirical success of deep learning models to solv...

Animating Sand, Mud, and Snow (02/17/2023)
Computer animations often lack the subtle environmental changes that sho...

Understanding Generalization via Leave-One-Out Conditional Mutual Information (06/29/2022)
We study the mutual information between (certain summaries of) the outpu...

Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness (02/18/2020)
We turn the definition of individual fairness on its head—rather than as...

Learning Antidote Data to Individual Unfairness (11/29/2022)
Fairness is an essential factor for machine learning systems deployed in...

Fairness via In-Processing in the Over-parameterized Regime: A Cautionary Tale (06/29/2022)
The success of DNNs is driven by the counter-intuitive ability of over-p...

Human-Aided Saliency Maps Improve Generalization of Deep Learning (05/07/2021)
Deep learning has driven remarkable accuracy increases in many computer ...
