
Leave-one-out Unfairness

07/21/2021
by Emily Black, et al.

We introduce leave-one-out unfairness, which characterizes how likely it is that a model's prediction for an individual will change when a single other person is added to or removed from the model's training data. Leave-one-out unfairness appeals to the idea that fair decisions are not arbitrary: they should not depend on the chance event of any one person's inclusion in the training data. Leave-one-out unfairness is closely related to algorithmic stability, but it focuses on the consistency of an individual point's prediction outcome under unit changes to the training data, rather than on the model's error in aggregate. Beyond formalizing leave-one-out unfairness, we characterize the extent to which deep models behave leave-one-out unfairly on real data, including in cases where the generalization error is small. Further, we demonstrate that adversarial training and randomized smoothing techniques have opposite effects on leave-one-out fairness, which sheds light on the relationships between robustness, memorization, individual fairness, and leave-one-out fairness in deep models. Finally, we discuss salient practical applications that may be negatively affected by leave-one-out unfairness.
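The quantity in the abstract suggests a direct empirical probe: retrain (or re-evaluate) the model with each training point held out and count how often a test individual's prediction flips. The following is a minimal sketch of that probe using a 1-nearest-neighbor classifier on synthetic data, chosen only because it makes retraining trivial; the paper itself studies deep models, and the function names here (`predict_1nn`, `leave_one_out_flip_rate`) are illustrative, not from the paper.

```python
import numpy as np

def predict_1nn(X_train, y_train, x):
    """Predict the label of x with a 1-nearest-neighbor classifier."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

def leave_one_out_flip_rate(X_train, y_train, X_test):
    """Fraction of test points whose prediction changes when at least one
    single training point is removed -- a simple empirical proxy for
    leave-one-out instability."""
    n = len(X_train)
    base = np.array([predict_1nn(X_train, y_train, x) for x in X_test])
    flipped = np.zeros(len(X_test), dtype=bool)
    for i in range(n):
        mask = np.arange(n) != i  # leave out training point i
        preds = np.array(
            [predict_1nn(X_train[mask], y_train[mask], x) for x in X_test]
        )
        flipped |= preds != base
    return flipped.mean()

# Synthetic data: labels determined by the sign of the first feature.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 2))
y_train = (X_train[:, 0] > 0).astype(int)
X_test = rng.normal(size=(50, 2))

rate = leave_one_out_flip_rate(X_train, y_train, X_test)
print(f"fraction of test points whose prediction some single removal can flip: {rate:.2f}")
```

Test points near the decision boundary are the ones most likely to flip, which mirrors the paper's point that such instability is an individual-level property, not one visible in aggregate error.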

