To Split or Not to Split: The Impact of Disparate Treatment in Classification

02/12/2020
by   Hao Wang, et al.

Disparate treatment occurs when a machine learning model produces different decisions for groups defined by a legally protected or sensitive attribute (e.g., race, gender). In domains where prediction accuracy is paramount, it may be acceptable to fit a model that exhibits disparate treatment. We explore the effect of splitting classifiers (i.e., training and deploying a separate classifier on each group) and derive an information-theoretic impossibility result: there exist precise conditions under which a group-blind classifier will always have a non-trivial performance gap relative to the split classifiers. We further demonstrate that, in the finite-sample regime, splitting is no longer always beneficial: whether it helps depends on the number of samples from each group and on the complexity of the hypothesis class. We provide data-dependent bounds for understanding the effect of splitting and illustrate these bounds on real-world datasets.
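The trade-off the abstract describes can be made concrete with a minimal, self-contained sketch. This is not the paper's construction: it uses hypothetical synthetic 1-D data where each group's labels are generated by a different threshold, and compares one group-blind threshold classifier against a separate threshold per group.

```python
import random

random.seed(0)

def sample_group(n, true_threshold):
    # Synthetic group: feature x is uniform on [0, 1],
    # label is 1 exactly when x exceeds the group's own threshold.
    return [(x, int(x > true_threshold))
            for x in (random.random() for _ in range(n))]

def accuracy(data, t):
    # Accuracy of the threshold classifier "predict 1 iff x > t".
    return sum(int(x > t) == y for x, y in data) / len(data)

def best_threshold(data):
    # Brute-force the empirically best threshold over observed feature values.
    candidates = sorted({x for x, _ in data})
    return max(candidates, key=lambda t: accuracy(data, t))

group_a = sample_group(500, 0.3)  # group A's labels flip at 0.3
group_b = sample_group(500, 0.7)  # group B's labels flip at 0.7

# Group-blind: one classifier fit on the pooled data.
t_blind = best_threshold(group_a + group_b)
# Split: a separate classifier fit on each group.
t_a, t_b = best_threshold(group_a), best_threshold(group_b)

blind_acc = (accuracy(group_a, t_blind) + accuracy(group_b, t_blind)) / 2
split_acc = (accuracy(group_a, t_a) + accuracy(group_b, t_b)) / 2
print(f"group-blind: {blind_acc:.3f}, split: {split_acc:.3f}")
```

Because the two groups disagree on where labels flip, no single threshold can serve both, and the split classifiers win; the abstract's finite-sample caveat is the flip side: with very few samples per group, each per-group fit can overfit and the pooled fit can come out ahead.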


