Fairness Sample Complexity and the Case for Human Intervention

10/24/2019
by   Ananth Balashankar, et al.

With the aim of building machine learning systems that incorporate standards of fairness and accountability, we explore explicit subgroup sample complexity bounds. The work is motivated by the observation that classifier predictions on real-world datasets often show drastically different metrics, such as accuracy, when subdivided by specific sensitive-variable subgroups. The reasons for these discrepancies are varied and include, but are not limited to, the influence of mitigating variables, institutional bias, underlying population distributions, and sampling bias. Among the numerous definitions of fairness that exist, we argue that, at a minimum, principled ML practice should ensure that classification predictions mirror the underlying sub-population distributions. However, as the number of sensitive variables increases, the populations at the intersection of these variables may simply not exist, or may be too small to provide accurate samples for classification. In these increasingly likely scenarios, we make the case for human intervention and for applying situational and individual definitions of fairness. In this paper we present lower bounds on subgroup sample complexity for metric-fair learning, based on the theory of Probably Approximately Metric-Fair Learning. We demonstrate that for a classifier to approach a definition of fairness with respect to specific sensitive variables, adequate subgroup population samples need to exist and the model dimensionality has to be aligned with the subgroup population distributions. In cases where this is not feasible, we propose an approach that uses individual fairness definitions to achieve alignment. We examine two commonly studied UCI datasets under this lens and suggest human interventions in data collection for specific subgroups to achieve approximate individual fairness for linear hypotheses.
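As a rough illustration of the subgroup sample-adequacy check described above, the sketch below audits intersectional subgroup counts against a generic agnostic-PAC-style estimate of the form m(ε, δ) ≈ (C/ε²)(d + ln(1/δ)). The constant C, the hypothesis dimension d, and the subgroup counts used here are illustrative assumptions for the sketch, not the specific lower bounds derived in the paper.

```python
# Minimal sketch (assumptions noted below): flag sensitive-variable subgroups whose
# sample count falls short of a generic PAC-style sample-size estimate.
# The bound form, constant C, dimension d, and the counts are hypothetical.
import math

def required_samples(d, eps, delta, C=0.5):
    """Generic agnostic-PAC-style estimate: m >= (C / eps^2) * (d + ln(1/delta))."""
    return math.ceil((C / eps ** 2) * (d + math.log(1.0 / delta)))

def audit_subgroups(subgroup_counts, d, eps=0.1, delta=0.05):
    """Compare each subgroup's sample count to the required estimate."""
    need = required_samples(d, eps, delta)
    return {
        group: {"count": n, "required": need, "adequate": n >= need}
        for group, n in subgroup_counts.items()
    }

# Illustrative counts for intersections of two sensitive variables.
counts = {
    "female_x_age>60": 312,
    "male_x_age>60": 905,
    "female_x_age<=25": 1480,
}
for group, report in audit_subgroups(counts, d=14).items():
    print(group, report)
```

Subgroups flagged as inadequate by such a check are the cases where the abstract argues for human intervention, either by collecting more data for that intersection or by falling back to individual fairness definitions.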


