Quantifying Infra-Marginality and Its Trade-off with Group Fairness

09/03/2019
by Arpita Biswas, et al.

In critical decision-making settings, optimizing for accuracy can lead to a biased classifier, so past work recommends enforcing group-based fairness metrics in addition to maximizing accuracy. However, doing so exposes the classifier to another kind of bias called infra-marginality: individual-level bias in which some individuals or subgroups end up worse off than they would be under a classifier that simply optimizes accuracy. For instance, a classifier implementing race-based parity may significantly disadvantage women of the advantaged race. To quantify this bias, we propose a general notion of η-infra-marginality that measures its extent. We prove theoretically that, unlike other fairness metrics, infra-marginality does not trade off with accuracy: high accuracy directly leads to low infra-marginality. This observation is confirmed through empirical analysis on multiple simulated and real-world datasets. Further, we find that maximizing group fairness often increases infra-marginality, which suggests that group-level fairness and individual-level infra-marginality should be considered together. However, measuring infra-marginality requires explicit knowledge of the true distribution of individual-level outcomes. We therefore propose a practical method to measure infra-marginality, along with a simple algorithm that maximizes group-wise accuracy while avoiding infra-marginality.
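
The paper's formal definition of η-infra-marginality is not reproduced in this abstract, but the idea admits a short illustration. The sketch below (the function name, the Bayes-rule threshold, and the exact η-thresholded criterion are our assumptions, not the paper's construction) counts an individual as η-infra-marginal when the classifier under study costs them more than η in expected accuracy relative to the accuracy-maximizing decision rule, given known individual outcome probabilities P(y=1 | x):

```python
import numpy as np

def infra_marginality(p_true, decisions, eta=0.0):
    """Fraction of individuals made worse off (in expected accuracy)
    by `decisions` than by the accuracy-maximizing rule, beyond a
    tolerance eta. Hypothetical illustration, not the paper's exact
    definition.

    p_true    : P(y = 1 | x) for each individual, assumed known.
    decisions : 0/1 decisions of the classifier under study.
    """
    # The accuracy-maximizing (Bayes) rule thresholds P(y=1 | x) at 1/2.
    bayes = (p_true >= 0.5).astype(int)
    # Expected accuracy of decision d for an individual with P(y=1) = p
    # is p when d = 1 and 1 - p when d = 0.
    acc_clf = np.where(decisions == 1, p_true, 1.0 - p_true)
    acc_bayes = np.where(bayes == 1, p_true, 1.0 - p_true)
    # An individual is eta-infra-marginal if the classifier costs them
    # more than eta in expected accuracy relative to the Bayes rule.
    return float(np.mean(acc_bayes - acc_clf > eta))

# Example: a parity-constrained classifier flips two decisions away
# from the Bayes rule; half the individuals are 0.1-infra-marginal.
p = np.array([0.9, 0.7, 0.2, 0.4])
d = np.array([1, 0, 0, 1])
print(infra_marginality(p, d, eta=0.1))  # -> 0.5
```

Under this illustrative reading, the no-trade-off claim is intuitive: a classifier that matches the Bayes rule everywhere has both maximal expected accuracy and zero infra-marginality, so pushing accuracy up can only shrink the set of η-infra-marginal individuals.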

Related research:

12/14/2018
Bias Mitigation Post-processing for Individual and Group Fairness
Whereas previous post-processing approaches for increasing the fairness ...

11/13/2020
Metric-Free Individual Fairness with Cooperative Contextual Bandits
Data mining algorithms are increasingly used in automated decision makin...

07/07/2021
Bias-Tolerant Fair Classification
The label bias and selection bias are acknowledged as two reasons in dat...

10/08/2020
Assessing the Fairness of Classifiers with Collider Bias
The increasing maturity of machine learning technologies and their appli...

05/11/2022
De-biasing "bias" measurement
When a model's performance differs across socially or culturally relevan...

07/02/2018
A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices
Discrimination via algorithmic decision making has received considerable...

10/04/2019
Group-based Fair Learning Leads to Counter-intuitive Predictions
A number of machine learning (ML) methods have been proposed recently to...
