Fairness Under Feature Exemptions: Counterfactual and Observational Measures

06/14/2020
by Sanghamitra Dutta, et al.

With the growing use of AI in highly consequential domains, the quantification and removal of bias with respect to protected attributes, such as gender and race, is becoming increasingly important. While quantifying bias is essential, sometimes the needs of a business (e.g., hiring) may require the use of certain critical features, such that any bias that can be explained by them may need to be exempted. For example, a standardized test score may be a critical feature that should be weighed strongly in hiring even if it is biased, whereas other features, such as zip code, may be used only to the extent that they do not discriminate. In this work, we propose a novel information-theoretic decomposition of the total bias (in a counterfactual sense) into a non-exempt component, which quantifies the part of the bias that cannot be accounted for by the critical features, and an exempt component, which quantifies the remaining bias. This decomposition allows one to check whether the bias arose purely due to the critical features (inspired by the business-necessity defense of disparate-impact law) and also enables selective removal of the non-exempt component, if desired. We arrive at this decomposition through examples that lead to a set of desirable properties (axioms) that any measure of non-exempt bias should satisfy, and we demonstrate that our proposed counterfactual measure satisfies all of them. Our quantification bridges ideas from causality, Simpson's paradox, and a body of work in information theory called Partial Information Decomposition. We also obtain an impossibility result showing that no observational measure of non-exempt bias can satisfy all of the desirable properties, which leads us to relax our goals and examine observational measures that satisfy only some of them. We then perform case studies to show how one can train models while reducing non-exempt bias.
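
To make the decomposition concrete, here is a minimal, purely observational sketch in Python. It treats the mutual information I(Z; Yhat) between a protected attribute Z and a model output Yhat as the total statistical bias, and the conditional mutual information I(Z; Yhat | Xc) given a critical feature Xc as a crude proxy for the non-exempt part. This is an assumption for illustration only: the paper's actual non-exempt measure is counterfactual and built on Partial Information Decomposition, not conditional mutual information, and all names here (z, xc, xg, yhat) are hypothetical toy variables.

```python
import numpy as np
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(A; B) in bits from (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    marg_a = Counter(a for a, _ in pairs)
    marg_b = Counter(b for _, b in pairs)
    return sum((c / n) * np.log2(c * n / (marg_a[a] * marg_b[b]))
               for (a, b), c in joint.items())

def conditional_mutual_information(triples):
    """Empirical I(A; B | C) in bits from (a, b, c) samples."""
    n = len(triples)
    return sum((len(sub) / n) * mutual_information(sub)
               for c_val in {c for _, _, c in triples}
               for sub in [[(a, b) for a, b, c in triples if c == c_val]])

rng = np.random.default_rng(0)
n = 50_000
z = rng.integers(0, 2, n)                          # protected attribute Z
xc = (rng.random(n) < 0.5 + 0.2 * z).astype(int)   # critical feature (e.g., test score), correlated with Z
xg = (rng.random(n) < 0.3 + 0.4 * z).astype(int)   # non-critical feature (e.g., zip code), a proxy for Z
yhat = ((xc + xg) >= 1).astype(int)                # toy model output using both features

total = mutual_information(list(zip(z, yhat)))                        # total bias proxy I(Z; Yhat)
non_exempt = conditional_mutual_information(list(zip(z, yhat, xc)))   # non-exempt proxy I(Z; Yhat | Xc)
print(f"total bias proxy  I(Z; Yhat)      = {total:.4f} bits")
print(f"non-exempt proxy  I(Z; Yhat | Xc) = {non_exempt:.4f} bits")
print(f"exempt remainder                  = {total - non_exempt:.4f} bits")
```

Note that the exempt remainder computed this way can even come out negative when Z, Xc, and Yhat interact synergistically (a Simpson's-paradox-like effect), which hints at why the paper turns to Partial Information Decomposition rather than plain conditioning for its measure.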

