Bias Mitigation Framework for Intersectional Subgroups in Neural Networks

12/26/2022
by Narine Kokhlikyan, et al.

We propose a fairness-aware learning framework that mitigates intersectional subgroup bias associated with protected attributes. Prior research has primarily focused on mitigating one kind of bias at a time, either by incorporating complex fairness-driven constraints into optimization objectives or by designing additional layers that focus on specific protected attributes. We introduce a simple and generic bias mitigation approach that prevents models from learning relationships between protected attributes and the output variable by reducing the mutual information between them. We demonstrate that our approach is effective in reducing bias with little or no drop in accuracy. We also show that models trained with our learning framework become causally fair and insensitive to the values of protected attributes. Finally, we validate our approach by studying feature interactions between protected and non-protected attributes, and show that these interactions are significantly reduced when our bias mitigation is applied.
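The abstract describes the mechanism only at a high level. As a rough illustration of the idea, a penalty that estimates and suppresses the mutual information between protected attributes and the model's outputs could be wired into training as follows. This is a minimal sketch under assumptions, not the authors' implementation: the MINE-style Donsker-Varadhan estimator, the `MICritic` network, and the `lambda_mi` weight are all illustrative choices.

```python
# Hypothetical sketch: train a classifier while penalizing an estimate of
# the mutual information I(protected attributes; outputs). The MINE-style
# Donsker-Varadhan estimator (Belghazi et al., 2018) stands in for whatever
# estimator the paper actually uses; all names here are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MICritic(nn.Module):
    """Statistics network T(y, a) for a Donsker-Varadhan MI lower bound."""
    def __init__(self, out_dim: int, prot_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(out_dim + prot_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, y: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([y, a], dim=1))

def dv_mi_estimate(critic: MICritic, y: torch.Tensor, a: torch.Tensor):
    """I(Y; A) >= E[T(y, a)] - log E[exp(T(y, a_shuffled))]."""
    joint = critic(y, a).mean()
    a_marg = a[torch.randperm(a.size(0))]          # shuffle to break pairing
    scores_marg = critic(y, a_marg).squeeze(1)     # shape (N,)
    log_mean_exp = torch.logsumexp(scores_marg, dim=0) - math.log(a.size(0))
    return joint - log_mean_exp

def train_step(model, critic, opt_model, opt_critic,
               x, target, protected, lambda_mi=1.0):
    # `protected` is a float tensor (N, prot_dim), e.g. one-hot attributes.
    # 1) Update the critic to tighten the MI lower bound (outputs detached).
    y = torch.softmax(model(x), dim=1).detach()
    opt_critic.zero_grad()
    critic_loss = -dv_mi_estimate(critic, y, protected)
    critic_loss.backward()
    opt_critic.step()

    # 2) Update the model: task loss plus the MI penalty, pushing the
    #    outputs to be uninformative about the protected attributes.
    opt_model.zero_grad()
    logits = model(x)
    mi_pen = dv_mi_estimate(critic, torch.softmax(logits, dim=1), protected)
    loss = F.cross_entropy(logits, target) + lambda_mi * mi_pen
    loss.backward()
    opt_model.step()
    return loss.item(), mi_pen.item()
```

In use, `opt_model` and `opt_critic` would be independent optimizers (for instance `torch.optim.Adam` over `model.parameters()` and `critic.parameters()` respectively). The loop is adversarial: the critic is trained to tighten the mutual-information estimate while the classifier is trained to drive it down, so that the outputs carry as little information about the protected attributes as possible.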

Related research

xFAIR: Better Fairness via Model-based Rebalancing of Protected Attributes (10/03/2021)
Machine learning software can generate models that inappropriately discr...

RENATA: REpreseNtation And Training Alteration for Bias Mitigation (12/11/2020)
We propose a novel method for enforcing AI fairness with respect to prot...

Decorrelation with conditional normalizing flows (11/04/2022)
The sensitivity of many physics analyses can be enhanced by constructing...

Can Information Flows Suggest Targets for Interventions in Neural Circuits? (11/09/2021)
Motivated by neuroscientific and clinical applications, we empirically e...

Generalized Disparate Impact for Configurable Fairness Solutions in ML (05/29/2023)
We make two contributions in the field of AI fairness over continuous pr...

Directional Bias Amplification (02/24/2021)
Mitigating bias in machine learning systems requires refining our unders...

Omitted and Included Variable Bias in Tests for Disparate Impact (09/15/2018)
Policymakers often seek to gauge discrimination against groups defined b...
