Evaluating Debiasing Techniques for Intersectional Biases

09/21/2021
by Shivashankar Subramanian, et al.

Bias is pervasive in NLP models, motivating the development of automatic debiasing techniques. Evaluation of NLP debiasing methods has largely been limited to binary attributes in isolation, e.g., debiasing with respect to binary gender or race; however, many corpora involve multiple such attributes, possibly with higher cardinality. In this paper we argue that a truly fair model must consider "gerrymandering" groups, which comprise not only single attributes but also intersectional groups. We evaluate a form of bias-constrained model which is new to NLP, as well as an extension of the iterative nullspace projection technique which can handle multiple protected attributes.
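To make the second technique concrete, below is a minimal sketch of iterative nullspace projection (INLP; Ravfogel et al., 2020) extended to several protected attributes by alternating probes over the attributes in each round. This is an illustrative assumption about the extension, not the paper's exact algorithm, and the names `nullspace_projection` and `multi_attribute_inlp` are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nullspace_projection(W):
    # Orthonormal basis of the probe's row space via SVD; the returned
    # matrix projects representations onto the orthogonal complement,
    # removing exactly the directions the probe relied on.
    _, s, Vt = np.linalg.svd(W, full_matrices=False)
    basis = Vt[s > 1e-10]
    return np.eye(W.shape[1]) - basis.T @ basis

def multi_attribute_inlp(X, protected_labels, n_iters=10):
    # X: (n_samples, dim) representations.
    # protected_labels: list of (n_samples,) label arrays, one per
    # protected attribute; an intersectional ("gerrymandering") group
    # can be covered by appending the cross-product of attribute
    # labels as one more entry.
    P = np.eye(X.shape[1])
    for _ in range(n_iters):
        for y in protected_labels:
            probe = LogisticRegression(max_iter=1000).fit(X @ P, y)
            P = nullspace_projection(probe.coef_) @ P
    return P  # apply as X @ P before the downstream task classifier
```

For example, with string label arrays gender and race one could pass [gender, race, np.char.add(gender, race)], so that each round strips linear information about the single attributes and about their intersection; alternating the probes round-robin is one simple way to handle multiple attributes, at the cost of removing more directions per iteration than single-attribute INLP.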


