Mitigating Gender Bias Amplification in Distribution by Posterior Regularization

by Shengyu Jia et al.

Advanced machine learning techniques have boosted the performance of natural language processing. Nevertheless, recent studies, e.g., Zhao et al. (2017), show that these techniques inadvertently capture the societal bias hidden in the corpus and further amplify it. However, their analysis is conducted only on models' top predictions. In this paper, we investigate the gender bias amplification issue from the distribution perspective and demonstrate that the bias is also amplified in the predicted probability distribution over labels. We further propose a bias mitigation approach based on posterior regularization. With little performance loss, our method can almost entirely remove the bias amplification in the distribution. Our study sheds light on understanding bias amplification.
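The abstract does not spell out the regularizer, but the core idea of posterior regularization can be illustrated with a minimal sketch: project the model's predicted distribution onto a constraint set anchored at the corpus-level gender ratio, minimizing KL divergence to the original posterior. For a binary distribution, this KL projection reduces to clipping the constrained coordinate to the allowed interval. All numbers and names below (`pr_project`, the 0.66 corpus ratio, the 0.84 model posterior) are hypothetical, for illustration only, and do not reproduce the paper's exact method.

```python
import numpy as np

def pr_project(p, corpus_ratio, margin=0.05):
    """Project a binary posterior p = [p_female, p_male] onto the
    constraint set {q : |q[0] - corpus_ratio| <= margin}, minimizing
    KL(q || p).  On a two-outcome simplex this projection amounts to
    clipping the first coordinate into the allowed interval and
    renormalizing the remainder.
    """
    lo, hi = corpus_ratio - margin, corpus_ratio + margin
    q_f = min(max(float(p[0]), lo), hi)  # clip into [lo, hi]
    return np.array([q_f, 1.0 - q_f])

# Hypothetical numbers: suppose 66% of "cooking" agents in the corpus
# are female, but the model's posterior amplifies this to 84%.
corpus_ratio = 0.66
model_posterior = np.array([0.84, 0.16])
q = pr_project(model_posterior, corpus_ratio)
# q[0] is pulled back to the upper constraint boundary, 0.71
```

If the model's posterior already lies inside the margin around the corpus ratio, the projection is the identity, so confident-but-unbiased predictions are left untouched.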




