[RE] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation

04/14/2021
by Haswanth Aekula, et al.

Despite their widespread use in natural language processing (NLP) tasks, word embeddings have been criticized for inheriting unintended gender bias from their training corpora: for example, "programmer" is more closely associated with "man", while "homemaker" is more closely associated with "woman". Such gender bias has also been shown to propagate into downstream tasks.
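The associations mentioned above can be checked directly with cosine similarity on pretrained vectors. The sketch below is illustrative only and is not the paper's Double-Hard Debias implementation: it loads GloVe vectors through gensim (the "glove-wiki-gigaword-100" model is an assumed choice), compares "programmer" and "homemaker" against "man" and "woman", and then applies a classical hard-debias projection along a he-she gender direction (in the style of Bolukbasi et al., 2016) to show how a single projection changes those similarities.

    # Minimal sketch: measure gendered associations with cosine similarity,
    # then apply a simple hard-debias projection along one gender direction.
    # This is NOT the authors' Double-Hard Debias procedure; the model name
    # and word choices below are illustrative assumptions.
    import numpy as np
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe KeyedVectors

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Gender direction: difference of the "he" and "she" vectors, normalized.
    gender_dir = vectors["he"] - vectors["she"]
    gender_dir /= np.linalg.norm(gender_dir)

    for word in ["programmer", "homemaker"]:
        v = vectors[word]
        print(word,
              "sim(man)=%.3f" % cosine(v, vectors["man"]),
              "sim(woman)=%.3f" % cosine(v, vectors["woman"]))
        # Hard-debias step: remove the component along the gender direction.
        debiased = v - np.dot(v, gender_dir) * gender_dir
        print(word,
              "debiased sim(man)=%.3f" % cosine(debiased, vectors["man"]),
              "debiased sim(woman)=%.3f" % cosine(debiased, vectors["woman"]))

After the projection, the two similarities for each word typically move closer together, which is the effect debiasing methods such as Double-Hard Debias aim to achieve more robustly.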



