It's All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution

09/02/2019
by   Rowan Hall Maudslay, et al.

This paper addresses gender bias latent in word embeddings. Previous mitigation attempts rely on the operationalisation of gender bias as a projection over a linear subspace. An alternative approach is Counterfactual Data Augmentation (CDA), in which a corpus is duplicated and augmented to remove bias, e.g. by swapping all inherently-gendered words in the copy. We perform an empirical comparison of these approaches on the English Gigaword and Wikipedia corpora, and find that whilst both successfully reduce direct bias and perform well in tasks which quantify embedding quality, CDA variants outperform projection-based methods at the task of drawing non-biased gender analogies by an average of 19% across both corpora. We propose two improvements to CDA: Counterfactual Data Substitution (CDS), a variant of CDA in which potentially biased text is randomly substituted to avoid duplication, and the Names Intervention, a novel name-pairing technique that vastly increases the number of words being treated. CDA/S with the Names Intervention is the only approach which is able to mitigate indirect gender bias: following debiasing, previously biased words are significantly less clustered according to gender (cluster purity is reduced by 49%).
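To make the CDA/CDS distinction concrete, here is a minimal sketch of both interventions. The word-pair dictionary is a toy subset (the paper's Names Intervention additionally pairs thousands of first names by frequency and gender-specificity, which is only gestured at here); the per-sentence granularity and the substitution probability are illustrative assumptions, not the paper's exact procedure.

```python
import random

# Toy gender word pairs; the full Names Intervention pairs many first names
# as well (e.g. the "mary"/"john" entry below is a hypothetical example).
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
    "mary": "john", "john": "mary",
}

def cda_augment(corpus):
    """Counterfactual Data Augmentation: keep each original sentence and
    append a counterfactual copy with gendered words swapped, doubling
    the corpus size."""
    augmented = []
    for sentence in corpus:
        augmented.append(sentence)
        augmented.append([GENDER_PAIRS.get(tok, tok) for tok in sentence])
    return augmented

def cds_substitute(corpus, p=0.5, seed=0):
    """Counterfactual Data Substitution: rather than duplicating, swap a
    random fraction p of the text in place, leaving corpus size unchanged
    (this avoids the unnatural repetition that duplication introduces)."""
    rng = random.Random(seed)
    out = []
    for sentence in corpus:
        if rng.random() < p:
            sentence = [GENDER_PAIRS.get(tok, tok) for tok in sentence]
        out.append(sentence)
    return out
```

For example, `cda_augment([["he", "is", "a", "man"]])` yields both the original sentence and `["she", "is", "a", "woman"]`, whereas `cds_substitute` returns a corpus of the same size with roughly half the sentences swapped.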
