Assessing Social and Intersectional Biases in Contextualized Word Representations

11/04/2019
by   Yi Chern Tan, et al.

Social bias in machine learning has drawn significant attention, with work ranging from demonstrating bias in a multitude of applications, to curating definitions of fairness for different contexts, to developing algorithms that mitigate bias. In natural language processing, gender bias has been shown to exist in context-free word embeddings. Recently, contextual word representations have outperformed word embeddings on several downstream NLP tasks. These representations are conditioned on their surrounding sentence context and can also be used to encode the entire sentence. In this paper, we analyze the extent to which state-of-the-art models for contextual word representations, such as BERT and GPT-2, encode biases with respect to gender, race, and intersectional identities. To this end, we propose assessing bias at the contextual word level. This novel approach captures the contextual effects of bias that context-free word embeddings miss, yet avoids the confounding effects that lead sentence-level encodings to underestimate bias. We demonstrate evidence of bias at the corpus level, find varying evidence of bias in embedding association tests, show in particular that racial bias is strongly encoded in contextual word models, and observe that bias effects for intersectional minorities are exacerbated beyond those of their constituent minority identities. Further, evaluating bias at the contextual word level captures biases that sentence-level evaluation misses, confirming the need for our approach.
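To make the idea of an embedding association test at the contextual word level concrete, here is a minimal sketch in the style of WEAT (Word Embedding Association Test), applied to contextual representations rather than static embeddings. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the sentence template, the target/attribute word lists, and the use of the last hidden layer are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical WEAT-style association test on contextual word representations.
# Assumes: pip install torch transformers numpy
import numpy as np
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def contextual_embedding(word: str, template: str = "this is {} .") -> np.ndarray:
    """Embed `word` inside a simple sentence and return the last-layer
    representation of its first subword token."""
    inputs = tokenizer(template.format(word), return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    # Locate the word's subword span within the tokenized sentence.
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = inputs["input_ids"][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i].numpy()
    raise ValueError(f"{word!r} not found in tokenized sentence")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B):
    """s(w, A, B): mean similarity to attribute set A minus attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: difference of mean associations of the two target
    sets, normalized by the pooled standard deviation."""
    sX = [association(x, A, B) for x in X]
    sY = [association(y, A, B) for y in Y]
    return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY, ddof=1)

# Illustrative target and attribute sets (hypothetical, for demonstration).
X = [contextual_embedding(w) for w in ["programmer", "engineer", "scientist"]]
Y = [contextual_embedding(w) for w in ["nurse", "teacher", "librarian"]]
A = [contextual_embedding(w) for w in ["he", "man", "his"]]
B = [contextual_embedding(w) for w in ["she", "woman", "her"]]
print(f"WEAT effect size: {weat_effect_size(X, Y, A, B):.3f}")
```

The effect size is bounded roughly in [-2, 2]; positive values indicate that the first target set is more strongly associated with the first attribute set. The key difference from tests on static embeddings is that each word vector here depends on its sentence context, which is what allows the same machinery to probe context-dependent bias, including for racial and intersectional target sets.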

