Conceptor Debiasing of Word Representations Evaluated on WEAT

06/14/2019
by Saket Karve, et al.

Bias in word embeddings such as Word2Vec has been widely investigated, and many efforts have been made to remove it. We show how to use conceptor debiasing to post-process both traditional and contextualized word embeddings. Conceptor debiasing can simultaneously remove racial and gender biases and, unlike standard debiasing methods, can make effective use of heterogeneous lists of biased words. We show that conceptor debiasing diminishes the racial and gender bias of word representations as measured by the Word Embedding Association Test (WEAT) of Caliskan et al. (2017).
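At a high level, the method described above computes a "conceptor", a soft projection matrix derived from the correlation matrix of word vectors for the supplied bias word lists, and applies its negation to every embedding as a post-processing step; WEAT then measures how strongly two target word sets associate with two attribute word sets. The sketch below is a minimal illustration of both pieces under assumptions: the row-wise array layout, the aperture value alpha, and all function names are hypothetical and are not the authors' released implementation.

```python
# Minimal sketch of conceptor-negation post-processing and the WEAT effect size.
# Assumptions: word vectors are rows of NumPy arrays; alpha (the conceptor
# "aperture") and the helper names are illustrative, not taken from the paper's code.
import numpy as np


def negated_conceptor(bias_vectors: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Return I - C, where C = R (R + alpha^-2 I)^-1 and R is the (uncentered)
    correlation matrix of the bias-word vectors (shape (n_bias_words, d))."""
    X = bias_vectors.T                                   # (d, n): one column per bias word
    R = X @ X.T / X.shape[1]                             # (d, d) correlation matrix
    d = R.shape[0]
    C = R @ np.linalg.inv(R + alpha ** -2 * np.eye(d))   # soft projection onto the bias subspace
    return np.eye(d) - C                                 # negation suppresses that subspace


def debias(embeddings: np.ndarray, bias_vectors: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Apply the negated conceptor to every embedding (rows of shape (n_words, d))."""
    return embeddings @ negated_conceptor(bias_vectors, alpha).T


def weat_effect_size(X, Y, A, B):
    """WEAT effect size (Caliskan et al., 2017): Cohen's d of the association
    s(w) = mean cos(w, A) - mean cos(w, B) between target sets X and Y."""
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    s = lambda w: np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
    sX, sY = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY)
```

Because heterogeneous bias word lists simply enlarge the set of vectors used to estimate R, a single negated conceptor can capture several bias directions at once, which is the property the abstract contrasts with standard hard-projection debiasing.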


Related research

04/18/2019  Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Gender bias is highly impacting natural language processing applications...

03/09/2019  Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
Word embeddings are widely used in NLP for a vast range of tasks. It was...

03/09/2020  Joint Multiclass Debiasing of Word Embeddings
Bias in Word Embeddings has been a subject of recent interest, along wit...

08/18/2019  Understanding Undesirable Word Embedding Associations
Word embeddings are often criticized for capturing undesirable word asso...

06/25/2021  A Source-Criticism Debiasing Method for GloVe Embeddings
It is well-documented that word embeddings trained on large public corpo...

05/23/2023  Detecting and Mitigating Indirect Stereotypes in Word Embeddings
Societal biases in the usage of words, including harmful stereotypes, ar...

12/20/2018  What are the biases in my word embedding?
This paper presents an algorithm for enumerating biases in word embeddin...
