A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces

09/13/2019
by Anne Lauscher, et al.

Distributional word vectors have recently been shown to encode many human biases, most notably gender and racial biases, and models for attenuating such biases have consequently been proposed. However, existing models and studies (1) operate on under-specified and mutually differing bias definitions, (2) are tailored for a particular bias (e.g., gender bias), and (3) have been evaluated inconsistently and non-rigorously. In this work, we introduce a general framework for debiasing word embeddings. We operationalize the definition of a bias by discerning two types of bias specification: explicit and implicit. We then propose three debiasing models that operate on explicit or implicit bias specifications and that can be composed into more robust debiasing. Finally, we devise a full-fledged evaluation framework in which we couple existing bias metrics with newly proposed ones. Experimental findings across three embedding methods suggest that the proposed debiasing models are robust and widely applicable: they often completely remove the bias, both implicitly and explicitly, without degrading the semantic information encoded in any of the input distributional spaces. Moreover, by means of cross-lingual embedding spaces, we successfully transfer debiasing models to languages that lack readily available bias specifications, removing or attenuating biases in their distributional word vector spaces.
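An explicit bias specification in this line of work typically follows the WEAT setup (Caliskan et al., 2017): two target word sets (e.g., male vs. female terms) and two attribute word sets (e.g., career vs. family terms). To illustrate the kind of bias metric the evaluation framework couples, here is a minimal sketch of the WEAT effect size over word vectors; the function names and use of NumPy are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A
    minus its mean similarity to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: difference in mean association of the two
    target sets X and Y, normalized by the pooled standard deviation.
    Values near 0 indicate no measured bias; larger magnitudes
    indicate a stronger differential association."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```

A debiasing model that "completely removes the bias explicitly" would drive this effect size toward zero on the specification's test sets while leaving similarity scores for unrelated word pairs intact.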


Related research

04/26/2019
Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors
Word embeddings have recently been shown to reflect many of the pronounc...

03/11/2021
DebIE: A Platform for Implicit and Explicit Debiasing of Word Embedding Spaces
Recent research efforts in NLP have demonstrated that distributional wor...

11/03/2020
AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings
Recent work has shown that distributional word vector spaces often encod...

12/09/2021
Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving
With widening deployments of natural language processing (NLP) in daily ...

04/07/2020
Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation
Recent research demonstrates that word embeddings, trained on the human...

08/06/2020
Discovering and Categorising Language Biases in Reddit
We present a data-driven approach using word embeddings to discover and ...

08/13/2021
Diachronic Analysis of German Parliamentary Proceedings: Ideological Shifts through the Lens of Political Biases
We analyze bias in historical corpora as encoded in diachronic distribut...
