DebIE: A Platform for Implicit and Explicit Debiasing of Word Embedding Spaces

03/11/2021
by Niklas Friedrich, et al.

Recent research efforts in NLP have demonstrated that distributional word vector spaces often encode stereotypical human biases, such as racism and sexism. With word representations ubiquitously used in NLP models and pipelines, this raises ethical issues and jeopardizes the fairness of language technologies. While there exists a large body of work on bias measures and debiasing methods, to date, there is no platform that unifies these research efforts and makes bias measuring and debiasing of representation spaces widely accessible. In this work, we present DebIE, the first integrated platform for (1) measuring and (2) mitigating bias in word embeddings. Given (i) an embedding space (users can choose from predefined spaces or upload their own) and (ii) a bias specification (users can choose from existing bias specifications or create their own), DebIE can (1) compute several measures of implicit and explicit bias and (2) modify the embedding space by executing two (mutually composable) debiasing models. DebIE's functionality can be accessed through four different interfaces: (a) a web application, (b) a desktop application, (c) a RESTful API, and (d) a command-line application. DebIE is available at: debie.informatik.uni-mannheim.de.
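To illustrate what a bias specification and an explicit bias measure look like in this line of work, the sketch below computes a WEAT-style effect size from two target term sets and two attribute term sets. This is a minimal, self-contained Python example under stated assumptions: the term sets are hypothetical, the embedding space is random toy data, and the function names are ours; it does not represent DebIE's actual code or API.

```python
# Illustrative sketch (not DebIE's API): a WEAT-style explicit bias measure
# computed from a bias specification of two target sets (T1, T2) and two
# attribute sets (A1, A2) over a toy embedding space.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, A1, A2, emb):
    # Mean similarity of `word` to attribute set A1 minus its mean similarity to A2.
    s1 = np.mean([cosine(emb[word], emb[a]) for a in A1])
    s2 = np.mean([cosine(emb[word], emb[a]) for a in A2])
    return s1 - s2

def weat_effect_size(T1, T2, A1, A2, emb):
    # Standardized difference of mean associations between the two target sets.
    assoc_t1 = [association(w, A1, A2, emb) for w in T1]
    assoc_t2 = [association(w, A1, A2, emb) for w in T2]
    pooled_std = np.std(assoc_t1 + assoc_t2, ddof=1)
    return (np.mean(assoc_t1) - np.mean(assoc_t2)) / pooled_std

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical bias specification; real specifications use curated term lists.
    spec = {
        "T1": ["career", "salary"], "T2": ["home", "family"],
        "A1": ["he", "man"],        "A2": ["she", "woman"],
    }
    vocab = {w for terms in spec.values() for w in terms}
    emb = {w: rng.normal(size=50) for w in vocab}  # toy random vectors
    print(weat_effect_size(spec["T1"], spec["T2"], spec["A1"], spec["A2"], emb))
```

An effect size near zero indicates little association between the target and attribute sets in the given space; debiasing methods aim to push such measures toward zero without degrading the semantic quality of the embeddings.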


research
09/13/2019

A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces

Distributional word vectors have recently been shown to encode many of t...
research
04/26/2019

Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors

Word embeddings have recently been shown to reflect many of the pronounc...
research
10/31/2019

Probabilistic Bias Mitigation in Word Embeddings

It has been shown that word embeddings derived from large corpora tend t...
research
11/03/2020

AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings

Recent work has shown that distributional word vector spaces often encod...
research
06/19/2019

Considerations for the Interpretation of Bias Measures of Word Embeddings

Word embedding spaces are powerful tools for capturing latent semantic r...
research
12/31/2020

Intrinsic Bias Metrics Do Not Correlate with Application Bias

Natural Language Processing (NLP) systems learn harmful societal biases ...
research
12/13/2018

An Unbiased Approach to Quantification of Gender Inclination using Interpretable Word Representations

Recent advances in word embedding provide significant benefit to various...
