"Thy algorithm shalt not bear false witness": An Evaluation of Multiclass Debiasing Methods on Word Embeddings

10/30/2020
by Thalea Schlender, et al.

With the widespread development and deployment of artificial intelligence applications, research into the fairness of these algorithms has increased. In the natural language processing domain in particular, it has been shown that social biases persist in word embeddings, which therefore risk amplifying those biases when used. As one example of social bias, religious biases are shown to persist in word embeddings, and the need for their removal is highlighted. This paper investigates three state-of-the-art multiclass debiasing techniques: Hard debiasing, SoftWEAT debiasing and Conceptor debiasing. It evaluates their performance in removing religious bias on a common basis, quantifying bias removal via the Word Embedding Association Test (WEAT), the Mean Average Cosine similarity (MAC) and the Relative Negative Sentiment Bias (RNSB). By investigating religious bias removal on three widely used word embeddings, namely Word2Vec, GloVe and ConceptNet, it is shown that the preferred method is Conceptor debiasing. Specifically, this technique decreases the measured religious bias by 82.42% on average across the three word embedding sets.
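To illustrate one of the evaluation metrics mentioned above, the sketch below computes the Mean Average Cosine similarity (MAC) for toy vectors. It assumes the commonly used multiclass definition of MAC: the mean, over every (target word, attribute set) pair, of the average cosine similarity between the target vector and each vector in the attribute set. The vectors and set sizes here are illustrative, not taken from the paper.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def mac(targets, attribute_sets):
    """Mean Average Cosine similarity.

    For each target vector and each attribute set, take the average cosine
    similarity between the target and every vector in that set; MAC is the
    mean of these averages over all (target, attribute-set) pairs.
    """
    scores = []
    for t in targets:
        for attr_set in attribute_sets:
            scores.append(sum(cosine(t, a) for a in attr_set) / len(attr_set))
    return sum(scores) / len(scores)


# Toy example: one target word vector, one attribute set of two vectors.
targets = [(1.0, 0.0)]
attribute_sets = [[(1.0, 0.0), (0.0, 1.0)]]
print(mac(targets, attribute_sets))  # cosines 1.0 and 0.0 average to 0.5
```

In a real evaluation the targets would be embeddings of religion-related terms (e.g. from Word2Vec, GloVe or ConceptNet) and each attribute set would hold embeddings of stereotype terms; a debiasing method is judged by how the MAC score changes after it is applied.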

Related research

- Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them (03/09/2019)
- Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases (06/06/2020)
- Bridging Fairness and Environmental Sustainability in Natural Language Processing (11/08/2022)
- Artificial mental phenomena: Psychophysics as a framework to detect perception biases in AI models (12/15/2019)
- A Source-Criticism Debiasing Method for GloVe Embeddings (06/25/2021)
- Argument from Old Man's View: Assessing Social Bias in Argumentation (11/24/2020)
- VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models (03/14/2022)
