The SAME score: Improved cosine based bias score for word embeddings

03/28/2022
by   Sarah Schröder, et al.

Over the last years, word and sentence embeddings have established themselves as a standard preprocessing step for all kinds of NLP tasks and have significantly improved performance on these tasks. Unfortunately, it has also been shown that these embeddings inherit various kinds of biases from the training data and thereby pass biases present in society on to NLP solutions. Many papers have attempted to quantify bias in word or sentence embeddings in order to evaluate debiasing methods or compare different embedding models, often with cosine-based scores. However, some works have raised doubts about these scores, showing that even when they report low bias, biases persist and can be revealed by other tests. In fact, a great variety of bias scores and tests has been proposed in the literature without any consensus on the optimal solution. Works that study the behavior of bias scores and elaborate on their advantages and disadvantages are lacking. In this work, we explore different cosine-based bias scores. We provide a bias definition based on ideas from the literature and derive novel requirements for bias scores. Furthermore, we thoroughly investigate the existing cosine-based scores and their limitations in order to show why these scores fail to report biases in some situations. Finally, we propose a new bias score, SAME, to address the shortcomings of existing bias scores, and we show empirically that SAME is better suited to quantify biases in word embeddings.
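The abstract does not spell out how cosine-based bias scores work, so the following is a minimal illustrative sketch, not the SAME score itself: a generic score that compares a target word's mean cosine similarity to two attribute word sets (e.g. male- vs. female-associated words), in the spirit of the scores the paper investigates. All vectors and word choices below are toy assumptions for illustration.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_attribute_bias(w, attrs_a, attrs_b):
    # Generic cosine-based bias score (NOT the SAME score):
    # difference of the mean cosine similarity of target vector `w`
    # to attribute set A and to attribute set B.
    # A value near 0 suggests no bias under this particular score.
    sim_a = np.mean([cosine(w, a) for a in attrs_a])
    sim_b = np.mean([cosine(w, b) for b in attrs_b])
    return sim_a - sim_b

# Toy 2D "embeddings", hand-made for illustration only.
he = np.array([1.0, 0.1])
she = np.array([0.1, 1.0])
engineer = np.array([0.9, 0.2])  # constructed to lie closer to `he`

bias = mean_attribute_bias(engineer, [he], [she])
# bias > 0 here: "engineer" is more similar to the male attribute set.
```

A key limitation the paper points at is visible even in this sketch: such a score collapses bias into a single difference of means, so biases in individual attribute pairs or directions can cancel out and go unreported.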
