On Measuring and Mitigating Biased Inferences of Word Embeddings

08/25/2019
by Sunipa Dev et al.
Word embeddings carry stereotypical connotations from the text they are trained on, which can lead to invalid inferences. We use this observation to design a mechanism for measuring stereotypes using the task of natural language inference. We demonstrate a reduction in invalid inferences via bias mitigation strategies on static word embeddings (GloVe), and explore adapting them to contextual embeddings (ELMo).
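The abstract mentions bias mitigation strategies for static embeddings but does not detail them here; a common family of such strategies is projection-based debiasing, which removes the component of each word vector along an estimated bias direction. A minimal sketch with toy random vectors (the words, dimensions, and data below are illustrative stand-ins, not GloVe vectors or the paper's exact method):

```python
import numpy as np

# Toy stand-ins for pretrained static embeddings (e.g. GloVe);
# real vectors would be loaded from a pretrained embedding file.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "doctor", "nurse"]}

# Estimate a bias direction from a definitional pair and normalize it.
v = emb["he"] - emb["she"]
v = v / np.linalg.norm(v)

def debias(x, v):
    """Remove the component of x along the unit bias direction v."""
    return x - np.dot(x, v) * v

for w in ["doctor", "nurse"]:
    emb[w] = debias(emb[w], v)
    # After projection, each vector is orthogonal to the bias direction.
    assert abs(np.dot(emb[w], v)) < 1e-10
```

After this projection, the debiased vectors carry no linear component along the estimated bias direction, which is the property such mitigation schemes aim for before re-evaluating downstream inferences.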

Related research:

- 04/10/2019: Better Word Embeddings by Disentangling Contextual n-Gram Information. Pre-trained word vectors are ubiquitous in Natural Language Processing a...
- 09/19/2021: Conditional probing: measuring usable information beyond a baseline. Probing experiments investigate the extent to which neural representatio...
- 09/05/2023: Substitution-based Semantic Change Detection using Contextual Embeddings. Measuring semantic change has thus far remained a task where methods usi...
- 12/14/2021: Representing Inferences and their Lexicalization. We have recently begun a project to develop a more effective and efficie...
- 03/24/2018: Near-lossless Binarization of Word Embeddings. Is it possible to learn binary word embeddings of arbitrary size from th...
- 01/14/2020: Balancing the composition of word embeddings across heterogenous data sets. Word embeddings capture semantic relationships based on contextual infor...
- 09/28/2021: Marked Attribute Bias in Natural Language Inference. Reporting and providing test sets for harmful bias in NLP applications i...