ValNorm: A New Word Embedding Intrinsic Evaluation Method Reveals Valence Biases are Consistent Across Languages and Over Decades

06/06/2020
by Autumn Toney, et al.

Word embeddings learn implicit biases from the linguistic regularities captured by word co-occurrence statistics. As a result, statistical methods can detect and quantify both social biases and widely shared associations embedded in the corpus on which the word embeddings are trained. Extending methods that quantify human-like biases in word embeddings, we introduce ValNorm, a new intrinsic evaluation task for word embeddings and the first unsupervised method to estimate the affective meaning of valence (pleasantness) in words with high accuracy. ValNorm scores correlate with human valence ratings at r=0.88 on a set of 399 English words collected to establish pleasantness norms. These 399 words, drawn from the social psychology literature, are used to measure associations that are non-discriminatory among social groups. We hypothesize that the valence associations of these words are widely shared across languages and consistent over time. To test this, we estimate their valence associations using word embeddings from six languages representing diverse language structures and from historical text spanning 200 years. Our method achieves consistently high accuracy, suggesting that these valence associations are indeed widely shared. In contrast, measuring gender stereotypes with the same set of word embeddings shows that social biases vary across languages. These results indicate that the valence associations of this word set reflect widely shared associations and, consequently, an intrinsic quality of words.
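The general idea behind methods of this family can be illustrated with a minimal sketch: score each target word by how much closer its embedding is to a set of pleasant attribute vectors than to a set of unpleasant ones (a WEAT-style effect size), then evaluate the scores by their Pearson correlation with human valence ratings. The toy 2-d vectors, word choices, and ratings below are hypothetical illustrations, not the paper's actual data or exact procedure.

```python
from math import sqrt
from statistics import mean, stdev

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def valence_score(w, pleasant, unpleasant):
    """WEAT-style single-word effect size: mean similarity of w to the
    pleasant attribute vectors minus mean similarity to the unpleasant
    ones, normalized by the std of all attribute similarities."""
    sp = [cosine(w, a) for a in pleasant]
    su = [cosine(w, u) for u in unpleasant]
    return (mean(sp) - mean(su)) / stdev(sp + su)

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# Toy 2-d embeddings (hypothetical): pleasant direction ~ x-axis,
# unpleasant direction ~ y-axis.
pleasant = [(1.0, 0.1), (0.9, 0.0)]
unpleasant = [(0.1, 1.0), (0.0, 0.9)]

flower = (1.0, 0.2)   # lies near the pleasant direction
insect = (0.2, 1.0)   # lies near the unpleasant direction

scores = [valence_score(flower, pleasant, unpleasant),
          valence_score(insect, pleasant, unpleasant)]

# Evaluation step: correlate embedding-derived scores with (hypothetical)
# human pleasantness ratings on a 1-9 scale.
human = [8.0, 2.0]
r = pearson(scores, human)
```

With real embeddings the target set would be the 399 pleasantness-norm words and the human ratings would come from published affective norms; the reported r=0.88 is the correlation obtained at that scale, not from a toy example like this one.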


