Word Embeddings: Stability and Semantic Change

07/23/2020
by Lucas Rettenmeier, et al.

Word embeddings are continuous vector representations of words, computed from a large text corpus by a class of techniques within natural language processing (NLP). The stochastic nature of the training process of most embedding techniques can lead to surprisingly strong instability: applying the same technique to the same data twice can produce entirely different results. In this work, we present an experimental study of the instability of the training process of three of the most influential embedding techniques of the last decade: word2vec, GloVe and fastText. Based on the experimental results, we propose a statistical model to describe the instability of embedding techniques and introduce a novel metric to measure the instability of the representation of an individual word. Finally, we propose a method to minimize the instability by computing a modified average over multiple runs, and apply it to a specific linguistic problem: the detection and quantification of semantic change, i.e. measuring changes in the meaning and usage of words over time.
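The two central ideas, per-word instability across runs and a stabilizing average over runs, can be made concrete with a small sketch. The snippet below is an illustrative approximation, not the paper's actual metric or averaging method: it trains word2vec (via gensim) with different random seeds, scores a word's instability as the Jaccard overlap of its nearest-neighbour sets across two runs, and averages multiple runs after aligning them with an orthogonal Procrustes rotation. The corpus, hyperparameters, and the overlap metric are assumptions made for illustration.

    # Sketch: measuring per-word instability across word2vec runs and
    # averaging runs after alignment. Illustrative only; the paper's own
    # metric and "modified average" may differ.
    import numpy as np
    from gensim.models import Word2Vec
    from scipy.linalg import orthogonal_procrustes

    def train_run(sentences, seed):
        # workers=1 makes a single run reproducible; across different
        # seeds the stochastic initialization and sampling still differ.
        return Word2Vec(sentences, vector_size=100, window=5,
                        min_count=5, seed=seed, workers=1).wv

    def neighbour_overlap(wv_a, wv_b, word, topn=10):
        """Jaccard overlap of a word's nearest neighbours in two runs:
        1.0 = identical neighbourhood, 0.0 = completely different."""
        nn_a = {w for w, _ in wv_a.most_similar(word, topn=topn)}
        nn_b = {w for w, _ in wv_b.most_similar(word, topn=topn)}
        return len(nn_a & nn_b) / len(nn_a | nn_b)

    def aligned_average(runs):
        """Average several runs after rotating each onto the first
        run's space with an orthogonal Procrustes solution."""
        vocab = runs[0].index_to_key
        ref = np.stack([runs[0][w] for w in vocab])
        acc = ref.copy()
        for wv in runs[1:]:
            mat = np.stack([wv[w] for w in vocab])
            rot, _ = orthogonal_procrustes(mat, ref)
            acc += mat @ rot
        return vocab, acc / len(runs)

    # Example usage (corpus is a list of tokenised sentences, assumed):
    # run_a, run_b = train_run(corpus, seed=1), train_run(corpus, seed=2)
    # print(neighbour_overlap(run_a, run_b, "bank"))  # low -> unstable
    # vocab, avg = aligned_average([run_a, run_b, train_run(corpus, 3)])

Note the alignment step: each training run lives in an arbitrarily rotated vector space, so naively averaging raw vectors across runs would largely cancel them out. Any modified average over multiple runs, such as the one the abstract proposes, has to reconcile these spaces in some way before combining them, which is also what makes cross-run comparisons meaningful for semantic change detection.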

