Word Embedding Algorithms as Generalized Low Rank Models and their Canonical Form

11/06/2019
by Kian Kenyon-Dean, et al.

Word embedding algorithms produce highly reliable feature representations of words, which are used by neural network models across a constantly growing multitude of NLP tasks. As such, it is imperative for NLP practitioners to understand how their word representations are produced, and why they are so impactful. The present work presents the Simple Embedder framework, which generalizes existing state-of-the-art word embedding algorithms, including Word2vec (SGNS) and GloVe, under the umbrella of generalized low rank models. We derive that both of these algorithms attempt to produce embedding inner products that approximate pointwise mutual information (PMI) statistics in the corpus. Once these models are cast as Simple Embedders, comparing them reveals that each of these successful embedders resembles a straightforward maximum likelihood estimate (MLE) of the PMI parametrized by the inner product between embeddings. This MLE induces our proposed novel word embedding model, Hilbert-MLE, as the canonical representative of the Simple Embedder framework. We empirically compare these algorithms with evaluations on 17 different datasets. Hilbert-MLE consistently achieves second-best performance on every extrinsic evaluation (news classification, sentiment analysis, POS tagging, and supersense tagging), while the best-performing model varies depending on the task. Moreover, Hilbert-MLE consistently exhibits the least variance in results with respect to the random initialization of the weights in bidirectional LSTMs. Our empirical results demonstrate that Hilbert-MLE is a highly consistent word embedding algorithm that can be reliably integrated into existing NLP systems to obtain high-quality results.
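To make the PMI connection concrete, the sketch below illustrates the framework's central idea on toy data: compute a PMI matrix from cooccurrence counts, then fit word and context embeddings whose inner products approximate it. This is a minimal sketch under our own assumptions: the plain least-squares loss is an illustrative stand-in for, not a reproduction of, the Hilbert-MLE objective, and all names, sizes, and hyperparameters are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a least-squares low rank fit of a PMI matrix.
# This is NOT the paper's Hilbert-MLE objective; all names are hypothetical.

rng = np.random.default_rng(0)
V, d = 50, 10                         # vocabulary size, embedding dimension

# Toy cooccurrence counts N[i, j]; in practice these are gathered by
# sliding a context window over a corpus. The +1 avoids log(0) in this demo.
N = rng.poisson(2.0, size=(V, V)).astype(float) + 1.0

# Pointwise mutual information: PMI(i, j) = log p(i, j) - log p(i) - log p(j)
total = N.sum()
p_ij = N / total
p_i = N.sum(axis=1, keepdims=True) / total
p_j = N.sum(axis=0, keepdims=True) / total
pmi = np.log(p_ij) - np.log(p_i) - np.log(p_j)

# Low rank model: word vectors W and context vectors C such that the
# inner products W @ C.T approximate the PMI matrix.
W = 0.1 * rng.standard_normal((V, d))
C = 0.1 * rng.standard_normal((V, d))
lr = 1e-3
for _ in range(2000):
    err = W @ C.T - pmi               # residual of the low rank approximation
    gW, gC = err @ C, err.T @ W       # gradients of 0.5 * squared Frobenius error
    W -= lr * gW
    C -= lr * gC

print("RMSE of inner-product PMI fit:", np.sqrt(np.mean((W @ C.T - pmi) ** 2)))
```

In the paper's terms, SGNS and GloVe instantiate this same low rank template with different losses for approximating PMI statistics, and Hilbert-MLE replaces them with a direct maximum likelihood estimate parametrized by the embedding inner product.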
