Morphological Priors for Probabilistic Neural Word Embeddings

08/03/2016
by Parminder Bhatia, et al.

Word embeddings allow natural language processing systems to share statistical information across related words. These embeddings are typically based on distributional statistics, making it difficult for them to generalize to rare or unseen words. We propose to improve word embeddings by incorporating morphological information, capturing shared sub-word features. Unlike previous work that constructs word embeddings directly from morphemes, we combine morphological and distributional information in a unified probabilistic framework, in which the word embedding is a latent variable. The morphological information provides a prior distribution on the latent word embeddings, which in turn condition a likelihood function over an observed corpus. This approach yields improvements on intrinsic word similarity evaluations, and also in the downstream task of part-of-speech tagging.
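To make the modeling idea concrete, here is a minimal illustrative sketch (not the authors' exact model): the latent word embedding gets a Gaussian prior whose mean is the sum of its morpheme embeddings, and a skip-gram-style likelihood scores it against observed context vectors. The morpheme inventory, segmentation, dimensions, and optimization settings below are all toy assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding dimension

# Hypothetical morpheme inventory and a toy segmentation of "unhappiness".
morpheme_vecs = {m: rng.normal(size=D) for m in ["un", "happi", "ness"]}
segmentation = ["un", "happi", "ness"]

# Morphological prior: the latent word embedding w is Gaussian, centered
# on the sum of its morpheme embeddings (one simple way to let shared
# sub-word features inform rare or unseen words).
prior_mean = sum(morpheme_vecs[m] for m in segmentation)
sigma2 = 1.0

def log_prior(w):
    return -0.5 * np.sum((w - prior_mean) ** 2) / sigma2

# Distributional likelihood: a skip-gram-style log-sigmoid score of w
# against context vectors observed in a (toy) corpus.
context_vecs = rng.normal(size=(3, D))

def log_likelihood(w):
    scores = context_vecs @ w
    return np.sum(scores - np.log1p(np.exp(scores)))  # sum of log sigmoid(scores)

def log_joint(w):
    return log_prior(w) + log_likelihood(w)

# A MAP estimate by gradient ascent on the log joint: the result blends
# the morphological prior with the distributional evidence.
w = prior_mean.copy()
for _ in range(200):
    sig = 1.0 / (1.0 + np.exp(-(context_vecs @ w)))
    grad = -(w - prior_mean) / sigma2 + context_vecs.T @ (1.0 - sig)
    w += 0.05 * grad
```

For a word with little corpus evidence, the likelihood term is weak and the estimate stays near the morpheme-based prior mean; for frequent words, distributional statistics dominate, which mirrors the generalization behavior the abstract describes.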


Related research

07/21/2017 - Mimicking Word Embeddings using Subword RNNs
Word embeddings improve generalization over lexical features by placing ...

09/30/2020 - Interactive Re-Fitting as a Technique for Improving Word Embeddings
Word embeddings are a fixed, distributional representation of the contex...

04/24/2017 - A Trie-Structured Bayesian Model for Unsupervised Morphological Segmentation
In this paper, we introduce a trie-structured Bayesian model for unsuper...

04/02/2019 - Identification, Interpretability, and Bayesian Word Embeddings
Social scientists have recently turned to analyzing text using tools fro...

10/21/2020 - PBoS: Probabilistic Bag-of-Subwords for Generalizing Word Embedding
We look into the task of generalizing word embeddings: given a set of pr...

12/12/2016 - ConceptNet 5.5: An Open Multilingual Graph of General Knowledge
Machine learning about language can be improved by supplying it with spe...

08/12/2016 - Redefining part-of-speech classes with distributional semantic models
This paper studies how word embeddings trained on the British National C...
