Reconstruction of Word Embeddings from Sub-Word Parameters

07/21/2017
by Karl Stratos, et al.

Pre-trained word embeddings improve the performance of a neural model, but at the cost of increasing the model size. We propose to benefit from this resource without paying that cost by operating strictly at the sub-lexical level. Our approach is simple: before task-specific training, we first optimize sub-word parameters to reconstruct pre-trained word embeddings under various distance measures. We report results on word similarity, word analogy, and part-of-speech tagging.
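The abstract describes the core recipe: learn sub-word parameters whose composition reconstructs pre-trained word vectors under a chosen distance. Since the paper itself is not reproduced here, the following is only a minimal PyTorch sketch of that idea, assuming summed character n-gram embeddings as the sub-word composition and cosine distance as the reconstruction loss; the actual work compares several compositions and distance measures, and the names below (SubwordReconstructor, char_ngrams) are illustrative, not from the paper.

    # Minimal sketch: fit sub-word (character n-gram) embeddings so that
    # their sum reconstructs pre-trained word vectors. Composition choice
    # and loss are assumptions; the paper compares several alternatives.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def char_ngrams(word, n=3):
        """Character n-grams of a word, with boundary markers."""
        w = f"<{word}>"
        return [w[i:i + n] for i in range(len(w) - n + 1)]

    class SubwordReconstructor(nn.Module):
        def __init__(self, ngram_vocab, dim):
            super().__init__()
            self.ngram_to_id = {g: i for i, g in enumerate(ngram_vocab)}
            self.ngram_emb = nn.Embedding(len(ngram_vocab), dim)

        def forward(self, word):
            ids = [self.ngram_to_id[g] for g in char_ngrams(word)
                   if g in self.ngram_to_id]
            idx = torch.tensor(ids, dtype=torch.long)
            # Compose the word vector as the sum of its n-gram vectors.
            return self.ngram_emb(idx).sum(dim=0)

    # Toy pre-trained embeddings to reconstruct (stand-ins for real vectors).
    pretrained = {"cat": torch.randn(50), "cats": torch.randn(50)}
    vocab = sorted({g for w in pretrained for g in char_ngrams(w)})
    model = SubwordReconstructor(vocab, dim=50)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)

    for epoch in range(100):
        for word, target in pretrained.items():
            opt.zero_grad()
            pred = model(word)
            # Cosine distance as the reconstruction objective.
            loss = 1.0 - F.cosine_similarity(pred, target, dim=0)
            loss.backward()
            opt.step()

After this pre-training step, a word vector for any string, including unseen words, can be composed from its n-gram parameters alone, which is the sense in which the approach uses the pre-trained resource without carrying a full word-level embedding table into the task model.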


Related research

12/22/2014 · Tailoring Word Embeddings for Bilexical Predictions: An Experimental Comparison
We investigate the problem of inducing word embeddings that are tailored...

06/13/2019 · Antonym-Synonym Classification Based on New Sub-space Embeddings
Distinguishing antonyms from synonyms is a key challenge for many NLP ap...

04/01/2020 · Adversarial Transfer Learning for Punctuation Restoration
Previous studies demonstrate that word embeddings and part-of-speech (PO...

06/09/2016 · Sentence Similarity Measures for Fine-Grained Estimation of Topical Relevance in Learner Essays
We investigate the task of assessing sentence-level prompt relevance in ...

09/27/2018 · Predictive Embeddings for Hate Speech Detection on Twitter
We present a neural-network based approach to classifying online hate sp...

04/07/2021 · Combining Pre-trained Word Embeddings and Linguistic Features for Sequential Metaphor Identification
We tackle the problem of identifying metaphors in text, treated as a seq...

02/27/2019 · A Framework for Decoding Event-Related Potentials from Text
We propose a novel framework for modeling event-related potentials (ERPs...
