Predicting and interpreting embeddings for out of vocabulary words in downstream tasks

03/02/2019
by Nicolas Garneau, et al.

We propose a novel way to handle out-of-vocabulary (OOV) words in downstream natural language processing (NLP) tasks. We implement a network that predicts useful embeddings for OOV words based on their morphology and on the context in which they appear. Our model also incorporates an attention mechanism indicating how much focus is allocated to the left context words, the right context words, or the word's characters, making the prediction more interpretable. The model is a "drop-in" module that is jointly trained with the downstream task's neural network, thus producing embeddings specialized for the task at hand. When the task is mostly syntactic, we observe that our model focuses most of its attention on surface-form characters. For more semantic tasks, the network allocates more attention to the surrounding words. In all our tests, the module helps the network achieve better performance than simple random embeddings.
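The three-way attention the abstract describes (left context, right context, and the word's characters) can be sketched in a few lines. The following is a minimal illustrative sketch in PyTorch, not the authors' implementation: the module name, the choice of LSTM encoders, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn


class OOVEmbedder(nn.Module):
    """Hypothetical sketch: predict an embedding for an OOV word from its
    characters and its left/right context, with attention weights over the
    three sources. Names and encoder choices are assumptions, not the
    paper's exact architecture."""

    def __init__(self, char_vocab_size, char_dim, word_dim, hidden_dim):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        # Character-level encoder over the word's surface form (morphology).
        self.char_rnn = nn.LSTM(char_dim, hidden_dim, batch_first=True)
        # Context encoders over the embeddings of the surrounding words.
        self.left_rnn = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        self.right_rnn = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        # Scores one attention logit per source (left, right, characters).
        self.attn = nn.Linear(hidden_dim, 1)
        self.proj = nn.Linear(hidden_dim, word_dim)

    def forward(self, chars, left_ctx, right_ctx):
        # chars: (batch, n_chars); left_ctx/right_ctx: (batch, n_words, word_dim)
        _, (h_char, _) = self.char_rnn(self.char_emb(chars))
        _, (h_left, _) = self.left_rnn(left_ctx)
        _, (h_right, _) = self.right_rnn(right_ctx)
        # Stack the three source summaries: (batch, 3, hidden_dim).
        sources = torch.stack([h_left[-1], h_right[-1], h_char[-1]], dim=1)
        # Attention over the sources; these weights are also what makes the
        # prediction interpretable (which source the model relied on).
        weights = torch.softmax(self.attn(sources).squeeze(-1), dim=1)
        mixed = (weights.unsqueeze(-1) * sources).sum(dim=1)
        return self.proj(mixed), weights
```

Because such a module is differentiable end to end, it can be trained jointly with the downstream network, as the abstract describes: the task loss shapes the predicted OOV embeddings, and the returned attention weights can be inspected to see whether the model leaned on surface form or on context.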


Related research

07/19/2019 · An Unsupervised Character-Aware Neural Approach to Word and Context Representation Learning
In the last few years, neural networks have been intensively used to dev...

03/11/2021 · Evaluation of Morphological Embeddings for the Russian Language
A number of morphology-based word embedding models were introduced in re...

08/05/2017 · A Syllable-based Technique for Word Embeddings of Korean Words
Word embedding has become a fundamental component to many NLP tasks such...

07/01/2019 · Few-Shot Representation Learning for Out-Of-Vocabulary Words
Existing approaches for learning word embeddings often assume there are ...

12/14/2019 · Attending Form and Context to Generate Specialized Out-of-Vocabulary Words Representations
We propose a new contextual-compositional neural network layer that hand...

09/10/2023 · Unsupervised Chunking with Hierarchical RNN
In Natural Language Processing (NLP), predicting linguistic structures, ...

10/20/2020 · CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters
Due to the compelling improvements brought by BERT, many recent represen...
