
Multiple Word Embeddings for Increased Diversity of Representation

by Brian Lester et al.

Most state-of-the-art models in natural language processing (NLP) are neural models built on top of large, pre-trained, contextual language models that generate representations of words in context and are fine-tuned for the task at hand. The improvements afforded by these "contextual embeddings" come with a high computational cost. In this work, we explore a simple technique that substantially and consistently improves performance over a strong baseline with negligible increase in run time. We concatenate multiple pre-trained embeddings to strengthen our representation of words. We show that this concatenation technique works across many tasks, datasets, and model types. We analyze aspects of pre-trained embedding similarity and vocabulary coverage and find that the representational diversity between different pre-trained embeddings is the driving force of why this technique works. We provide open source implementations of our models in both TensorFlow and PyTorch.
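The core technique described in the abstract, concatenating vectors from multiple pre-trained embedding tables per token, can be sketched in a few lines of NumPy. The tables below are random toy stand-ins for real pre-trained vectors (e.g. GloVe and word2vec), and the zero-vector fallback for out-of-vocabulary tokens is one simple, common choice, not necessarily the paper's exact handling.

```python
import numpy as np

# Toy stand-ins for two pre-trained embedding tables; in practice these
# would be loaded from released vector files (e.g. GloVe, word2vec).
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat"]
glove_dim, w2v_dim = 4, 6
glove = {w: rng.standard_normal(glove_dim) for w in vocab}
word2vec = {w: rng.standard_normal(w2v_dim) for w in vocab}

def embed(tokens, tables):
    """Look each token up in every table and concatenate the vectors.

    Out-of-vocabulary tokens fall back to a zero vector per table
    (a simple, common convention assumed here).
    """
    dims = [next(iter(t.values())).shape[0] for t in tables]
    rows = []
    for tok in tokens:
        parts = [t.get(tok, np.zeros(d)) for t, d in zip(tables, dims)]
        rows.append(np.concatenate(parts))
    return np.stack(rows)

sentence = ["the", "cat", "sat"]
matrix = embed(sentence, [glove, word2vec])
print(matrix.shape)  # (3, 10): each token gets a 4 + 6 = 10-dim vector
```

Because the two tables were trained with different objectives and corpora, the concatenated vector carries complementary views of the same word, which is the representational diversity the abstract credits for the gains; the only added run-time cost is a wider input layer.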



