Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs

12/10/2018
by Sachin Kumar, et al.

The softmax function is used in the final layer of nearly all existing sequence-to-sequence models for language generation. However, it is usually the slowest layer to compute, which limits the vocabulary to a subset of the most frequent types, and it has a large memory footprint. We propose a general technique for replacing the softmax layer with a continuous embedding layer. Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax. We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation. We show that our models obtain up to a 2.5x speed-up in training time while performing on par with state-of-the-art models in terms of translation quality. These models are capable of handling very large vocabularies without compromising translation quality. They also produce more meaningful errors than softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations.
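The abstract does not spell out the loss, but the title's von Mises-Fisher (vMF) distribution pins down its form: if the decoder output ê is read as an unnormalized mean direction (mean direction μ = ê/‖ê‖, concentration κ = ‖ê‖), the vMF negative log-likelihood of a unit-normalized target embedding e(w) reduces to NLLvMF(ê; e(w)) = −log C_m(‖ê‖) − ê·e(w), where C_m(κ) = κ^(m/2−1) / ((2π)^(m/2) I_(m/2−1)(κ)) is the vMF normalizer and I_v is the modified Bessel function of the first kind. Below is a minimal illustrative PyTorch sketch of such a loss. It is not the authors' released code: the helper names (`nll_vmf`, `LogBesselIv`) are ours, and we route the Bessel evaluation through SciPy's exponentially scaled `ive` with a hand-written backward pass, whereas the paper itself works with approximations of log C_m for numerical stability.

```python
import math

import numpy as np
import torch
from scipy.special import ive  # exponentially scaled Bessel: ive(v, k) = I_v(k) * exp(-k)


class LogBesselIv(torch.autograd.Function):
    """log I_v(kappa) with an analytic gradient.

    Standard Bessel recurrence: d/dk log I_v(k) = I_{v+1}(k) / I_v(k) + v / k.
    SciPy handles the (non-differentiable) Bessel evaluation, so the backward
    pass is supplied by hand.
    """

    @staticmethod
    def forward(ctx, kappa, v):
        ctx.save_for_backward(kappa)
        ctx.v = v
        k = kappa.detach().double().cpu().numpy()
        out = np.log(ive(v, k)) + k  # log I_v(k), computed without overflow
        return torch.as_tensor(out, dtype=kappa.dtype, device=kappa.device)

    @staticmethod
    def backward(ctx, grad_out):
        (kappa,) = ctx.saved_tensors
        v = ctx.v
        k = kappa.detach().double().cpu().numpy()
        dlog = ive(v + 1, k) / ive(v, k) + v / k  # exp scaling cancels in the ratio
        grad = torch.as_tensor(dlog, dtype=kappa.dtype, device=kappa.device)
        return grad_out * grad, None


def nll_vmf(e_hat, e_target):
    """vMF negative log-likelihood of unit target embeddings under outputs e_hat.

    NLLvMF(e_hat; e(w)) = -log C_m(||e_hat||) - e_hat . e(w), with
    C_m(kappa) = kappa^(m/2 - 1) / ((2 pi)^(m/2) * I_{m/2-1}(kappa)).
    Assumes e_target rows are unit-normalized pre-trained embeddings.
    """
    m = e_hat.size(-1)
    v = m / 2.0 - 1.0
    kappa = e_hat.norm(dim=-1)  # concentration = length of the output vector
    log_cm = (
        v * torch.log(kappa)
        - (m / 2.0) * math.log(2.0 * math.pi)
        - LogBesselIv.apply(kappa, v)
    )
    return (-log_cm - (e_hat * e_target).sum(dim=-1)).mean()
```

A toy usage, with a hypothetical 10k-word table of unit-normalized embeddings standing in for the pre-trained ones. At inference the predicted word is simply the one whose embedding is nearest to ê (by cosine similarity, equivalently by dot product, since the table rows are unit vectors):

```python
emb = torch.nn.functional.normalize(torch.randn(10_000, 300), dim=-1)  # embedding table
e_hat = torch.randn(8, 300, requires_grad=True)  # stand-in for decoder outputs
targets = torch.randint(0, 10_000, (8,))

loss = nll_vmf(e_hat, emb[targets])
loss.backward()  # gradients flow through kappa via LogBesselIv.backward

predicted = (e_hat @ emb.T).argmax(dim=-1)  # nearest-embedding decoding
```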


Related research:

- 05/14/2019 · Sparse Sequence-to-Sequence Models. Sequence-to-sequence models are a powerful workhorse of NLP. Most varian...
- 08/05/2016 · Resolving Out-of-Vocabulary Words with Bilingual Embeddings in Machine Translation. Out-of-vocabulary words account for a large proportion of errors in mach...
- 10/29/2018 · Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks. Neural language models have been widely used in various NLP tasks, inclu...
- 11/12/2021 · Speeding Up Entmax. Softmax is the de facto standard in modern neural networks for language ...
- 04/23/2017 · Neural Machine Translation via Binary Code Prediction. In this paper, we propose a new method for calculating the output layer ...
- 06/29/2017 · Talking Drums: Generating drum grooves with neural networks. Presented is a method of generating a full drum kit part for a provided ...
- 03/18/2021 · Smoothing and Shrinking the Sparse Seq2Seq Search Space. Current sequence-to-sequence models are trained to minimize cross-entrop...
