Sequence to Sequence Learning with Neural Networks

09/10/2014
by Ilya Sutskever, et al.

Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions about the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increased to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.
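
The model the abstract describes is a plain two-LSTM encoder-decoder: the encoder's final hidden state serves as the fixed-dimensional summary vector, and the decoder is initialized from it to generate the target sequence. Below is a minimal PyTorch sketch, not the authors' implementation; the class and argument names are illustrative, the vocabulary sizes are placeholders, and the default sizes follow the paper's reported configuration (4 layers, 1000 units). Note the source-side reversal, the trick the abstract credits for easing optimization.

```python
import torch
import torch.nn as nn

class Seq2SeqLSTM(nn.Module):
    # Sketch of the encoder-decoder described in the abstract.
    # Default sizes follow the paper's reported setup (4 layers,
    # 1000 units); vocabulary sizes are placeholders.
    def __init__(self, src_vocab, tgt_vocab, emb=1000, hidden=1000, layers=4):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Reverse the source sequence only, as the abstract recommends:
        # it introduces short-term dependencies between the beginnings
        # of the source and target sentences.
        src = torch.flip(src, dims=[1])
        # The encoder's final (h, c) state is the fixed-dimensional vector.
        _, state = self.encoder(self.src_emb(src))
        # The decoder produces the target conditioned on that state.
        out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.proj(out)  # (batch, tgt_len, tgt_vocab) logits
```

Training would minimize cross-entropy against the target sequence shifted by one position; at test time the paper decodes with a beam search rather than the teacher-forced pass shown here.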
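The reranking result mentioned in the abstract needs nothing beyond the trained model's log-likelihood: each of the SMT system's 1000 hypotheses is rescored and the list is re-sorted. A sketch under two assumptions: batch size 1, and each hypothesis tensor begins with a start-of-sentence token; `rerank` and `score` are hypothetical helper names, not the authors' code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rerank(model, src, hypotheses):
    # Rescore SMT n-best hypotheses with the seq2seq model's log-likelihood.
    # `hypotheses` is a list of target token-id tensors of shape (1, T),
    # each starting with a start-of-sentence token (an assumption here).
    def score(tgt):
        # Logits at positions 0..T-2 predict tokens 1..T-1 (teacher forcing).
        logits = model(src, tgt[:, :-1])
        logp = F.log_softmax(logits, dim=-1)
        # Sum the log-probability assigned to each reference token.
        tok_logp = logp.gather(2, tgt[:, 1:].unsqueeze(-1)).squeeze(-1)
        return tok_logp.sum().item()
    return sorted(hypotheses, key=score, reverse=True)
```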

Related research

TransDocs: Optical Character Recognition with word to word translation (04/15/2023)
While OCR has been used in various applications, its output is not alway...

Generating Text with Deep Reinforcement Learning (10/30/2015)
We introduce a novel schema for sequence to sequence learning with a Dee...

Demo of Sanskrit-Hindi SMT System (04/13/2018)
The demo proposal presents a Phrase-based Sanskrit-Hindi (SaHiT) Statist...

An Experimental Study of LSTM Encoder-Decoder Model for Text Simplification (09/13/2016)
Text simplification (TS) aims to reduce the lexical and structural compl...

Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling (01/07/2016)
Recurrent Neural Network (RNN) and one of its specific architectures, Lo...

A Hierarchical Neural Network for Sequence-to-Sequences Learning (11/23/2018)
In recent years, the sequence-to-sequence learning neural networks with ...

CNN Is All You Need (12/27/2017)
The Convolution Neural Network (CNN) has demonstrated the unique advanta...
