A Convolutional Encoder Model for Neural Machine Translation

11/07/2016 · by Jonas Gehring, et al.

The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. This allows the entire source sentence to be encoded simultaneously, unlike recurrent networks whose computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve accuracy competitive with the state of the art, and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than a factor of two while matching or exceeding the accuracy of a strong bi-directional LSTM baseline.
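To illustrate the architecture the abstract describes, the following is a minimal PyTorch sketch of a convolutional sentence encoder. It is an assumption-laden illustration, not the authors' fairseq implementation: the layer sizes, the tanh non-linearity, and all names (ConvEncoder, hidden_dim, and so on) are hypothetical. What it does show is the key property claimed above: every source position is encoded in parallel, with position embeddings supplying the order information a recurrent network would otherwise carry.

```python
# Minimal sketch of a convolutional source-sentence encoder.
# Illustrative only: sizes and names are assumptions, not the paper's code.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256,
                 kernel_size=3, num_layers=6, max_positions=1024):
        super().__init__()
        # Word embeddings plus position embeddings; positions stand in
        # for the ordering a recurrent encoder would track implicitly.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.pos_embed = nn.Embedding(max_positions, embed_dim)
        pad = kernel_size // 2  # same-length ("same") padding
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim if i == 0 else hidden_dim,
                      hidden_dim, kernel_size, padding=pad)
            for i in range(num_layers)
        )
        self.activation = nn.Tanh()

    def forward(self, tokens):
        # tokens: (batch, src_len) integer indices
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.word_embed(tokens) + self.pos_embed(positions)
        x = x.transpose(1, 2)  # (batch, channels, src_len) for Conv1d
        for conv in self.convs:
            residual = x
            x = self.activation(conv(x))
            if residual.size(1) == x.size(1):
                x = x + residual  # residual connection between layers
        # Returns one encoder state per source word, all of which were
        # computed in parallel over the whole sentence.
        return x.transpose(1, 2)  # (batch, src_len, hidden_dim)
```

Note the trade-off this makes explicit: with kernel width 3, stacking k layers gives each output state a receptive field of 2k + 1 source words, so a purely convolutional encoder captures wider context by going deeper rather than by stepping through time.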



Code Repositories

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit

