Convolutional Sequence to Sequence Learning

05/08/2017 ∙ by Jonas Gehring, et al.

The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
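
The gated linear unit over a 1-D convolution that the abstract refers to can be sketched in a few lines. The snippet below is a minimal illustration assuming PyTorch; the class name ConvGLUBlock and the chosen dimensions are invented for this example and are not the fairseq implementation, and the decoder-side padding/masking that keeps a position from seeing future outputs is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGLUBlock(nn.Module):
    """One convolutional block: a 1-D convolution whose output channels are
    split into two halves A and B and combined as A * sigmoid(B) (a gated
    linear unit), followed by a residual connection."""
    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        # Produce 2 * d_model channels so the GLU can split them into A and B.
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model); Conv1d expects (batch, channels, time).
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h = F.glu(h, dim=-1)   # A * sigmoid(B), back to d_model channels
        return x + h           # residual connection eases gradient propagation

# All time steps are computed in parallel, and stacking a fixed number of
# blocks keeps the number of non-linearities independent of sequence length.
x = torch.randn(8, 20, 256)                              # (batch, time, channels)
encoder = nn.Sequential(*[ConvGLUBlock(256) for _ in range(4)])
print(encoder(x).shape)                                  # torch.Size([8, 20, 256])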


Code Repositories

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit

neurowriter

Framework to imitate writing styles using deep learning

fairseq-translator

A quick TensorFlow implementation of Facebook FairSeq [1] for character-level neural machine translation (EN -> JP).