1 Introduction
Neural machine translation (NMT) has attracted much interest for solving the machine translation (MT) problem in recent years [Kalchbrenner and Blunsom2013, Sutskever et al.2014, Bahdanau et al.2015]. Unlike conventional statistical machine translation (SMT) systems [Koehn et al.2003, Durrani et al.2014], which consist of multiple separately tuned components, NMT models encode the source sequence into a continuous representation space and generate the target sequence in an end-to-end fashion. Moreover, NMT models can also be easily adapted to other tasks such as dialog systems [Vinyals and Le2015], question answering systems [Yu et al.2015] and image caption generation [Mao et al.2015].
In general, there are two types of NMT topologies: the encoder-decoder network [Sutskever et al.2014] and the attention network [Bahdanau et al.2015]. The encoder-decoder network represents the source sequence with a fixed-dimensional vector, and the target sequence is generated from this vector word by word. The attention network uses the representations from all time steps of the input sequence to build a detailed relationship between the target words and the input words. Recent results show that systems based on these models can achieve performance similar to conventional SMT systems [Luong et al.2015, Jean et al.2015]. However, a single neural model of either of the above types has not been competitive with the best conventional system [Durrani et al.2014] when evaluated on the WMT’14 English-to-French task. The best BLEU score from a single model with six layers is only 31.5 [Luong et al.2015], while the conventional method of [Durrani et al.2014] achieves 37.0.
We focus on improving single model performance by increasing the model depth. Deep topologies have been shown to outperform shallow architectures in computer vision. In the past two years the top positions of the ImageNet contest have always been occupied by systems with tens or even hundreds of layers [Szegedy et al.2015, He et al.2016]. But in NMT, the greatest depth used successfully is only six [Luong et al.2015]. We attribute this problem to the properties of the Long Short-Term Memory (LSTM) [Hochreiter and Schmidhuber1997] which is widely used in NMT. In the LSTM, there are more nonlinear activations than in convolution layers. These activations significantly decrease the magnitude of the gradient in deep topologies, especially when the gradient propagates in recurrent form. There have also been many efforts to increase the depth of the LSTM, such as the work of [Kalchbrenner et al.2015], where the shortcuts do not avoid the nonlinear and recurrent computation.

In this work, we introduce a new type of linear connections for multi-layer recurrent networks. These connections, which are called fast-forward connections, play an essential role in building a deep topology with a depth of 16. In addition, we introduce an interleaved bidirectional architecture to stack LSTM layers in the encoder. This topology can be used for both the encoder-decoder network and the attention network. On the WMT’14 English-to-French task, this is the deepest NMT topology that has ever been investigated. With our deep attention model, the BLEU score can be improved to 37.7, outperforming the shallow model with six layers [Luong et al.2015] by 6.2 BLEU points. This is also the first time on this task that a single NMT model achieves state-of-the-art performance and outperforms the best conventional SMT system [Durrani et al.2014], with an improvement of 0.7 BLEU. Even without using the attention mechanism, we can still achieve 36.3 with a single model. After model ensembling and unknown word processing, the BLEU score can be further improved to 40.4. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. As a reference, previous work showed that oracle rescoring of the best sequences generated by the SMT model can achieve a considerably higher BLEU score [Sutskever et al.2014].
Our models are also validated on the more difficult WMT’14 English-to-German task.
2 Neural Machine Translation
Neural machine translation aims at generating the target word sequence given the source word sequence x with neural models. In this task, the likelihood of the target sequence is maximized [Forcada and Ñeco1997] with parameters θ to learn:

P(y | x; θ) = ∏_{t=1}^{m+1} P(y_t | y_{0:t−1}, x; θ)    (1)

where y_{0:t−1} is the sub-sequence from y_0 to y_{t−1}; y_0 and y_{m+1} denote the start mark and end mark of the target sequence respectively.
The process can be explicitly split into an encoding part, a decoding part and the interface between these two parts. In the encoding part, the source sequence is processed and transformed into a group of vectors, one for each time step. Further operations are then used at the interface part to extract the final representation of the source sequence from this group. At the decoding step, the target sequence is generated from that representation.
Recently, there have been two types of NMT models which differ in the interface part. In the encoder-decoder model [Sutskever et al.2014], a single vector extracted from the group is used as the representation. In the attention model [Bahdanau et al.2015], the representation is dynamically obtained according to the relationship between the target sequence and the source sequence.
The recurrent neural network (RNN), or its specific form the LSTM, is generally used as the basic unit of the encoding and decoding parts. However, the topology of most existing models is shallow. In the attention network, the encoding part and the decoding part each have only one LSTM layer. In the encoder-decoder network, researchers have used at most six LSTM layers [Luong et al.2015]. Because machine translation is a difficult problem, we believe more complex encoding and decoding architectures are needed for modeling the relationship between the source sequence and the target sequence. In this work, we focus on enhancing the complexity of the encoding/decoding architecture by increasing the model depth.

Deep neural models have been studied in a wide range of problems. In computer vision, models with more than ten convolution layers have outperformed shallow ones on a series of image tasks in recent years [Srivastava et al.2015, He et al.2016, Szegedy et al.2015]. Different kinds of shortcut connections have been proposed to decrease the length of the gradient propagation path. Training networks based on LSTM layers, which are widely used in language problems, is a much more challenging task. Because of the existence of many more nonlinear activations and the recurrent computation, gradient values are not stable and are generally smaller. Following the same spirit as for convolutional networks, a lot of effort has also been spent on training deep LSTM networks. [Yao et al.2015] introduced depth-gated shortcuts, connecting LSTM cells at adjacent layers, to provide a fast way to propagate the gradients. They validated these shortcuts on an MT task and a language modeling task. However, the best score was obtained using models with three layers. Similarly, [Kalchbrenner et al.2015] proposed a two-dimensional structure for the LSTM. Their structure decreases the number of nonlinear activations and the path length. However, the gradient propagation still relies on the recurrent computation. Investigations were also made on question answering to encode the questions, where at most two LSTM layers were stacked [Hermann et al.2015].
Based on the above considerations, we propose new connections to facilitate gradient propagation in the following section.
3 Deep Topology
We build the deep LSTM network with the newly proposed linear connections. The shortest paths through these connections contain neither nonlinear transformations nor recurrent computation. We call them fast-forward connections. Within the deep topology, we also introduce an interleaved bidirectional architecture to stack the LSTM layers.
3.1 Network
Our entire deep neural network is shown in Fig. 2. This topology can be divided into three parts: the encoder part (P_E) on the left, the decoder part (P_D) on the right, and the interface (P_I) between these two parts, which extracts the representation of the source sequence. We have two instantiations of this topology: DeepED and DeepAtt, which correspond to the extensions of the encoder-decoder network and the attention network respectively. Our main innovation is the novel scheme for connecting adjacent recurrent layers. We will start with the basic RNN model for the sake of clarity.
Recurrent layer: When an input sequence {x_1, …, x_m} is given to a recurrent layer, the output h_t at each time step t can be computed as (see Fig. 1 (a))

h_t = σ(W x_t + U h_{t−1})    (2)

where the bias parameter is not included for simplicity. We use a red circle and a blue empty square to denote an input and a hidden state. A blue square with a “−” denotes the previous hidden state. A dotted line means that the hidden state is used recurrently. This computation can be equivalently split into two consecutive steps:

Feed-forward computation: f_t = W x_t (block “f” in Fig. 1 (b)), which involves neither nonlinearity nor recurrence.
Recurrent computation: h_t = σ(f_t + U h_{t−1}) (block “r” in Fig. 1 (b)).
For a deep topology with stacked recurrent layers, the input of each block “f” at recurrent layer k (denoted by f_t^k) is usually the output of block “r” at its previous recurrent layer (denoted by h_t^{k−1}). In our work, we add fast-forward connections (FF connections) which connect two feed-forward computation blocks “f” of adjacent recurrent layers. It means that each block “f” at recurrent layer k takes both the outputs of block “f” and block “r” at its previous layer as input (Fig. 1 (c)). FF connections are denoted by dashed red lines in Fig. 1 (c) and Fig. 2. The path through FF connections contains neither nonlinear activations nor recurrent computation. It provides a fast path for information (and gradients) to propagate, so we call these connections fast-forward connections.
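As an illustrative sketch (not the paper's implementation), the following pure-Python step shows how the "f" block of each layer can consume the concatenation of the previous layer's "f" and "r" outputs, so that the FF path skips every nonlinearity; all function names, weights and dimensions here are hypothetical:

```python
import math

def rnn_layer_step(x, h_prev, W, U):
    # Feed-forward block "f": a linear transform of the layer input only.
    f = [sum(w * v for w, v in zip(row, x)) for row in W]
    # Recurrent block "r": combine with the previous hidden state, then tanh.
    h = [math.tanh(fi + sum(u * v for u, v in zip(row, h_prev)))
         for fi, row in zip(f, U)]
    return f, h

def deep_step_with_ff(x, h_prevs, Ws, Us):
    """One time step of a stacked RNN with fast-forward (FF) connections:
    the "f" block of layer k sees the concatenation [f^(k-1) ; h^(k-1)]
    instead of h^(k-1) alone, so the FF path contains no nonlinearity."""
    f_below, h_below = None, x
    new_states = []
    for k, (W, U) in enumerate(zip(Ws, Us)):
        layer_in = h_below if f_below is None else f_below + h_below  # concat
        f_below, h_below = rnn_layer_step(layer_in, h_prevs[k], W, U)
        new_states.append(h_below)
    return new_states
```

Note that, because of the concatenation, the weight matrix of every layer above the first has twice as many input columns as a plain stacked RNN.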
Additionally, in order to learn more temporal dependencies, the sequences can be processed in different directions at each pair of adjacent recurrent layers. This is quantitatively expressed in Eq. 3:

f_t^k = W^k · [f_t^{k−1} ; h_t^{k−1}]
h_t^k = σ(f_t^k + U^k h_{t−z(k)}^k)    (3)

The opposite directions are marked by the direction term z(k) ∈ {+1, −1}. At the first recurrent layer, the block “f” takes x_t as the input. [· ; ·] denotes the concatenation of vectors. This is shown in Fig. 1 (c). The two changes are summarized here:

We add a connection between the “f” blocks of adjacent layers. Without this connection, our model reduces to the traditional stacked model.

We alternate the RNN processing direction at different layers with the direction term. If we fix the direction term to the forward value at every layer, all layers work in the forward direction.
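The alternation of directions can be sketched as follows; this is a toy illustration in which plain per-element functions stand in for recurrent layers (real layers carry state), and every name is hypothetical:

```python
def run_interleaved(seq, layer_fns):
    """Pass a sequence through stacked per-step transforms, alternating the
    processing direction at every layer, as in the interleaved bidirectional
    stacking. Plain functions keep the sketch short; real layers would be
    recurrent and therefore direction-sensitive."""
    out = list(seq)
    for k, fn in enumerate(layer_fns):
        forward = (k % 2 == 0)            # direction term alternates per layer
        ordered = out if forward else out[::-1]
        processed = [fn(v) for v in ordered]
        # Restore time order so the next layer can flip it again.
        out = processed if forward else processed[::-1]
    return out
```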
LSTM layer: In our experiments, instead of a plain RNN, a specific type of recurrent layer called LSTM [Hochreiter and Schmidhuber1997, Graves et al.2009] is used. The LSTM is structurally more complex than the basic RNN in Eq. 2. We define the computation of the LSTM as a function which maps the input x_t and its state-output pair (c_{t−1}, h_{t−1}) at the previous time step to the current state-output pair (c_t, h_t). The exact computations are the following:
[z ; z_i ; z_f ; z_o] = W [x_t ; h_{t−1}]
c_t = σ_g(z_f) ⊙ c_{t−1} + σ_g(z_i) ⊙ σ_i(z)
h_t = σ_g(z_o) ⊙ σ_o(c_t)    (4)

where [· ; · ; · ; ·] is the concatenation of four vectors of equal size, ⊙ means elementwise multiplication, σ_i is the input activation function, σ_o is the output activation function, σ_g is the activation function for gates, and W is the parameter matrix of the LSTM. This is slightly different from the standard notation in that we do not have a separate matrix to multiply with the input in our notation.

With this notation, we can write down the computations for our deep bidirectional LSTM model with FF connections:
f_t^k = W^k · [f_t^{k−1} ; h_t^{k−1}]
(c_t^k, h_t^k) = LSTM(f_t^k, c_{t−z(k)}^k, h_{t−z(k)}^k)    (5)

where x_t is the input to the deep bidirectional LSTM model (the first layer’s “f” block takes x_t directly). For the encoder, x_t is the embedding of the t-th word in the source sentence. For the decoder, x_t is the concatenation of the embedding of the t-th word in the target sentence and the encoder representation for step t.
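To make the gate arithmetic in Eq. 4 concrete, here is a minimal pure-Python sketch of one LSTM step in the single-matrix style described above; the helper names, the choice of sigmoid/tanh activations, and the tiny dimensions are illustrative assumptions, not the paper's implementation:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step: a single matrix maps the concatenated [x ; h_prev] to
    four stacked pre-activations of equal size (input z and the i/f/o gates),
    matching the concatenated-gate layout of Eq. 4."""
    n = len(h_prev)
    joint = x + h_prev                                          # concatenation
    pre = [sum(w * v for w, v in zip(row, joint)) for row in W]  # 4n values
    z, i, f, o = (pre[j * n:(j + 1) * n] for j in range(4))
    c = [sigmoid(fj) * cj + sigmoid(ij) * math.tanh(zj)         # gated update
         for zj, ij, fj, cj in zip(z, i, f, c_prev)]
    h = [sigmoid(oj) * math.tanh(cj) for oj, cj in zip(o, c)]
    return c, h
```

Because all four gate pre-activations come from one matrix product, the "f" block that feeds an LSTM layer is four times wider than the layer's output h, which is why the concatenated interface vectors discussed later are comparatively large.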
In our final model, two additional operations are used with Eq. 3, as shown in Eq. 6. Half(·) denotes taking the first half of the elements of a vector, and Dr(·) is the dropout operation [Hinton et al.2012] which randomly sets an element to zero with a certain probability. The use of Half(·) is to reduce the parameter size and does not affect the performance. We observed noticeable performance degradation when using only the first third of the elements of “f”.

f_t^k = W^k · [Dr(Half(f_t^{k−1})) ; h_t^{k−1}]    (6)
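The two extra operations can be sketched as below; the function name, the seeded generator, and the element-wise dropout scheme are illustrative assumptions rather than the paper's exact code:

```python
import random

def half_then_dropout(f_out, drop_ratio, rng=None):
    """Sketch of the two extra operations in Eq. 6: keep only the first half
    of the "f" block's output (reducing the parameter size of the next layer)
    and apply dropout, zeroing each kept element with probability drop_ratio."""
    rng = rng or random.Random(0)     # seeded here only for reproducibility
    half = f_out[:len(f_out) // 2]
    return [0.0 if rng.random() < drop_ratio else v for v in half]
```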
With the FF connections, we build a fast channel to propagate the gradients in the deep topology. FF connections can accelerate model convergence while also improving performance. A similar idea was also used in [He et al.2016, Zhou and Xu2015].
Encoder: The LSTM layers are stacked following Eq. 3. We call this type of encoder an interleaved bidirectional encoder. In addition, there are two similar columns in the encoder part. Each column consists of several stacked LSTM layers, and there is no connection between the two columns. The first layers of the two columns process the word representations of the source sequence in opposite directions. At the last LSTM layers, there are two groups of vectors representing the source sequence. The group size is the same as the length of the input sequence.
Interface: Prior encoder-decoder models and attention models differ in their method of extracting the representation of the source sequence. In our work, as a consequence of the introduced FF connections, we have both the “f” and “h” outputs of the two columns. The representations are modified for both DeepED and DeepAtt.
For DeepED, the representation is static and consists of four parts. 1: The output at the last time step of the first column. 2: A max-operation over the outputs at all time steps of the second column, which takes the maximal value of each dimension over all time steps. 3 and 4: The analogous quantities obtained from the opposite columns. The max-operation and the last time step state extraction provide complementary information but do not affect the performance much. The concatenation of the four parts is used as the final representation.
For DeepAtt, we do not need the above two operations. We only concatenate the output vectors at each time step to obtain a_t, and a soft attention mechanism [Bahdanau et al.2015] is used to calculate the final representation c_t from the a_t. a_t is summarized as:

a_t = [f_t^{(1)} ; h_t^{(1)} ; f_t^{(2)} ; h_t^{(2)}]    (7)

where the superscripts mark the two columns. Note that the vector dimensionality of f_t is four times larger than that of h_t (see Eq. 4). c_t is summarized as:

c_t = Σ_{j=1}^{m} α_{t,j} a_j    (8)

where α_{t,j} is the normalized attention weight computed by:

α_{t,j} = exp(e_{t,j}) / Σ_{k=1}^{m} exp(e_{t,k}),  e_{t,j} = a(s_{t−1}, a_j)    (9)

Here s_{t−1} is the first hidden layer output in the decoding part and a(·, ·) is an alignment model described in [Bahdanau et al.2015]. For DeepAtt, in order to reduce the memory cost, we linearly project the concatenated vector a_t to a lower-dimensional vector, denoted by the (fully connected) block “fc” in Fig. 2.
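The normalize-and-sum pattern of the soft attention above can be sketched as follows; the scoring function is passed in as a black box (in the model it would depend on the decoder state), and all names are illustrative:

```python
import math

def soft_attention(score_fn, annotations):
    """Soft attention in the style of Eqs. 8-9: score each source annotation
    (the decoder-state dependence is folded into score_fn here), softmax-
    normalize, and return the weights plus the weighted-sum context vector."""
    scores = [score_fn(a) for a in annotations]
    m = max(scores)                               # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(annotations[0])
    context = [sum(w * a[d] for w, a in zip(weights, annotations))
               for d in range(dim)]
    return weights, context
```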
Decoder: The decoder follows Eq. 3 and Eq. 6 with a fixed direction term (all layers forward). At the first layer, we use the following x_t:

x_t = [y_{t−1} ; c_t]    (10)

where y_{t−1} is the target word embedding at the previous time step and y_0 is zero. There is a single column of stacked LSTM layers. We also use the FF connections as in the encoder, and all layers are in the forward direction. Note that at the last LSTM layer, we only use h_t to make the prediction with a softmax layer.
Although the network is deep, the training technique is straightforward. We will describe this in the next part.
3.2 Training technique
We take the parallel data as the only input, without using any monolingual data for either word representation pretraining or language modeling. Because of the deep bidirectional structure, we do not need to reverse the sequence order as in [Sutskever et al.2014].
The deep topology brings difficulties for model training, especially when first-order methods such as stochastic gradient descent (SGD) [LeCun et al.1998] are used. The parameters should be properly initialized, and convergence can be slow. We tried several optimization techniques such as AdaDelta [Zeiler2012], RMSProp [Tieleman and Hinton2012] and Adam [Kingma and Ba2015]. We found that all of them were able to speed up the process considerably compared to plain SGD, while no significant performance difference was observed among them. In this work, we chose Adam for model training and do not present a detailed comparison with other optimization methods.

Dropout [Hinton et al.2012] is also used to avoid overfitting. It is applied on the LSTM nodes (see Eq. 6) with a fixed ratio for both the encoder and decoder.
During the whole model training process, we keep all hyperparameters fixed without any intermediate interruption. The hyperparameters are selected according to the performance on the development set. For such a deep and large network, it is not easy to determine a tuning strategy, and this will be considered in future work.
3.3 Generation
We use the common left-to-right beam-search method for sequence generation. At each time step t, the word y_t can be predicted by:

y_t = argmax_y P(y | y_{0:t−1}, x; θ)    (11)

where y_t is the predicted target word and y_{0:t−1} is the sequence generated from time step 0 to t−1. We keep the best candidates according to Eq. 11 at each time step, until the end of sentence mark is generated. The hypotheses are ranked by the total likelihood of the generated sequence, although normalized likelihood is used in some works [Jean et al.2015].
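The generation procedure can be sketched as a minimal beam search; the toy scoring interface (`step_logprobs`), the special-token indices, and the tie-breaking behavior are assumptions for illustration only:

```python
import math

def beam_search(step_logprobs, beam_size, max_len, bos=0, eos=1):
    """Minimal left-to-right beam search. step_logprobs(prefix) returns
    log-probabilities over the vocabulary given the generated prefix;
    hypotheses are ranked by total (unnormalized) log-likelihood."""
    beams = [([bos], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for word, lp in enumerate(step_logprobs(prefix)):
                candidates.append((prefix + [word], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:
            (finished if prefix[-1] == eos else beams).append((prefix, score))
        if not beams:                 # every kept hypothesis has ended
            break
    return max(finished + beams, key=lambda c: c[1])
```

Ranking by the raw sum of log-probabilities, as here, implicitly favors shorter hypotheses; length-normalized scoring is the common alternative mentioned above.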
4 Experiments
We evaluate our method mainly on the widely used WMT’14 English-to-French translation task. In order to validate our model on more difficult language pairs, we also provide results on the WMT’14 English-to-German translation task. Our models are implemented on the PADDLE (PArallel Distributed Deep LEarning) platform.
4.1 Data sets
For both tasks, we use the full WMT’14 parallel corpus as our training data. The detailed data sets are listed below:

English-to-French: Europarl v7, Common Crawl, UN, News Commentary, Gigaword

English-to-German: Europarl v7, Common Crawl, News Commentary
In total, the English-to-French corpus includes 36 million sentence pairs, and the English-to-German corpus includes 4.5 million sentence pairs. The newstest2012 and newstest2013 sets are concatenated as our development set, and newstest2014 is the test set. Our data partition is consistent with previous works on NMT [Luong et al.2015, Jean et al.2015] to ensure a fair comparison.
For the source language, we select a fixed number of the most frequent words as the input vocabulary. For the target language, we select the most frequent 80K French words and the most frequent 160K German words as the output vocabulary. The full vocabulary of the German corpus is larger [Jean et al.2015], so we select more German words to build the target vocabulary. Out-of-vocabulary words are replaced with the unknown symbol. For complete comparison to previous work on the English-to-French task, we also show results with a smaller vocabulary of 30K words on the sub train set of 12M selected parallel sequences [Schwenk2014, Sutskever et al.2014, Cho et al.2014].
4.2 Model settings
We have two models as described above, named DeepED and DeepAtt. Both models have exactly the same configuration and layer size except for the interface part P_I.
We use relatively low-dimensional word embeddings for both the source and target languages. All LSTM layers, both in the encoder and in the decoder, have 512 memory cells. The output layer size is the same as the size of the target vocabulary. The dimensionality of the final source representation differs between DeepED and DeepAtt. For each LSTM layer, the activation functions for gates, inputs and outputs are sigmoid, tanh, and tanh respectively.
Our network is narrow in both word embeddings and LSTM layers. Note that in previous work [Sutskever et al.2014, Bahdanau et al.2015], considerably larger word embedding and LSTM layer dimensions are used. We also tried larger-scale models but did not obtain further improvements.
4.3 Optimization
Note that each LSTM layer includes two parts as described in Eq. 3: feed-forward computation and recurrent computation. Since there are nonlinear activations in the recurrent computation, a larger learning rate is used for it, while for the feed-forward computation a smaller learning rate is used. Word embeddings and the softmax layer also use this smaller learning rate. We refer to all the parameters not used for the recurrent computation as the non-recurrent part of the model.
Because of the large model size, we use strong regularization to constrain the parameter matrices in the following way:

w ← w − λ · (g + r · w)    (12)

Here r is the regularization strength, λ is the corresponding learning rate, and g stands for the gradient of w. The two embedding layers are not regularized. All the other layers share the same r.
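The update with the L2 term folded into the gradient can be sketched as below; this is a generic weight-decay step consistent with the description of Eq. 12, with illustrative names:

```python
def regularized_update(weights, grads, lr, reg):
    """One parameter update with L2 regularization folded into the gradient:
    w <- w - lr * (g + reg * w). Layers exempt from regularization (here,
    the embeddings) would simply be updated with reg = 0."""
    return [w - lr * (g + reg * w) for w, g in zip(weights, grads)]
```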
The parameters of the recurrent computation part are initialized to zero. All non-recurrent parts are randomly initialized with zero mean and a small standard deviation. A detailed guide for setting hyperparameters can be found in [Bengio2012]. The dropout ratio is fixed. The number of sequences in each batch depends on the sequence lengths and model size. We also find that a larger batch size results in better convergence, although the improvement is not large. However, the largest batch size is constrained by the GPU memory. We use multiple GPU machines (each with several K40 GPU cards) running for days to train the full model, with parallelization at the data batch level. It takes several days for each pass over the data.
One thing we want to emphasize here is that our deep model is not sensitive to these settings: small variations do not affect the final performance.
4.4 Results
We evaluate in the same way as previous NMT works [Sutskever et al.2014, Luong et al.2015, Jean et al.2015]. All reported BLEU scores are computed with the multi-bleu.perl script (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl), which is also used in the above works. The results are for tokenized and case-sensitive evaluation.
4.4.1 Single models
English-to-French: First we list our single model results on the English-to-French task in Tab. 1. In the first block we show the results with the full corpus. The previous best single NMT encoder-decoder model (EncDec) with six layers achieves BLEU=31.5 [Luong et al.2015]. With DeepED, we obtain a BLEU score of 36.3, which outperforms the EncDec model by 4.8 BLEU points. This result is even better than the ensemble result of eight EncDec models [Luong et al.2015]. This shows that, in addition to convolutional layers for computer vision, deep topologies can also work for LSTM layers. With DeepAtt, the performance is further improved to 37.7. We also list the previous state-of-the-art performance from a conventional SMT system [Durrani et al.2014], with a BLEU score of 37.0. This is the first time that a single NMT model trained in an end-to-end form beats the best conventional system on this task.
We also show the results on the smaller data set with 12M sentence pairs and 30K vocabulary in the second block. The two attention models, RNNsearch [Bahdanau et al.2015] and RNNsearchLV [Jean et al.2015], achieve BLEU scores of 28.5 and 32.7 respectively. Note that RNNsearchLV uses a large output vocabulary of 500K words on top of the standard attention model RNNsearch. We obtain BLEU=35.9, which outperforms the corresponding shallow model RNNsearch by 7.4 BLEU points. The SMT result from [Schwenk2014] is also listed, falling behind our model by 2.6 BLEU points.
Methods  Data  Voc  BLEU 

EncDec (Luong,2015)  36M  80K  31.5 
SMT (Durrani,2014)  36M  Full  37.0 
DeepED (Ours)  36M  80K  36.3 
DeepAtt (Ours)  36M  80K  37.7 
RNNsearch (Bahdanau,2014)  12M  30K  28.5 
RNNsearchLV (Jean,2015)  12M  500K  32.7 
SMT (Schwenk,2014)  12M  Full  33.3 
DeepAtt (Ours)  12M  30K  35.9 
Moreover, during the generation process, we obtained the best BLEU score with a small beam size (a larger beam size gives only a negligible difference in BLEU). This is different from other works listed in Tab. 1, where much larger beam sizes are used [Jean et al.2015, Sutskever et al.2014]. We attribute this difference to the improved model performance, where the ground truth generally exists among the top hypotheses. Consequently, with the much smaller beam size, the generation efficiency is significantly improved.
Next we list the effect of the novel FF connections in our DeepAtt model with a shallow topology in Tab. 2. With a depth of 1 for both the encoder and decoder, the BLEU scores are 31.2 without FF and 32.3 with FF. Note that the model without FF is exactly the standard attention model [Bahdanau et al.2015]. Since there is only a single layer, the use of FF connections means that at the interface part we include the “f” output into the representation (see Eq. 3). We find FF connections bring an improvement of 1.1 in BLEU. After we increase the model depth to 2 for both parts, the improvement is enlarged to 1.4. When the model is trained with larger depth without FF connections, we find that the exploding gradient problem [Bengio et al.1994] happens so frequently that we could not finish training. This suggests that FF connections provide a fast way for gradient propagation.
Models  FF  Enc depth  Dec depth  BLEU  

DeepAtt  No  1  1  31.2 
DeepAtt  Yes  1  1  32.3 
DeepAtt  No  2  2  33.3 
DeepAtt  Yes  2  2  34.7 
Removing FF connections also reduces the model size. In order to isolate the effect of FF when comparing models with the same parameter size, we increase the LSTM layer width of DeepAtt without FF. In Tab. 3 we show that, even with a twice larger LSTM layer width of 1024, we only obtain a BLEU score of 33.8, which is still worse than the corresponding DeepAtt with FF (34.7).
Models  FF  Enc depth  Dec depth  Width  BLEU  

DeepAtt  No  2  2  512  33.3 
DeepAtt  No  2  2  1024  33.8 
DeepAtt  Yes  2  2  512  34.7 
We also notice that the interleaved bidirectional encoder starts to pay off as the encoder becomes deeper. The effect of the interleaved bidirectional encoder is shown in Tab. 4. For our largest model, with an encoder depth of 9 and a decoder depth of 7, we compared the BLEU scores of the interleaved bidirectional encoder and the unidirectional encoder (where all LSTM layers work in the forward direction). We find there is a gap of about 1.5 points between these two encoders for both DeepAtt and DeepED.
Models  Encoder  Enc depth  Dec depth  BLEU  

DeepAtt  Bi  9  7  37.7 
DeepAtt  Uni  9  7  36.2 
DeepED  Bi  9  7  36.3 
DeepED  Uni  9  7  34.9 
Next we look into the effect of model depth. In Tab. 5, starting from a depth of 1 for both the encoder and decoder and gradually increasing the model depth, we significantly increase the BLEU scores. With an encoder depth of 9 and a decoder depth of 7, the best score for DeepAtt is 37.7. We tried to increase the LSTM width on top of this but obtained little improvement. As we stated in Sec. 2, the complexity of the encoder and decoder, which is related to the model depth, is more important than the model size. We also tried larger depths, but the results started to get worse. With our topology and training technique, 9 and 7 are the best depths we could achieve.
Models  FF  Enc depth  Dec depth  Col  BLEU  

DeepAtt  Yes  1  1  2  32.3 
DeepAtt  Yes  2  2  2  34.7 
DeepAtt  Yes  5  3  2  36.0 
DeepAtt  Yes  9  7  2  37.7 
DeepAtt  Yes  9  7  1  36.6 
The last line in Tab. 5 shows the BLEU score of 36.6 for our deepest model when only one encoding column is used. We find a degradation of 1.1 BLEU points with a single encoding column. Note that the unidirectional models in Tab. 4 still have two encoding columns. In order to find out whether this is caused by the decreased parameter size, we test a wider model with 1024 memory blocks for the LSTM layers. It is shown in Tab. 6 that there is a minor improvement of only 0.1. We attribute this to the complementary information provided by the double encoding columns.
Models  FF  Enc depth  Dec depth  Col  Width  BLEU  

DeepAtt  Yes  9  7  2  512  37.7 
DeepAtt  Yes  9  7  1  512  36.6 
DeepAtt  Yes  9  7  1  1024  36.7 
English-to-German: We also validate our deep topology on the English-to-German task. The English-to-German task is considered relatively more difficult because of the lower similarity between these two languages. Since the German vocabulary is much larger than the French vocabulary, we select the 160K most frequent words as the target vocabulary. All the other hyperparameters are exactly the same as those in the English-to-French task.
We list our single model DeepAtt performance in Tab. 7. Our single model result with BLEU=20.6 is similar to the conventional SMT result of 20.7 [Buck et al.2014]. We also outperform the shallow attention models, as shown in the first two lines of Tab. 7. All the results are consistent with those on the English-to-French task.
Methods  Data  Voc  BLEU 

RNNsearch (Jean,2015)  4.5M  50K  16.5 
RNNsearchLV (Jean,2015)  4.5M  500K  16.9 
SMT (Buck,2014)  4.5M  Full  20.7 
DeepAtt (Ours)  4.5M  160K  20.6 
4.4.2 Post processing
Two post processing techniques are used to improve the performance further on the EnglishtoFrench task.
First, three DeepAtt models are combined for the ensemble results. They are initialized with different random parameters; in addition, the training corpus for these models is shuffled with different random seeds. We sum the predicted probabilities of the target words and normalize the final distribution to generate the next word. Tab. 8 shows that the model ensemble improves the performance further to 38.9. In [Luong et al.2015] and [Jean et al.2015], eight models are used for the best scores, but we only use three models and do not obtain further gains from more models.
Methods  Model  Data  Voc  BLEU 

DeepED  Single  36M  80K  36.3 
DeepAtt  Single  36M  80K  37.7 
DeepAtt  Single+PosUnk  36M  80K  39.2 
DeepAtt  Ensemble  36M  80K  38.9 
DeepAtt  Ensemble+PosUnk  36M  80K  40.4 
SMT  Durrani, 2014  36M  Full  37.0 
EncDec  Ensemble+PosUnk  36M  80K  37.5 
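The probability combination used for the ensemble above can be sketched as follows; the function name and the toy two-word vocabulary are illustrative:

```python
def ensemble_next_word_probs(per_model_probs):
    """Combine per-model distributions over the target vocabulary by summing
    and renormalizing, as done when the ensemble generates the next word."""
    vocab = len(per_model_probs[0])
    summed = [sum(p[i] for p in per_model_probs) for i in range(vocab)]
    total = sum(summed)
    return [s / total for s in summed]
```

Since each model's distribution already sums to one, summing and renormalizing is equivalent to taking the arithmetic mean of the distributions.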
Second, we recover the unknown words in the generated sequences with the Positional Unknown (PosUnk) model introduced in [Luong et al.2015]. The full parallel corpus is used to obtain the word mappings [Liang et al.2006]. We find this method provides an additional 1.5 BLEU points, which is consistent with the conclusion in [Luong et al.2015]. We obtain a new BLEU score of 39.2 with a single DeepAtt model. For the ensemble of DeepAtt models, the BLEU score rises to 40.4. In the last two lines, we list the conventional SMT model [Durrani et al.2014] and the previous best neural-model-based system EncDec [Luong et al.2015] for comparison. Our best score outperforms the previous best score by nearly 3 BLEU points.
4.5 Analysis
4.5.1 Length
On the English-to-French task, we analyze the effect of the source sentence length on our models, as shown in Fig. 3. Here we show five curves: our DeepAtt single model, our DeepAtt ensemble model, our DeepED model, a previously proposed EncDec model with four layers [Sutskever et al.2014] and an SMT model [Durrani et al.2014].
We find our DeepAtt model works better than the previous two models (EncDec and SMT) for nearly all sentence lengths. It is also shown that for very long sequences the performance of DeepAtt does not degrade when compared to the other NMT model, EncDec. Our DeepED also performs much better than the shallow EncDec model on nearly all lengths, although for long sequences it degrades and starts to fall behind DeepAtt.
4.5.2 Unknown words
Next we look in detail at the effect of unknown words on the English-to-French task. We select the subset of the original test set whose target sentences contain no unknown words; it covers 56.8% of the test set. We compute the BLEU scores on this subset, and the results are shown in Tab. 9. We also list the results from the SMT model [Durrani et al.2014] as a comparison.
We find that the BLEU score of DeepAtt on this subset rises to 40.3, a gap of 2.6 points with its score on the full test set. On this subset, the SMT model achieves 37.5, which is similar to its score on the full test set. This suggests that the difficulty of this subset is not much different from that of the full set. We therefore attribute the larger gap for DeepAtt to the existence of unknown words. We also compute the BLEU score of the ensemble model on this subset and obtain 41.4. As a reference related to human performance, [Sutskever et al.2014] reported that the BLEU score of oracle rescoring of the LIUM n-best results [Schwenk2014] is considerably higher.
Model  Test set  Ratio(%)  BLEU 

DeepAtt  Full  100.0  37.7 
Ensemble  Full  100.0  38.9 
SMT(Durrani)  Full  100.0  37.0 
DeepAtt  Subset  56.8  40.3 
Ensemble  Subset  56.8  41.4 
SMT(Durrani)  Subset  56.8  37.5 
4.5.3 Overfitting
Deep models have more parameters and thus a stronger ability to fit large data sets. However, our experimental results suggest that deep models are less prone to the problem of overfitting.
In Fig. 4, we show results from three models of different depths on the English-to-French task. These models are evaluated by token error rate, defined as the ratio of incorrectly predicted words in the target sequence when the model is given the correct history as input. The curves with square, circle, and triangle marks correspond to Deep-Att configurations of different depths. We find that, at the same training-set token error rate, the deeper model achieves better performance on the test set. This shows that, as the token error rate decreases, the deep model is more effective at avoiding overfitting. We only plot curves for the early training stage because the curves are not smooth during the late stage.
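The token error rate used here can be sketched as below. This is our own minimal illustration with toy predictions; it assumes each prediction was produced step by step from the correct (gold) history, i.e. under teacher forcing, as the definition above requires.

```python
def token_error_rate(references, predictions):
    """Fraction of positions where the predicted token differs from the
    reference token. predictions[i][t] is assumed to be the model's output
    at step t given the correct history references[i][:t] (teacher forcing)."""
    errors = total = 0
    for ref, pred in zip(references, predictions):
        for r, p in zip(ref, pred):
            total += 1
            errors += r != p
    return errors / total

refs = [["le", "chat", "dort"], ["la", "maison"]]
preds = [["le", "chien", "dort"], ["la", "maison"]]
print(token_error_rate(refs, preds))  # 1 error in 5 tokens -> 0.2
```

Unlike BLEU, this metric needs no decoding search, which makes it cheap to track during training.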
5 Conclusion
With the introduction of fast-forward connections to the deep LSTM network, we build a fast path, with neither nonlinear transformations nor recurrent computation, to propagate gradients from the top of the network to the bottom. On this path, gradients decay much more slowly than in a standard deep network. This enables us to build deep topologies for NMT models.
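The gradient-decay argument can be illustrated numerically. This toy computation is our own illustration, not the paper's model: it backpropagates a scalar through a chain of tanh nonlinearities via the chain rule and contrasts it with an identity (linear) path, whose derivative stays exactly 1 at every depth.

```python
import math

def grad_through_tanh_chain(x, depth):
    """|d out / d x| for out = tanh(tanh(... tanh(x))), computed by the
    chain rule as the product of tanh'(h) = 1 - tanh(h)^2 at each layer."""
    grad, h = 1.0, x
    for _ in range(depth):
        h = math.tanh(h)
        grad *= 1.0 - h * h
    return grad

# Sixteen stacked nonlinearities shrink the gradient by more than an order
# of magnitude, while a purely linear fast path keeps a derivative of 1.0.
print(grad_through_tanh_chain(1.0, 16))
```

The same product-of-derivatives effect, compounded further by recurrent steps, motivates keeping the fast path free of both nonlinearities and recurrence.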
We trained deep NMT models built from stacked LSTM layers and evaluated them mainly on the WMT'14 English-to-French translation task. This is the deepest topology that has been investigated in the NMT area on this task. We showed that our Deep-Att model achieves a BLEU score of 37.7, a large improvement over the previous best single model. This single end-to-end NMT model outperforms the best conventional SMT system [Durrani et al.2014] and achieves state-of-the-art performance. After applying unknown word processing and an ensemble of three models, the BLEU score is further improved. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. Our model is also validated on the more difficult English-to-German task.
Our model is also efficient in sequence generation. The best results from both the single model and the model ensemble are obtained with a small beam size, much smaller than in previous NMT systems [Jean et al.2015, Sutskever et al.2014]. From our analysis, we find that deep models are more advantageous for learning long sequences and that the deep topology is resistant to overfitting.
We tried deeper models but did not obtain further improvements with our current topology and training techniques. However, our models are still not very deep compared with those in computer vision [He et al.2016]. We believe deeper models can bring further benefits with new topology designs and training techniques, which we leave as future work.
References
 [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representations.
 [Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166.
 [Bengio2012] Yoshua Bengio. 2012. Practical Recommendations for Gradient-Based Training of Deep Architectures, pages 437–478. Springer Berlin Heidelberg, Berlin, Heidelberg.
 [Buck et al.2014] Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In Proceedings of the Language Resources and Evaluation Conference.

 [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empirical Methods in Natural Language Processing.
 [Durrani et al.2014] Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh’s phrase-based machine translation systems for WMT14. In Proceedings of the Ninth Workshop on Statistical Machine Translation.
 [Forcada and Ñeco1997] Mikel L. Forcada and Ramón P. Ñeco. 1997. Recursive heteroassociative memories for translation. In Biological and Artificial Computation: From Neuroscience to Technology, Berlin, Heidelberg. Springer Berlin Heidelberg.
 [Graves et al.2009] Alex Graves, Marcus Liwicki, Santiago Fernandez, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 2009. A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5):855–868.
 [He et al.2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition.
 [Hermann et al.2015] Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems.
 [Hinton et al.2012] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580.
 [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
 [Jean et al.2015] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.
 [Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the Empirical Methods in Natural Language Processing.
 [Kalchbrenner et al.2016] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2016. Grid long short-term memory. In Proceedings of International Conference on Learning Representations.
 [Kingma and Ba2015] Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations.
 [Koehn et al.2003] P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics on Human Language Technology.
 [LeCun et al.1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
 [Liang et al.2006] Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the North American Chapter of the Association of Computational Linguistics on Human Language Technology.
 [Luong et al.2015] Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.
 [Mao et al.2015] Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L. Yuille. 2015. Deep captioning with multimodal recurrent neural networks (mRNN). In Proceedings of International Conference on Learning Representations.
 [Schwenk2014] Holger Schwenk. 2014. http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper [Online; accessed 03-September-2014]. University Le Mans.
 [Srivastava et al.2015] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. In Proceedings of the 32nd International Conference on Machine Learning, Deep Learning Workshop.
 [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems.
 [Szegedy et al.2015] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition.
 [Tieleman and Hinton2012] Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5 - rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4.
 [Vinyals and Le2015] Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of the 32nd International Conference on Machine Learning, Deep Learning Workshop.
 [Yao et al.2015] Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. 2015. Depth-gated LSTM. arXiv:1508.03790.
 [Yu et al.2015] Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang, and Bowen Zhou. 2015. Empirical study on deep learning models for QA. arXiv:1510.07526.
 [Zeiler2012] Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv:1212.5701.
 [Zhou and Xu2015] Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.