Bridging the Gap between Training and Inference for Neural Machine Translation

Wen Zhang et al. · 06/06/2019

Neural Machine Translation (NMT) generates target words sequentially by predicting the next word conditioned on the context words. At training time, it predicts with the ground truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy in the fed context leads to error accumulation along the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground truth sequence, which leads to overcorrection of different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground truth sequence but also from the sequence predicted by the model during training, where the predicted sequence is selected with a sentence-level optimum. Experimental results on Chinese→English and WMT'14 English→German translation tasks demonstrate that our approach achieves significant improvements on multiple datasets.


1 Introduction

Neural Machine Translation has shown promising results and drawn increasing attention recently. Most NMT models fit in the encoder-decoder framework, including the RNN-based Sutskever et al. (2014); Bahdanau et al. (2015); Meng and Zhang (2019), the CNN-based Gehring et al. (2017) and the attention-based Vaswani et al. (2017) models, which predict the next word conditioned on the previous context words, deriving a language model over target words. At training time, the ground truth words are used as context, while at inference the entire sequence is generated by the resulting model on its own, and hence the previously generated words are fed as context. As a result, the predicted words at training and inference are drawn from different distributions, namely, from the data distribution as opposed to the model distribution. This discrepancy, called exposure bias Ranzato et al. (2015), leads to a gap between training and inference. As the target sequence grows, the errors accumulate along the sequence and the model has to predict under conditions it has never met at training time.

Intuitively, to address this problem, the model should be trained to predict under the same conditions it will face at inference. Inspired by Data As Demonstrator (DAD) Venkatraman et al. (2015), feeding as context both ground truth words and the predicted words during training can be a solution. NMT models usually optimize the cross-entropy loss, which requires a strict pairwise matching at the word level between the predicted sequence and the ground truth sequence. Once the model generates a word deviating from the ground truth sequence, the cross-entropy loss will correct the error immediately and draw the remaining generation back to the ground truth sequence. However, this causes a new problem: a sentence usually has multiple reasonable translations, and the model cannot be said to make a mistake even if it generates a word different from the ground truth word. For example,

reference: We should comply with the rule.
cand1: We should abide with the rule.
cand2: We should abide by the law.
cand3: We should abide by the rule.

once the model generates “abide” as the third target word, the cross-entropy loss would force the model to generate “with” as the fourth word (as in cand1) so as to produce a larger sentence-level likelihood and stay in line with the reference, although “by” is the right choice. Then, “with” will be fed as context to generate “the rule”; as a result, the model is taught to generate “abide with the rule”, which is actually wrong. The translation cand1 can be treated as an instance of the overcorrection phenomenon. Another potential error is that, even if the model predicts the right word “by” following “abide”, it may improperly produce “the law” when generating the subsequent translation by feeding “by” (as in cand2). Assume the references and the training criterion let the model memorize the pattern that the phrase “the rule” always follows the word “with”; then, to help the model recover from these two kinds of errors and create the correct translation like cand3, we should feed “with” as context rather than “by” even when the previously predicted phrase is “abide by”. We refer to this solution as Overcorrection Recovery (OR).

In this paper, we present a method to bridge the gap between training and inference and improve the overcorrection recovery capability of NMT. Our method first selects oracle words from the model's predicted words and then samples the context from the oracle words and the ground truth words. Meanwhile, the oracle words are selected not only with a word-by-word greedy search but also with a sentence-level evaluation, e.g., BLEU, which allows greater flexibility under the pairwise matching restriction of cross-entropy. At the beginning of training, the model selects ground truth words as context with a greater probability. As the model converges gradually, oracle words are chosen as context more often. In this way, the training process changes from a fully guided scheme towards a less guided scheme. Under this mechanism, the model has the chance to learn to handle the mistakes made at inference and also gains the ability to recover from overcorrection of alternative translations. We verify our approach on both the RNNsearch model and the stronger Transformer model. The results show that our approach can significantly improve the performance of both models.

2 RNN-based NMT Model

Our method can be applied to a variety of NMT models. Without loss of generality, we take the RNN-based NMT Bahdanau et al. (2015) as an example to introduce our method. Assume the source sequence and the observed translation are $\mathbf{x} = (x_1, \dots, x_{|\mathbf{x}|})$ and $\mathbf{y}^{*} = (y^{*}_1, \dots, y^{*}_{|\mathbf{y}^{*}|})$, respectively.

Encoder. A bidirectional Gated Recurrent Unit (GRU) Cho et al. (2014) is used to acquire two sequences of hidden states; the annotation of $x_i$ is $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$. Note that $e(x_i)$ is employed to represent the embedding vector of the word $x_i$.

$$\overrightarrow{h}_i = \overrightarrow{\mathrm{GRU}}\big(e(x_i), \overrightarrow{h}_{i-1}\big) \quad (1)$$
$$\overleftarrow{h}_i = \overleftarrow{\mathrm{GRU}}\big(e(x_i), \overleftarrow{h}_{i+1}\big) \quad (2)$$
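
To make the encoder concrete, here is a minimal sketch of Equations (1)–(2) in PyTorch; the module and parameter names (vocab_size, emb_dim, hid_dim) are illustrative assumptions rather than the authors' implementation or settings.

```python
# A minimal sketch of the bidirectional GRU encoder in Eqs. (1)-(2) (PyTorch assumed;
# class and parameter names are illustrative, not the paper's configuration).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int, hid_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # e(x_i)
        # bidirectional=True runs the forward and backward GRUs of Eqs. (1)-(2)
        self.birnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, src_tokens: torch.Tensor) -> torch.Tensor:
        emb = self.embed(src_tokens)         # (batch, src_len, emb_dim)
        annotations, _ = self.birnn(emb)     # (batch, src_len, 2 * hid_dim)
        # each annotation h_i concatenates the forward and backward hidden states
        return annotations
```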

Attention. The attention is designed to extract source information (called the source context vector). At the $j$-th step, the relevance $r_{ji}$ between the target word $y_j$ and the $i$-th source word is evaluated and normalized over the source sequence:

$$r_{ji} = v_a^{\top}\tanh\big(W_a s_{j-1} + U_a h_i\big) \quad (3)$$
$$\alpha_{ji} = \frac{\exp(r_{ji})}{\sum_{i'=1}^{|\mathbf{x}|}\exp(r_{ji'})} \quad (4)$$

The source context vector $c_j$ is the weighted sum of all source annotations and can be calculated by

$$c_j = \sum_{i=1}^{|\mathbf{x}|} \alpha_{ji} h_i \quad (5)$$

Decoder. The decoder employs a variant of GRU to unroll the target information. At the $j$-th step, the target hidden state $s_j$ is given by

$$s_j = \mathrm{GRU}\big(e(y^{*}_{j-1}), s_{j-1}, c_j\big) \quad (6)$$

The probability distribution $P_j$ over all the words in the target vocabulary is produced conditioned on the embedding of the previous ground truth word, the source context vector and the hidden state:

$$t_j = g\big(e(y^{*}_{j-1}), c_j, s_j\big) \quad (7)$$
$$o_j = W_o t_j \quad (8)$$
$$P_j = \mathrm{softmax}(o_j) \quad (9)$$

where $g(\cdot)$ stands for a linear transformation and $W_o$ is used to map $t_j$ to $o_j$ so that each target word has one corresponding dimension in $o_j$.
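
The attention and the decoder can likewise be sketched as one decoder step. The following is a hedged PyTorch illustration of Equations (3)–(9), reusing the annotations from the encoder sketch above; the layer sizes, the use of a GRUCell fed with the context vector, and the tanh output layer are simplifying assumptions, not the authors' implementation.

```python
# A minimal sketch of one decoder step with attention (Eqs. (3)-(9)); illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStep(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int, hid_dim: int, enc_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # e(y_{j-1})
        self.W_a = nn.Linear(hid_dim, hid_dim, bias=False)        # W_a s_{j-1}
        self.U_a = nn.Linear(enc_dim, hid_dim, bias=False)        # U_a h_i
        self.v_a = nn.Linear(hid_dim, 1, bias=False)              # v_a^T tanh(.)
        self.gru = nn.GRUCell(emb_dim + enc_dim, hid_dim)         # Eq. (6), c_j fed as input
        self.g = nn.Linear(emb_dim + enc_dim + hid_dim, hid_dim)  # Eq. (7)
        self.W_o = nn.Linear(hid_dim, vocab_size)                 # Eq. (8)

    def forward(self, prev_word, s_prev, annotations):
        emb = self.embed(prev_word)                                 # (batch, emb_dim)
        # Eqs. (3)-(4): relevance scores and normalized attention weights
        scores = self.v_a(torch.tanh(self.W_a(s_prev).unsqueeze(1)
                                     + self.U_a(annotations))).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                           # (batch, src_len)
        # Eq. (5): source context vector as the weighted sum of the annotations
        c_j = torch.bmm(alpha.unsqueeze(1), annotations).squeeze(1)
        # Eq. (6): update the target hidden state
        s_j = self.gru(torch.cat([emb, c_j], dim=-1), s_prev)
        # Eqs. (7)-(9): scores o_j and word distribution P_j over the target vocabulary
        o_j = self.W_o(torch.tanh(self.g(torch.cat([emb, c_j, s_j], dim=-1))))
        P_j = F.softmax(o_j, dim=-1)
        return P_j, s_j, o_j   # o_j is kept for the Gumbel trick in Section 3.1
```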

3 Approach

The main framework of our method (as shown in Figure 1) is to feed as context either the ground truth words or the previously predicted words, i.e., oracle words, with a certain probability. This can potentially reduce the gap between training and inference by training the model to handle the situation it will face at test time. We introduce two methods to select the oracle words. One method selects the oracle words at the word level with a greedy search algorithm, and the other selects an oracle sequence at the sentence-level optimum. The sentence-level oracle provides an option of $n$-gram matching with the ground truth sequence and hence inherently has the ability to recover from overcorrection for the alternative context. To predict the $j$-th target word $y_j$, the following steps are involved in our approach:

  • Select an oracle word $y^{\mathrm{oracle}}_{j-1}$ (at the word level or sentence level) at the $(j{-}1)$-th step. (Section Oracle Word Selection)

  • Sample the ground truth word $y^{*}_{j-1}$ with a probability of $p$ or the oracle word $y^{\mathrm{oracle}}_{j-1}$ with a probability of $1-p$. (Section Sampling with Decay)

  • Use the sampled word as $y_{j-1}$, replace the $y^{*}_{j-1}$ in Equations (6) and (7) with it, and then perform the subsequent prediction of the attention-based NMT (a sketch of this training loop is given below).
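
A hedged sketch of this per-step sampling loop follows; it assumes the oracle words have already been computed for every step (word-level or sentence-level), and the function and argument names are illustrative, not the authors' code.

```python
# A minimal sketch of training-time decoding with sampled contexts (Section 3):
# at each step, either the ground truth word or the oracle word is fed as y_{j-1}.
import random
import torch

def decode_with_sampled_context(decoder_step, bos, ground_truth, oracle,
                                s_0, annotations, p):
    """ground_truth / oracle: (batch, tgt_len) word ids; p: prob. of feeding ground truth."""
    distributions = []
    prev_word, s_prev = bos, s_0                    # y_0 is the begin-of-sentence symbol
    for j in range(ground_truth.size(1)):
        P_j, s_j, _ = decoder_step(prev_word, s_prev, annotations)
        distributions.append(P_j)
        # sample the context word for the next step (Step 2 above)
        if random.random() < p:
            prev_word = ground_truth[:, j]          # feed the ground truth word y*_j
        else:
            prev_word = oracle[:, j]                # feed the oracle word
        s_prev = s_j
    return torch.stack(distributions, dim=1)        # (batch, tgt_len, vocab)
```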

Figure 1: The architecture of our method.

3.1 Oracle Word Selection

Generally, at the $j$-th step, the NMT model needs the ground truth word $y^{*}_{j-1}$ as the context word to predict $y_j$; thus, we could select an oracle word $y^{\mathrm{oracle}}_{j-1}$ to simulate the context word. The oracle word should be a word similar to the ground truth or a synonym. Different selection strategies produce different oracle words. One option is to employ word-level greedy search to output the oracle word at each step, which we call the Word-level Oracle (denoted as WO). Besides, we can further optimize the oracle by enlarging the search space with beam search and then re-ranking the candidate translations with a sentence-level metric, e.g., BLEU Papineni et al. (2002), GLEU Wu et al. (2016), ROUGE Lin (2004), etc. The selected translation is called the oracle sentence, and the words in this translation form the Sentence-level Oracle (denoted as SO).

Word-Level Oracle

For the $(j{-}1)$-th decoding step, the most direct way to select the word-level oracle is to pick the word with the highest probability from the word distribution $P_{j-1}$ drawn by Equation (9), which is shown in Figure 2. The predicted score in $o_{j-1}$ is the value before the softmax operation. In practice, we can acquire more robust word-level oracles by introducing the Gumbel-Max technique Gumbel (1954); Maddison et al. (2014), which provides a simple and efficient way to sample from a categorical distribution.

The Gumbel noise, treated as a form of regularization, is added to $o_{j-1}$ in Equation (8), as shown in Figure 3; then the softmax function is performed, and the word distribution of $y_{j-1}$ is approximated by

$$\eta = -\log(-\log u) \quad (10)$$
$$\tilde{o}_{j-1} = (o_{j-1} + \eta)\,/\,\tau \quad (11)$$
$$\tilde{P}_{j-1} = \mathrm{softmax}(\tilde{o}_{j-1}) \quad (12)$$

where $\eta$ is the Gumbel noise calculated from a uniform random variable $u \sim U(0,1)$, and $\tau$ is the temperature. As $\tau$ approaches 0, the softmax function approximates the argmax operation, and the distribution gradually becomes uniform as $\tau \rightarrow \infty$. Similarly, according to $\tilde{P}_{j-1}$, the 1-best word is selected as the word-level oracle word

$$y^{\mathrm{oracle}}_{j-1} = y^{\mathrm{WO}}_{j-1} = \operatorname*{argmax}_{y} \tilde{P}_{j-1}(y) \quad (13)$$

Note that the Gumbel noise is just used to select the oracle and it does not affect the loss function for training.
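
As an illustration, a minimal sketch of Equations (10)–(13) is shown below; the tensor shapes and the default temperature are assumptions, and the noised scores are used only to pick the oracle word, never in the loss.

```python
# A minimal sketch of Gumbel-Max word-level oracle selection (Eqs. (10)-(13)).
import torch
import torch.nn.functional as F

def word_level_oracle(o_prev: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """o_prev: (batch, vocab) pre-softmax scores; returns oracle word ids of shape (batch,)."""
    u = torch.rand_like(o_prev).clamp_(min=1e-20)       # u ~ Uniform(0, 1)
    eta = -torch.log(-torch.log(u))                     # Gumbel noise, Eq. (10)
    p_tilde = F.softmax((o_prev + eta) / tau, dim=-1)   # Eqs. (11)-(12)
    return p_tilde.argmax(dim=-1)                       # 1-best oracle word, Eq. (13)
```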

Figure 2: Word-level oracle without noise.
Figure 3: Word-level oracle with Gumbel noise.

Sentence-Level Oracle

The sentence-level oracle is employed to allow for more flexible translation with the $n$-gram matching required by a sentence-level metric. In this paper, we employ BLEU as the sentence-level metric. To select the sentence-level oracles, we first perform beam search for all sentences in each batch, assuming the beam size is $k$, and get the $k$-best candidate translations. In the process of beam search, we can also apply the Gumbel noise to each word generation. We then evaluate each candidate translation by calculating its BLEU score against the ground truth sequence and use the translation with the highest BLEU score as the oracle sentence. We denote it as $\mathbf{y}^{\mathrm{S}} = (y^{\mathrm{S}}_1, \dots, y^{\mathrm{S}}_{|\mathbf{y}^{\mathrm{S}}|})$; then, at the $j$-th decoding step, we define the sentence-level oracle word as

$$y^{\mathrm{oracle}}_{j-1} = y^{\mathrm{S}}_{j-1} \quad (14)$$

However, a problem comes with the sentence-level oracle: as the model samples from the ground truth word and the sentence-level oracle word at each step, the two sequences should have the same number of words, which we cannot guarantee with the naive beam search decoding algorithm. To address this, we introduce force decoding to make sure the two sequences have the same length.

Force Decoding. As the length of the ground truth sequence is $|\mathbf{y}^{*}|$, the goal of force decoding is to generate a sequence with $|\mathbf{y}^{*}|$ words followed by a special end-of-sentence (EOS) symbol. Therefore, in beam search, once a candidate translation tends to end with EOS while being shorter or longer than $|\mathbf{y}^{*}|$, we force it to generate exactly $|\mathbf{y}^{*}|$ words, that is,

  • If the candidate translation gets a word distribution $P_j$ at the $j$-th step where $j \le |\mathbf{y}^{*}|$ and EOS is the top-ranked word in $P_j$, then we select the second-ranked word in $P_j$ as the $j$-th word of this candidate translation.

  • If the candidate translation gets a word distribution $P_{|\mathbf{y}^{*}|+1}$ at the $(|\mathbf{y}^{*}|{+}1)$-th step in which EOS is not the top-ranked word, then we select EOS as the $(|\mathbf{y}^{*}|{+}1)$-th word of this candidate translation.

In this way, we can make sure that all the candidate translations have $|\mathbf{y}^{*}|$ words; we then re-rank the candidates according to their BLEU scores and select the top one as the oracle sentence. To add Gumbel noise into the sentence-level oracle selection, we replace $P_j$ with $\tilde{P}_j$ at the $j$-th decoding step during force decoding.
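
The length control can be sketched for a single hypothesis as follows; the per-hypothesis interface is an illustrative assumption (a real beam search would apply this to every hypothesis in the beam), and the 1-based step index follows the description above.

```python
# A minimal sketch of force decoding (Section 3.1): keep every candidate exactly
# |y*| words long before EOS, following the two rules above.
import torch

def force_decode_step(P_j: torch.Tensor, j: int, tgt_len: int, eos_id: int) -> int:
    """P_j: (vocab,) word distribution at step j (1-based); tgt_len is |y*|."""
    top2 = torch.topk(P_j, k=2).indices.tolist()
    if j <= tgt_len:
        # Rule 1: do not end early; if EOS ranks first, take the second-best word instead.
        return top2[1] if top2[0] == eos_id else top2[0]
    # Rule 2: at step |y*| + 1, force EOS so that the hypothesis has exactly |y*| words.
    return eos_id
```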

3.2 Sampling with Decay

In our method, we employ a sampling mechanism to randomly select either the ground truth word $y^{*}_{j-1}$ or the oracle word $y^{\mathrm{oracle}}_{j-1}$ as $y_{j-1}$. At the beginning of training, as the model is not yet well trained, using $y^{\mathrm{oracle}}_{j-1}$ as $y_{j-1}$ too often would lead to very slow convergence, or even being trapped in a local optimum. On the other hand, if at the end of training the context is still selected from the ground truth words with a large probability, the model is not fully exposed to the circumstances it has to confront at inference and hence cannot know how to act in those situations. In this sense, the probability $p$ of selecting the ground truth word cannot be fixed but has to decrease progressively as the training advances. At the beginning, $p$ is close to $1$, which means the model is trained almost entirely on the ground truth words. As the model converges gradually, it selects the oracle words more often.

Borrowing ideas from, but differing from, Bengio et al. (2015), which used a schedule to decrease $p$ as a function of the index of the mini-batch, we define $p$ with a decay function dependent on the index $e$ of the training epoch (starting from $0$):

$$p = \frac{\mu}{\mu + \exp(e/\mu)} \quad (15)$$

where $\mu$ is a hyper-parameter. The function is strictly monotonically decreasing in $e$. As the training proceeds, the probability of feeding ground truth words decreases gradually.
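
For illustration, the decay schedule of Equation (15) can be written as a one-line function; the value of $\mu$ below is only an example, not the setting used in the experiments.

```python
# A minimal sketch of the decay schedule in Eq. (15): the probability p of feeding the
# ground truth word decreases monotonically with the training epoch e.
import math

def ground_truth_prob(epoch: int, mu: float) -> float:
    """Probability p of sampling the ground truth word at training epoch `epoch` (from 0)."""
    return mu / (mu + math.exp(epoch / mu))

# Example: with an illustrative mu = 12, p starts near 1 and decays as training proceeds.
# [round(ground_truth_prob(e, 12.0), 3) for e in (0, 5, 10, 20, 40)]
```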

3.3 Training

After selecting $y_{j-1}$ by the above method, we can get the word distribution of $y_j$ according to Equations (6), (7), (8) and (9). Note that we do not add the Gumbel noise to the distribution when calculating the loss for training. The objective is to maximize the probability of the ground truth sequence based on maximum likelihood estimation (MLE). Thus, the following loss function is minimized:

$$\mathcal{L}(\theta) = -\sum_{n=1}^{N}\sum_{j=1}^{|\mathbf{y}^{n}|} \log P^{\,n}_{j}\big[y^{*n}_{j}\big] \quad (16)$$

where $N$ is the number of sentence pairs in the training data, $|\mathbf{y}^{n}|$ indicates the length of the $n$-th ground truth sentence, $P^{\,n}_{j}$ refers to the predicted probability distribution at the $j$-th step for the $n$-th sentence, and hence $P^{\,n}_{j}\big[y^{*n}_{j}\big]$ is the probability of generating the ground truth word $y^{*n}_{j}$ at the $j$-th step.
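
Note that even when oracle words are fed as context, the loss in Equation (16) is still computed against the ground truth words. The following is a minimal sketch of this objective under assumed tensor shapes; padding and masking are omitted for brevity.

```python
# A minimal sketch of the MLE objective in Eq. (16): negative log-likelihood of the
# ground truth words under the predicted (noise-free) distributions.
import torch

def mle_loss(distributions: torch.Tensor, ground_truth: torch.Tensor) -> torch.Tensor:
    """distributions: (batch, tgt_len, vocab) softmax outputs; ground_truth: (batch, tgt_len)."""
    probs = distributions.gather(-1, ground_truth.unsqueeze(-1)).squeeze(-1)  # P_j[y*_j]
    return -probs.clamp_min(1e-12).log().sum()   # summed over steps and sentence pairs
```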

4 Related Work

Some other researchers have noticed the problem of exposure bias in NMT and tried to solve it. Venkatraman et al. (2015) proposed Data As Demonstrator (DAD), which initialized the training examples as pairs of adjacent ground truth words and, at each step, added the predicted word paired with the next ground truth word as a new training example. Bengio et al. (2015) further developed the method by sampling the context from the previous ground truth word and the previous predicted word with a changing probability, not treating them equally throughout the training process. This is similar to our method, but they include neither the sentence-level oracle to relieve the overcorrection problem nor the noise perturbations on the predicted distribution.

Another line of work is sentence-level training, based on the idea that a sentence-level metric, e.g., BLEU, brings a certain degree of flexibility to generation and is hence more robust in mitigating the exposure bias problem. To avoid exposure bias, Ranzato et al. (2015) presented a novel algorithm, Mixed Incremental Cross-Entropy Reinforce (MIXER), for sequence-level training, which directly optimizes the sentence-level BLEU used at inference. Shen et al. (2016) introduced Minimum Risk Training (MRT) into the end-to-end NMT model, which optimizes model parameters by directly minimizing the expected loss with respect to arbitrary evaluation metrics, e.g., sentence-level BLEU. Shao et al. (2018) proposed to eliminate exposure bias through a probabilistic n-gram matching objective, which trains NMT under the greedy decoding strategy.

5 Experiments

We carry out experiments on the NIST Chinese→English (Zh→En) and the WMT'14 English→German (En→De) translation tasks.

5.1 Settings

For Zh→En, the training dataset consists of 1.25M sentence pairs extracted from LDC corpora (mainly LDC2002E18, LDC2003E07, LDC2003E14, the Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06). We choose the NIST 2002 (MT02) dataset as the validation set, and the NIST 2003 (MT03), NIST 2004 (MT04), NIST 2005 (MT05) and NIST 2006 (MT06) datasets as the test sets, which contain 919, 1788, 1082 and 1664 sentences respectively. For En→De, we perform our experiments on the corpus provided by WMT'14, which contains 4.5M sentence pairs (http://www.statmt.org/wmt14/translation-task.html). We use newstest2013 as the validation set and newstest2014 as the test set. We measure translation quality with BLEU scores Papineni et al. (2002). For Zh→En, case-insensitive BLEU scores are calculated using the mteval-v11b.pl script. For En→De, we tokenize the references and evaluate the performance with case-sensitive BLEU scores using the multi-bleu.pl script. The metrics are exactly the same as in previous work. Besides, we perform statistical significance tests according to the method of Collins et al. (2005).

Systems Architecture MT03 MT04 MT05 MT06 Average
Existing end-to-end NMT systems
 Tu et al. (2016) Coverage
 Shen et al. (2016) MRT
 Zhang et al. (2017) Distortion
Our end-to-end NMT systems
this work RNNsearch
   + SS-NMT
   + MIXER
   + OR-NMT 40.40 42.63 38.87 38.44 40.09
Transformer
   + word oracle
   + sentence oracle 48.31 49.40 48.72 48.45 48.72
Table 1: Case-insensitive BLEU scores (%) on the Zh→En translation task. Markers in the table indicate a statistically significant difference (p<0.01) from RNNsearch, SS-NMT, MIXER and Transformer, respectively.

In training the NMT models, we limit the source and target vocabularies to the most frequent words on both sides of the Zh→En translation task. For the En→De translation task, sentences are encoded using byte-pair encoding (BPE) Sennrich et al. (2016) for both source and target languages. We also limit the length of the sentences in the training datasets for both tasks. For the RNNSearch model, all parameters are initialized from a uniform distribution and trained with the mini-batch stochastic gradient descent (SGD) algorithm; the learning rate is adjusted by the adadelta optimizer Zeiler (2012), dropout is applied on the output layer, and beam search is used in testing. For the Transformer model, we train the base model with the default settings (fairseq: https://github.com/pytorch/fairseq).

5.2 Systems

The following systems are involved:

RNNsearch:

Our implementation of an improved model as described in Section 2, where the decoder employs two GRUs and an attention mechanism. Specifically, Equation (6) is substituted with:

$$s'_j = \mathrm{GRU}_1\big(e(y^{*}_{j-1}), s_{j-1}\big) \quad (17)$$
$$s_j = \mathrm{GRU}_2\big(s'_j, c_j\big) \quad (18)$$

Besides, in Equation (3), $s_{j-1}$ is replaced with $s'_j$.

SS-NMT:

Our implementation of the scheduled sampling (SS) method Bengio et al. (2015) on the basis of the RNNsearch. The decay scheme is the same as Equation 15 in our approach.

MIXER:

Our implementation of the Mixed Incremental Cross-Entropy Reinforce method Ranzato et al. (2015), where the sentence-level metric is BLEU and the average reward is acquired according to its offline method with a linear regressor.

OR-NMT:

Based on the RNNsearch, we introduce the word-level oracles, sentence-level oracles and the Gumbel noise to enhance the overcorrection recovery capacity. For the sentence-level oracle selection, we fix the beam size, the temperature $\tau$ in Equation (11) and the hyper-parameter $\mu$ of the decay function in Equation (15). OR-NMT is the abbreviation of NMT with Overcorrection Recovery.

5.3 Results on Zh→En Translation

We verify our method on two baseline models with the NIST Zh→En datasets in this section.

Results on the RNNsearch

As shown in Table 1, Tu et al. (2016) propose to model coverage in RNN-based NMT to improve the adequacy of translations. Shen et al. (2016) propose minimum risk training (MRT) for NMT to directly optimize model parameters with respect to BLEU scores. Zhang et al. (2017) model distortion to enhance the attention model. Compared with them, our baseline system RNNsearch 1) outperforms the previous shallow RNN-based NMT system equipped with the coverage model Tu et al. (2016); and 2) achieves performance competitive with the MRT Shen et al. (2016) and the Distortion Zhang et al. (2017) models on the same datasets. We hope that the strong shallow baseline system used in this work makes the evaluation convincing.

We also compare with two other related methods that aim at solving the exposure bias problem: scheduled sampling Bengio et al. (2015) (SS-NMT) and sentence-level training Ranzato et al. (2015) (MIXER). From Table 1, we can see that both SS-NMT and MIXER achieve improvements by taking measures to mitigate the exposure bias, while our approach OR-NMT outperforms the baseline system RNNsearch and the competitive comparison systems by directly incorporating the sentence-level oracle and noise perturbations to relieve the overcorrection problem. In particular, our OR-NMT significantly outperforms RNNsearch in average BLEU over the four test datasets, and compared with the two related models, it further gives significant improvements on most test sets and a clear gain in average BLEU.

Results on the Transformer

The methods we propose can also be adapted to the stronger Transformer model. The evaluation results are listed in Table 1. Our word-level method improves the base model on average, and the sentence-level method brings a further improvement.

Systems Average
RNNsearch
   + word oracle
        + noise
   + sentence oracle
        + noise 40.09
Table 2: Factor analysis on the Zh→En translation task; the results are average BLEU scores on the MT03–06 datasets.

5.4 Factor Analysis

We propose several strategies to improve the performance of our approach in relieving the overcorrection problem, including utilizing the word-level oracle, the sentence-level oracle, and incorporating the Gumbel noise for oracle selection. To investigate the influence of these factors, we conduct experiments and list the results in Table 2.

When only employing the word-level oracle, the translation performance is improved over the baseline; this indicates that feeding predicted words as context can mitigate exposure bias. When employing the sentence-level oracle, we achieve a further improvement, which shows that the sentence-level oracle performs better than the word-level oracle in terms of BLEU. We conjecture that the superiority comes from the greater flexibility for word generation, which can mitigate the problem of overcorrection. By incorporating the Gumbel noise during the generation of the word-level and sentence-level oracle words, the BLEU scores are further improved. This indicates that Gumbel noise helps the selection of each oracle word, which is consistent with our claim that Gumbel-Max provides an efficient and robust way to sample from a categorical distribution.

Figure 4: Training loss curves on the Zh→En translation task with different factors. The black, blue and red colors represent the RNNsearch, RNNsearch with word-level oracle and RNNsearch with sentence-level oracle systems, respectively.

5.5 About Convergence

In this section, we analyze the influence of different factors on convergence. Figure 4 gives the training loss curves of the RNNsearch, the word-level oracle (WO) without noise and the sentence-level oracle (SO) with noise. In training, the BLEU score on the validation set is used to select the best model; a detailed comparison of the BLEU score curves under different factors is shown in Figure 5. RNNsearch converges fast and achieves its best validation result early in training, while its training loss continues to decline until the end. Thus, the training of RNNsearch may encounter an overfitting problem.

Figure 5: Trends of BLEU scores on the validation set with different factors on the Zh→En translation task.

Figures 4 and 5 also reveal that integrating the oracle sampling and the Gumbel noise leads to slightly slower convergence, and the training loss does not keep decreasing after the best results appear on the validation set. This is consistent with our intuition that oracle sampling and noise can avoid overfitting, despite needing a longer time to converge.

Figure 6 shows the BLEU score curves on the MT03 test set under different factors (note that the “SO” model without noise is trained based on the pre-trained RNNsearch model, as shown by the red dashed lines in Figures 5 and 6). When sampling oracles with Gumbel noise on the sentence level, we obtain the best model. Without noise, our system converges to a lower BLEU score. This can be understood easily: using the model's own results repeatedly during training without any regularization leads to overfitting and quick convergence. In this sense, our method benefits from the sentence-level sampling and the Gumbel noise.

Figure 6: Trends of BLEU scores on the MT03 test set with different factors on the Zh→En translation task.

5.6 About Length

Figure 7 shows the BLEU scores of generated translations on the MT03 test set with respect to the lengths of the source sentences. In particular, we split the translations of the MT03 test set into different bins according to the length of the source sentences, and then test the BLEU scores of the translations in each bin separately, with the results reported in Figure 7. Our approach achieves notable improvements over the baseline system in all bins, especially in the bins containing the longest source sentences. The cross-entropy loss requires that the predicted sequence be exactly the same as the ground truth sequence, which is more difficult to achieve for long sentences, while our sentence-level oracle can help recover from this kind of overcorrection.

Figure 7: Performance comparison on the MT03 test set with respect to different lengths of source sentences on the Zh→En translation task.

5.7 Effect on Exposure Bias

To validate whether the improvements are mainly obtained by addressing the exposure bias problem, we randomly select sentence pairs from the Zh→En training data and use the pre-trained RNNSearch model and the proposed model to decode the source sentences; our model yields a higher BLEU score than the RNNSearch model on these sentences. We then count the ground truth words whose probabilities under the predicted distributions produced by our model are greater than those produced by the baseline model. A large proportion of the gold words in the references receive higher probabilities from our model, which verifies that the improvements are mainly obtained by addressing the exposure bias problem.
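
A hedged sketch of this probe is shown below; it assumes the two models' predicted distributions have been aligned to the same gold positions (e.g., by teacher forcing on the sampled pairs), and the function name is illustrative.

```python
# A minimal sketch of the exposure-bias probe: count how often the proposed model assigns
# a higher probability to the ground truth word than the baseline does.
import torch

def fraction_preferred(p_ours: torch.Tensor, p_base: torch.Tensor,
                       gold: torch.Tensor) -> float:
    """p_ours, p_base: (num_words, vocab) distributions at the gold positions;
    gold: (num_words,) ground truth word ids."""
    ours = p_ours.gather(-1, gold.unsqueeze(-1)).squeeze(-1)
    base = p_base.gather(-1, gold.unsqueeze(-1)).squeeze(-1)
    return (ours > base).float().mean().item()   # proportion of gold words preferred by ours
```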

5.8 Results on En→De Translation

Systems
RNNsearch
   + SS-NMT
   + MIXER
   + OR-NMT 27.41
Transformer (base)
   + SS-NMT
   + MIXER
   + OR-NMT 28.65
Table 3: Case-sensitive BLEU scores (%) on the En→De task. Markers indicate results that are significantly better (p<0.01) than RNNsearch and Transformer.

We also evaluate our approach on the WMT'14 benchmark for the En→De translation task. From the results listed in Table 3, we conclude that the proposed method significantly outperforms the competitive baseline models as well as the related approaches. Similar to the results on the Zh→En task, both scheduled sampling and MIXER improve the two baseline systems, and our method improves both the RNNSearch and Transformer baselines. These results demonstrate that our model works well across different language pairs.

6 Conclusion

The end-to-end NMT model generates a translation word by word, using the ground truth words as context at training time as opposed to the previously generated words as context at inference. To mitigate the discrepancy between training and inference, when predicting one word, we feed as context either the ground truth word or the previously predicted word with a sampling scheme. The predicted words, referred to as oracle words, can be generated with word-level or sentence-level optimization. Compared to the word-level oracle, the sentence-level oracle further equips the model with the ability of overcorrection recovery. To make the model fully exposed to the circumstances at inference, we sample the context word with decay from the ground truth words. We verified the effectiveness of our method with two strong baseline models and related works on real translation tasks, achieving significant improvements on all the datasets. We also conclude that the sentence-level oracle shows superiority over the word-level oracle.

Acknowledgments

We thank the three anonymous reviewers for their valuable suggestions. This work was supported by National Natural Science Foundation of China (NO. 61662077, NO. 61876174) and National Key R&D Program of China (NO. YS2017YFGH001428).

References

  • Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR 2015.
  • Bengio et al. (2015) Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1171–1179. Curran Associates, Inc.
  • Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics.
  • Collins et al. (2005) Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 531–540, Ann Arbor, Michigan. Association for Computational Linguistics.
  • Gehring et al. (2017) Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1243–1252, International Convention Centre, Sydney, Australia. PMLR.
  • Gumbel (1954) Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications. Nat. Bur. Standards Appl. Math. Ser. 33.
  • Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
  • Maddison et al. (2014) Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3086–3094. Curran Associates, Inc.
  • Meng and Zhang (2019) Fandong Meng and Jinchao Zhang. 2019. DTMT: A novel deep transition architecture for neural machine translation. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, AAAI'19. AAAI Press.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics.
  • Ranzato et al. (2015) Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.
  • Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
  • Shao et al. (2018) Chenze Shao, Xilin Chen, and Yang Feng. 2018. Greedy search with probabilistic n-gram matching for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4778–4784.
  • Shen et al. (2016) Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1683–1692.
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc.
  • Tu et al. (2016) Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc.
  • Venkatraman et al. (2015) Arun Venkatraman, Martial Hebert, and J. Andrew Bagnell. 2015. Improving multi-step prediction of learned time series models. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI’15, pages 3024–3030. AAAI Press.
  • Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
  • Zeiler (2012) Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
  • Zhang et al. (2017) Jinchao Zhang, Mingxuan Wang, Qun Liu, and Jie Zhou. 2017. Incorporating word reordering knowledge into attention-based neural machine translation. In Proceedings of ACL.