Neural Machine Translation (NMT) [Cho et al.2014, Bahdanau et al.2014] has drawn increasing attention in both academia and industry [Luong and Manning2016, Jean et al.2015, Shen et al.2016, Tu et al.2016b, Sennrich et al.2016, Wu et al.2016]. Compared with traditional Statistical Machine Translation (SMT) [Koehn et al.2003], NMT achieves similar or even better translation results in an end-to-end framework. The sentence-level maximum likelihood principle and the gating units in LSTM/GRU [Hochreiter and Schmidhuber1997, Cho et al.2014], together with attention mechanisms, grant NMT the ability to better translate long sentences.
Despite its success, the translation quality of the latest NMT systems is still far from satisfactory, and there remains large room for improvement. For example, NMT usually adopts the Maximum Likelihood Estimation (MLE) principle for training, i.e., maximizing the probability of the ground-truth target sentence conditioned on the source sentence. Such an objective does not guarantee that the translation results are as natural, sufficient, and accurate as the ground-truth translations by humans. Previous works [Ranzato et al.2015, Shen et al.2016, Bahdanau et al.2016] aim to alleviate this limitation of maximum likelihood training by adopting sequence-level objectives (e.g., directly maximizing BLEU [Papineni et al.2002]) to reduce the objective inconsistency between NMT training and inference. Although somewhat improved, such objectives still cannot fully bridge the gap between NMT translations and ground-truth translations.
In this paper, we adopt a thoroughly different training objective for NMT, aiming to directly minimize the difference between human translation and the translation produced by an NMT model. To achieve this, inspired by the recent success of Generative Adversarial Networks (GANs) [Goodfellow et al.2014a], we design an adversarial training protocol for NMT and name it Adversarial-NMT. In Adversarial-NMT, besides the typical NMT model, an adversary is introduced to distinguish translations generated by the NMT model from those by humans (i.e., the ground truth). Meanwhile, the NMT model tries to improve its translation results such that it can successfully fool the adversary.
These two modules in Adversarial-NMT are co-trained, and their performance gets mutually improved. In particular, the discriminative power of the adversary can be improved by learning from more and more training samples (both positive ones generated by humans and negative ones sampled from the NMT model), and the ability of the NMT model to fool the adversary can be improved by taking the output of the adversary as a reward. In this way, the NMT translation results are professor forced [Lamb et al.2016] to be as close as possible to the ground-truth translations.
Different from previous GANs, which assume a generator in continuous space, in our proposed framework the NMT model is in fact not a typical generative model, but instead a probabilistic transformation that maps a source language sentence to a target language sentence, both in discrete space. Such differences make it necessary to design both new network architectures and new optimization methods to make adversarial training possible for NMT. We therefore, on the one hand, leverage a specially designed Convolutional Neural Network (CNN) model as the adversary, which takes the (source, target) sentence pair as input; on the other hand, we turn to a policy gradient method named REINFORCE [Williams1992], widely used in the reinforcement learning literature [Sutton and Barto1998], to guarantee that both modules are effectively optimized in an adversarial manner. We conduct extensive experiments, which demonstrate that Adversarial-NMT can achieve significantly better translation results than traditional NMT models, even those with much larger vocabulary sizes and higher model complexity.
2 Related Work
Neural Machine Translation has been a recent research focus of the community. A typical NMT system is built within an RNN-based encoder-decoder framework. In such a framework, the encoder RNN sequentially processes the words of a source language sentence into fixed-length vectors, which act as the inputs to the decoder RNN that decodes the translation sentence. NMT typically adopts the principle of Maximum Likelihood Estimation (MLE) for training, i.e., maximizing the per-word likelihood of the target sentence. Other training criteria, such as Minimum Risk Training (MRT) based on reinforcement learning [Ranzato et al.2015, Shen et al.2016] and translation reconstruction [Tu et al.2016a], are shown to improve over the word-level MLE principle, since these objectives consider the translation sentence as a whole.
The training principle we propose is based on the spirit of Generative Adversarial Networks (GANs) [Goodfellow et al.2014a, Salimans et al.2016], or more generally, adversarial training [Goodfellow et al.2014b]. In adversarial training, a discriminator and a generator compete with each other, forcing the generator to produce high-quality outputs that are able to fool the discriminator. Adversarial training has typically succeeded in image generation [Goodfellow et al.2014a, Reed et al.2016], with limited contribution to natural language processing tasks [Yu et al.2016, Li et al.2017], mainly due to the difficulty of propagating the error signals from the discriminator to the generator through the discretely generated natural language tokens. SeqGAN [Yu et al.2016] alleviates this difficulty via a reinforcement learning approach for sequence (e.g., music) generation. However, as far as we know, there have been limited efforts on adversarial training for sequence-to-sequence tasks in which a conditional mapping between two sequences is involved, and our work is among the first endeavors to explore the potential of acting in this way, especially for Neural Machine Translation [Yang et al.2017].
3 Adversarial-NMT

The overall framework of our Adversarial-NMT is shown in Figure 1. Let (x, y) be a bilingual aligned sentence pair for training, where x_i is the i-th word in the source sentence x and y_j is the j-th word in the target sentence y. Let y' denote the translation sentence produced by an NMT system for the source sentence x. As previously stated, the goal of Adversarial-NMT is to force y' to be as 'similar' as possible to y. In the perfect case, y' is so similar to the human translation y that even a human cannot tell whether y' is generated by machine or human. To achieve that, we introduce an extra adversary network, which acts similarly to the discriminator adopted in GANs [Goodfellow et al.2014a]. The goal of the adversary is to differentiate human translation from machine translation, and the NMT model tries to produce a target sentence as similar as possible to human translation so as to fool the adversary.
3.1 NMT Model
We adopt the Recurrent Neural Network (RNN) based encoder-decoder as the NMT model to seek a target language translation y' given the source sentence x. In particular, a probabilistic mapping G(y|x) is first learnt and the translation result y' is sampled from it. To be specific, given the source sentence x and previously generated words y_{<t}, the probability of generating word y_t is:

G(y_t | y_{<t}, x) = g(y_{t-1}, s_t, c_t),   (1)

where s_t = f(s_{t-1}, y_{t-1}, c_t) is the decoding state from the decoder at time t. Here f is the recurrent unit such as the Long Short Term Memory (LSTM) unit [Hochreiter and Schmidhuber1997] or Gated Recurrent Unit (GRU) [Cho et al.2014], and c_t is a distinct source representation at time t calculated by an attention mechanism [Bahdanau et al.2014]:

c_t = Σ_{i=1}^{T_x} α_{ti} h_i,   α_{ti} = exp{a(h_i, s_{t-1})} / Σ_{j=1}^{T_x} exp{a(h_j, s_{t-1})},   (2)

where T_x is the source sentence length, a(·, ·) is a feed-forward neural network, and h_i is the hidden state from the RNN encoder, computed from h_{i-1} and x_i:

h_i = RNN_enc(h_{i-1}, x_i).   (3)
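As a concrete illustration, the attention computation above can be sketched in a few lines of NumPy. The weight matrices W_a, U_a, v_a and all dimensions below are illustrative stand-ins, not the paper's actual parameterization:

```python
import numpy as np

def attention_context(s_prev, H, W_a, U_a, v_a):
    """Additive (Bahdanau-style) attention: score each encoder state h_i
    against the previous decoder state s_prev, softmax-normalize the scores,
    and return the weighted context vector c_t.  The parameterization here
    is a hypothetical sketch of a(h_i, s_{t-1})."""
    # e_i = v_a^T tanh(W_a s_prev + U_a h_i), computed for all i at once
    scores = np.tanh(s_prev @ W_a + H @ U_a) @ v_a           # shape (T_x,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                 # softmax over source positions
    c_t = weights @ H                                        # shape (hidden,)
    return c_t, weights

rng = np.random.default_rng(0)
T_x, d_s, d_h, d_a = 5, 4, 4, 3
H = rng.normal(size=(T_x, d_h))      # encoder hidden states h_1 .. h_{T_x}
s_prev = rng.normal(size=d_s)        # previous decoder state s_{t-1}
W_a = rng.normal(size=(d_s, d_a))
U_a = rng.normal(size=(d_h, d_a))
v_a = rng.normal(size=d_a)

c_t, alpha = attention_context(s_prev, H, W_a, U_a, v_a)
```

The attention weights form a proper distribution over source positions, and the context vector lives in the encoder hidden space.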
The translation result y' can be sampled from G(·|x) either in a greedy way at each timestep or using beam search [Sutskever et al.2014] to seek a globally optimized result.
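To illustrate why beam search can beat greedy decoding, here is a toy next-word model (with purely hypothetical probabilities) where the greedily best first word leads to a worse overall sequence:

```python
import math

def next_probs(prefix):
    """Hypothetical next-word distribution; not a real NMT decoder."""
    if not prefix:
        return {"a": 0.6, "b": 0.4}
    if prefix[-1] == "a":
        return {"x": 0.5, "y": 0.5}
    return {"x": 0.9, "y": 0.1}

def beam_search(width, steps):
    """Keep the `width` highest log-probability prefixes at each step;
    width=1 reduces to greedy decoding."""
    beams = [((), 0.0)]                      # (prefix, log-probability)
    for _ in range(steps):
        cand = []
        for prefix, lp in beams:
            for w, p in next_probs(prefix).items():
                cand.append((prefix + (w,), lp + math.log(p)))
        beams = sorted(cand, key=lambda t: -t[1])[:width]
    return beams[0]

greedy_seq, greedy_lp = beam_search(1, 2)    # greedy: "a" then "x", prob 0.30
beam_seq, beam_lp = beam_search(2, 2)        # beam 2 finds "b x", prob 0.36
```

Greedy decoding commits to "a" (probability 0.6) and ends with total probability 0.30, while a beam of width 2 recovers the globally better sequence "b x" with probability 0.36.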
3.2 Adversary Model
The adversary is used to differentiate the translation result y' from the ground-truth translation y, given the source language sentence x. To achieve that, one needs to measure the translative matching degree of a source-target sentence pair (x, y). We turn to a Convolutional Neural Network (CNN) for this task [Yin et al.2015, Hu et al.2014], since with its layer-by-layer convolution and pooling strategies, CNN is able to accurately capture the hierarchical correspondence of (x, y) at different abstraction levels.
The general structure is shown in Figure 2. Specifically, given a sentence pair (x, y), we first construct a 2D image-like representation by simply concatenating the embedding vectors of words in x and y. That is, for the i-th word x_i in x and the j-th word y_j in sentence y, we have the following feature map:

z^(0)_{i,j} = [x_i^T, y_j^T]^T.

Based on such a 2D image-like representation, we perform convolution on every window, with the purpose of capturing the correspondence between segments in x and segments in y, by the following feature map of type f:

z^(1,f)_{i,j} = σ(W^(1,f) ẑ^(0)_{i,j} + b^(1,f)),

where σ is the sigmoid activation function and ẑ^(0)_{i,j} concatenates the feature vectors in the window at position (i, j). After that we perform max-pooling in non-overlapping windows:

z^(2,f)_{i,j} = max(z^(1,f)_{2i-1,2j-1}, z^(1,f)_{2i-1,2j}, z^(1,f)_{2i,2j-1}, z^(1,f)_{2i,2j}).
We can go on with more layers of convolution and max-pooling, aiming at capturing the correspondence at different levels of abstraction. The extracted features are then fed into a multi-layer perceptron (MLP), with a sigmoid activation at the last layer giving the probability that (x, y) is from the ground-truth data, i.e., D(x, y). The optimization target of the CNN adversary is to minimize the cross-entropy loss for binary classification, with ground-truth data (x, y) as positive instances and sampled data (x, y') (from G) as negative ones.
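A minimal NumPy sketch of such a CNN adversary, with one convolution+pooling stage followed by an MLP; all sizes, window shapes and weights below are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversary_prob(src_emb, tgt_emb, conv_w, conv_b, mlp_w, mlp_b,
                   win=3, pool=2):
    """Hypothetical sketch of the CNN adversary D(x, y): build the
    image-like representation z0[i, j] = [x_i ; y_j], apply one sigmoid
    convolution over win x win windows, max-pool in non-overlapping
    pool x pool windows, and map to a probability with a linear+sigmoid
    'MLP'."""
    Tx, Ty, d = src_emb.shape[0], tgt_emb.shape[0], src_emb.shape[1]
    z0 = np.zeros((Tx, Ty, 2 * d))
    for i in range(Tx):
        for j in range(Ty):
            z0[i, j] = np.concatenate([src_emb[i], tgt_emb[j]])
    # single-feature-map convolution with sigmoid activation
    Hx, Hy = Tx - win + 1, Ty - win + 1
    z1 = np.zeros((Hx, Hy))
    for i in range(Hx):
        for j in range(Hy):
            z1[i, j] = sigmoid(z0[i:i+win, j:j+win].ravel() @ conv_w + conv_b)
    # non-overlapping max-pooling
    Px, Py = Hx // pool, Hy // pool
    z2 = np.array([[z1[i*pool:(i+1)*pool, j*pool:(j+1)*pool].max()
                    for j in range(Py)] for i in range(Px)])
    # final sigmoid output: probability that (x, y) is a human translation
    return sigmoid(z2.ravel() @ mlp_w + mlp_b)

rng = np.random.default_rng(1)
d, Tx, Ty, win, pool = 4, 7, 7, 3, 2
src = rng.normal(size=(Tx, d))
tgt = rng.normal(size=(Ty, d))
conv_w = rng.normal(size=win * win * 2 * d)
pooled = ((Tx - win + 1) // pool) ** 2
mlp_w = rng.normal(size=pooled)
p = adversary_prob(src, tgt, conv_w, 0.0, mlp_w, 0.0, win, pool)
```

The output is a single probability in (0, 1), exactly the quantity D(x, y) used as a reward signal later.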
3.3 Policy Gradient Algorithm to Train Adversarial-NMT
With the notation G for the NMT model and D for the adversary model, the final training objective is:

min_G max_D V(D, G) = E_{(x,y)~P_data(x,y)}[log D(x, y)] + E_{x~P_data(x), y'~G(·|x)}[log(1 − D(x, y'))].   (6)

That is, the translation model G tries to produce high-quality translations to fool the adversary D (the outer-loop min), whose objective is to successfully classify translation results from real data (i.e., ground truth) and from G (the inner-loop max).
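From D's side, the inner maximization in Eqn. (6) is the usual binary cross-entropy objective. A small sketch with hypothetical discriminator outputs on a mini-batch of human pairs and sampled pairs:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Inner-loop objective for D, written as a loss to minimize:
    the negative of  E[log D(x, y)] + E[log(1 - D(x, y'))],
    i.e. binary cross-entropy with human pairs as positives and
    sampled pairs as negatives."""
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

# a discriminator that scores human pairs high and sampled pairs low
good = discriminator_loss(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
# an uninformative discriminator that outputs 0.5 everywhere
blind = discriminator_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

An informative discriminator achieves a strictly lower loss than the constant-0.5 one, whose loss is exactly 2·log 2.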
Eqn. (6) reveals that it is straightforward to train the adversary D, by continually providing D with the ground-truth sentence pairs (x, y) and the sampled translation pairs (x, y') from G, respectively as positive and negative training data. However, when it comes to the NMT model G, it is non-trivial to design the training process, given that the discretely sampled y' from G makes it difficult to directly back-propagate the error signals from D to G, making V(D, G) non-differentiable w.r.t. G's model parameters Θ_G.
To tackle the above challenge, we leverage the REINFORCE algorithm [Williams1992], a Monte-Carlo policy gradient method from the reinforcement learning literature, to optimize G. Note that the objective of training G under a fixed source language sentence x and adversary D is to minimize the following loss:

L = E_{y'~G(·|x)}[log(1 − D(x, y'))],   (7)

whose gradient w.r.t. Θ_G is:

∇_{Θ_G} L = E_{y'~G(·|x)}[log(1 − D(x, y')) ∇_{Θ_G} log G(y'|x)].   (8)

A sample y' from G(·|x) is used to approximate the above gradient:

∇̂_{Θ_G} L = log(1 − D(x, y')) ∇_{Θ_G} log G(y'|x),   (9)

in which ∇_{Θ_G} log G(y'|x) are gradients specified by standard sequence-to-sequence NMT networks. Such a gradient approximation is used to update Θ_G:

Θ_G ← Θ_G − α ∇̂_{Θ_G} L,

where α is the learning rate.
Using the language of reinforcement learning, in the above Eqn. (7) to (9), the NMT model G(·|x) is the conditional policy faced with x, while the term −log(1 − D(x, y')), provided by the adversary D, acts as a Monte-Carlo estimation of the reward. Intuitively speaking, Eqn. (9) implies that the more likely y' is to successfully fool D (i.e., the larger D(x, y')), the larger the reward the NMT model gets, and the 'pseudo' training data (x, y') will correspondingly be more favored in improving the policy G(·|x).
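The score-function gradient of Eqn. (8) can be sanity-checked on a toy categorical "translator" with three candidate outputs: enumerating all outcomes, the expected REINFORCE gradient matches a finite-difference gradient of the true loss. The candidate set and D values below are hypothetical:

```python
import numpy as np

def softmax(t):
    e = np.exp(t - t.max())
    return e / e.sum()

# G is a categorical distribution over 3 candidate translations;
# D assigns each a fixed (made-up) probability of being human.
D = np.array([0.8, 0.5, 0.1])
theta = np.array([0.2, -0.1, 0.4])

def loss(t):
    # L(theta) = E_{y ~ G}[ log(1 - D(y)) ]  -- what G minimizes in Eqn. (7)
    return float(softmax(t) @ np.log(1.0 - D))

def reinforce_grad(t):
    """Exact expectation of the estimator in Eqn. (8):
    E[ log(1 - D(y)) * grad_theta log p(y) ], enumerated over outcomes."""
    p = softmax(t)
    r = np.log(1.0 - D)
    g = np.zeros_like(t)
    for y in range(len(p)):
        grad_logp = -p.copy()
        grad_logp[y] += 1.0          # d log p(y) / d theta_k = 1{k=y} - p_k
        g += p[y] * r[y] * grad_logp
    return g

# central finite differences of the true loss, as an independent check
eps = 1e-6
fd = np.array([(loss(theta + eps * np.eye(3)[k]) -
                loss(theta - eps * np.eye(3)[k])) / (2 * eps)
               for k in range(3)])
```

The two gradients agree to numerical precision, confirming that the sampled estimator in Eqn. (9) is unbiased.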
Note that here we in fact use one sampled trajectory y' to estimate the terminal reward given by D. Acting in this way brings high variance; to reduce the variance, a moving average of the historical reward values is set as a reward baseline [Weaver and Tao2001]. One could sample multiple trajectories at each decoding step, by regarding G as the roll-out policy, to reduce the estimation variance of the immediate reward [Silver et al.2016, Yu et al.2016]. However, empirically we find that such an approach is intolerably time-consuming in our task, given that the decoding space in NMT is typically extremely large (the same as the vocabulary size).
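The effect of a reward baseline can be made concrete on the same kind of toy categorical policy: subtracting a baseline (here the exact mean reward, standing in for the moving average) leaves the expected gradient unchanged while shrinking its variance. All numbers are illustrative:

```python
import numpy as np

def softmax(t):
    e = np.exp(t - t.max())
    return e / e.sum()

theta = np.array([0.3, 0.0, -0.3])
p = softmax(theta)
# hypothetical rewards -log(1 - D(x, y')) sharing a large common offset
r = np.array([5.1, 5.0, 4.9])

def grad_logp(y):
    g = -p.copy()
    g[y] += 1.0
    return g

def estimator_stats(baseline):
    """Exact mean and total variance of the one-sample estimator
    (r(y) - baseline) * grad log p(y), enumerated over the 3 outcomes."""
    mean = sum(p[y] * (r[y] - baseline) * grad_logp(y) for y in range(3))
    second = sum(p[y] * (r[y] - baseline) ** 2 * (grad_logp(y) @ grad_logp(y))
                 for y in range(3))
    return mean, second - mean @ mean

m0, v0 = estimator_stats(0.0)      # no baseline
mb, vb = estimator_stats(p @ r)    # baseline = expected reward
```

Because E[∇ log p(y)] = 0, the baseline introduces no bias, yet it removes the large common reward offset and hence most of the variance.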
It is worth comparing our adversarial training with existing methods that directly maximize a sequence-level measure such as BLEU [Ranzato et al.2015, Shen et al.2016, Bahdanau et al.2016] in training NMT models, using reinforcement learning approaches similar to ours. We argue that Adversarial-NMT makes the optimization easier compared with these methods. Firstly, the reward learned by our adversary D provides rich and global information to evaluate the translation, which goes beyond BLEU's simple low-level n-gram matching criteria. Acting in this way provides a much smoother objective compared with BLEU, since the latter is highly sensitive to slight translation differences at the word or phrase level. Secondly, the NMT model G and the adversary D in Adversarial-NMT co-evolve. The dynamics of the adversary make the NMT model grow in an adaptive way, rather than being controlled by a fixed evaluation metric such as BLEU. For these two reasons, Adversarial-NMT makes the optimization process towards sequence-level objectives much more robust and better controlled, as further verified by its superior performance over the aforementioned methods reported in the next section.
4 Experiments

We report experimental results on both English→French translation (En→Fr for short) and German→English translation (De→En for short).
Dataset: For En→Fr translation, for the sake of fair comparison with previous works, we use the same dataset as [Bahdanau et al.2014, Shen et al.2016]. The dataset is composed of a subset of the WMT 2014 training corpus as the training set, the combination of news-test 2012 and news-test 2013 as the dev set, and news-test 2014 as the test set. The maximal sentence length is 50. We keep the most frequent English and French words in the vocabulary and replace the other words with the 'UNK' token.
For De→En translation, following previous works [Ranzato et al.2015, Bahdanau et al.2016], the dataset is from the IWSLT 2014 evaluation campaign [Cettolo et al.2014], consisting of training/dev/test corpora. The maximal sentence length is also set as 50. The dictionaries for the English and German corpora include the most frequent words [Bahdanau et al.2016], with other words replaced by the special token 'UNK'.
Implementation Details: In Adversarial-NMT, the structure of the NMT model G is the same as the RNNSearch model [Bahdanau et al.2014], an RNN-based encoder-decoder framework with an attention mechanism. Single-layer GRUs act as encoder and decoder.
For the adversary D, the CNN consists of two convolution+pooling layers, one MLP layer and one softmax layer.
For the training of the NMT model G, as is commonly done in previous works [Shen et al.2016, Tu et al.2016a], we warm start from a well-trained RNNSearch model and optimize it using vanilla SGD with mini-batches, for both En→Fr and De→En translation. Gradient clipping is used, with clipping value 1 for En→Fr and 10 for De→En. The initial learning rate is chosen by cross-validation on the dev set for each task, and we halve it periodically during training.
An important factor we find in successfully training G is the combination of the adversarial objective with MLE. That is, randomly chosen mini-batches are trained with Adversarial-NMT, while the MLE principle is applied to the other mini-batches. Acting in this way significantly improves stability in model training, which is also reported in other tasks such as language modeling [Lamb et al.2016] and neural dialogue generation [Li et al.2017]. We conjecture that the reason is that MLE acts as a regularizer guaranteeing smooth model updates, alleviating the negative effects brought by the high gradient-estimation variance of the one-step Monte-Carlo sample in REINFORCE.
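The mini-batch mixing described above can be sketched as a simple per-batch coin flip; the fraction used here is an arbitrary illustrative value, not the paper's setting:

```python
import random

def choose_objectives(num_batches, adv_fraction, seed=0):
    """Sketch of the mixed training schedule: a randomly chosen fraction
    of mini-batches is trained with the adversarial objective, the rest
    with MLE.  `adv_fraction` is a hypothetical knob."""
    rng = random.Random(seed)
    return ["adversarial" if rng.random() < adv_fraction else "mle"
            for _ in range(num_batches)]

schedule = choose_objectives(1000, 0.5)
```

A training loop would then dispatch on each label, computing the REINFORCE update for "adversarial" batches and the usual cross-entropy update for "mle" batches.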
As a first step, the CNN adversary D is pre-trained using data sampled from the RNNSearch model and the ground-truth translations. After that, in the joint G-D training of Adversarial-NMT, the adversary D is optimized using Nesterov SGD [Nesterov1983], with the initial learning rate chosen by validation on the dev set for each of En→Fr and De→En. The dimension of the word embedding in D is the same as that in G, and we fix the word embeddings during training. Batch normalization [Ioffe and Szegedy2015] is observed to significantly improve D's performance. Considering efficiency, all the negative training instances used in D's training are generated by beam search.
When generating model translations for evaluation, we set the beam width for En→Fr and De→En respectively according to BLEU on the dev set. The translation quality is measured by the tokenized case-sensitive BLEU [Papineni et al.2002] score, computed with the multi-bleu.perl script (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl).
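For reference, the clipped (modified) n-gram precision that underlies BLEU can be illustrated at the unigram level with the classic degenerate example from the BLEU paper:

```python
from collections import Counter

def clipped_unigram_precision(candidate, reference):
    """Modified (clipped) unigram precision from BLEU: each candidate
    word is credited at most as many times as it appears in the
    reference."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    clipped = sum(min(n, ref[w]) for w, n in cand.items())
    return clipped / sum(cand.values())

# plain precision would be 7/7 here; clipping gives 2/7
p = clipped_unigram_precision("the the the the the the the",
                              "the cat is on the mat")
```

Full BLEU combines such clipped precisions for n = 1..4 with a brevity penalty; this exactness at the n-gram level is what makes BLEU sensitive to small word-level differences, as discussed above.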
4.2 Result on En→Fr translation
In Table 1 we provide the En→Fr translation results of Adversarial-NMT, together with several strong NMT baselines, such as the representative attention-based NMT model RNNSearch [Bahdanau et al.2014]. In addition, to make our comparison comprehensive, we cover several well-acknowledged techniques whose effectiveness in improving En→Fr translation has been verified by previously published works, including: 1) using a large vocabulary to handle rare words [Jean et al.2015, Luong et al.2015]; 2) different training objectives [Shen et al.2016, Ranzato et al.2015, Bahdanau et al.2016], such as Minimum Risk Training (MRT) to directly optimize the evaluation measure [Shen et al.2016], and dual learning to enhance both the primal and dual tasks (e.g., En→Fr and Fr→En) [He et al.2016]; 3) improved inference processes such as beam search optimization [Wiseman and Rush2016] and UNK post-processing [Luong et al.2015, Jean et al.2015]; 4) leveraging additional monolingual data [Sennrich et al.2016, Zhang and Zong2016, He et al.2016].
From the table, we can clearly observe that Adversarial-NMT obtains satisfactory translation quality against the baseline systems. In particular, it even surpasses the performance of models with much larger vocabularies [Jean et al.2015], deeper layers [Luong et al.2015], a much larger monolingual training corpus [Sennrich et al.2016], and the goal of directly maximizing BLEU [Shen et al.2016]. In fact, as far as we know, Adversarial-NMT achieves the state-of-the-art result on the news-test 2014 test set for En→Fr translation among single-layer GRU sequence-to-sequence models trained with only supervised bilingual corpora.
Human Evaluation: Apart from the comparison based on objective BLEU scores, to better appraise the performance of our model we also involve human judgement as a subjective measure. To be more specific, we generate the translation results for 500 randomly selected English sentences from the En→Fr news-test 2014 dataset using both MRT [Shen et al.2016] and our Adversarial-NMT. Here MRT is chosen since it is well representative of previous NMT methods that maximize sequence-level objectives, achieving satisfactory results among all single-layer models in Table 1. Afterwards we ask three human labelers to choose the better one of the two versions of each translated sentence. The evaluation process is conducted on Amazon Mechanical Turk (https://www.mturk.com), with all the workers being native English or French speakers.
| | Adversarial-NMT | MRT |
| evaluator 1 | 286 (57.2%) | 214 (42.8%) |
| evaluator 2 | 310 (62.0%) | 190 (38.0%) |
| evaluator 3 | 295 (59.0%) | 205 (41.0%) |
| Overall | 891 (59.4%) | 609 (40.6%) |
The results in Table 2 show that the majority of sentences are better translated by our Adversarial-NMT than by MRT [Shen et al.2016]. Such human evaluation further demonstrates the effectiveness of our model and matches the expectation that Adversarial-NMT provides more human-desired translations.
Adversarial Training: Slow or Fast: In this subsection we analyze how to set the pace for training the NMT model G and the adversary D, to make them compete effectively. Specifically, for En→Fr translation, we inspect how the dev set BLEU varies along the adversarial training process under different initial learning rates for G (shown in 2(a)) and for D (shown in 2(b)), with the other one fixed.
Generally speaking, these two figures show that Adversarial-NMT is much more robust with regard to the pace at which D makes progress than that of G, since the three curves in 2(b) grow in a similar pattern while the curves in 2(a) differ drastically from each other. We conjecture the reason is that in Adversarial-NMT, the CNN-based D is powerful in classification tasks, especially when it is warm started with data sampled from RNNSearch. In comparison, the translation model G is relatively weak in providing qualified translations. Therefore, training G needs careful configuration of the learning rate: a small value leads to slower convergence (the blue line in 2(a)), while a large value brings instability (the green line in 2(a)). A proper learning rate induces G to make fast yet stable progress along training.
4.3 Result on De→En translation
Again, from Table 3 we can see that Adversarial-NMT performs best among the compared models, achieving a 27.94 BLEU score, which is also a state-of-the-art result.
(Table 4: two sample De→En translation cases, each showing the source sentence, the ground-truth translation, and the translations by RNNSearch and by Adversarial-NMT.)
Effect of Adversarial Training: To better visualize and understand the advantage of the adversarial training brought by Adversarial-NMT, we show several translation cases in Table 4. Concretely speaking, we give two German→English translation examples, each including the source language sentence x, the ground-truth translation sentence y, and two NMT model translation sentences, respectively from RNNSearch and Adversarial-NMT, with the different parts that lead to different translation quality emphasized in bold font. For each model translation y', we also list D(x, y'), i.e., the probability that the adversary D regards y' as ground truth, in the third column, and the sentence-level BLEU score of y' in the last column.
Since the RNNSearch model acts as the warm start for training Adversarial-NMT, its translation can be viewed as the result of Adversarial-NMT at its initial phase. Therefore, from Table 4, we can observe:
As adversarial training goes on, the quality of the translation sentences output by G improves, both in terms of subjective feeling and in terms of BLEU score as a quantitative measure.
Correspondingly, the growth in translation quality degrades the adversary, as shown by D's successful recognition of the sentences from RNNSearch as machine translations, whereas D mistakes the outputs of Adversarial-NMT for ground truth (i.e., human translations).
In this paper we propose a novel and intuitive training objective for NMT: to force the translation results to be as similar as possible to ground-truth translations generated by humans. Such an objective is achieved via an adversarial training framework called Adversarial-NMT, which complements the original NMT model with a CNN-based adversary. Adversarial-NMT adopts both a new network architecture to reflect the mapping within the (source, target) sentence pair, and an efficient policy gradient algorithm to tackle the optimization difficulty brought by the discrete nature of machine translation. The experiments on both English→French and German→English translation tasks clearly demonstrate the effectiveness of such an adversarial training method for NMT.
As for future work, with the hope of achieving new state-of-the-art performance for NMT systems, we plan to fully exploit the potential of Adversarial-NMT by combining it with other powerful methods listed in subsection 4.2, such as training with a large vocabulary, the minimum-risk principle, and deep structures. We additionally would like to explore the feasibility of adversarial training for other text processing tasks, such as image captioning, dependency parsing, and sentiment classification.
- [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
- [Bahdanau et al.2016] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086.
- [Cettolo et al.2014] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign, IWSLT 2014.
- [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP, October.
- [Goodfellow et al.2014a] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Generative adversarial nets. In NIPS.
- [Goodfellow et al.2014b] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014b. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- [He et al.2016] Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, NIPS.
- [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation.
- [Hu et al.2014] Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In NIPS, pages 2042–2050.
- [Ioffe and Szegedy2015] Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML-15, pages 448–456.
- [Jean et al.2015] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In ACL, July.
- [Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL. Association for Computational Linguistics.
- [Lamb et al.2016] Alex M Lamb, Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In NIPS.
- [Li et al.2017] Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547.
- [Luong and Manning2016] Minh-Thang Luong and Christopher D Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. arXiv preprint arXiv:1604.00788.
- [Luong et al.2015] Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In ACL, July.
- [Nesterov1983] Yurii Nesterov. 1983. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). In Doklady AN SSSR, pages 543–547.
- [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL. Association for Computational Linguistics.
- [Ranzato et al.2015] Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.
- [Reed et al.2016] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. In ICML.
- [Salimans et al.2016] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. In NIPS.
- [Sennrich et al.2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In ACL, August.
- [Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In ACL, August.
- [Silver et al.2016] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature.
- [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS.
- [Sutton and Barto1998] Richard S Sutton and Andrew G Barto. 1998. Reinforcement learning: An introduction. MIT press Cambridge.
- [Tu et al.2016a] Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2016a. Neural machine translation with reconstruction. arXiv preprint arXiv:1611.01874.
- [Tu et al.2016b] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016b. Modeling coverage for neural machine translation. In ACL, August.
- [Weaver and Tao2001] Lex Weaver and Nigel Tao. 2001. The optimal reward baseline for gradient-based reinforcement learning. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 538–545. Morgan Kaufmann Publishers Inc.
- [Williams1992] Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning.
- [Wiseman and Rush2016] Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In EMNLP, November.
- [Wu et al.2016] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
- [Yang et al.2017] Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2017. Improving neural machine translation with conditional sequence generative adversarial nets. arXiv preprint arXiv:1703.04887.
- [Yin et al.2015] Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. 2015. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193.
- [Yu et al.2016] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2016. SeqGAN: Sequence generative adversarial nets with policy gradient. arXiv preprint arXiv:1609.05473.
- [Zhang and Zong2016] Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In EMNLP, November.
- [Zhou et al.2016] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199.