Adversarial Neural Machine Translation

04/20/2017 ∙ by Lijun Wu, et al. ∙ Microsoft ∙ Sun Yat-sen University

In this paper, we study a new learning paradigm for Neural Machine Translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it Adversarial-NMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed Convolutional Neural Network (CNN). The goal of the adversary is to differentiate the translation results generated by the NMT model from those by humans. The goal of the NMT model is to produce high-quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. Experimental results on English→French and German→English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.


1 Introduction

Neural Machine Translation (NMT) [Cho et al.2014, Bahdanau et al.2014] has drawn increasing attention in both academia and industry [Luong and Manning2016, Jean et al.2015, Shen et al.2016, Tu et al.2016b, Sennrich et al.2016, Wu et al.2016]. Compared with traditional Statistical Machine Translation (SMT) [Koehn et al.2003], NMT achieves similar or even better translation results in an end-to-end framework. The sentence-level maximum likelihood principle and the gating units in LSTM/GRU [Hochreiter and Schmidhuber1997, Cho et al.2014], together with attention mechanisms, grant NMT the ability to better translate long sentences.

Despite its success, the translation quality of the latest NMT systems is still far from satisfactory, and there remains large room for improvement. For example, NMT usually adopts the Maximum Likelihood Estimation (MLE) principle for training, i.e., maximizing the probability of the target ground-truth sentence conditioned on the source sentence. Such an objective does not guarantee that the translation results are natural, sufficient, and accurate compared with ground-truth human translations. Previous works [Ranzato et al.2015, Shen et al.2016, Bahdanau et al.2016] aim to alleviate such limitations of maximum likelihood training by adopting sequence-level objectives (e.g., directly maximizing BLEU [Papineni et al.2002]) to reduce the objective inconsistency between NMT training and inference. Although these objectives bring some improvement, they still cannot fully bridge the gap between NMT translations and ground-truth translations.

In this paper, we adopt a thoroughly different training objective for NMT, which directly minimizes the difference between human translation and the translation given by an NMT model. To achieve this target, inspired by the recent success of Generative Adversarial Networks (GANs) [Goodfellow et al.2014a], we design an adversarial training protocol for NMT and name it Adversarial-NMT. In Adversarial-NMT, besides the typical NMT model, an adversary is introduced to distinguish translations generated by the NMT model from those by humans (i.e., the ground truth). Meanwhile, the NMT model tries to improve its translation results such that it can successfully cheat the adversary.

These two modules in Adversarial-NMT are co-trained, and their performances get mutually improved. In particular, the discriminative power of the adversary can be improved by learning from more and more training samples (both positive ones generated by humans and negative ones sampled from the NMT model), and the NMT model's ability to cheat the adversary can be improved by taking the output of the adversary as a reward. In this way, the NMT translation results are 'professor forced' [Lamb et al.2016] to be as close as possible to the ground-truth translations.

Different from previous GANs, which assume the existence of a generator in continuous space, in our proposed framework the NMT model is in fact not a typical generative model but a probabilistic transformation that maps a source-language sentence to a target-language sentence, both in discrete space. Such differences make it necessary to design both a new network architecture and new optimization methods to make adversarial training possible for NMT. On one hand, we therefore leverage a specially designed Convolutional Neural Network (CNN) model as the adversary, which takes the (source, target) sentence pair as input; on the other hand, we turn to a policy gradient method named REINFORCE [Williams1992], widely used in the reinforcement learning literature [Sutton and Barto1998], to guarantee that both modules are effectively optimized in an adversarial manner. We conduct extensive experiments, which demonstrate that Adversarial-NMT can achieve significantly better translation results than traditional NMT models, even those with much larger vocabulary sizes and higher model complexity.

2 Related Work

End-to-end Neural Machine Translation (NMT) [Bahdanau et al.2014, Cho et al.2014, Sutskever et al.2014, Jean et al.2015, Wu et al.2016, Zhou et al.2016] has been the recent research focus of the community. A typical NMT system is built within an RNN-based encoder-decoder framework, in which the encoder RNN sequentially processes the words of a source-language sentence into fixed-length vectors that act as the inputs to the decoder RNN for generating the translation. NMT typically adopts the Maximum Likelihood Estimation (MLE) principle for training, i.e., maximizing the per-word likelihood of the target sentence. Other training criteria, such as Minimum Risk Training (MRT) based on reinforcement learning [Ranzato et al.2015, Shen et al.2016] and translation reconstruction [Tu et al.2016a], have been shown to improve over the word-level MLE principle since these objectives take the translation sentence as a whole.

The training principle we propose is based on the spirit of Generative Adversarial Networks (GANs) [Goodfellow et al.2014a, Salimans et al.2016], or more generally, adversarial training [Goodfellow et al.2014b]. In adversarial training, a discriminator and a generator compete with each other, forcing the generator to produce high-quality outputs that are able to fool the discriminator. Adversarial training has typically succeeded in image generation [Goodfellow et al.2014a, Reed et al.2016], with limited contribution to natural language processing tasks [Yu et al.2016, Li et al.2017], mainly due to the difficulty of propagating error signals from the discriminator to the generator through the discretely generated natural language tokens. SeqGAN [Yu et al.2016] alleviates this difficulty with a reinforcement learning approach for sequence (e.g., music) generation. However, as far as we know, there are limited efforts on adversarial training for sequence-to-sequence tasks in which a conditional mapping between two sequences is involved, and our work is among the first endeavors to explore this direction, especially for Neural Machine Translation [Yang et al.2017].

3 Adversarial-NMT

Figure 1: The Adversarial-NMT framework. 'Ref' is short for 'Reference', meaning the ground-truth translation, and 'Hyp' is short for 'Hypothesis', denoting the model translation. All the yellow parts denote the NMT model G, which maps a source sentence x to a translation sentence. The red parts are the adversary network D, which predicts whether a given target sentence is the ground-truth translation of the given source sentence x. G and D combat with each other, generating both the sampled translation y' to train D and the reward signal to train G by policy gradient (the blue arrows).

The overall framework of our Adversarial-NMT is shown in Figure 1. Let (x, y) be a bilingual aligned sentence pair for training, where x_i is the i-th word in the source sentence x and y_j is the j-th word in the target sentence y. Let y' denote the translation sentence produced by an NMT system for the source sentence x. As previously stated, the goal of Adversarial-NMT is to force y' to be as 'similar' as possible to y. In the perfect case, y' is so similar to the human translation y that even a human cannot tell whether y' is generated by machine or by human. To achieve that, we introduce an extra adversary network, which acts similarly to the discriminator adopted in GANs [Goodfellow et al.2014a]. The goal of the adversary is to differentiate human translation from machine translation, and the NMT model tries to produce a target sentence as similar as possible to human translation so as to fool the adversary.

3.1 NMT Model

We adopt the Recurrent Neural Network (RNN) based encoder-decoder as the NMT model $G$, which seeks a target-language translation $y'$ given a source sentence $x$. In particular, a probabilistic mapping $G(y|x)$ is first learnt, and the translation result $y'$ is sampled from it. To be specific, given the source sentence $x$ and the previously generated words $y_{<t}$, the probability of generating word $y_t$ is:

$$G(y_t|y_{<t}, x) \propto \exp(y_t;\, r_t, c_t) \qquad (1)$$
$$r_t = g(r_{t-1}, y_{t-1}, c_t) \qquad (2)$$

where $r_t$ is the decoding state from the decoder at time $t$. Here $g$ is the recurrent unit such as the Long Short Term Memory (LSTM) unit [Hochreiter and Schmidhuber1997] or the Gated Recurrent Unit (GRU) [Cho et al.2014], and $c_t$ is a distinct source representation at time $t$ calculated by an attention mechanism [Bahdanau et al.2014]:

$$c_t = \sum_{i=1}^{T_x} \alpha_{it} h_i \qquad (3)$$
$$\alpha_{it} = \frac{\exp\{a(h_i, r_{t-1})\}}{\sum_j \exp\{a(h_j, r_{t-1})\}} \qquad (4)$$

where $T_x$ is the source sentence length, $a(\cdot,\cdot)$ is a feed-forward neural network, and $h_i$ is the hidden state from the RNN encoder computed from $h_{i-1}$ and $x_i$:

$$h_i = f(h_{i-1}, x_i) \qquad (5)$$

The translation result $y'$ can be sampled from $G(\cdot|x)$ either in a greedy way at each timestep or using beam search [Sutskever et al.2014] to seek a globally optimized result.
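For concreteness, the sketch below implements one decoding step of such an attention-based decoder (Eqns. (1)-(5)) in PyTorch. It is a minimal illustration under assumed layer sizes and names (embed, attn, gru, out), not the exact network used in the experiments.

```python
import torch
import torch.nn as nn

class AttnDecoderStep(nn.Module):
    """One decoding step of an RNNSearch-style decoder (Eqns. 1-5).
    A minimal sketch; sizes and layer names are illustrative assumptions."""
    def __init__(self, vocab_size, embed_dim=620, hidden_dim=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # a(.,.) in Eqn. (4): feed-forward net scoring encoder state vs. decoder state
        self.attn = nn.Linear(2 * hidden_dim + hidden_dim, 1)
        # g(.) in Eqn. (2): GRU cell taking [y_{t-1}; c_t] as input
        self.gru = nn.GRUCell(embed_dim + 2 * hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim + 2 * hidden_dim, vocab_size)

    def forward(self, y_prev, r_prev, enc_states):
        # enc_states: (T_x, batch, 2*hidden_dim), from a bidirectional encoder (Eqn. 5)
        T_x = enc_states.size(0)
        r_rep = r_prev.unsqueeze(0).expand(T_x, -1, -1)
        scores = self.attn(torch.cat([enc_states, r_rep], dim=-1))  # a(h_i, r_{t-1})
        alpha = torch.softmax(scores, dim=0)                        # Eqn. (4)
        c_t = (alpha * enc_states).sum(dim=0)                       # Eqn. (3)
        r_t = self.gru(torch.cat([self.embed(y_prev), c_t], dim=-1), r_prev)  # Eqn. (2)
        logits = self.out(torch.cat([r_t, c_t], dim=-1))            # Eqn. (1), pre-softmax
        return logits, r_t
```

Greedy sampling picks the argmax of the logits at each step, while beam search keeps the top-k partial hypotheses scored by the accumulated log-probabilities.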

3.2 Adversary Model

The adversary $D$ is used to differentiate the translation result $y'$ from the ground-truth translation $y$, given the source-language sentence $x$. To achieve that, one needs to measure the translative matching degree of a source-target sentence pair $(x, y)$. We turn to Convolutional Neural Networks (CNN) for this task [Yin et al.2015, Hu et al.2014], since with its layer-by-layer convolution and pooling strategies, a CNN is able to accurately capture the hierarchical correspondence of $(x, y)$ at different abstraction levels.

The general structure is shown in Figure 2. Specifically, given a sentence pair $(x, y)$, we first construct a 2D image-like representation by simply concatenating the embedding vectors of the words in $x$ and $y$. That is, for the $i$-th word $x_i$ in $x$ and the $j$-th word $y_j$ in $y$, we have the following feature map:

$$z^{(0)}_{i,j} = [x_i^\top, y_j^\top]^\top$$

Based on such an image-like representation, we perform convolution on every $3 \times 3$ window, with the purpose of capturing the correspondence between segments in $x$ and segments in $y$, by the following feature map of type $f$:

$$z^{(1,f)}_{i,j} = \sigma(W^{(1,f)} \hat{z}^{(0)}_{i,j} + b^{(1,f)})$$

where $\sigma$ is the sigmoid activation function and $\hat{z}^{(0)}_{i,j}$ concatenates the vectors of $z^{(0)}$ in the $3 \times 3$ window around position $(i, j)$.

After that we perform max-pooling in non-overlapping $2 \times 2$ windows:

$$z^{(2,f)}_{i,j} = \max\{z^{(1,f)}_{2i-1,2j-1},\, z^{(1,f)}_{2i-1,2j},\, z^{(1,f)}_{2i,2j-1},\, z^{(1,f)}_{2i,2j}\}$$

We could go on with more layers of convolution and max-pooling, aiming to capture the correspondence at different levels of abstraction. The extracted features are then fed into a multi-layer perceptron (MLP), with a sigmoid activation at the last layer giving the probability that $(x, y)$ comes from the ground-truth data, i.e., $D(x, y)$. The optimization target of the CNN adversary is to minimize the cross-entropy loss for binary classification, with ground-truth pairs $(x, y)$ as positive instances and sampled pairs $(x, y')$ (from $G$) as negative ones.

Figure 2: The CNN adversary framework.
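To make the adversary's structure concrete, here is a minimal PyTorch sketch of a two-layer convolution+pooling matcher over the concatenated word-embedding 'image'; the channel count n_feat and the pooled MLP head are illustrative assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

class CNNAdversary(nn.Module):
    """CNN adversary D(x, y): estimates the probability that target sentence y
    is the human translation of source x. A sketch; channel sizes are assumed."""
    def __init__(self, embed_dim, n_feat=20):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2 * embed_dim, n_feat, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.MaxPool2d(2),   # non-overlapping 2x2 max-pooling
            nn.Conv2d(n_feat, n_feat, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.MaxPool2d(2),
        )
        self.mlp = nn.Sequential(
            nn.AdaptiveMaxPool2d(1), nn.Flatten(),
            nn.Linear(n_feat, n_feat), nn.ReLU(),
            nn.Linear(n_feat, 1), nn.Sigmoid(),   # probability of being ground truth
        )

    def forward(self, src_emb, tgt_emb):
        # src_emb: (batch, T_x, embed_dim), tgt_emb: (batch, T_y, embed_dim)
        # z^(0)_{i,j} = [x_i; y_j]: pair every source word with every target word
        B, Tx, E = src_emb.shape
        Ty = tgt_emb.size(1)
        grid = torch.cat([
            src_emb.unsqueeze(2).expand(B, Tx, Ty, E),
            tgt_emb.unsqueeze(1).expand(B, Tx, Ty, E),
        ], dim=-1).permute(0, 3, 1, 2)            # (batch, 2E, T_x, T_y)
        return self.mlp(self.conv(grid)).squeeze(-1)
```

D is then trained by minimizing binary cross-entropy, e.g. nn.functional.binary_cross_entropy(D(src_emb, tgt_emb), labels), with ground-truth pairs labeled 1 and sampled pairs labeled 0.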

3.3 Policy Gradient Algorithm to Train Adversarial-NMT

With the notation $G$ for the NMT model and $D$ for the adversary model, the final training objective is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{(x,y) \sim P_{data}}[\log D(x, y)] + \mathbb{E}_{x \sim P_{data},\, y' \sim G(\cdot|x)}[\log(1 - D(x, y'))] \qquad (6)$$

That is, the translation model $G$ tries to produce high-quality translations to fool the adversary $D$ (the outer-loop $\min_G$), whose objective is to successfully classify translation results from real data (i.e., ground truth) and from $G$ (the inner-loop $\max_D$).

Eqn. (6) reveals that it is straightforward to train the adversary $D$: one keeps providing $D$ with ground-truth sentence pairs $(x, y)$ and sampled translation pairs $(x, y')$ from $G$, respectively as positive and negative training data. However, when it turns to the NMT model $G$, it is non-trivial to design the training process, given that the discretely sampled $y'$ from $G$ makes it difficult to directly back-propagate error signals from $D$ to $G$, rendering $V(D, G)$ non-differentiable w.r.t. $G$'s model parameters $\Theta_G$.

To tackle the above challenge, we leverage the REINFORCE algorithm [Williams1992], a Monte-Carlo policy gradient method from the reinforcement learning literature, to optimize $G$. Note that the objective of training $G$ under a fixed source-language sentence $x$ and a fixed $D$ is to minimize the following loss:

$$L = \mathbb{E}_{y' \sim G(\cdot|x)}[\log(1 - D(x, y'))] \qquad (7)$$

whose gradient w.r.t. $\Theta_G$ is:

$$\nabla_{\Theta_G} L = \mathbb{E}_{y' \sim G(\cdot|x)}[\log(1 - D(x, y'))\, \nabla_{\Theta_G} \log G(y'|x)] \qquad (8)$$

A sample $y'$ from $G(\cdot|x)$ is used to approximate the above gradient:

$$\tilde{\nabla}_{\Theta_G} L = \log(1 - D(x, y'))\, \nabla_{\Theta_G} \log G(y'|x) \qquad (9)$$

in which $\nabla_{\Theta_G} \log G(y'|x)$ is specified by standard sequence-to-sequence NMT networks. Such a gradient approximation is used to update $\Theta_G$:

$$\Theta_G \leftarrow \Theta_G - \alpha\, \tilde{\nabla}_{\Theta_G} L \qquad (10)$$

where $\alpha$ is the learning rate.

Using the language of reinforcement learning, in Eqns. (7) to (9) above, the NMT model $G(\cdot|x)$ is the conditional policy faced with $x$, while the term $-\log(1 - D(x, y'))$, provided by the adversary $D$, acts as a Monte-Carlo estimation of the reward. Intuitively speaking, Eqn. (9) implies that the more likely $y'$ is to successfully fool $D$ (i.e., the larger $D(x, y')$), the larger the reward the NMT model gets, and the 'pseudo' training data $(x, y')$ will correspondingly be more favored in improving the policy $G(\cdot|x)$.

Note that here we in fact use a single sampled trajectory $y'$ to estimate the terminal reward given by $D$, which brings high variance. To reduce the variance, a moving average of the historical reward values is used as a reward baseline [Weaver and Tao2001]. One could sample multiple trajectories at each decoding step, regarding $G$ as the roll-out policy, to reduce the estimation variance of the immediate reward [Silver et al.2016, Yu et al.2016]. However, we empirically find such an approach intolerably time-consuming in our task, given that the decoding space in NMT is typically extremely large (the same as the vocabulary size).
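The following sketch summarizes the resulting G-update (Eqns. (7)-(10)) with the moving-average reward baseline; sample_translation and sequence_log_prob are hypothetical stand-ins for standard sequence-to-sequence routines, not an API defined in this paper.

```python
import torch

def reinforce_update(G, D, optimizer, src, baseline, decay=0.9):
    """One REINFORCE step for the NMT model G (Eqns. 7-10). A sketch:
    sample_translation / sequence_log_prob stand in for standard seq2seq code."""
    y_sampled = G.sample_translation(src)                    # y' ~ G(.|x), discrete
    with torch.no_grad():
        reward = -torch.log(1.0 - D(src, y_sampled) + 1e-8)  # reward from adversary D
    advantage = reward - baseline            # moving-average baseline [Weaver and Tao2001]
    log_prob = G.sequence_log_prob(src, y_sampled)           # log G(y'|x)
    loss = -(advantage * log_prob).mean()    # gradient matches Eqn. (9), up to the baseline
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(G.parameters(), max_norm=1.0)  # gradient clipping
    optimizer.step()
    # update the baseline as a moving average of historical rewards
    return decay * baseline + (1 - decay) * reward.mean().item()
```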

It is worth comparing our adversarial training with existing methods that directly maximize a sequence-level measure such as BLEU [Ranzato et al.2015, Shen et al.2016, Bahdanau et al.2016] when training NMT models, using reinforcement learning approaches similar to ours. We argue that Adversarial-NMT makes the optimization easier than these methods. First, the reward learned by our adversary $D$ provides rich and global information to evaluate the translation, going beyond BLEU's simple low-level n-gram matching criteria. Acting in this way yields a much smoother objective than BLEU, since the latter is highly sensitive to slight translation differences at the word or phrase level. Second, the NMT model $G$ and the adversary $D$ in Adversarial-NMT co-evolve. The dynamics of the adversary make the NMT model grow in an adaptive way, rather than being controlled by a fixed evaluation metric such as BLEU. For these two reasons, Adversarial-NMT makes the optimization towards sequence-level objectives much more robust and better controlled, as further verified by its superior performance over the aforementioned methods reported in Section 4.

4 Experiments

4.1 Settings

We report experimental results on both English→French translation (En→Fr for short) and German→English translation (De→En for short).

Dataset: For En→Fr translation, for the sake of fair comparison with previous works, we use the same dataset as [Bahdanau et al.2014, Shen et al.2016]. The dataset is composed of a subset of the WMT 2014 training corpus as the training set, the combination of news-test 2012 and news-test 2013 as the dev set, and news-test 2014 as the test set, which respectively contain roughly 12M, 6k, and 3k sentence pairs. The maximal sentence length is 50. We keep the 30k most frequent English and French words and replace the others with an 'UNK' token.

For De→En translation, following previous works [Ranzato et al.2015, Bahdanau et al.2016], the dataset is from the IWSLT 2014 evaluation campaign [Cettolo et al.2014], consisting of training/dev/test corpora with approximately 153k, 7k, and 6.5k bilingual sentence pairs respectively. The maximal sentence length is also set to 50. The English and German vocabularies respectively include the 22,822 and 32,009 most frequent words [Bahdanau et al.2016], with other words replaced by the special token 'UNK'.
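The vocabulary truncation described above (keeping the most frequent words and mapping the rest to 'UNK') amounts to the following simple preprocessing; this is an illustrative sketch, with build_vocab and replace_rare as assumed helper names.

```python
from collections import Counter

def build_vocab(corpus, max_size):
    """Keep the max_size most frequent words; all others map to 'UNK'."""
    counts = Counter(word for sentence in corpus for word in sentence.split())
    return {w for w, _ in counts.most_common(max_size)}

def replace_rare(sentence, vocab):
    return " ".join(w if w in vocab else "UNK" for w in sentence.split())

# e.g., max_size = 30000 for the En/Fr corpora described above
```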

System | System Configurations | BLEU
Sutskever et al. [S2S] | LSTM with 4 layers + 80k vocabs | 30.59
Bahdanau et al. [NMT] | RNNSearch | 29.97 (reported in [Jean et al.2015])
Jean et al. [NMTLarge] | RNNSearch + UNK Replace | 33.08
Jean et al. [NMTLarge] | RNNSearch + 500k vocabs + UNK Replace | 34.11
Luong et al. [RareWordsNMT] | LSTM with 4 layers + 40k vocabs | 29.50
Luong et al. [RareWordsNMT] | LSTM with 4 layers + 40k vocabs + PosUnk | 31.80
Shen et al. [RL4NMT] | RNNSearch + Minimum Risk Training Objective | 31.30
Sennrich et al. [NMTMonolingual] | RNNSearch + Monolingual Data | 30.40 (reported in [He et al.2016])
He et al. [DualNMT] | RNNSearch + Monolingual Data + Dual Objective | 32.06
Adversarial-NMT (ours) | RNNSearch + Adversarial Training Objective | 31.91*
Adversarial-NMT (ours) | RNNSearch + Adversarial Training Objective + UNK Replace | 34.78*
Table 1: Different NMT systems' performances on En→Fr translation. The first block lists representative end-to-end NMT systems. The default setting is a single-layer GRU + 30k vocabs + MLE training objective, trained with no monolingual data, i.e., the RNNSearch model proposed by Bahdanau et al. [NMT]. *: significantly better than RL4NMT (p < 0.05).

Implementation Details: In Adversarial-NMT, the structure of the NMT model G is the same as the RNNSearch model [Bahdanau et al.2014], an RNN-based encoder-decoder framework with an attention mechanism. Single-layer GRUs act as the encoder and the decoder. For En→Fr translation, the dimensions of the word embeddings and the GRU hidden states are set to 620 and 1000 respectively, and for De→En translation they are both 256.

For the adversary D, the CNN consists of two convolution+pooling layers, one MLP layer, and one softmax layer, with 3×3 convolution windows and non-overlapping 2×2 pooling windows.

For the training of the NMT model G, similar to what is commonly done in previous works [Shen et al.2016, Tu et al.2016a], we warm start from a well-trained RNNSearch model and optimize it using vanilla SGD, with mini-batch sizes set separately for En→Fr and De→En translation. Gradient clipping is used, with clipping value 1 for En→Fr and 10 for De→En. The initial learning rate is chosen by cross-validation on the dev set for each task and is halved at a fixed iteration interval.

An important factor we find in successfully training G is the combination of the adversarial objective with MLE. That is, randomly chosen mini-batches are trained with Adversarial-NMT, while the MLE principle is applied to the other mini-batches. Acting in this way significantly improves training stability, as also reported in other tasks such as language modeling [Lamb et al.2016] and neural dialogue generation [Li et al.2017]. We conjecture the reason is that MLE acts as a regularizer that guarantees smooth model updates, alleviating the negative effects brought by the high gradient estimation variance of the one-step Monte-Carlo sample in REINFORCE.
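A sketch of this mixed schedule, reusing the reinforce_update sketch from Section 3.3: each mini-batch is routed either to the adversarial (REINFORCE) update or to a standard MLE update; the routing probability adv_fraction is an assumed hyper-parameter, not a value reported here.

```python
import random

def train_epoch(G, D, batches, g_opt, baseline, adv_fraction=0.5):
    """Mix adversarial and MLE mini-batch updates for G (a sketch;
    adv_fraction is an assumed hyper-parameter)."""
    for src, tgt in batches:
        if random.random() < adv_fraction:
            # adversarial objective: REINFORCE with reward from D (Sec. 3.3)
            baseline = reinforce_update(G, D, g_opt, src, baseline)
        else:
            # MLE objective acts as a regularizer ensuring smooth updates
            loss = -G.sequence_log_prob(src, tgt).mean()
            g_opt.zero_grad()
            loss.backward()
            g_opt.step()
    return baseline
```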

As the first step, the CNN adversary D is pre-trained using data sampled from the RNNSearch model together with the ground-truth translations. After that, in the joint G-D training of Adversarial-NMT, the adversary D is optimized using Nesterov SGD [Nesterov1983]; the batch size and the initial learning rates (set separately for En→Fr and De→En) are chosen by validation on the dev set. The dimension of D's word embeddings is the same as that of G, and we fix the word embeddings during training. Batch normalization [Ioffe and Szegedy2015] is observed to significantly improve D's performance. Considering efficiency, all the negative training instances used in D's training are generated through beam search.

In generating model translations for evaluation, we set the beam width separately for En→Fr and De→En according to BLEU on the dev set. Translation quality is measured by the tokenized case-sensitive BLEU [Papineni et al.2002] score, computed with multi-bleu.perl (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl).

Figure 3: Dev set BLEU along the En→Fr Adversarial-NMT training process: (a) the same learning rate for D with different learning rates for G; (b) the same learning rate for G with different learning rates for D.

4.2 Result on En→Fr translation

In Table 1 we provide the En→Fr translation results of Adversarial-NMT, together with several strong NMT baselines, such as the well-representative attention-based NMT model RNNSearch [Bahdanau et al.2014]. In addition, to make the comparison comprehensive, we cover several well-acknowledged techniques whose effectiveness in improving En→Fr translation has been verified by previously published works, including: 1) using a large vocabulary to handle rare words [Jean et al.2015, Luong et al.2015]; 2) different training objectives [Shen et al.2016, Ranzato et al.2015, Bahdanau et al.2016], such as Minimum Risk Training (MRT) to directly optimize the evaluation measure [Shen et al.2016], and dual learning to enhance both the primal and dual tasks (e.g., En→Fr and Fr→En) [He et al.2016]; 3) improved inference processes such as beam search optimization [Wiseman and Rush2016] and UNK post-processing [Luong et al.2015, Jean et al.2015]; 4) leveraging additional monolingual data [Sennrich et al.2016, Zhang and Zong2016, He et al.2016].

From the table, we can clearly observe that Adversarial-NMT obtains satisfactory translation quality against the baseline systems. In particular, it even surpasses the performance of models with much larger vocabularies [Jean et al.2015], deeper layers [Luong et al.2015], a much larger monolingual training corpus [Sennrich et al.2016], and the objective of directly maximizing BLEU [Shen et al.2016]. In fact, as far as we know, Adversarial-NMT achieves a state-of-the-art result (34.78) on En→Fr translation for single-layer GRU sequence-to-sequence models trained with only a supervised bilingual corpus on the news-test 2014 test set.

Human Evaluation: Apart from the comparison based on objective BLEU scores, to better appraise the performance of our model we also use human judgement as a subjective measure. Specifically, we generate translations for 500 randomly selected English sentences from the En→Fr news-test 2014 dataset using both MRT [Shen et al.2016] and our Adversarial-NMT. Here MRT is chosen since it is a well-representative previous NMT method that maximizes a sequence-level objective, achieving satisfactory results among all single-layer models (i.e., 31.30 in Table 1). We then ask three human labelers to choose the better one from the two versions of each translated sentence. The evaluation is conducted on Amazon Mechanical Turk (https://www.mturk.com), with all workers being native English or French speakers.

 | Adversarial-NMT | MRT
evaluator 1 | 286 (57.2%) | 214 (42.8%)
evaluator 2 | 310 (62.0%) | 190 (38.0%)
evaluator 3 | 295 (59.0%) | 205 (41.0%)
Overall | 891 (59.4%) | 609 (40.6%)
Table 2: Human evaluations of Adversarial-NMT and MRT on English→French translation. "286 (57.2%)" means that evaluator 1 judged 286 (57.2%) out of 500 translations generated by Adversarial-NMT to be better than those of MRT.

The results in Table 2 show that 59.4% of the sentences are better translated by our Adversarial-NMT than by MRT [Shen et al.2016]. This human evaluation further demonstrates the effectiveness of our model and matches the expectation that Adversarial-NMT provides translations closer to what humans desire.

System | System Configurations | BLEU
Bahdanau et al. [NMT] | RNNSearch | 23.87 (reported in [Wiseman and Rush2016])
Ranzato et al. [PG4Sequence] | CNN encoder + Sequence level objective | 21.83
Bahdanau et al. [AC4SequencePrediction] | CNN encoder + Sequence level actor-critic objective | 22.45
Wiseman et al. [BSO] | RNNSearch + Beam search optimization | 25.48
Shen et al. [RL4NMT] | RNNSearch + Minimum Risk Training Objective | 25.84 (result from our implementation; we also reproduced their reported En→Fr result)
Adversarial-NMT (ours) | RNNSearch + Adversarial Training Objective | 26.98*
Adversarial-NMT (ours) | RNNSearch + Adversarial Training Objective + UNK Replace | 27.94*
Table 3: Different NMT systems' performances on De→En translation. The first block lists representative end-to-end NMT systems. The default setting is a single-layer GRU encoder-decoder model with the MLE training objective, i.e., the RNNSearch model proposed by Bahdanau et al. [NMT]. *: significantly better than RL4NMT (p < 0.05).

Adversarial Training: Slow or Fast: In this subsection we analyze how to set the pace for training the NMT model G and the adversary D so that they combat effectively. Specifically, for En→Fr translation, we inspect how dev set BLEU varies along the adversarial training process with different initial learning rates for G (shown in Figure 3(a)) and for D (shown in Figure 3(b)), conditioned on the other being fixed.

Overall, the two figures show that Adversarial-NMT is much more robust with regard to the pace at which D makes progress than that of G, since the three curves in Figure 3(b) grow in a similar pattern while the curves in Figure 3(a) differ drastically from each other. We conjecture the reason is that in Adversarial-NMT, the CNN-based D is powerful in classification tasks, especially when warm started with data sampled from RNNSearch. By comparison, the translation model G is relatively weak in providing qualified translations. Therefore, training G needs a carefully configured learning rate: a value that is too small leads to slower convergence (the blue line in Figure 3(a)), while one that is too large brings instability (the green line in Figure 3(a)); a proper learning rate in between lets G make fast yet stable progress along training.

4.3 Result on De→En translation

In Table 3 we provide the De→En translation results of Adversarial-NMT, compared with strong baselines such as RNNSearch [Bahdanau et al.2014] and MRT [Shen et al.2016].

Again, we can see from Table 3 that Adversarial-NMT performs best among all the models, achieving a 27.94 BLEU score, which is also a state-of-the-art result.

Example 1
Source sentence: ich weiß , dass wir es können , und soweit es mich betrifft ist das etwas , was die welt jetzt braucht .
Ground-truth translation: i know that we can , and as far as i 'm concerned , that 's something the world needs right now .
Translation by RNNSearch: i know we can do it , and as far as it 's in time , what the world needs now .
Translation by Adversarial-NMT: i know that we can , and as far as it is to be something that the world needs now .

Example 2
Source sentence: wir müssen verhindern , dass die menschen kenntnis erlangen von dingen , vor allem dann , wenn sie wahr sind .
Ground-truth translation: we have to prevent people from finding about things , especially when they are true .
Translation by RNNSearch: we need to prevent people who are able to know that people have to do , especially if they are true .
Translation by Adversarial-NMT: we need to prevent people who are able to know about things , especially if they are true .

Table 4: Case studies demonstrating the translation quality improvement brought by Adversarial-NMT. We provide two De→En translation examples, each with the source German sentence x, the ground-truth English sentence, and the translations respectively produced by RNNSearch and by Adversarial-NMT. For each model translation y', D(x, y') denotes the probability, calculated by the adversary D, that y' is the ground-truth translation of x, and BLEU denotes the per-sentence BLEU score of the translated sentence.

Effect of Adversarial Training: To better visualize and understand the advantages brought by adversarial training, we show several translation cases in Table 4. Concretely, we give two German→English translation examples, each including the source-language sentence x, the ground-truth translation sentence, and two NMT model translations, respectively produced by RNNSearch and by Adversarial-NMT, whose differing parts lead to different translation quality. For each model translation y', we also examine D(x, y'), i.e., the probability that the adversary D regards y' as a ground-truth translation, and the sentence-level BLEU score of y'.

Since the RNNSearch model acts as the warm start for training Adversarial-NMT, its translation can be viewed as the result of Adversarial-NMT at its initial phase. Therefore, from Table 4, we can observe:

  • As adversarial training goes on, the quality of the translation sentences output by G improves, both in terms of subjective impression and of BLEU score as a quantitative measure.

  • Correspondingly, the growth in translation quality deteriorates the adversary, as shown by D successfully recognizing the RNNSearch translations as machine-generated, whereas it misclassifies the translations from Adversarial-NMT as ground truth (i.e., produced by a human).

5 Conclusion

In this paper we propose a novel and intuitive training objective for NMT: to force the translation results to be as similar as possible to ground-truth translations generated by humans. This objective is achieved via an adversarial training framework called Adversarial-NMT, which complements the original NMT model with a CNN-based adversary. Adversarial-NMT adopts both a new network architecture to reflect the mapping within the (source, target) sentence pair and an efficient policy gradient algorithm to tackle the optimization difficulty brought by the discrete nature of machine translation. Experiments on both English→French and German→English translation tasks clearly demonstrate the effectiveness of this adversarial training method for NMT.

As to future work, with the hope of achieving new state-of-the-art performance for NMT systems, we plan to fully exploit the potential of Adversarial-NMT by combining it with other powerful methods listed in subsection 4.2, such as training with large vocabularies, the minimum-risk principle, and deep structures. We additionally would like to explore the feasibility of adversarial training for other text processing tasks, such as image captioning, dependency parsing, and sentiment classification.
