
Mutual Information and Diverse Decoding Improve Neural Machine Translation

Sequence-to-sequence neural translation models learn semantic and syntactic relations between sentence pairs by optimizing the likelihood of the target given the source, i.e., p(y|x), an objective that ignores other potentially useful sources of information. We introduce an alternative objective function for neural MT that maximizes the mutual information between the source and target sentences, modeling the bi-directional dependency of sources and targets. We implement the model with a simple re-ranking method, and also introduce a decoding algorithm that increases diversity in the N-best list produced by the first pass. Applied to the WMT German/English and French/English tasks, the proposed models offer a consistent performance boost on both standard LSTM and attention-based neural MT architectures.





1 Introduction

Sequence-to-sequence models for machine translation (Seq2Seq) [Sutskever et al.2014, Bahdanau et al.2014, Cho et al.2014, Kalchbrenner and Blunsom2013, Sennrich et al.2015a, Sennrich et al.2015b, Gulcehre et al.2015] are of growing interest for their capacity to learn semantic and syntactic relations between sequence pairs, capturing contextual dependencies in a more continuous way than phrase-based SMT approaches. Seq2Seq models require minimal domain knowledge, can be trained end-to-end, have a much smaller memory footprint than the large phrase tables needed for phrase-based SMT, and achieve state-of-the-art performance in large-scale tasks like English to French [Luong et al.2015b] and English to German [Luong et al.2015a, Jean et al.2014] translation.

Seq2Seq models are implemented as an encoder-decoder network, in which a source sequence input x is mapped (encoded) to a continuous vector representation, from which a target output y will be generated (decoded). The framework is optimized through maximizing the log-likelihood of observing the paired output y given x:

Loss = − log p(y|x)
While standard Seq2Seq models thus capture the unidirectional dependency from source to target, i.e., p(y|x), they ignore p(x|y), the dependency from the target to the source, which has long been an important feature in phrase-based translation [Och and Ney2002, Shen et al.2010]. Phrase-based systems that combine p(y|x), p(x|y) and other features like sentence length yield a significant performance boost.

We propose to incorporate this bi-directional dependency and model the maximum mutual information (MMI) between source and target in Seq2Seq models. As li2015diversity recently showed in the context of conversational response generation, the MMI-based objective function is equivalent to linearly combining log p(y|x) and log p(x|y). With a tuning weight λ, such a loss function can be written as:

ŷ = argmax_y { (1 − λ) log p(y|x) + λ log p(x|y) }     (2)
But as also discussed in li2015diversity, direct decoding from (2) is infeasible because computing p(x|y) cannot be done until the target has been fully generated. (As demonstrated in [Li et al.2015], Eq. 2 can be obtained from the MMI objective by applying Bayes' rule.)

To avoid this enormous search space, we propose to use a reranking approach to approximate the mutual information between source and target in neural machine translation models. We separately train two Seq2Seq models, one for p(y|x) and one for p(x|y). The p(y|x) model is used to generate N-best lists from the source sentence x. The lists are then reranked using the second term of the objective function, p(x|y).

Because reranking approaches are dependent on having a diverse N-best list to rerank, we also propose a diversity-promoting decoding model tailored to neural MT systems. We tested the mutual information objective function and the diversity-promoting decoding model on English→French, English→German and German→English translation tasks, using both standard LSTM settings and the more advanced attention-based settings that have recently been shown to result in higher performance.

The next section presents related work, followed by a background section (Section 3) introducing LSTM and attention machine translation models. Our proposed model is described in detail in Section 4, with datasets and experimental results in Section 5, followed by a discussion and conclusions.

2 Related Work

This paper draws on three prior lines of research: Seq2Seq models, modeling mutual information, and promoting translation diversity.

Seq2Seq Models

Seq2Seq models map source sequences to vector space representations, from which a target sequence is then generated. They yield good performance in a variety of NLP generation tasks including conversational response generation [Vinyals and Le2015, Serban et al.2015a, Li et al.2015], and parsing [Vinyals et al.2014, luong2015multi].

A neural machine translation system uses distributed representations to model the conditional probability of targets given sources, using two components, an encoder and a decoder. Kalchbrenner and Blunsom kalchbrenner2013recurrent used an encoding model akin to convolutional networks for encoding and standard hidden unit recurrent nets for decoding. Similar convolutional networks are used in [Meng et al.2015] for encoding. sutskever2014sequence,luong2015effective employed a stacking LSTM model for both encoding and decoding. bahdanau2014neural,jean2014using adopted bi-directional recurrent nets for the encoder.

Maximum Mutual Information

Maximum Mutual Information (MMI) was introduced in speech recognition [Bahl et al.1986] as a way of measuring the mutual dependence between inputs (acoustic feature vectors) and outputs (words) and improving discriminative training [Woodland and Povey2002]. li2015diversity show that MMI can solve an important problem in Seq2Seq conversational response generation: prior Seq2Seq models tended to generate highly generic, dull responses (e.g., I don't know) regardless of the inputs [Sordoni et al.2015, Vinyals and Le2015, Serban et al.2015b]. Li et al. li2015diversity show that modeling the mutual dependency between messages and responses promotes the diversity of response outputs.

Our goal, distinct from these previous uses of MMI, is to see whether the mutual information objective improves translation by bidirectionally modeling source-target dependencies. In that sense, our work is designed to incorporate into Seq2Seq models features that have proved useful in phrase-based MT, like the reverse translation probability or sentence length [Och and Ney2002, Shen et al.2010, Devlin et al.2014].

Generating Diverse Translations

Various algorithms have been proposed for generating diverse translations in phrase-based MT, including compact representations like lattices and hypergraphs [Macherey et al.2008, Tromble et al.2008, Kumar and Byrne2004], "traits" like translation length [Devlin and Matsoukas2012], bagging/boosting [Xiao et al.2013], or multiple systems [Cer et al.2013]. gimpel2013systematic,batra2012diverse produce diverse N-best lists by adding a dissimilarity function based on N-gram overlaps, distancing the current translation from already-generated ones by choosing translations that have high scores but are distinct from previous ones. While we draw on these intuitions, these existing diversity-promoting algorithms are tailored to phrase-based translation frameworks and are not easily transplanted to neural MT decoding, which requires batched computation.

3 Background: Neural Machine Translation

Neural machine translation models map the source x to a continuous vector representation, from which the target output y is generated.

3.1 LSTM Models

A long short-term memory model [Hochreiter and Schmidhuber1997] associates each time step with an input gate, a memory gate and an output gate, denoted respectively as i_t, f_t and o_t. Let e_t denote the vector for the current word w_t, h_t the vector computed by the LSTM model at time t by combining e_t and h_{t−1}, c_t the cell state vector at time t, and σ the sigmoid function. The vector representation h_t for each time step t is given by:

i_t = σ(W_i · [h_{t−1}, e_t])
f_t = σ(W_f · [h_{t−1}, e_t])
o_t = σ(W_o · [h_{t−1}, e_t])
l_t = tanh(W_l · [h_{t−1}, e_t])
c_t = f_t · c_{t−1} + i_t · l_t
h_t = o_t · tanh(c_t)

where W_i, W_f, W_o, W_l ∈ R^{K×2K}. The LSTM defines a distribution over outputs and sequentially predicts tokens using a softmax function:

p(y|x) = ∏_{t=1}^{n_y} p(y_t | x, y_1, …, y_{t−1}) = ∏_{t=1}^{n_y} exp(f(h_{t−1}, e_{y_t})) / Σ_{y'} exp(f(h_{t−1}, e_{y'}))

where f(h_{t−1}, e_{y_t}) denotes the activation function between h_{t−1} and e_{y_t}, h_{t−1} being the representation output from the LSTM at time t−1. Each sentence concludes with a special end-of-sentence symbol EOS. Commonly, the input and output each use different LSTMs with separate sets of compositional parameters to capture different compositional patterns. During decoding, the algorithm terminates when an EOS token is predicted.
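The gating equations above can be sketched in NumPy. The weight layout (one K×2K matrix per gate acting on the concatenation [h_{t−1}, e_t]) follows the definitions above; the toy dimensionality and the uniform [-0.1, 0.1] initialization mirror the training setup described later, but the concrete numbers are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(e_t, h_prev, c_prev, W):
    """One LSTM time step: gates i, f, o and candidate l from [h_{t-1}, e_t]."""
    x = np.concatenate([h_prev, e_t])   # [h_{t-1}, e_t], shape (2K,)
    i = sigmoid(W["i"] @ x)             # input gate
    f = sigmoid(W["f"] @ x)             # forget (memory) gate
    o = sigmoid(W["o"] @ x)             # output gate
    l = np.tanh(W["l"] @ x)             # candidate cell update
    c = f * c_prev + i * l              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

K = 4                                   # toy hidden size (the paper uses 1,000)
rng = np.random.default_rng(0)
W = {g: rng.uniform(-0.1, 0.1, (K, 2 * K)) for g in "iflo"}
h, c = lstm_step(rng.standard_normal(K), np.zeros(K), np.zeros(K), W)
```

Because h is the product of a sigmoid gate and a tanh of the cell state, its entries are always bounded in (−1, 1).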

3.2 Attention Models

Attention models adopt a look-back strategy that links the current decoding stage with input time steps to represent which portions of the input are most responsible for the current decoding state [Xu et al.2015, Luong et al.2015b, Bahdanau et al.2014].

Let H = {ĥ_1, ĥ_2, …, ĥ_{N_x}} be the collection of hidden vectors outputted from the LSTMs during encoding. Each element in H contains information about the input sequence, focusing on the parts surrounding each specific token. Let h_t be the LSTM output for decoding at time t. Attention models link the current-step decoding information, i.e., h_t, with each of the encoding representations ĥ_i using a weight variable a_t. a_t can be constructed from different scoring functions, such as the dot product between the two vectors, i.e., h_t^T · ĥ_i, a general model akin to a tensor operation, i.e., h_t^T · W · ĥ_i, and the concatenation model, i.e., U^T tanh(W · [h_t, ĥ_i]). The behavior of different attention scoring functions has been extensively studied in luong2015effective. For all experiments in this paper, we adopt the general strategy, where the relevance score between the current step of the decoding representation and an encoding representation is given by:

v_i = h_t^T · W · ĥ_i
a_i = exp(v_i) / Σ_{i'} exp(v_{i'})

The attention vector is created by averaging weights over all input time-steps:

m_t = Σ_{i=1}^{N_x} a_i · ĥ_i

Attention models predict subsequent tokens based on the combination of the last-step LSTM output h_t and the attention vector m_t:

h̃_t = tanh(W_c · [h_t, m_t])
p(y_t | x, y_{<t}) = softmax(W_s · h̃_t)

where W_c ∈ R^{K×2K} and W_s ∈ R^{V×K}, with V denoting vocabulary size. luong2015effective reported a significant performance boost by integrating h̃_{t−1} into the next step's LSTM hidden state computation (referred to as the input-feeding model), making the LSTM compositions in decoding as follows:

i_t = σ(W_i · [h_{t−1}, e_t, h̃_{t−1}])
f_t = σ(W_f · [h_{t−1}, e_t, h̃_{t−1}])
o_t = σ(W_o · [h_{t−1}, e_t, h̃_{t−1}])
l_t = tanh(W_l · [h_{t−1}, e_t, h̃_{t−1}])

where W_i, W_f, W_o, W_l ∈ R^{K×3K}. For the attention models implemented in this work, we adopt the input-feeding strategy.
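The general scoring function and the attention average above can be sketched as follows; the matrix shapes follow the definitions in this section, while the concrete dimensions and random values are illustrative assumptions.

```python
import numpy as np

def attention_step(h_t, H_enc, W_a):
    """General scoring v_i = h_t^T W hhat_i, softmax weights a_i, context m_t."""
    v = H_enc @ (W_a @ h_t)                # one relevance score per encoder state
    a = np.exp(v - v.max())                # softmax over input time-steps
    a /= a.sum()
    m_t = a @ H_enc                        # weighted average of encoder states
    return a, m_t

K = 5
rng = np.random.default_rng(1)
H_enc = rng.standard_normal((7, K))        # 7 encoder hidden states (toy values)
h_t = rng.standard_normal(K)               # current decoder state
W_a = rng.standard_normal((K, K))          # the "general" scoring matrix W
a, m_t = attention_step(h_t, H_enc, W_a)
```

The weights a form a distribution over the 7 input positions, and m_t lives in the same K-dimensional space as the encoder states.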

3.3 Unknown Word Replacements

One of the major issues in neural MT models is the computational complexity of the softmax function for target word prediction, which requires summing over all tokens in the vocabulary. Neural models tend to keep a shortlist of the 50,000-80,000 most frequent words and use an unknown (UNK) token to represent all infrequent tokens, which significantly impairs BLEU scores. Recent work has proposed ways to deal with this issue: [Luong et al.2015b] adopt a post-processing strategy based on an aligner from IBM models, while [Jean et al.2014] approximate the softmax function by selecting a small subset of the target vocabulary.

In this paper, we use a strategy similar to that of jean2014using, thus avoiding reliance on an external IBM-model word aligner. From the attention models, we obtain word alignments on the training dataset, from which a bilingual dictionary is extracted. At test time, we first generate target sequences. Once a translation is generated, we link the generated UNK tokens back to positions in the source input, and replace each UNK token with the translation of its corresponding source token using the pre-constructed dictionary.
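The replacement step can be sketched as follows. The helper name, the toy attention matrix, and the two-entry dictionary are all invented for illustration; the fallback of copying the source token when it is missing from the dictionary is a common choice but an assumption here.

```python
def replace_unk(target_tokens, source_tokens, attn, bilingual_dict):
    """attn[t][i] is the attention weight on source position i at target step t.
    Each UNK is replaced by the dictionary translation of its most-attended
    source token, falling back to copying the source token itself."""
    out = []
    for t, tok in enumerate(target_tokens):
        if tok == "UNK":
            src_pos = max(range(len(source_tokens)), key=lambda i: attn[t][i])
            src = source_tokens[src_pos]
            out.append(bilingual_dict.get(src, src))
        else:
            out.append(tok)
    return out

# Toy example: the first target token is UNK and attends mostly to "le".
attn = [[0.9, 0.1], [0.2, 0.8]]
fixed = replace_unk(["UNK", "cat"], ["le", "chat"], attn, {"le": "the"})
```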

Because the unknown-word replacement mechanism relies on automatic word alignments, which are not explicitly modeled in vanilla Seq2Seq models, it cannot be immediately applied to them. However, since unknown-word replacement can be viewed as a post-processing technique, we can apply a pre-trained attention model to any given translation: for Seq2Seq models, we first generate translations and then replace the UNK tokens within them using the pre-trained attention model.

4 Mutual Information via Reranking

Figure 1: Illustration of Standard Beam Search and proposed diversity promoting Beam Search.

As discussed in li2015diversity, direct decoding from (2) is infeasible since the second part, p(x|y), requires completely generating the target before it can be computed. We therefore use an approximation approach:

  1. Train p(y|x) and p(x|y) separately using vanilla Seq2Seq models or attention models.

  2. Generate N-best lists from p(y|x).

  3. Rerank the N-best list by linearly adding p(x|y).
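The three steps above can be sketched as a reranking function over an already-generated N-best list. The two log-probability tables stand in for the forward p(y|x) and reverse p(x|y) models; all scores are invented for illustration.

```python
def mmi_rerank(nbest, log_p_y_given_x, log_p_x_given_y, lam):
    """Rerank an N-best list by the MMI objective
    (1 - lam) * log p(y|x) + lam * log p(x|y)."""
    def score(y):
        return (1 - lam) * log_p_y_given_x[y] + lam * log_p_x_given_y[y]
    return sorted(nbest, key=score, reverse=True)

# Illustrative numbers standing in for the two models' scores.
nbest = ["generic translation", "faithful translation"]
fwd = {"generic translation": -1.0, "faithful translation": -1.2}  # p(y|x) prefers the generic one
rev = {"generic translation": -5.0, "faithful translation": -1.0}  # p(x|y) prefers the faithful one
reranked = mmi_rerank(nbest, fwd, rev, lam=0.5)
```

Here the reverse model heavily penalizes the candidate from which the source cannot be recovered, so the combined objective promotes the more faithful translation to the top.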

4.1 Standard Beam Search for N-best lists

N-best lists are generated using a beam search decoder with beam size K from the p(y|x) models. As illustrated in Figure 1, at time step t−1 in decoding, we keep a record of K hypotheses based on the score S(Y_{t−1}|x) = log p(y_1, …, y_{t−1}|x). As we move on to time step t, we expand each of the K hypotheses (denoted as Y^k_{t−1} = {y^k_1, …, y^k_{t−1}}, k ∈ [1, K]) by selecting the top K translations for the next token, denoted as y^{k,k'}_t, k' ∈ [1, K], leading to the construction of K × K new hypotheses:

[Y^k_{t−1}, y^{k,k'}_t],  k ∈ [1, K], k' ∈ [1, K]

The score for each of these hypotheses is computed as follows:

S(Y^k_{t−1}, y^{k,k'}_t | x) = S(Y^k_{t−1}|x) + log p(y^{k,k'}_t | x, Y^k_{t−1})

In a standard beam search model, the top K hypotheses are selected (from the K × K hypotheses computed in the last step) based on this score. The remaining hypotheses are ignored as we proceed to the next time step.

We set the minimum and maximum lengths to 0.75 and 1.5 times the length of the source. Beam size K is set to 200. To be specific, at each time step of decoding, we are presented with K × K word candidates. We first add all hypotheses in which an EOS token is generated at the current time step to the N-best list. Next we preserve the top K unfinished hypotheses and move on to the next time step. We therefore keep the batch size constant at 200 as completed hypotheses are taken down, by adding in more unfinished hypotheses. This makes the final N-best list for each input much larger than the beam size (for example, on the development set of the English-German WMT'14 task, each input has an average of 2,500 candidates in its N-best list).
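The expand-score-prune loop with EOS harvesting can be sketched as follows. The toy next-token model and its probabilities are invented for illustration; the real decoder scores tokens with the LSTM softmax described in Section 3.

```python
import heapq
import math

def beam_search(step_logprobs, K, max_len):
    """step_logprobs(prefix) -> {token: log p(token | x, prefix)} for a toy model."""
    beam = [((), 0.0)]                 # K unfinished hypotheses (prefix, score)
    nbest = []                         # finished hypotheses harvested at EOS
    for _ in range(max_len):
        expanded = []
        for prefix, score in beam:
            for tok, lp in step_logprobs(prefix).items():
                hyp = (prefix + (tok,), score + lp)
                # Hypotheses emitting EOS go straight to the N-best list,
                # freeing beam slots for unfinished hypotheses.
                (nbest if tok == "EOS" else expanded).append(hyp)
        beam = heapq.nlargest(K, expanded, key=lambda h: h[1])
        if not beam:
            break
    return sorted(nbest, key=lambda h: h[1], reverse=True)

def toy_model(prefix):
    # Invented two-word toy language for illustration.
    if len(prefix) < 2:
        return {"he": math.log(0.6), "it": math.log(0.3), "EOS": math.log(0.1)}
    return {"EOS": 0.0}

nbest = beam_search(toy_model, K=2, max_len=3)
```

Because finished hypotheses are harvested at every step rather than counted against the beam, the returned N-best list is larger than K, mirroring the behaviour described above.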

4.2 Generating a Diverse N-best List

Unfortunately, the N-best lists outputted by standard beam search are a poor surrogate for the entire search space [Finkel et al.2006, Huang2008]. The beam search algorithm can keep only a small proportion of candidates in the search space, and most of the generated translations in the N-best list are similar, differing only by punctuation or minor morphological variations, with most of the words overlapping. Because this lack of diversity will significantly decrease the impact of our reranking process, it is important to find a way to generate a more diverse N-best list.

We propose to change the way S(Y^k_{t−1}, y^{k,k'}_t | x) is computed in an attempt to promote diversity, as shown in Figure 1. For each of the K hypotheses (he is and it is in the figure), we generate the top K translations, y^{k,k'}_t, k' ∈ [1, K], as in the standard beam search model. Next we rank the translated tokens generated from the same parental hypothesis based on p(y^{k,k'}_t | x, Y^k_{t−1}) in descending order: he is ranks first among he is and he has, and he has ranks second; similarly for it is and it has.

Next we rewrite the score for [Y^k_{t−1}, y^{k,k'}_t] by subtracting an additional term γ k', where k' denotes the ranking of the current hypothesis among its siblings, which is first for he is and it is, second for he has and it has:

Ŝ(Y^k_{t−1}, y^{k,k'}_t | x) = S(Y^k_{t−1}, y^{k,k'}_t | x) − γ k'

The top K hypotheses are selected based on Ŝ as we move on to the next time step. By subtracting the additional term γ k', the model punishes low-ranked hypotheses among siblings (hypotheses descended from the same parent). When we compare newly generated hypotheses descended from different ancestors, the model gives more credit to the top hypotheses from each ancestor. For instance, even though the original score for it is is lower than that of he has, the model favors the former because the latter is more severely punished by the intra-sibling ranking term γ k'. The model thus generally favors hypotheses from diverse parents, leading to a more diverse N-best list.

The proposed model is straightforwardly implemented with a minor adjustment to the standard beam search model. (Decoding for neural MT models with a large batch size can be expensive due to the softmax word-prediction function. The proposed model supports batched decoding on a GPU, significantly speeding up decoding compared with other diversity-fostering models tailored to phrase-based MT systems.)
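A single step of the modified search can be sketched as follows, reusing the running he is / it is example. The toy next-token distribution and the penalty value γ = 0.5 are illustrative assumptions.

```python
import math

def diverse_beam_step(beam, step_logprobs, K, gamma):
    """One step of the diversity-promoting beam search: children of the same
    parent are ranked among their siblings, and the k'-th ranked sibling is
    penalised by gamma * k' before the global top-K selection."""
    candidates = []
    for prefix, s in beam:
        siblings = sorted(step_logprobs(prefix).items(),
                          key=lambda kv: kv[1], reverse=True)
        for rank, (tok, lp) in enumerate(siblings[:K], start=1):
            candidates.append((prefix + (tok,), s + lp - gamma * rank))
    return sorted(candidates, key=lambda h: h[1], reverse=True)[:K]

def toy_model(prefix):
    # Invented next-token distribution for illustration.
    return {"is": math.log(0.6), "has": math.log(0.4)}

beam = [(("he",), math.log(0.6)), (("it",), math.log(0.4))]
diverse = diverse_beam_step(beam, toy_model, K=2, gamma=0.5)
baseline = diverse_beam_step(beam, toy_model, K=2, gamma=0.0)
```

With γ = 0 the step reduces to standard beam search and keeps both children of the stronger parent he; with γ = 0.5 the second-ranked sibling he has is punished enough that it is survives instead, so the surviving hypotheses descend from two different parents.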

We employ the diversity evaluation metrics in [Li et al.2015] to evaluate the degree of diversity of the N-best lists: the average number of distinct unigrams (distinct-1) and bigrams (distinct-2) in the N-best list for each source sentence, scaled by the total number of tokens. With the diversity-promoting model, using γ tuned on the development set based on BLEU score, both distinct-1 and distinct-2 increase for English-German translation. Similar phenomena are observed on the English-French translation tasks; details are omitted for brevity.
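The distinct-n metric described above can be sketched directly; the two toy N-best lists are invented to show how near-duplicate lists score lower than varied ones.

```python
def distinct_n(hypotheses, n):
    """distinct-n: number of distinct n-grams in the N-best list for one
    source sentence, scaled by the total number of tokens generated."""
    ngrams, total_tokens = set(), 0
    for hyp in hypotheses:
        total_tokens += len(hyp)
        ngrams.update(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    return len(ngrams) / total_tokens

dull = [["he", "is", "here"], ["he", "is", "here"]]        # near-duplicate list
diverse = [["he", "is", "here"], ["she", "was", "there"]]  # more varied list
```

For the near-duplicate list, distinct-1 is 3/6 = 0.5; for the varied list it is 6/6 = 1.0.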

4.3 Reranking

The generated N-best list is then reranked by linearly combining log p(y|x) with log p(x|y). The score of the source given each generated translation can be immediately computed from the previously trained p(x|y) model.

In addition to p(x|y), we also consider p(y), the average language-model probability of the target trained from monolingual data. It is worth noting that integrating p(x|y) and p(y) into reranking is not new; it has long been employed in noisy-channel models in standard MT. In the neural MT literature, recent work has demonstrated the effectiveness of reranking with a language model [Gulcehre et al.2015].

We also consider an additional term that takes into account the length of the target (denoted as l_T) in decoding. We thus linearly combine the three parts, making the final ranking score for a given target candidate y as follows:

Score(y) = log p(y|x) + λ log p(x|y) + β log p(y) + η l_T

We optimize λ, β and η to maximize BLEU score [Papineni et al.2002] on the development set using MERT [Och2003].
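The combined score is a plain linear function of the four feature values. In the paper the weights are tuned with MERT on the development set; the weights and log-probabilities below are illustrative only.

```python
def final_score(log_pyx, log_pxy, log_py, target_len, lam, beta, eta):
    """Score(y) = log p(y|x) + lam*log p(x|y) + beta*log p(y) + eta*l_T."""
    return log_pyx + lam * log_pxy + beta * log_py + eta * target_len

s = final_score(log_pyx=-10.0, log_pxy=-12.0, log_py=-20.0, target_len=8,
                lam=0.5, beta=0.2, eta=0.1)
```

With these toy values: −10 + 0.5·(−12) + 0.2·(−20) + 0.1·8 = −19.2.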

5 Experiments

Our models are trained on the WMT'14 training dataset containing 4.5 million pairs for English-German and German-English translation, and 12 million pairs for English-French translation. For English-German translation, we limit our vocabularies to the top 50K most frequent words for both languages. For English-French translation, we keep the top 200K most frequent words for the source language and 80K for the target language. Words not in the vocabulary are mapped to the universal unknown (UNK) token.

For English-German and German-English translation, we use newstest2013 (3,000 sentence pairs) as the development set, and translation performance is reported in BLEU [Papineni et al.2002] on newstest2014 (2,737 sentence pairs). For English-French translation, we concatenate news-test-2012 and news-test-2013 to make a development set (6,003 pairs in total) and evaluate the models on news-test-2014 (3,003 pairs), as in [Luong et al.2015a]. All texts are tokenized with tokenizer.perl and BLEU scores are computed with multi-bleu.perl.

5.1 Training Details for p(y|x) and p(x|y)

We trained both standard Seq2Seq models and attention models. p(y|x) was trained following the standard training protocols described in [Sutskever et al.2014]; p(x|y) is trained identically but with sources and targets swapped.

We adopt a deep structure with four LSTM layers for encoding and four LSTM layers for decoding, each consisting of a different set of parameters. We follow the detailed protocols from luong2015effective: each LSTM layer consists of 1,000 hidden neurons, and the dimensionality of word embeddings is set to 1,000. Other training details: LSTM parameters and word embeddings are initialized from a uniform distribution between [-0.1, 0.1]; for English-German translation, we run 12 epochs in total and, after 8 epochs, start halving the learning rate after each epoch; for English-French translation, the total number of epochs is set to 8 and we start halving the learning rate after 5 epochs. Batch size is set to 128; gradient clipping is adopted by scaling gradients when their norm exceeds a threshold of 5. Inputs are reversed.
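The gradient-clipping rule above (rescale when the global norm exceeds 5) can be sketched as follows; the gradient values are toy numbers chosen so the norm is easy to verify.

```python
import math
import numpy as np

def clip_by_global_norm(grads, threshold=5.0):
    """Rescale all gradients when their global L2 norm exceeds the threshold,
    leaving them untouched otherwise (threshold 5, as in the text above)."""
    norm = math.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if norm > threshold:
        grads = [g * (threshold / norm) for g in grads]
    return grads, norm

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # global norm: sqrt(169) = 13
clipped, norm = clip_by_global_norm(grads)
```

After clipping, the global norm of the rescaled gradients equals the threshold while their directions are preserved.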

Our implementation on a single GPU (Tesla K40m: 1 Kepler GK110B, 2,880 CUDA cores) processes approximately 800-1,200 tokens per second. Training on the English-German dataset (4.5 million pairs) takes roughly 12-15 days. For the English-French dataset, comprised of 12 million pairs, training takes roughly 4-6 weeks.

Model Features BLEU scores
Standard p(y|x) 13.2
Standard p(y|x)+Length 13.6 (+0.4)
Standard p(y|x)+p(x|y)+Length 15.0 (+1.4)
Standard p(y|x)+p(x|y)+p(y)+Length 15.4 (+0.4)
Standard p(y|x)+p(x|y)+p(y)+Length+Diver decoding 15.8 (+0.4)
+2.6 in total
Standard+UnkRep p(y|x) 14.7
Standard+UnkRep p(y|x)+Length 15.2 (+0.5)
Standard+UnkRep p(y|x)+p(x|y)+Length 16.3 (+1.1)
Standard+UnkRep p(y|x)+p(x|y)+p(y)+Length 16.7 (+0.4)
Standard+UnkRep p(y|x)+p(x|y)+p(y)+Length+Diver decoding 17.3 (+0.6)
+2.6 in total
Attention+UnkRep p(y|x) 20.5
Attention+UnkRep p(y|x)+Length 20.9 (+0.4)
Attention+UnkRep p(y|x)+p(x|y)+Length 21.8 (+0.9)
Attention+UnkRep p(y|x)+p(x|y)+p(y)+Length 22.1 (+0.3)
Attention+UnkRep p(y|x)+p(x|y)+p(y)+Length+Diver decoding 22.6 (+0.5)
+2.1 in total
Jean et al., 2015 (without ensemble) 19.4
Jean et al., 2015 (with ensemble) 21.6
luong2015effective (with UnkRep, without ensemble) 20.9
luong2015effective (with UnkRep, with ensemble) 23.0
Table 1: BLEU scores from different models on the WMT'14 English-German task. UnkRep denotes applying the unknown-word replacement strategy; Diver decoding indicates that the diversity-promoting decoding model is adopted. Baseline performances are reprinted from Jean et al. (2014) and Luong et al. (2015a).
Model Features BLEU scores
Standard p(y|x) 29.0
Standard p(y|x)+Length 29.7 (+0.7)
Standard p(y|x)+p(x|y)+Length 31.2 (+1.5)
Standard p(y|x)+p(x|y)+p(y)+Length 31.7 (+0.5)
Standard p(y|x)+p(x|y)+p(y)+Length+Diver decoding 32.2 (+0.5)
+3.2 in total
Standard+UnkRep p(y|x) 31.0
Standard+UnkRep p(y|x)+Length 31.5 (+0.5)
Standard+UnkRep p(y|x)+p(x|y)+Length 32.9 (+1.4)
Standard+UnkRep p(y|x)+p(x|y)+p(y)+Length 33.3 (+0.4)
Standard+UnkRep p(y|x)+p(x|y)+p(y)+Length+Diver decoding 33.6 (+0.3)
+2.6 in total
Attention+UnkRep p(y|x) 33.4
Attention+UnkRep p(y|x)+Length 34.3 (+0.9)
Attention+UnkRep p(y|x)+p(x|y)+Length 35.2 (+0.9)
Attention+UnkRep p(y|x)+p(x|y)+p(y)+Length 35.7 (+0.5)
Attention+UnkRep p(y|x)+p(x|y)+p(y)+Length+Diver decoding 36.3 (+0.6)
+2.9 in total
LSTM (Google) (without ensemble) 30.6
LSTM (Google) (with ensemble) 33.0
luong2015addressing, UnkRep (without ensemble) 32.7
luong2015addressing, UnkRep (with ensemble) 37.5
Table 2: BLEU scores from different models on the WMT'14 English-French task. Google is the LSTM-based model proposed in Sutskever et al. (2014); Luong et al. (2015) extends the Google model with unknown-token replacement.

5.2 Training p(y) from Monolingual Data

We train single-layer LSTM recurrent models with 500 units for German and French, respectively, using monolingual data. We use News Crawl corpora from WMT'13 as additional training data for the monolingual language models, taking a subset of the original dataset that contains roughly 50-60 million sentences. Following [Gulcehre et al.2015, Sennrich et al.2015a], we remove sentences containing more unknown words than a threshold allows, based on the vocabulary constructed from the parallel datasets. We adopt protocols similar to those used for the Seq2Seq models, such as gradient clipping and mini-batching.

5.3 English-German Results

We report progressive performance as we add in more features for reranking. Results for different models on the WMT'14 English-German translation task are shown in Table 1. Among all the features, the reverse probability from mutual information (i.e., p(x|y)) yields the most significant performance boost: +1.4 and +1.1 for standard Seq2Seq models without and with unknown-word replacement, and +0.9 for attention models. (Target length has long proved to be one of the most important features in phrase-based MT due to the BLEU score's sensitivity to target length. However, we do not observe as large a performance boost here as in phrase-based MT, because during decoding target length is already strictly constrained: as described in Section 4.1, we only consider candidates of lengths between 0.75 and 1.5 times that of the source.) In line with [Gulcehre et al.2015, Sennrich et al.2015a], we observe a consistent performance boost from the language model.

We see the benefit of our diverse N-best list by comparing the mutual+diversity models with the mutual-only models. On top of the improvements from reranking the standard beam search output, the diversity models introduce additional gains of +0.4, +0.6 and +0.5, bringing the total gains roughly up to +2.6, +2.6 and +2.1 for the different models. The unknown-token replacement technique yields significant gains, in line with observations from jean2014using and luong2015effective.

We compare our English-German system with various others: (1) the end-to-end neural MT system from jean2014using, which uses a large vocabulary size; (2) models from luong2015effective, which combine different attention models. For the models described in [Jean et al.2014] and [Luong et al.2015a], we reprint their results from both the single-model setting and the ensemble setting, in which a set of (usually 8) neural models that differ in random initialization and minibatch order are trained and jointly contribute to the decoding process. The ensemble procedure is known to result in improved performance [Luong et al.2015a, Jean et al.2014, Sutskever et al.2014].

Note that the reported results for the standard Seq2Seq models and attention models in Table 1 (those without mutual information) come from models identical in structure to the corresponding models described in [Luong et al.2015a], and achieve similar performance (13.2 vs. 14.0 for standard Seq2Seq models and 20.5 vs. 20.7 for attention models). Due to time and computational constraints, we did not implement an ensemble mechanism, so our results are not comparable to the ensemble results in these papers.

5.4 English-French Results

Results on the WMT'14 English-French dataset are shown in Table 2, along with results reprinted from sutskever2014sequence,luong2015addressing. We again observe that applying mutual information yields better performance than the corresponding standard neural MT models.

Relative to the English-German dataset, the English-French translation task shows a larger gap between our new model and the vanilla models in which reranking information is not considered; our models yield boosts of up to +3.2, +2.6 and +2.9 BLEU over standard neural models without and with unknown-word replacement, and attention models, respectively.

6 Discussion

In this paper, we introduce a new objective for neural MT based on the mutual dependency between the source and target sentences, inspired by recent work in neural conversation generation [Li et al.2015]. We build an approximate implementation of our model using reranking, and then, to make reranking more powerful, we introduce a new decoding method that promotes diversity in the first-pass N-best list. On English→French and English→German translation tasks, we show that the neural machine translation models trained using the proposed method perform better than the corresponding standard models, and that both the mutual information objective and the diversity-increasing decoding method contribute to the performance boost.

The new models have the advantages of easy implementation, with sources and targets interchanged, and of offering a general solution that can be integrated into any neural generation model with minor adjustments. Indeed, our diversity-enhancing decoder can be applied to generate more diverse N-best lists for any NLP reranking task. Finding a way to introduce mutual-information-based decoding directly into a first-pass decoder without reranking is a natural direction for future work.