On Using Monolingual Corpora in Neural Machine Translation

03/11/2015 · Caglar Gulcehre, et al.

Recent work on end-to-end neural network-based architectures for machine translation has shown promising results for En-Fr and En-De translation. Arguably, one of the major factors behind this success has been the availability of high quality parallel corpora. In this work, we investigate how to leverage abundant monolingual corpora for neural machine translation. Compared to a phrase-based and hierarchical baseline, we obtain up to 1.96 BLEU improvement on the low-resource language pair Turkish-English, and 1.59 BLEU on the focused domain task of Chinese-English chat messages. While our method was initially targeted toward such tasks with less parallel data, we show that it also extends to high resource languages such as Cs-En and De-En where we obtain an improvement of 0.39 and 0.47 BLEU scores over the neural machine translation baselines, respectively.


Introduction

Neural machine translation (NMT) is a novel approach to machine translation that has shown promising results [Kalchbrenner2013, Sutskever2014, Cho2014, bahdanau2014neural]. Until recently, the application of neural networks to machine translation was restricted to extending standard machine translation tools for rescoring translation hypotheses or re-ranking n-best lists (see, e.g., [Schwenk2012, Schwenk2007]). In contrast, it has been shown that it is possible to build a competitive translation system for English-French and English-German using an end-to-end neural network architecture [Sutskever2014, Jean2014] (also see the background section below). Arguably, a large part of the recent success of these methods has been due to the availability of large amounts of high-quality, sentence-aligned corpora.

For low-resource language pairs, or for tasks with heavy domain restrictions, such sentence-aligned corpora may be scarce. Monolingual corpora, on the other hand, are almost universally available. Despite being “unlabeled”, monolingual corpora still exhibit rich linguistic structure that may be useful for translation tasks, which presents an opportunity to use them to give hints to an NMT system.

In this work, we present a way to effectively integrate a language model (LM) trained only on monolingual data (in the target language) into an NMT system. We provide experimental results showing that incorporating monolingual corpora can improve a translation system on a low-resource language pair (Turkish-English) and on a domain-restricted translation problem (Chinese-English SMS chat). In addition, we show that these methods improve performance on the relatively high-resource German-English (De-En) and Czech-English (Cs-En) translation tasks.

In the following section, we review recent work in neural machine translation. We then present our basic model architecture, describe our shallow and deep fusion approaches, introduce our datasets, and finally report our main experimental results.

Background: Neural Machine Translation

Statistical machine translation (SMT) systems maximize the conditional probability p(y | x) of a correct target translation y given a source sentence x. This is done by separately maximizing a language model and an (inverse) translation model component using Bayes’ rule:

p(y | x) ∝ p(x | y) p(y).

This decomposition into a language model and a translation model is meant to make full use of available corpora: monolingual corpora for fitting the language model and parallel corpora for the translation model. In reality, however, SMT systems tend to model p(y | x) directly by linearly combining multiple features using a so-called log-linear model:

log p(y | x) = ∑_j f_j(x, y) + C,

where f_j is the j-th feature based on both or either of the source and target sentences, and C is a normalization constant which is often ignored. These features include, for instance, pair-wise statistics between two sentences/phrases. The log-linear model is fitted to data, in most cases, by maximizing an automatic evaluation metric such as BLEU rather than the actual conditional probability.

Neural machine translation, on the other hand, aims at directly optimizing p(y | x), including the feature extraction as well as the normalization constant, with a single neural network. This is typically done under the encoder-decoder framework [Kalchbrenner2013, Cho2014, Sutskever2014] consisting of two neural networks. The first network encodes the source sentence x into a continuous-space representation from which the decoder produces the target translation sentence. By using RNN architectures equipped to learn long-term dependencies, such as gated recurrent units (GRU) or long short-term memory (LSTM), the whole system can be trained end-to-end [Cho2014, Sutskever2014]. Once the model has learned the conditional distribution, or translation model, given a source sentence we can find a translation that approximately maximizes the conditional probability using, for instance, a beam search algorithm.
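To make the decoding step above concrete, the following is a minimal sketch of beam search over a generic next-word scorer. It is not the decoder implementation used in the paper; the `score_next` helper, the special-token ids and the beam width are placeholder assumptions.

```python
import numpy as np

def beam_search(score_next, bos_id, eos_id, beam_width=5, max_len=50):
    """Approximately maximize log p(y | x) with beam search.

    `score_next(prefix)` is assumed to return a vector of log-probabilities
    over the target vocabulary for the next word, given the current prefix
    (and, implicitly, the encoded source sentence).
    """
    # Each hypothesis is (token_ids, cumulative_log_prob).
    beams = [([bos_id], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, logp in beams:
            log_probs = score_next(prefix)               # shape: (vocab_size,)
            top_k = np.argsort(log_probs)[-beam_width:]  # best next words
            for w in top_k:
                candidates.append((prefix + [int(w)], logp + float(log_probs[w])))
        # Keep the best `beam_width` partial translations.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, logp in candidates[:beam_width]:
            (finished if prefix[-1] == eos_id else beams).append((prefix, logp))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])
```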

Model Description

We use the model recently proposed by [bahdanau2014neural], which learns to jointly (soft-)align and translate, as the baseline neural machine translation system in this paper. Here we describe in detail this model, to which we refer as “NMT”.

The encoder of the NMT is a bidirectional RNN consisting of forward and backward RNNs [Schuster1997]. The forward RNN reads the input sequence/sentence x = (x_1, ..., x_T) in the forward direction, producing a sequence of hidden states (→h_1, ..., →h_T). The backward RNN reads the sentence in the opposite direction and outputs (←h_1, ..., ←h_T). We concatenate the pair of hidden states at each time step to build a sequence of annotation vectors (h_1, ..., h_T), where h_j = [→h_j; ←h_j]. Each annotation vector h_j encodes information about the j-th word with respect to all the other surrounding words in the sentence.

In our decoder, which we construct with a single-layer RNN, at each timestep a soft-alignment mechanism first decides which annotation vectors are most relevant. The relevance weight of the j-th annotation vector for the t-th target word is computed by a feedforward neural network a that takes as input h_j, the previous decoder hidden state s_{t-1} and the previous output y_{t-1}:

e_tj = a(s_{t-1}, h_j, y_{t-1}).

The outputs e_tj are normalized over the sequence of annotation vectors so that they sum to 1:

α_tj = exp(e_tj) / ∑_{k=1}^T exp(e_tk),    (1)

and we call α_tj a relevance score, or an alignment weight, of the j-th annotation vector. The relevance scores are used to compute the context vector of the t-th word in the translation:

c_t = ∑_{j=1}^T α_tj h_j.

Then the decoder’s hidden state at time t is computed based on the previous hidden state s_{t-1}, the context vector c_t and the previously translated word y_{t-1}:

s_t = f_r(s_{t-1}, c_t, y_{t-1}),    (2)

where f_r is the gated recurrent unit [Cho2014]. We use a deep output layer [Pascanu2014rec] to compute the conditional distribution over words:

p(y_t | y_{<t}, x) ∝ exp( y_t^T ( W_o f_o(s_t, y_{t-1}, c_t) + b_o ) ),    (3)

where y_t is a one-hot encoded vector indicating one of the words in the target vocabulary, W_o is a learned weight matrix and b_o is a bias. f_o is a single-layer feedforward neural network with a two-way maxout non-linearity [Goodfellow2013].

The whole model, including both the encoder and decoder, is jointly trained to maximize the (conditional) log-likelihood of the bilingual training corpus:

max_θ (1/N) ∑_{n=1}^N log p_θ(y^(n) | x^(n)),

where the training corpus is a set of (x^(n), y^(n)) pairs and θ denotes the set of all tunable parameters.
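To illustrate the soft-alignment mechanism described above, here is a minimal numpy sketch of one decoder step. The energy function and the recurrent update are reduced to simple tanh layers rather than the maxout deep output and GRU of Eqs. (2)–(3), and all dimensions and weight names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sizes (placeholders, not the settings used in the paper).
d_h, d_s, d_e, V = 8, 6, 4, 20   # annotation (per direction), state, embedding, vocab

# Randomly initialized placeholder parameters.
W_a = rng.normal(size=(d_s, d_s))                    # projects s_{t-1} for the energy MLP
U_a = rng.normal(size=(2 * d_h, d_s))                # projects each annotation h_j
v_a = rng.normal(size=d_s)
W_s = rng.normal(size=(d_s + 2 * d_h + d_e, d_s))    # simplified recurrent update
W_o = rng.normal(size=(V, d_s))                      # output projection
b_o = np.zeros(V)

def decoder_step(s_prev, y_prev_emb, H):
    """One step of the attention decoder.

    s_prev:     previous decoder hidden state, shape (d_s,)
    y_prev_emb: embedding of the previously emitted word, shape (d_e,)
    H:          annotation vectors [h_1, ..., h_T], shape (T, 2 * d_h)
    """
    # Relevance scores e_tj = a(s_{t-1}, h_j, y_{t-1}); here a small tanh MLP
    # over s_{t-1} and h_j (y_{t-1} is omitted for brevity).
    energies = np.tanh(s_prev @ W_a + H @ U_a) @ v_a   # shape (T,)
    alpha = softmax(energies)                          # alignment weights, Eq. (1)
    c_t = alpha @ H                                    # context vector

    # Simplified recurrent update standing in for the GRU of Eq. (2).
    s_t = np.tanh(np.concatenate([s_prev, c_t, y_prev_emb]) @ W_s)

    # Simplified stand-in for the deep output layer of Eq. (3).
    p_y = softmax(W_o @ s_t + b_o)
    return s_t, c_t, alpha, p_y

# Example: a source sentence of T = 7 words.
s, c, a, p = decoder_step(np.zeros(d_s), np.zeros(d_e), rng.normal(size=(7, 2 * d_h)))
```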

Integrating Language Model into the Decoder

In this paper, we propose two alternatives for integrating a language model into a neural machine translation system, which we refer to as shallow fusion and deep fusion and describe in the following two subsections. Without loss of generality, we use a language model based on recurrent neural networks (RNNLM, [mikolov2011rnnlm]), which is equivalent to the decoder described in the previous section except that it is not biased by a context vector (i.e., c_t = 0 in Eqs. (2)–(3)). In the sections that follow, we assume that both an NMT model (trained on parallel corpora) and a recurrent neural network language model (RNNLM, trained on larger monolingual corpora) have been pre-trained separately before being integrated. We denote the hidden state of the RNNLM at time t by s_t^LM.

Shallow Fusion

Shallow fusion is analogous to how language models are used in the decoder of a usual SMT system [koehn2010]. At each time step, the translation model proposes a set of candidate words. The candidates are then scored according to the weighted sum of the scores given by the translation model and the language model.

More specifically, at each time step t, the translation model (in this case, the NMT) computes the score of every possible next word for each hypothesis in the current set of hypotheses. Each score is the sum of the score of the hypothesis and the score given by the NMT to the next word. All these new hypotheses (a hypothesis from the previous timestep with a next word appended at the end) are then sorted according to their respective scores, and the top ones are selected as candidates. We then rescore these hypotheses with the weighted sum of the scores by the NMT and RNNLM, where we only need to recompute the score of the “new word” at the end of each candidate hypothesis. The score of the new word k is computed by

log p(y_t = k) = log p_TM(y_t = k) + β log p_LM(y_t = k),    (4)

where β is a hyper-parameter that needs to be tuned to maximize the translation performance on a development set. See Fig. 1 (a) for an illustration.
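The rescoring step can be summarized in a few lines. This sketch assumes the beam search already carries the cumulative NMT log-probability for each hypothesis; `lm_logp_last_word` is a hypothetical helper that returns the RNNLM log-probability of the newly appended word.

```python
def shallow_fusion_rescore(candidates, lm_logp_last_word, beta):
    """Re-rank candidates with the weighted sum of NMT and LM scores (Eq. (4)).

    candidates:        list of (prefix, nmt_score); nmt_score is the cumulative
                       NMT log-probability of the hypothesis, including its last word.
    lm_logp_last_word: function prefix -> log p_LM of the last word of `prefix`
                       given the preceding words.
    beta:              LM weight tuned on a development set.
    """
    rescored = [(prefix, score + beta * lm_logp_last_word(prefix))
                for prefix, score in candidates]
    return sorted(rescored, key=lambda c: c[1], reverse=True)
```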

[Figure 1: Graphical illustrations of the proposed fusion methods. (a) Shallow fusion; (b) deep fusion.]

Deep Fusion

In deep fusion, we integrate the RNNLM and the decoder of the NMT by concatenating their hidden states (see Fig. 1 (b)). The model is then finetuned to use the hidden states from both of these models when computing the output probability of the next word (see Eq. (3)). Unlike in the vanilla NMT (without any language model component), the hidden layer of the deep output takes as input the hidden state of the RNNLM in addition to that of the NMT, the previous word and the context, such that

p(y_t | y_{<t}, x) ∝ exp( y_t^T ( W_o f_o(s_t^LM, s_t^TM, y_{t-1}, c_t) + b_o ) ),    (5)

where we use the superscripts LM and TM to denote the hidden states of the RNNLM and NMT, respectively. During finetuning, we tune only the parameters used to parameterize the output in Eq. (5). This is to ensure that the structure learned by the LM from monolingual corpora is not overwritten. It is possible to also use monolingual corpora while finetuning all the parameters, but in this paper we alter only the output parameters during finetuning.

Balancing the LM and TM

In order for the decoder to flexibly balance the input from the LM and TM, we augment the decoder with a “controller” mechanism. The need to flexibly balance the signals arises depending on the word being translated. For instance, in the case of Zh-En, there are no Chinese words that correspond to articles in English, in which case the LM may be more informative. On the other hand, if a noun is to be translated, it may be better to ignore any signal from the LM, as it may prevent the decoder from choosing the correct translation. Intuitively, this mechanism helps the model dynamically weight the different models depending on the word being translated.

The controller mechanism is implemented as a function taking the hidden state of the LM as input and computing

g_t = σ( v_g^T s_t^LM + b_g ),    (6)

where σ is the logistic sigmoid function, and v_g and b_g are learned parameters. The output of the controller is then multiplied with the hidden state of the LM. This lets the decoder use the signal from the TM fully, while the controller modulates the magnitude of the LM signal. In all our experiments, we empirically found it better to initialize the bias b_g to a small negative value, so that g_t starts out small on average and the decoder relies on the LM only when it is deemed necessary.
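Putting deep fusion and the controller together, the following numpy sketch computes the next-word distribution from the concatenated (and gated) hidden states. The layer shapes, the tanh hidden layer standing in for the maxout deep output, and the initial bias value are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes (placeholders, not the settings used in the paper).
d_tm, d_lm, d_e, d_c, d_out, V = 6, 5, 4, 8, 10, 20

# Placeholder parameters of the deep output layer and the controller.
# In deep fusion, only these would be finetuned, so the pretrained LM
# and the NMT recurrent parameters are left untouched.
W_tm = rng.normal(size=(d_tm, d_out))
W_lm = rng.normal(size=(d_lm, d_out))
W_y  = rng.normal(size=(d_e, d_out))
W_c  = rng.normal(size=(d_c, d_out))
W_o  = rng.normal(size=(V, d_out))
b_o  = np.zeros(V)
v_g  = rng.normal(size=d_lm)
b_g  = -1.0   # small negative initial bias so the gate starts mostly closed (illustrative value)

def deep_fusion_output(s_tm, s_lm, y_prev_emb, c_t):
    """Next-word distribution with deep fusion and the controller gate.

    s_tm: NMT decoder hidden state at time t, shape (d_tm,)
    s_lm: pretrained RNNLM hidden state at time t, shape (d_lm,)
    """
    # Controller (Eq. (6)): a scalar gate computed from the LM state.
    g_t = sigmoid(v_g @ s_lm + b_g)

    # The gated LM state enters the output layer alongside the TM state (Eq. (5)).
    hidden = np.tanh(s_tm @ W_tm + (g_t * s_lm) @ W_lm + y_prev_emb @ W_y + c_t @ W_c)
    p_y = softmax(W_o @ hidden + b_o)
    return p_y, g_t
```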

Datasets

We evaluate the proposed approaches on four diverse tasks: Chinese to English (Zh-En), Turkish to English (Tr-En), German to English (De-En) and Czech to English (Cs-En). We describe each of these datasets in more detail below.

Parallel Corpora

Zh-En: OpenMT’15

We use the parallel corpora made available as a part of the NIST OpenMT’15 Challenge. Sentence-aligned pairs from three domains are combined to form a training set: (1) SMS/CHAT and (2) conversational telephone speech (CTS) from the DARPA BOLT project, and (3) newsgroups/weblogs from the DARPA GALE project. In total, the training set consists of 430K sentence pairs (see Table 1 for detailed statistics). We train models with this training set and the development set (the concatenation of the provided development and tune sets from the challenge), and evaluate them on the test set. The domain of the development and test sets is restricted to CTS.

(a) Zh-En            Chinese   English
# of Sentences          436K
# of Unique Words        21K      150K
# of Total Words        8.4M      5.9M
Avg. Length             19.3      13.5

(b) Tr-En            Turkish   English
# of Sentences          160K
# of Unique Words        96K       95K
# of Total Words       11.4M      8.1M
Avg. Length             31.6      22.6

(c) Cs-En              Czech   English
# of Sentences         12.1M
# of Unique Words       1.5M      911K
# of Total Words        151M      172M
Avg. Length             12.5      14.2

(d) De-En             German   English
# of Sentences          4.1M
# of Unique Words      1.16M      742K
# of Total Words       11.4M      8.1M
Avg. Length             24.2      25.1

Table 1: Statistics of the parallel corpora. Counts for Chinese are after segmentation; counts for German are after compound splitting.
Preprocessing

Importantly, we did not segment the Chinese sentences and instead considered each character as a symbol, unlike other approaches that use a separate segmentation tool to group Chinese characters into words [Devlin2014]. Any run of consecutive non-Chinese characters, such as Latin alphabet letters, was however treated as a single word. Lastly, we removed any HTML/XML tags from the corpus, chose only the intended-meaning word when both intended and literal translations were available, and ignored any indicators of, e.g., typos. The only preprocessing we applied to the English side of the corpus was simple tokenization using the tokenizer from Moses (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl).
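A minimal sketch of this character-level treatment of the Chinese side is given below; the Unicode range used to identify Chinese characters is an assumption for illustration.

```python
import re

# Treat every Chinese character as its own symbol, but keep runs of
# non-Chinese, non-space characters (e.g. Latin words, digits) as one token.
# The CJK Unified Ideographs range used here is an assumption for illustration.
_TOKEN_RE = re.compile(r'[\u4e00-\u9fff]|[^\u4e00-\u9fff\s]+')

def tokenize_zh(sentence):
    return _TOKEN_RE.findall(sentence)

# Example: mixed Chinese / Latin input.
print(tokenize_zh("我用iPhone发短信"))
# -> ['我', '用', 'iPhone', '发', '短', '信']
```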

Tr-En: IWSLT’14

We used the WIT3 parallel corpus [cettolo2012] and the SETimes parallel corpus made available as part of IWSLT’14 (machine translation track). The corpus consists of sentence-aligned subtitles of TED and TEDx talks. We concatenated dev2010 and tst2010 to form a development set, and tst2011, tst2012, tst2013 and tst2014 to form a test set. See Table 1 for the detailed statistics of the parallel corpora.

Preprocessing

As with Zh-En, we first removed all special symbols from the corpora and tokenized the Turkish side with the tokenizer provided by Moses. To cope with the vocabulary explosion caused by the rich inflections and derivations in Turkish, we segmented each Turkish sentence into a sequence of sub-word units using Zemberek (https://github.com/ahmetaa/zemberek-nlp), followed by morphological disambiguation of the morphological analyses [sak2007]. We removed any non-surface morphemes corresponding to, for instance, part-of-speech tags.

Cs-En and De-En: WMT’15

For the training of our models, we used all the available training data provided for Cs-En and De-En in the WMT’15 competition. We used newstest2013 as a development set and newstest2014 as a test set. The detailed statistics of the parallel corpora are provided in Table 1.

Preprocessing

We first tokenized the datasets with the Moses tokenizer. Sentences longer than eighty words, and those with a large mismatch between the lengths of the source and target sentences, were removed from the training set. We then filtered the training data by removing sentence pairs in which one sentence (or both) was written in the wrong language, using a language detection toolkit [nakatani2010langdetect], unless the sentence had 5 words or fewer. For De-En, we also split the compounds on the German side using Moses. Finally, we shuffled the training corpora seven times and concatenated the outputs.
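A possible filtering routine along these lines is sketched below, using the `langdetect` Python port of the cited language-detection toolkit; the length-ratio threshold is an assumption, since the exact value is not stated here.

```python
from langdetect import detect  # Python port of the language-detection tool cited above

def keep_pair(src, tgt, src_lang, tgt_lang,
              max_len=80, max_ratio=3.0, min_words_for_langid=6):
    """Decide whether to keep a sentence pair (illustrative thresholds).

    max_len:   drop pairs with a sentence longer than 80 words (as in the text);
    max_ratio: drop pairs whose source/target length ratio is too large
               (the exact threshold is an assumption);
    sentences with 5 words or fewer are exempt from the language check.
    """
    src_words, tgt_words = src.split(), tgt.split()
    if len(src_words) > max_len or len(tgt_words) > max_len:
        return False
    ratio = (len(src_words) + 1) / (len(tgt_words) + 1)
    if ratio > max_ratio or ratio < 1.0 / max_ratio:
        return False
    for words, sent, lang in ((src_words, src, src_lang), (tgt_words, tgt, tgt_lang)):
        if len(words) >= min_words_for_langid:
            try:
                if detect(sent) != lang:
                    return False
            except Exception:  # detection can fail on unusual input
                pass
    return True
```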

Monolingual Corpora

The English Gigaword corpus distributed by the Linguistic Data Consortium (LDC), which mainly consists of newswire documents, was allowed for language modelling in both the OpenMT’15 and IWSLT’15 challenges. We used the tokenized Gigaword corpus without any further preprocessing to train three different RNNLMs, which were fused into the NMT models for Zh-En, Tr-En and the WMT’15 translation tasks (De-En and Cs-En).

Settings

Training Procedure

Neural Machine Translation

The input and output of the network were sequences of one-hot vectors whose dimensionalities correspond to the sizes of the source and target vocabularies, respectively. We constructed the vocabularies from the most common words in the parallel corpora. For the Tr-En and Zh-En tasks, the sizes of the vocabularies for Chinese, Turkish and English were 10K, 30K and 40K, respectively. On both the encoder and the decoder sides, each word was first projected into a continuous Euclidean space to reduce the dimensionality, and the sizes of the recurrent units were chosen separately for Zh-En and Tr-En. In the Cs-En and De-En experiments, we were able to use larger vocabularies: we trained the NMT models for Cs-En and De-En using the importance-sampling-based technique introduced in [Jean2014], which allowed us to use a large vocabulary.

Each model was optimized using Adadelta [Zeiler2012] with minibatches of sentence pairs. At each update, we normalized the gradient such that whenever its norm exceeded a threshold, the gradient was rescaled back to that threshold [Pascanu2013]. For the non-recurrent layers (see Eq. (3)), we used dropout [hinton2012improving] and additive Gaussian noise on each parameter to prevent overfitting [graves2011practical]. Training was early-stopped to maximize performance on the development set measured by BLEU (computed with the multi-bleu.perl script from Moses on tokenized sentence pairs). We initialized all recurrent weight matrices as random orthonormal matrices.
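The gradient-norm rescaling mentioned above can be written in a few lines of numpy; the threshold value here is only a placeholder, since the value used in the experiments is not given in this text.

```python
import numpy as np

def clip_gradient_norm(grads, threshold=1.0):
    """Rescale the gradients if their global norm exceeds `threshold`.

    grads: list of numpy arrays, one per parameter.
    The threshold of 1.0 is an illustrative placeholder, not the paper's value.
    """
    total_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if total_norm > threshold:
        scale = threshold / total_norm
        grads = [g * scale for g in grads]
    return grads
```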

Language Model

We trained three RNNLM’s with long short-term memory (LSTM) [hochreiter1997long] units on English Gigaword Corpus using respectively the vocabularies constructed separately from the English sides of Zh-En and Tr-En corpora. The third language model was trained using

LSTM units on the English Gigaword Corpus again but with a vocabulary constructed from the intersection the English sides of Cs-En and De-En. The parameters of the former two language models were optimized using RMSProp 

[tieleman2012lecture], and Adam optimizer [Kingma2014] was used for the latter one. Any sentence with more than ten percent of its words out of vocabulary was discarded from the training set. We did early-stopping using the perplexity of development set.

Shallow and Deep Fusion

Shallow Fusion

The hyperparameter β in Eq. (4) was selected to maximize the translation performance on the development set. In preliminary experiments, we found it important to renormalize the softmax of the LM without the end-of-sequence and out-of-vocabulary symbols (p_LM in Eq. (4)). This may be due to the difference in the domains of the TM and LM.
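A minimal sketch of this renormalization, assuming the indices of the excluded symbols are known:

```python
import numpy as np

def renormalize_lm(p_lm, excluded_ids):
    """Renormalize an LM distribution after zeroing out excluded symbols
    (e.g. the end-of-sequence and out-of-vocabulary tokens), as found
    helpful for shallow fusion in the text.

    p_lm: probability vector over the LM vocabulary.
    """
    p = p_lm.copy()
    p[list(excluded_ids)] = 0.0
    return p / p.sum()
```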

Deep Fusion

We finetuned the parameters of the deep output layer (Eq. (5)) as well as the controller (Eq. (6)) using the Adam optimizer for Zh-En, and RMSProp with momentum for Tr-En. During finetuning, we continued to apply dropout and additive weight noise for regularization, and based on our preliminary experiments we reduced the level of regularization after the initial updates. In the Cs-En and De-En tasks with large vocabularies, the model parameters were finetuned using Adadelta while scaling down the magnitude of the update steps.

Handling Rare Words

On the De-En and Cs-En translation tasks, we replaced each unknown word generated by the NMT with the source word to which the NMT assigned the highest alignment score (Eq. (1)). We copied the selected source word into the place of the corresponding unknown token in the target sentence. This method is similar to the technique proposed by [luong2014addressing] for addressing rare words, but instead of relying on an external alignment tool, we used the attention mechanism of the NMT model to extract alignments. This method consistently improved the results in terms of BLEU.
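A sketch of this attention-based replacement of unknown words, assuming access to the per-step alignment weights of Eq. (1):

```python
def replace_unknown_words(source_words, target_words, alignments, unk_token="<unk>"):
    """Replace each generated unknown word with the source word that received
    the highest alignment weight at that decoding step.

    alignments: list (one entry per target position) of alignment-weight
                vectors over the source positions, i.e. the alpha_t of Eq. (1).
    """
    output = []
    for t, word in enumerate(target_words):
        if word == unk_token:
            best_source_pos = max(range(len(source_words)),
                                  key=lambda j: alignments[t][j])
            output.append(source_words[best_source_pos])
        else:
            output.append(word)
    return output
```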

Results and Analysis

Zh-En: OpenMT’15

In addition to the NMT-based systems, we also trained phrase-based and hierarchical phrase-based SMT systems [koehn2003statistical, Chiang2005hierarchical], with and without rescoring by an external neural language model (CSLM) [schwenk2007continuous]. We present the results in Table 2. We observed that integrating an additional LM by deep fusion helped the models achieve better performance in general, except in the case of the CTS task. We also noticed that the NMT-based models, regardless of whether the LM was integrated or not, outperformed the more traditional phrase-based SMT systems.

SMS/CHAT CTS
Dev Test Dev Test
PB 15.5 14.73 21.94 21.68
+ CSLM 16.02 15.25 23.05 22.79
HPB 15.33 14.71 21.45 21.43
+ CSLM 15.93 15.8 22.61 22.17
NMT 17.32 17.36 23.4 23.59
Shallow 16.59 16.42 22.7 22.83
Deep 17.58 17.64 23.78 23.5
Table 2: Results on the Zh-En tasks. PB and HPB stand for the phrase-based and hierarchical phrase-based SMT systems, respectively.
Development Set Test Set
dev2010 tst2010 tst2011 tst2012 tst2013 tst2014
Previous Best (Single) 15.33 17.14 18.77 18.62 18.88 -
Previous Best (Combination) - 17.34 18.83 18.93 18.70 -
NMT 14.50 18.01 18.40 18.77 19.86 18.64
NMT+LM (Shallow) 14.44 17.99 18.48 18.80 19.87 18.66
NMT+LM (Deep) 15.69 19.34 20.17 20.23 21.34 20.56
Table 3: Results on Tr-En. We report scores for each set separately to make comparison with previously reported results easier.
De-En Cs-En
Dev Test Dev Test
NMT Baseline 25.51 23.61 21.47 21.89
Shallow Fusion 25.53 23.69 21.95 22.18
Deep Fusion 25.88 24.00 22.49 22.36
Table 4: Results for the De-En and Cs-En translation tasks on the WMT’15 datasets.

Tr-En: IWSLT’14

In Table 3, we present our results on Tr-En. Compared to Zh-En, we saw a larger performance improvement, of up to +1.19 BLEU points, from the basic NMT to the NMT integrated with the LM by deep fusion. Furthermore, by incorporating the LM using deep fusion, the NMT systems were able to outperform the best previously reported results [yilmaztubitak] by a clear margin on all of the separate test sets.

Cs-En and De-En: WMT-15

We provide the results for Cs-En and De-En in Table 4. On the test sets, shallow fusion improved over the baseline NMT model by 0.08–0.29 BLEU, and deep fusion by 0.39–0.47 BLEU.

Analysis: Effect of Language Model

The performance improvements we report in this paper depend heavily on the degree of similarity between the domain of the monolingual corpus and the target domain of translation. In the case of Zh-En, intuitively, the style of writing in both SMS/CHAT and conversational speech differs from that of news articles (which constitute the majority of the English Gigaword corpus). Empirically, this is supported by the high perplexity of our LM on the development set (see the Zh-En column of Table 5), and it explains the marginal improvement we observed on this task. On the other hand, in the case of Tr-En, the similarity between the domains of the monolingual corpus and the parallel corpora is higher (see the Tr-En column of Table 5). This led to a significantly larger improvement in translation performance from integrating the external language model than in the case of Zh-En. Similarly, we observed improvements from both shallow and deep fusion in the case of De-En and Cs-En, where the perplexity on the development set was much lower.

Unlike shallow fusion, deep fusion allows the model to selectively incorporate the information from the additional LM through the controller mechanism described earlier. Although this controller mechanism works on a per-word basis, we can expect that if the additional LM models the target domain better, the controller will be more active on average, i.e., g_t will be larger. From Table 5, we can see that, on average, the controller mechanism is most active for De-En and Cs-En, where the additional LM was able to model the target sentences best. This effectively means that deep fusion allows the model to be more robust to the domain mismatch between the TM and LM, which suggests why deep fusion was more successful than shallow fusion in our experiments.

Zh-En Tr-En De-En Cs-En
Perplexity 223.68 163.73 78.20 78.20
Average g_t 0.23 0.12 0.28 0.31
Std. dev. of g_t 0.0009 0.02 0.003 0.008
Table 5: Perplexity of the RNNLMs on the development sets and statistics of the controller gating mechanism g_t.

Conclusion and Future Work

In this paper, we proposed and compared two methods for incorporating monolingual corpora into an existing NMT system. We empirically evaluated these approaches (shallow fusion and deep fusion) on the low-resource Tr-En task (TED/TEDx subtitles), the focused-domain Zh-En task (SMS/chat and conversational speech), and two high-resource language pairs, Cs-En and De-En. We showed that on the Tr-En and Zh-En language pairs, NMT models trained with deep fusion were able to achieve better results than the existing phrase-based statistical machine translation systems (by up to 1.96 BLEU points on Tr-En). We also observed improvements of up to 0.47 BLEU points over our NMT baseline for the high-resource language pairs De-En and Cs-En on the datasets provided in the WMT’15 competition. This provides evidence that our method can improve translation performance regardless of the amount of available parallel data.

Our analysis also revealed that the performance improvement from incorporating an external LM depends strongly on the domain similarity between the monolingual corpus and the target task. Where the domains of the bilingual and monolingual corpora were similar (De-En, Cs-En), we observed improvements with both deep and shallow fusion. Where they were dissimilar (Zh-En), the improvement from shallow fusion was much smaller. This trend might also explain why deep fusion, which implements an adaptive mechanism for modulating information from the integrated LM, works better than shallow fusion. It also suggests that future work on domain adaptation of the language model may further improve translations.