Vocabulary Manipulation for Neural Machine Translation

05/10/2016 · by Haitao Mi, et al. · IBM

In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentence-level or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a word-to-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-to-French task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).


1 Introduction

Neural machine translation (NMT) [Bahdanau et al.2014] has gained popularity in the last two years, but it can only handle a small vocabulary due to its computational complexity. In order to capture rich language phenomena and achieve better word coverage, neural machine translation models have to use a large vocabulary.

jean+:2015 alleviated the large-vocabulary issue by proposing an approach that partitions the training corpus and defines a subset of the full target vocabulary for each partition. Thus, they only use a subset vocabulary for each partition in the training procedure, without increasing computational complexity. However, jean+:2015's method still has some drawbacks. First, the importance sampling is based simply on the sequence of training sentences, which is not linguistically motivated, so translation ambiguity may not be captured in training. Second, the target vocabulary for each training batch is fixed over the whole training procedure. Third, the target vocabulary for each batch during training still needs to be large, so the computing time remains high.

In this paper, we alleviate the above issues by introducing a sentence-level vocabulary, which is very small compared with the full target vocabulary. In order to capture translation ambiguity, we generate these sentence-level vocabularies by utilizing word-to-word and phrase-to-phrase translation models learned from a traditional phrase-based statistical machine translation (SMT) system. Another motivation of this work is to combine the merits of both traditional SMT and NMT: training an NMT system usually takes several weeks, while word alignment and rule extraction for SMT are much faster (they can be done in one day). Thus, for each training sentence, we build a separate target vocabulary which is the union of the following three parts:

  1. target vocabularies of word and phrase translations that can be applied to the current sentence (to capture translation ambiguity);

  2. the top most frequent target words (to cover unaligned target words);

  3. the target words in the reference of the current sentence (to make the reference reachable).

As we use mini-batches in the training procedure, we merge the target vocabularies of all the sentences in each batch, and update only the related parameters for each batch. In addition, we shuffle the training sentences at the beginning of each epoch, so the target vocabulary for a specific sentence varies between epochs. In the beam search for the development or test set, we apply a similar procedure to each source sentence, except for the third part (as we do not have the reference) and the mini-batch merging. Experimental results on the large-scale English-to-French task (Section 5) show that our method achieves significant improvements over the large vocabulary neural machine translation system.
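As a minimal sketch (with hypothetical toy data; the real dictionaries come from an SMT word aligner), the per-sentence union of the three parts could look like:

```python
def build_sentence_vocab(source, reference, word_dict, top_frequent):
    """Union of the three parts: translation candidates for each source word,
    the most frequent target words, and the words of the reference."""
    vocab = set(top_frequent)              # cover unaligned target words
    for word in source:                    # capture translation ambiguity
        vocab.update(word_dict.get(word, []))
    vocab.update(reference)                # make the reference reachable
    return vocab

vocab = build_sentence_vocab(
    source=["the", "bank"],
    reference=["la", "banque"],
    word_dict={"bank": ["banque", "rive"]},
    top_frequent=["le", "la"],
)
```

Here `vocab` contains only the handful of words relevant to this sentence, rather than the full target vocabulary.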

2 Neural Machine Translation

Figure 1: The attention-based NMT architecture. $\overleftarrow{h_i}$ and $\overrightarrow{h_i}$ are bi-directional encoder states. $\alpha_{t,i}$ is the attention probability at time $t$ for source position $i$. $H_t$ is the weighted sum of encoder states. $s_t$ is the decoder hidden state. $t_t$ is an intermediate output state. A single feedforward layer projects $t_t$ to a target vocabulary $V_o$, and applies softmax to predict the probability distribution over the output vocabulary.

As shown in Figure 1, neural machine translation [Bahdanau et al.2014] is an encoder-decoder network. The encoder employs a bi-directional recurrent neural network to encode the source sentence $x = (x_1, \dots, x_l)$, where $l$ is the sentence length, into a sequence of hidden states $h = (h_1, \dots, h_l)$; each $h_i$ is a concatenation of a left-to-right $\overrightarrow{h_i}$ and a right-to-left $\overleftarrow{h_i}$:

$h_i = [\overleftarrow{h_i}; \overrightarrow{h_i}] = [\overleftarrow{f}(x_i, \overleftarrow{h}_{i+1}); \overrightarrow{f}(x_i, \overrightarrow{h}_{i-1})]$

where $\overleftarrow{f}$ and $\overrightarrow{f}$ are two gated recurrent units (GRUs).

Given $h$, the decoder predicts the target translation by maximizing the conditional log-probability of the correct translation $y^* = (y^*_1, \dots, y^*_m)$, where $m$ is the length of the target sentence. At each time $t$, the probability of each word $y_t$ from a target vocabulary $V_o$ is:

$p(y_t \mid h, y^*_{t-1}, \dots, y^*_1) = \mathrm{softmax}(g(s_t, y^*_{t-1}, H_t))$   (1)

where $g$ is a multi-layer feed-forward neural network, which takes the embedding of the previous word $y^*_{t-1}$, the hidden state $s_t$, and the context state $H_t$ as input. The output layer of $g$ is a target vocabulary $V_o$ in the training procedure; $V_o$ is originally defined as the full target vocabulary $V$ [Cho et al.2014]. We apply the softmax function over the output layer, and get the probability of $y_t$. In Section 3, we differentiate $V_o$ from $V$ by adding a separate and sentence-dependent $V_s$ for each source sentence. In this way, we enable NMT to maintain a large $V$, and use a small $V_o$ for each sentence.

The hidden state $s_t$ is computed as:

$s_t = q(s_{t-1}, y^*_{t-1}, H_t)$   (2)

$H_t = \sum_{i=1}^{l} \alpha_{t,i} \cdot h_i$   (3)

where $q$ is a GRU, $H_t$ is a weighted sum of the encoder states $h_i$, and the weights $\alpha_{t,i}$ are computed with a feed-forward neural network $a$:

$\alpha_{t,i} = \frac{\exp(a(s_{t-1}, h_i))}{\sum_{j=1}^{l} \exp(a(s_{t-1}, h_j))}$   (4)
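One attention step can be sketched in a few lines of numpy (a simplified additive-attention sketch with hypothetical parameter names `W_s`, `W_h`, `v`; the actual scoring network may differ):

```python
import numpy as np

def attention_step(s_prev, h, W_s, W_h, v):
    """One decoder attention step: additive scores a(s_{t-1}, h_i), softmax
    weights alpha_{t,i} (Eq. 4), and the context H_t as a weighted sum of
    encoder states."""
    e = np.tanh(s_prev @ W_s + h @ W_h) @ v     # (l,) unnormalized scores
    e = e - e.max()                             # numerical stability
    alpha = np.exp(e) / np.exp(e).sum()         # attention probabilities
    H_t = alpha @ h                             # (d,) weighted sum of states
    return alpha, H_t
```

The weights `alpha` always sum to one, and `H_t` lies in the span of the encoder states.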

3 Target Vocabulary

The output of the function $g$ is the probability distribution over the target vocabulary $V_o$. As $V_o$ is defined as the full vocabulary $V$ in cho+:2014, the softmax function over $V_o$ requires computing the scores of all words in $V$, which results in high computational complexity. Thus, bahdanau+:2014 only use the top most frequent words for both $V$ and $V_o$, and replace all other words with an unknown-word token (UNK).

3.1 Target Vocabulary Manipulation

In this section, we aim to use a large vocabulary $V$ (to have better word coverage) and, at the same time, to keep the size of $V_o$ as small as possible (in order to reduce the computing time). Our basic idea is to maintain a separate and small vocabulary for each sentence, so that we only need to compute the probability distribution over a small vocabulary for each sentence. Thus, we introduce a sentence-level vocabulary $V_s$ to be our $V_o$, which depends on the sentence $x$. In the following, we show how we generate the sentence-dependent $V_s$.
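The computational saving behind the idea can be illustrated with a small numpy sketch (hypothetical parameter names): instead of a softmax over all rows of the output projection, we gather only the rows indexed by the sentence-level vocabulary, so the cost scales with the small per-sentence vocabulary rather than the full one.

```python
import numpy as np

def restricted_softmax(t_t, W_out, b_out, cand_idx):
    """Softmax restricted to the sentence-level vocabulary: only the output
    rows for candidate words are touched, so the cost is proportional to the
    number of candidates, not the full vocabulary size."""
    logits = W_out[cand_idx] @ t_t + b_out[cand_idx]   # gather candidate rows
    logits = logits - logits.max()                     # numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

The resulting distribution equals the full softmax renormalized over the candidate set, which is exactly what training and decoding with a sentence-level vocabulary compute.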

The first objective of our method is to capture the real translation ambiguity of each word, and the target vocabulary of a sentence is supposed to cover as many of the possible translation candidates as possible. Taking English-to-Chinese translation as an example, the target vocabulary for the English word bank should contain yínháng (a financial institution) and héàn (sloping land) in Chinese.

So we first use a word-to-word translation dictionary to generate candidate target vocabularies. Given a dictionary $D(x_i) = [y_1, y_2, \dots]$, where $x_i$ is a source word and $D(x_i)$ is a sorted list of its candidate translations, we generate a target vocabulary $V_x^D$ for a sentence $x$ by merging the candidates of all words $x_i$ in $x$.

As the word-to-word translation dictionary only focuses on the source words, it cannot cover target unaligned function or content words, for which traditional phrases are designed. Thus, in addition to the word dictionary, given a word-aligned training corpus, we also extract phrases $P(x_i^j) = [y_1, y_2, \dots]$, where $x_i^j$ is a consecutive sequence of source words and $[y_1, y_2, \dots]$ is a list of target words. (Note that we change the definition of a phrase from traditional SMT, where the target side should also be consecutive; our task here is only to collect the target vocabulary, so we care about the target word set, not the order.) For each sentence $x$, we collect all the phrases that can be applied to $x$, i.e. those whose source side $x_i^j$ is a sub-sequence of $x$:

$V_x^P = \bigcup_{x_i^j \in S(x)} P(x_i^j)$

where $S(x)$ is the set of all possible sub-sequences of $x$ up to a length limit.
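The sub-sequence matching can be sketched as follows (a hypothetical phrase-table format: tuples of consecutive source words mapped to lists of target words):

```python
def phrase_candidates(sentence, phrase_table, max_len=4):
    """Collect the target words of every phrase whose source side occurs as a
    consecutive sub-sequence of the sentence, up to max_len source words."""
    vocab = set()
    for i in range(len(sentence)):
        for j in range(i + 1, min(i + max_len, len(sentence)) + 1):
            vocab.update(phrase_table.get(tuple(sentence[i:j]), []))
    return vocab
```

Only consecutive windows of the sentence are looked up, so a phrase whose source words are not adjacent in the sentence contributes nothing.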

In order to cover target unaligned function words, we also need the list $T$ of the top most common target words.

Training: in our training procedure, our optimization objective is to maximize the log-likelihood over the whole training set. In order to make the reference reachable, besides $V_x^D$, $V_x^P$, and $T$, we also need to include the target words $V_y^R$ of the reference $y$, where $x$ and $y$ are a translation pair. So for each sentence $x$, we have a target vocabulary $V_s$:

$V_s = V_x^D \cup V_x^P \cup T \cup V_y^R$

Then we start our mini-batch training by randomly shuffling the training sentences before each epoch. For simplicity, we use the union of all $V_s$ in a batch,

$V_b = V_{s_1} \cup V_{s_2} \cup \dots \cup V_{s_B}$

where $B$ is the batch size. This merge gives the advantage that the batch vocabulary $V_b$ changes dynamically in each epoch, which leads to a better coverage of parameters.
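The shuffle-and-merge scheme can be sketched as follows (a hypothetical helper; real training would pair each yielded batch with parameter updates restricted to the merged batch vocabulary):

```python
import random

def epoch_batches(sentence_vocabs, batch_size, seed=0):
    """Shuffle the sentence order for an epoch, then yield each mini-batch
    together with the union of its sentences' vocabularies."""
    order = list(range(len(sentence_vocabs)))
    random.Random(seed).shuffle(order)       # new order -> new batch vocabs
    for start in range(0, len(order), batch_size):
        batch = order[start:start + batch_size]
        merged = set().union(*(sentence_vocabs[i] for i in batch))
        yield batch, merged
```

Because the shuffle changes across epochs (different seeds), a given sentence lands in different batches and thus sees a different merged vocabulary each epoch.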

Decoding: different from training, the target vocabulary for a sentence $x$ is

$V_s = V_x^D \cup V_x^P \cup T$

(as we do not have the reference), and we do not use mini-batches in decoding.

4 Related Work

To address the large vocabulary issue in NMT, jean+:2015 propose using different but small sub-vocabularies for different partitions of the training corpus. They first partition the training set. Then, for each partition, they create a sub-vocabulary, and only predict and apply softmax over that sub-vocabulary in the training procedure. When training moves to the next partition, they change the sub-vocabulary set accordingly.

Noise-contrastive estimation [Gutmann and Hyvarinen2010, Mnih and Teh2012, Mikolov et al.2013, Mnih and Kavukcuoglu2013] and hierarchical classes [Mnih and Hinton2009] are introduced to stochastically approximate the target word probability. But, as suggested by jean+:2015, those methods are only designed to reduce the time complexity in training, not for decoding.

set          | $V_x^P$ | $V_x^D$ (top 10/20/50) | $V_x^D \cup V_x^P$ (10/20/50) | $V_x^D \cup V_x^P \cup T$ (10/20/50)
train        | 73.6    | 82.1  87.8  93.5       | 86.6  89.4  93.7              | 92.7  94.2  96.2
development  | 73.5    | 80.0  85.5  91.0       | 86.6  88.4  91.7              | 91.7  92.7  94.3
Table 1: The average reference coverage ratios (in word level) on the training and development sets. We use a fixed number of top candidates for each phrase when generating $V_x^P$, and a fixed list of the most common words for $T$. Then we check various numbers of top candidates (10, 20, and 50) for the word-to-word dictionary $V_x^D$.

5 Experiments

5.1 Data Preparation

system       | train: sentence | train: mini-batch | dev.: sentence
Jean (2015)  | (fixed)         | (fixed)           | (fixed)
Ours         | 2080            | 6153              | 2067
Table 2: Average vocabulary size for each sentence or mini-batch (80 sentences). jean+:2015 fix the same vocabulary size for every sentence and mini-batch; all words outside the full vocabulary are UNKs.

We run our experiments on the English-to-French (En-Fr) task. The training corpus consists of approximately 12 million sentences, identical to the set of jean+:2015 and ilya+:2015. Our development set is the concatenation of news-test-2012 and news-test-2013, with 6003 sentences in total. Our test set has 3003 sentences from WMT news-test 2014. We evaluate translation quality using the case-sensitive BLEU-4 metric [Papineni et al.2002] with the multi-bleu.perl script.

Same as jean+:2015, we use the same full vocabulary size, AdaDelta [Zeiler2012], and a mini-batch size of 80. Given the training set, we first run 'fast_align' [Dyer et al.2013] in one direction, and use the resulting translation table as our word-to-word dictionary. Then we run the reverse direction and apply the 'grow-diag-final-and' heuristic to get the symmetrized alignment. The phrase table is extracted with the standard algorithm in Moses [Koehn et al.2007].

In the decoding procedure, our method is very similar to the 'candidate list' of jean+:2015, except that we also use bilingual phrases and we only include the top most frequent target words. Following jean+:2015, we dump the alignments for each sentence, and replace UNKs with either the word-to-word dictionary translation or the source word itself.
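The UNK-replacement post-processing step can be sketched as follows (a hypothetical alignment format: for each target position, the index of its aligned source word):

```python
def replace_unks(target, source, align, word_dict):
    """Replace each UNK with the top dictionary translation of its aligned
    source word, falling back to copying the source word itself."""
    output = []
    for t, word in enumerate(target):
        if word == "UNK":
            src = source[align[t]]
            output.append(word_dict.get(src, [src])[0])
        else:
            output.append(word)
    return output
```

Copying the source word is a useful fallback for names and numbers, which often translate as themselves.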

5.2 Results

5.2.1 Reference Reachability

The reference coverage or reachability ratio is very important when we limit the target vocabulary for each source sentence: since we do not have the reference at decoding time, we do not want to narrow the search space into a bad region. Table 1 shows the average reference coverage ratios (in word level) on the training and development sets. For each source sentence, the sentence-level vocabulary here is a set of target word indexes (words outside the full vocabulary are mapped to UNK). The average reference vocabulary size per sentence is 23.7 on the training set (22.6 on the dev. set). The word-to-word dictionary $V_x^D$ has better coverage than the phrases $V_x^P$, and when we combine the three sets we get better coverage ratios. These statistics suggest that we cannot use any one of them alone, due to the low reference coverage ratios. The last three columns show three combinations, all of which have coverage ratios higher than 90%. As there are many combinations, training an NMT system is time-consuming, and we also want to keep the output vocabulary size small (the setting in the last column of Table 1 results in a larger average vocabulary size per mini-batch of 80 sentences). Thus, in the following, we only run one combination (top 10 candidates for both $V_x^D$ and $V_x^P$, plus the top most common words $T$); with this setting, the full-sentence coverage ratio is 20.7% on the development set.
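The word-level coverage ratio reported in Table 1 can be computed as follows (a sketch over tokenized references and their corresponding per-sentence vocabularies):

```python
def coverage_ratio(references, vocabs):
    """Fraction of reference tokens contained in the corresponding
    sentence-level vocabulary, aggregated over the corpus."""
    covered = total = 0
    for ref, vocab in zip(references, vocabs):
        covered += sum(1 for w in ref if w in vocab)
        total += len(ref)
    return covered / total
```

A full-sentence coverage ratio (the fraction of sentences whose references are covered entirely) could be computed analogously by counting sentences instead of tokens.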

top common words    | 50    | 200   | 500   | 1000  | 2000  | 10000
BLEU on dev.        | 30.61 | 30.65 | 30.70 | 30.70 | 30.72 | 30.69
avg. size of $V_s$  | 202   | 324   | 605   | 1089  | 2067  | 10029
Table 3: Given a trained NMT model, we decode the development set with various numbers of top most common target words. For the En-Fr task, the results suggest that we can reduce the number to 50 without losing much in terms of BLEU score. The average size of $V_s$ is reduced to as little as 202, which is significantly lower than 2067 (the default setting we use in training).

5.2.2 Average Size of $V_s$

With the setting shown in the bold column of Table 1, we list the average vocabulary sizes of jean+:2015 and ours in Table 2. jean+:2015 fix the vocabulary size for each sentence and mini-batch, while our approach reduces the vocabulary size to 2080 for each sentence and 6153 for each mini-batch. Especially at decoding time, our vocabulary size for each sentence (2067) is about 14.5 times smaller than that of jean+:2015.

5.2.3 Translation Results

The red solid line in Figure 2 shows the learning curve of our method on the development set, which peaks at epoch 7 with a BLEU score of 30.72. We also fix the word embeddings at epoch 5 and continue for several more epochs; the corresponding blue dashed line suggests that there is no significant difference between the two.

We also run two more experiments, using $V_x^D$ alone and $V_x^P$ alone (always including $T$ and the reference words in training). The final results on the test set are 34.20 and 34.23, respectively. These results suggest that we should use both the translation dictionary and the phrases in order to get better translation quality.

single system                    | dev.  | test
Moses from cho+:2014             | N/A   | 33.30
Jean (2015): candidate list      | 29.32 | 33.36
Jean (2015): +UNK replace        | 29.98 | 34.11
Ours: voc. manipulation          | 30.15 | 34.45
Ours: +UNK replace               | 30.72 | 35.11
best from durrani+:2014          | N/A   | 37.03
Table 4: Single system results on the En-Fr task.
Figure 2: The learning curve on the development set. An epoch is one complete pass through the full training set.

Table 4 shows the single-system results on the En-Fr task. The standard Moses system of cho+:2014 scores 33.30 on the test set. Our target vocabulary manipulation achieves a BLEU score of 34.45 on the test set, and 35.11 after UNK replacement. Our approach improves the translation quality by 1.0 BLEU point on the test set over the method of jean+:2015. But our single system is still about 2 points behind the best phrase-based system [Durrani et al.2014].

5.2.4 Decoding with Different Top Most Common Target Words

Another interesting question is how performance changes if we vary the number of top most common target words in $V_s$. As training an NMT system is time-consuming, we vary the number only at decoding time. Table 3 shows the BLEU scores on the development set. When we reduce the number from 2000 to 50, we only lose 0.1 BLEU points, and the average size of the sentence-level $V_s$ is reduced to 202, which is significantly smaller than 2067 (shown in Table 2). But we should note that we train our NMT model with the bold-column setting of Table 2, and only test different sizes in the decoding procedure; thus there is a mismatch between training and testing when the number is not 2000.

5.2.5 Speed

In terms of speed, it is hard to conduct an apples-to-apples comparison between jean+:2015 and us, as we have different code bases. (The two code bases share the same architecture, initial states, and hyper-parameters. We simulate jean+:2015's work with our code base in both the training and test procedures; the final results of our simulation are 29.99 and 34.16 on the dev. and test sets respectively, very close to the scores of jean+:2015.) So, for simplicity, we run another experiment with our code base, and increase the vocabulary size for each batch to the size used by jean+:2015. Results show that increasing the batch vocabulary to that size slows down training by about 1.5 times.

6 Conclusion

In this paper, we address the large vocabulary issue in neural machine translation by proposing a sentence-level target vocabulary $V_s$, which is much smaller than the full target vocabulary $V$. The small size of $V_s$ reduces the computing time of the softmax function in each prediction step, while the large vocabulary $V$ enables us to model rich language phenomena. The sentence-level vocabulary is generated with traditional word-to-word and phrase-to-phrase translation libraries. In this way, we decrease the size of the output vocabulary for each sentence, and we both speed up and improve the large-vocabulary NMT system.

Acknowledgment

We thank the anonymous reviewers for their comments.

References

  • [Bahdanau et al.2014] D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. ArXiv e-prints, September.
  • [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP, pages 1724–1734, Doha, Qatar, October.
  • [Durrani et al.2014] Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of WMT, pages 97–104, Baltimore, Maryland, USA, June.
  • [Dyer et al.2013] Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceedings of NAACL-HLT, pages 644–648, Atlanta, Georgia, June.
  • [Gutmann and Hyvarinen2010] Michael Gutmann and Aapo Hyvarinen. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of AISTATS.
  • [Jean et al.2015] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL, pages 1–10, Beijing, China, July.
  • [Koehn et al.2007] P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of ACL.
  • [Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In International Conference on Learning Representations: Workshops Track.
  • [Mnih and Hinton2009] Andriy Mnih and Geoffrey Hinton. 2009. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, volume 21, pages 1081–1088.
  • [Mnih and Kavukcuoglu2013] Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Proceedings of NIPS, pages 2265–2273.
  • [Mnih and Teh2012] Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751–1758.
  • [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318, Philadephia, USA, July.
  • [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112, Quebec, Canada, December.
  • [Zeiler2012] Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR.