A major strength of neural machine translation, which has recently become the de facto standard in machine translation research, is its capability of seamlessly integrating information from multiple sources. Due to the continuous representations used within a neural machine translation system, any information, in addition to tokens from the source and target sentences, can be integrated as long as it can be projected into a vector space. This has allowed researchers to build non-standard translation systems, such as multilingual neural translation systems (see, e.g., Firat et al., 2016; Zoph and Knight, 2016), multimodal translation systems (see, e.g., Caglayan et al., 2016; Specia et al., 2016) and syntax-aware neural translation systems (see, e.g., Nadejde et al., 2017; Eriguchi et al., 2016, 2017). At the core of all these recent extensions is the idea of using context larger than the current source sentence to facilitate translation.
In this paper, we make a first attempt at investigating the potential of implicitly incorporating discourse-level structure into neural machine translation. As an initial step, we focus on incorporating a small number of preceding and/or following source sentences into the attention-based neural machine translation model (Bahdanau et al., 2014). More specifically, instead of modelling the conditional distribution $p(Y \mid X)$ over translations given a source sentence, we build a network that models the conditional distribution $p(Y \mid X, X_{-n}, \ldots, X_{-1}, X_{1}, \ldots, X_{m})$, where $X_{-k}$ is the $k$-th preceding source sentence and $X_{k}$ the $k$-th following source sentence. We propose a novel larger-context neural machine translation model based on recent work on larger-context language modelling (Wang and Cho, 2016) and multi-way, multilingual neural machine translation (Firat et al., 2016).
We first evaluate the proposed model against the baseline model without any context other than a source sentence using BLEU and RIBES (Isozaki et al., 2010), both of which measure translation quality averaged
over all the sentences in a corpus. This evaluation strategy reveals that the benefit of larger context is not always apparent when the evaluation metric is average translation quality, confirming the earlier observation, for instance, by Hardmeier et al. (2015). Then, we turn to a more focused evaluation based on pronoun prediction (Guillou et al., 2016a), which was a shared task at WMT'16. On this cross-lingual pronoun prediction task, we notice benefits from incorporating larger context when training models on small corpora, but not on larger ones. Interestingly, we also observe that neural machine translation can predict pronouns as well as the top-ranking approaches from the shared task at WMT'16.
2 Larger-Context Neural Machine Translation

2.1 Attention-based Neural Machine Translation
Attention-based neural machine translation, proposed by Bahdanau et al. (2014), has become the de facto standard in recent years, both in academia (Bojar et al., 2016) and industry (Wu et al., 2016; Crego et al., 2016). An attention-based translation system consists of three components: (1) an encoder, (2) a decoder and (3) an attention model. The encoder is often a bidirectional recurrent network with a gated recurrent unit (GRU; Cho et al., 2014; Hochreiter and Schmidhuber, 1997), which encodes a source sentence $X = (x_1, \ldots, x_{T_x})$ into a set of annotation vectors $\{h_1, \ldots, h_{T_x}\}$, where $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$. $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ are the $t$-th hidden states of the forward and reverse recurrent networks, respectively.
The decoder is a recurrent language model (Mikolov et al., 2010; Graves, 2013) which generates one target symbol at a time by first computing the attention scores over the annotation vectors. Each attention score is computed by
$$\alpha_{t,t'} \propto \exp\left(f_{\text{att}}(\tilde{y}_{t'-1}, z_{t'-1}, h_t)\right),$$
where $f_{\text{att}}$ is the attention model, implemented as a feedforward network taking as input the previous target symbol $\tilde{y}_{t'-1}$, the previous decoder hidden state $z_{t'-1}$ and one of the annotation vectors $h_t$. These attention scores are used to compute the time-dependent source vector $c_{t'} = \sum_{t=1}^{T_x} \alpha_{t,t'} h_t$, based on which the decoder's hidden state and the output distribution over all possible target symbols are computed:
$$z_{t'} = f(z_{t'-1}, \tilde{y}_{t'-1}, c_{t'}). \quad (1)$$
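The attention computation above can be sketched numerically. The following is an illustrative NumPy implementation; the weight names (`W_h`, `W_z`, `W_y`, `v`) are our own notation for the feedforward attention model, not the paper's:

```python
import numpy as np

def attention(annotations, prev_state, prev_target_emb, W_h, W_z, W_y, v):
    """Score each annotation vector against the decoder state, then
    normalize with a softmax (Bahdanau-style attention sketch)."""
    # e_t = v^T tanh(W_h h_t + W_z z_{t'-1} + W_y y_{t'-1}) for each t
    scores = np.array([
        v @ np.tanh(W_h @ h + W_z @ prev_state + W_y @ prev_target_emb)
        for h in annotations
    ])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # time-dependent source vector: weighted sum of the annotation vectors
    context = sum(w * h for w, h in zip(weights, annotations))
    return weights, context
```

The softmax normalization makes the scores a distribution over source positions, so the returned `context` is a convex combination of the annotation vectors.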
2.2 Larger-Context Neural Machine Translation
We extend the attention-based neural machine translation described above by including an additional encoder and an additional attention model. The additional encoder is similarly a bidirectional recurrent network, and it encodes a context sentence, in our case the source sentence immediately before the current source sentence, into a set of context annotation vectors $\{h^c_1, \ldots, h^c_{T_c}\}$, where $h^c_t = [\overrightarrow{h}^c_t; \overleftarrow{h}^c_t]$. (Although we use a single preceding sentence in this paper, the proposed method can easily handle multiple preceding and/or following sentences, either by having multiple sets of encoders and attention mechanisms or by concatenating all the context sentences into a single long sequence.) As in the original source encoder, these two vectors come from the forward and reverse recurrent networks.
On the other hand, the additional attention model differs from the original one. The goal of incorporating larger context into translation is to provide additional discourse-level information necessary for translating a given source token or phrase. This implies that the attention over, or selection of, tokens from the larger context should be done with respect to which source token or phrase is being considered. We thus propose to make this attention model take as input the previous target symbol, the previous decoder hidden state and a context annotation vector, as well as the source vector from the main attention model. That is,
$$\beta_{t,t'} \propto \exp\left(f^{c}_{\text{att}}(\tilde{y}_{t'-1}, z_{t'-1}, h^c_t, c_{t'})\right).$$
Similarly to the source vector, we compute the time-dependent context vector as the weighted sum of the context annotation vectors: $c^c_{t'} = \sum_{t=1}^{T_c} \beta_{t,t'} h^c_t$.
Now that there are two vectors, one from the current source sentence and one from the context sentence, the decoder transition in Eq. (1) changes accordingly:
$$z_{t'} = f(z_{t'-1}, \tilde{y}_{t'-1}, c_{t'}, c^c_{t'}). \quad (2)$$
We call this model a larger-context neural machine translation model.
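A single decoder step of the larger-context model can be sketched as follows. This is an illustrative NumPy sketch: a toy tanh transition stands in for the GRU-based decoder, and every parameter name in `params` is our own assumption, not the paper's:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def larger_context_step(src_vec, ctx_annotations, prev_state, prev_emb, params):
    """One decoder step of the larger-context model (sketch). The context
    attention additionally conditions on the source vector c_{t'} coming
    from the main attention model."""
    Wh, Wz, Wy, Wc, v = (params[k] for k in ("Wh", "Wz", "Wy", "Wc", "v"))
    scores = np.array([
        v @ np.tanh(Wh @ h + Wz @ prev_state + Wy @ prev_emb + Wc @ src_vec)
        for h in ctx_annotations
    ])
    beta = softmax(scores)
    # time-dependent context vector: weighted sum of context annotations
    ctx_vec = sum(b * h for b, h in zip(beta, ctx_annotations))
    # decoder transition takes both the source and context vectors,
    # i.e. z_{t'} = f(z_{t'-1}, y_{t'-1}, c_{t'}, c^c_{t'}); f is a toy tanh cell
    Uz, Uy, Uc, Uk = (params[k] for k in ("Uz", "Uy", "Uc", "Uk"))
    new_state = np.tanh(Uz @ prev_state + Uy @ prev_emb
                        + Uc @ src_vec + Uk @ ctx_vec)
    return new_state, beta
```

Feeding the source vector into the context attention is the key design choice: it lets the selection of context tokens depend on which source token or phrase is currently being translated.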
3 Evaluating Larger-Context Neural Machine Translation
A standard metric for automatically evaluating the translation quality of a machine translation system is BLEU (Papineni et al., 2002). BLEU is computed on a validation or test corpus by inspecting the overlap of n-grams (often up to 4-grams) between the reference and generated corpora. BLEU has become the de facto standard, as it has been found to correlate well with human judgement for both phrase-based and neural machine translation systems. Other metrics, such as METEOR (Denkowski and Lavie, 2014) and TER (Snover et al., 2006), are often used together with BLEU, and they also measure the average translation quality of a machine translation system over an entire validation or test corpus.
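As a rough illustration of how BLEU inspects n-gram overlap, here is a simplified sentence-level sketch (real BLEU is computed at the corpus level, typically with smoothing; this toy version is only meant to show the clipped n-gram precisions and the brevity penalty):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: modified (clipped) n-gram
    precisions combined geometrically, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # brevity penalty punishes candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0; a candidate sharing no n-grams with the reference scores 0.0.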
It is not well known how much effect, positive or negative, larger context has on machine translation. It is understood that larger context allows a machine translation system to capture properties not apparent from a single source sentence, such as style, genre, topical patterns, discourse coherence and anaphora (see, e.g., the preface of Webber et al., 2015), but the degree of its impact on the average translation quality is unknown.
It is generally agreed, however, that the impact should be measured by a metric specifically designed to evaluate a particular effect of larger context. For instance, discourse coherence has been used as one such metric in analyzing larger-context language modelling in recent years (Ji et al., 2015, 2016). In the context of machine translation, cross-lingual pronoun prediction (Hardmeier et al., 2015; Guillou et al., 2016b) has been one of the few established tasks by which the effect of larger-context modelling, or the ability of a machine translation system to incorporate larger-context information, is evaluated.
In this paper, we therefore compare the vanilla neural machine translation model against the proposed larger-context model based on both the average translation quality, measured by BLEU, and the pronoun prediction accuracy, measured in macro-averaged recall. In order to further investigate the relationship between the average translation quality and the pronoun prediction accuracy, we use a single corpus per language pair provided as a part of the 2016 WMT shared task on cross-lingual pronoun prediction (Guillou et al., 2016b).
Unlike the existing approaches to cross-lingual pronoun prediction, we do not train any of the models specifically for the pronoun prediction task, but train them to maximize the average translation quality. Once a model is trained, we conduct pronoun prediction by
$$\hat{y}_j = \operatorname*{arg\,max}_{y \in P} \log p(y \mid y_{<j}, y_{>j}, X), \quad (3)$$
where $P$ is the set of all possible pronouns (in addition to all possible pronouns, there is a class designated for any non-pronoun token), and the goal is to predict the pronoun in the $j$-th position of the target sentence.
4 Experimental Settings
4.1 Data and Tasks
We use En-Fr and En-De corpora for our experiments. The target side of the parallel corpus for each language pair has been heavily preprocessed, including tokenization and lemmatization. Although both of the corpora come with POS tags, we do not use them. In the case of En-Fr, the set of all pronouns includes "ce", "elle", "elles", "il", "ils", "cela", "on" and OTHER. The set consists of "er", "sie", "es", "man" and OTHER in the case of En-De. Macro-averaged recall is used as the main evaluation metric. There are 2,441,410 and 2,356,313 sentence pairs in the En-Fr and En-De training corpora, respectively.
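Macro-averaged recall, the main metric here, averages per-class recall uniformly over the pronoun classes, so rare pronouns weigh as much as frequent ones. A minimal sketch:

```python
def macro_average_recall(gold, predicted, classes):
    """Per-class recall averaged uniformly over classes. `gold` and
    `predicted` are parallel lists of class labels."""
    recalls = []
    for c in classes:
        # predictions for positions whose gold label is class c
        relevant = [p for g, p in zip(gold, predicted) if g == c]
        if not relevant:
            continue  # class absent from the gold data; skip it
        recalls.append(sum(p == c for p in relevant) / len(relevant))
    return sum(recalls) / len(recalls)
```

Because each class contributes equally regardless of frequency, a system that always predicts the majority pronoun scores poorly, which is exactly why the shared task prefers this metric over plain accuracy.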
For pronoun prediction, the input to the model is a source sentence and the corresponding target sentence in which some pronouns are replaced with a special token REPLACE. The goal is then to figure out which pronoun should replace each REPLACE token, and this is done by finding the combination that maximizes the log-probability, as in Eq. (3). When there are multiple REPLACE tokens in a single example, we exhaustively try all possible combinations, which is feasible as the size of the pronoun set is small.
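The exhaustive search over REPLACE slots can be sketched as follows, with `log_prob` standing in for the trained model's sentence-level log-probability scorer (a hypothetical callable, not the paper's code):

```python
import itertools
import math

def predict_pronouns(target_tokens, pronoun_set, log_prob):
    """Fill every REPLACE slot by exhaustively scoring all pronoun
    combinations and keeping the highest-scoring filled sentence."""
    slots = [i for i, tok in enumerate(target_tokens) if tok == "REPLACE"]
    best, best_score = None, -math.inf
    # |pronoun_set| ** len(slots) combinations: feasible for small sets
    for combo in itertools.product(pronoun_set, repeat=len(slots)):
        filled = list(target_tokens)
        for i, p in zip(slots, combo):
            filled[i] = p
        score = log_prob(filled)
        if score > best_score:
            best, best_score = filled, score
    return best
```

With the En-Fr pronoun set of eight classes, even three REPLACE tokens in one sentence require only 512 scorings, which is why exhaustive search is practical here.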
For translation, the input to the model is a source sentence alone, and the model is expected to generate a translation. We use beam search to approximately find the maximum-a-posteriori translation, i.e., $\hat{Y} = \operatorname*{arg\,max}_{Y} \log p(Y \mid X)$.
In addition to the data/tasks from the cross-lingual pronoun prediction shared task, we also check the average translation quality using IWSLT'15 En-De as the training set. We use the IWSLT'12 and IWSLT'14 test sets for development and test, respectively. This is to ensure that our observations from the lemmatized corpora above transfer to non-lemmatized ones. This corpus has 194,371 sentence pairs for training, and 1,700 and 1,305 for development and test, respectively.
4.2 Models and Learning
Naive Model (NMT)
We train a naive attention-based neural machine translation system based on the code publicly available online (https://github.com/nyu-dl/dl4mt-tutorial/). The dimensionalities of the word vectors, encoder recurrent network and decoder recurrent network are 620, 1000 and 1000, respectively. We use a one-layer feedforward network as the attention model. We regularize the models with dropout (Pham et al., 2014).
Larger-Context Model (LC-NMT)
A larger-context model closely follows the configuration of the naive model. The additional encoder consists of two GRUs (forward and reverse), and thus outputs a 2000-dimensional time-dependent context vector at each time step.
We train both types of models to maximize the log-likelihood on a training corpus using Adadelta (Zeiler, 2012). We early-stop based on BLEU on a validation set (using greedy decoding for early stopping). We do not do anything specific to the cross-lingual pronoun prediction task during training.
Varying training corpus sizes
We experiment with varying sizes of the training corpus to see whether there is any meaningful difference in performance between the vanilla and larger-context models with respect to the size of the training set. We do this for the corpora from the pronoun prediction task, using 5%, 10%, 20%, 40% and 100% of the original training set.
From the results presented in Table 2, we observe that the larger-context models generally outperform the vanilla ones in terms of BLEU, RIBES and macro-averaged recall. However, this improvement vanishes as the size of the training set grows. We confirm that this is not due to the lemmatization of the target side of the pronoun task corpora by observing that the proposed larger-context model also outperforms the vanilla one on IWSLT En-De, whose training corpus is approximately 10% the size of the full pronoun task corpus (see Table 3).
In this paper, we have proposed a novel extension of attention-based neural machine translation that seamlessly incorporates the context from surrounding sentences. Our extensive evaluation, measured both in terms of average translation quality and cross-lingual pronoun prediction, has revealed that the benefit from larger context is moderate when only a small number of training sentence pairs is available; we were not able to observe a similar level of benefit with a larger training corpus. We suspect that a large corpus allows the model to capture subtle word relations from the source sentence alone. We believe that a better, more focused evaluation metric may be necessary to properly evaluate the influence of discourse-level information in translation.
This work was supported by Samsung Electronics (Larger-Context Neural Machine Translation). KC thanks Google (Faculty Award 2016), NVIDIA (NVAIL), Facebook and eBay for their generous support.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 .
- Bojar et al. (2016) Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 131–198. http://www.aclweb.org/anthology/W/W16/W16-2301.
- Caglayan et al. (2016) Ozan Caglayan, Loïc Barrault, and Fethi Bougares. 2016. Multimodal attention for neural machine translation. arXiv preprint arXiv:1609.03976 .
- Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
- Crego et al. (2016) Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, et al. 2016. SYSTRAN's pure neural machine translation systems. arXiv preprint arXiv:1610.05540.
- Dabre et al. (2016) Raj Dabre, Yevgeniy Puzikov, Fabien Cromieres, and Sadao Kurohashi. 2016. The Kyoto University cross-lingual pronoun translation system. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 571–575. http://www.aclweb.org/anthology/W/W16/W16-2349.
- Denkowski and Lavie (2014) Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation.
- Eriguchi et al. (2016) Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In ACL.
- Eriguchi et al. (2017) Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. arXiv preprint arXiv:1702.03525 .
- Firat et al. (2016) Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In NAACL.
- Graves (2013) Alex Graves. 2013. Generating sequences with recurrent neural networks. CoRR abs/1308.0850.
- Guillou et al. (2016a) Liane Guillou, Christian Hardmeier, Preslav Nakov, Sara Stymne, Jörg Tiedemann, Yannick Versley, Mauro Cettolo, Bonnie Webber, and Andrei Popescu-Belis. 2016a. Findings of the 2016 WMT shared task on cross-lingual pronoun prediction. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 525–542. http://www.aclweb.org/anthology/W/W16/W16-2345.
- Guillou et al. (2016b) Liane Guillou, Christian Hardmeier, Preslav Nakov, Sara Stymne, Jörg Tiedemann, Yannick Versley, Mauro Cettolo, Bonnie Webber, and Andrei Popescu-Belis. 2016b. Findings of the 2016 WMT shared task on cross-lingual pronoun prediction. In Proceedings of the First Conference on Machine Translation (WMT16), Berlin, Germany. Association for Computational Linguistics.
- Hardmeier et al. (2015) Christian Hardmeier, Preslav Nakov, Sara Stymne, Jörg Tiedemann, Yannick Versley, and Mauro Cettolo. 2015. Pronoun-focused MT and cross-lingual pronoun prediction: Findings of the 2015 DiscoMT shared task on pronoun translation. In Proceedings of the Second Workshop on Discourse in Machine Translation. Association for Computational Linguistics, Lisbon, Portugal, pages 1–16. http://aclweb.org/anthology/W15-2501.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
- Isozaki et al. (2010) Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 944–952.
- Ji et al. (2015) Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context language models. arXiv preprint arXiv:1511.03962 .
- Ji et al. (2016) Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse relation language models. arXiv preprint arXiv:1603.01913 .
- Luotolahti et al. (2016) Juhani Luotolahti, Jenna Kanerva, and Filip Ginter. 2016. Cross-lingual pronoun prediction with deep recurrent neural networks. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 596–601. http://www.aclweb.org/anthology/W/W16/W16-2353.
- Mikolov et al. (2010) Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. INTERSPEECH 2:3.
- Nadejde et al. (2017) Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, and Alexandra Birch. 2017. Syntax-aware neural machine translation using CCG. arXiv preprint arXiv:1702.01147.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 311–318.
- Pham et al. (2014) Vu Pham, Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on. IEEE, pages 285–290.
- Snover et al. (2006) Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas. pages 223–231.
- Specia et al. (2016) Lucia Specia, Stella Frank, Khalil Sima’an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation, Berlin, Germany. Association for Computational Linguistics.
- Stymne (2016) Sara Stymne. 2016. Feature exploration for cross-lingual pronoun prediction. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 609–615. http://www.aclweb.org/anthology/W/W16/W16-2355.
- Wang and Cho (2016) Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling. In ACL.
- Webber et al. (2015) Bonnie Webber, Marine Carpuat, Andrei Popescu-Belis, and Christian Hardmeier, editors. 2015. Proceedings of the Second Workshop on Discourse in Machine Translation. Association for Computational Linguistics, Lisbon, Portugal. http://aclweb.org/anthology/W15-25.
- Werbos (1990) Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10):1550–1560.
- Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 .
- Zeiler (2012) Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 .
- Zoph and Knight (2016) Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. arXiv preprint arXiv:1601.00710 .