We participated in the WMT 2017 shared news translation task on two different translation directions: English→German and German→English.
The paper is structured as follows: Section 2 overviews our neural MT engine. Section 3 describes the set of experiments carried out to build the English→German and German→English neural translation models and details their results. Finally, conclusions are drawn in Section 4.
2 Neural MT System
Neural machine translation (NMT) is a new methodology for machine translation that has led to remarkable improvements, particularly in terms of human evaluation, compared to rule-based and statistical machine translation (SMT) systems (Crego et al., 2016; Wu et al., 2016). NMT has now become a widely-applied technique for machine translation, as well as an effective approach for other related NLP tasks such as dialogue, parsing, and summarisation.
Our system is implemented as an encoder-decoder network with multiple layers of RNNs with Long Short-Term Memory (LSTM) hidden units (Zaremba et al., 2014). Figure 1 illustrates a schematic view of the MT network.
The left-hand side of the figure illustrates the bidirectional encoder, which consists of two independent LSTM encoders: one reads the input sequence in its original order (solid lines) and computes a forward sequence of hidden states, while the second reads the input sequence in reversed order (dotted lines) and computes the backward sequence. The final encoder outputs are the sum of the outputs of both encoders. The right-hand side of the figure illustrates the RNN decoder. Each word is predicted based on a recurrent hidden state and a context vector that aims at capturing relevant source-side information.
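The two-pass structure of the bidirectional encoder can be sketched as below. This is a minimal illustration, not the system's implementation: a plain tanh recurrence stands in for the LSTM cell, and the two encoder output sequences are summed per position (the paper describes summing the encoders' outputs).

```python
import numpy as np

def rnn_step(x, h, W_x, W_h):
    """One recurrent step; a plain tanh cell stands in for the LSTM cell."""
    return np.tanh(x @ W_x + h @ W_h)

def bidirectional_encode(xs, W_x, W_h):
    """Run two independent encoders over the input and sum their outputs.

    xs: (T, d_in) input embeddings; returns (T, d_h) encoder outputs.
    """
    T, d_h = len(xs), W_h.shape[0]
    fwd = np.zeros((T, d_h))
    bwd = np.zeros((T, d_h))
    h = np.zeros(d_h)
    for t in range(T):                      # forward pass (solid lines)
        h = rnn_step(xs[t], h, W_x, W_h)
        fwd[t] = h
    h = np.zeros(d_h)
    for t in reversed(range(T)):            # backward pass (dotted lines)
        h = rnn_step(xs[t], h, W_x, W_h)
        bwd[t] = h
    return fwd + bwd                        # sum of both encoders' outputs
```

In practice the two directions use separate parameters; a single weight set is shared here only to keep the sketch short.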
The idea of a global attentional model is to consider all the hidden states of the encoder when deriving the context vector. Hence, global alignment weights $a_t$ are derived by comparing the current target hidden state $h_t$ with each source hidden state $\bar{h}_s$:

$$a_t(s) = \frac{\exp(\mathrm{score}(h_t, \bar{h}_s))}{\sum_{s'} \exp(\mathrm{score}(h_t, \bar{h}_{s'}))}$$

with the content-based score function:

$$\mathrm{score}(h_t, \bar{h}_s) = h_t^\top W_a \bar{h}_s$$

Given the alignment vector $a_t$ as weights, the context vector $c_t$ is computed as the weighted average over all the source hidden states $\bar{h}_s$.
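As a concrete sketch, the alignment weights and context vector can be computed as below; the bilinear (general) score function of Luong et al. (2015) is an assumption here, since the exact score function is not restated in the text.

```python
import numpy as np

def global_attention(h_t, hs, W_a):
    """Global attention: compare the target state with every source state.

    h_t: (d,) current target hidden state
    hs:  (T, d) source hidden states
    W_a: (d, d) attention weight matrix (bilinear score; an assumption)
    Returns (alignment weights a_t, context vector c_t).
    """
    scores = hs @ (W_a @ h_t)                     # score(h_t, h_s) for every s
    scores -= scores.max()                        # numerical stability
    a_t = np.exp(scores) / np.exp(scores).sum()   # softmax alignment weights
    c_t = a_t @ hs                                # weighted average of source states
    return a_t, c_t
```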
In this section we detail the corpora and training experiments used to build our English→German and German→English neural translation models.
We used the parallel corpora made available for the shared task: Europarl v7, the Common Crawl corpus, News Commentary v12 and the Rapid corpus of EU press releases. Both English and German texts were preprocessed with standard tokenisation tools. German words were further preprocessed to split compounds, following an algorithm similar to the one built into Moses. Additional monolingual data available for the shared task was also used for both German and English: News Crawl articles from 2016. Basic statistics of the tokenised data are available in Table 1.
We used the byte pair encoding (BPE) technique (https://github.com/rsennrich/subword-nmt) to segment word forms and achieve open-vocabulary translation with a fixed vocabulary of source and target tokens. BPE was originally devised as a compression algorithm and was later adapted to word segmentation (Sennrich et al., 2016b). It recursively replaces frequent consecutive byte pairs with a symbol that does not occur elsewhere. Each such replacement is called a merge, and the number of merges is a tuneable parameter. Encodings were computed over the union of the German and English training corpora after preprocessing, aiming at improving consistency between source and target segmentations.
Finally, case information was considered by the network as an additional feature. It allowed us to work with a lowercased vocabulary and treat re-casing as a separate problem (Crego et al., 2016).
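For illustration, such a factored case representation might look like the sketch below; the label inventory (C, l, U, n) is a hypothetical encoding, not necessarily the one used in the system.

```python
def case_feature(token):
    """Split a token into a lowercased form plus a case label.

    Labels (illustrative): 'U' fully uppercased, 'C' capitalised first
    letter, 'l' all lowercase, 'n' no case information (digits, punctuation).
    """
    if token.isupper() and len(token) > 1:
        label = 'U'
    elif token[:1].isupper():
        label = 'C'
    elif token.islower():
        label = 'l'
    else:
        label = 'n'
    return token.lower(), label
```

The network then predicts (or reads) the lowercased token and its case label jointly, and a deterministic post-process restores the casing.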
3.2 Training Details
All experiments employ the NMT system detailed in Section 2. The encoder and the decoder consist of a four-layer stacked LSTM with cells each. We use a bidirectional RNN encoder. The size of the word embeddings is cells. We use stochastic gradient descent, a minibatch size of sentences and for the dropout probability. The maximum sentence length is set to tokens. All experiments are performed on an NVIDIA GeForce GTX 1080, using a single GPU per optimisation job. Newstest2008 is employed as the validation set, and the newstest sets from 2009 to 2016 are used as internal test sets.
3.2.1 Training on parallel data
Table 2 outlines the training work. All parallel data (P) is used in each training epoch. Row LR indicates the learning rate value used for each epoch. Note that the learning rate was initially kept fixed for several epochs, until little or no perplexity (PPL) reduction was measured on the validation set. Afterwards, additional epochs were performed with the learning rate decayed at each epoch. The BLEU score (averaged over the eight internal test sets) after each training epoch is also shown. Note that all BLEU scores in this paper are computed using multi-bleu.perl (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl). Training time per epoch, measured in hours, is shown in row Time.
As expected, a perplexity reduction is observed over the initial epochs for both German→English and English→German, until little or no improvement is measured. The decay mode is then started, further boosting accuracy (between and BLEU points) after additional epochs.
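The schedule described above, a constant learning rate until validation perplexity stalls followed by per-epoch decay, can be sketched as follows; the initial rate, decay factor and decay start shown are placeholders, as the paper's exact values are not reproduced here.

```python
def learning_rate(epoch, lr0=1.0, decay=0.7, start_decay_epoch=11):
    """Constant learning rate before start_decay_epoch, then multiplied by
    the decay factor once per subsequent epoch (all values placeholders)."""
    if epoch < start_decay_epoch:
        return lr0
    return lr0 * decay ** (epoch - start_decay_epoch + 1)
```

In the experiments, the decay start is chosen by watching validation perplexity rather than fixed in advance.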
3.2.2 Training on parallel and synthetic data
Following Sennrich et al. (2016a), we selected a subset of the available target-side in-domain monolingual corpora, translated it into the source side of the respective language pair (back-translation), and then used this synthetic parallel data for training. The best-performing models for each translation direction (epoch 13 in Table 2 for both directions) were used to back-translate the monolingual data. Sennrich et al. (2016a) motivate the use of monolingual data with domain adaptation, reduced overfitting, and better modelling of fluency.
The synthetic corpus was then divided into splits of million sentence pairs each (except for the last split, which contains fewer sentences). Table 3 shows the continuation of the training work, using at each epoch the union of the entire parallel data and one split of the monolingual back-translated data (P+M), hence balancing the amount of reference and synthetic data and summing to around million sentence pairs per epoch. Note that the training work described in Table 3 continues from the model at epoch 13 in Table 2. Table 3 also shows BLEU scores over newstest2017 for the best-performing network.
As in the experiments detailed in Table 2, once all splits of the synthetic corpus had been used to train our models with a fixed learning rate ( epochs for German→English and epochs for English→German), we began the decay mode. In this case, we decided to reduce the number of training examples from to million due to time restrictions. To select the training data we employed the algorithm detailed in Moore and Lewis (2010), which aims at identifying the sentences in a generic corpus that are closest to domain-specific data. Figure 3 outlines the algorithm. In our experiments, the parallel and monolingual back-translated corpora are considered the generic corpus (P+M), while all available newstest test sets, from 2009 to 2017, are considered the domain-specific data (T). Hence, we aim at selecting from P+M the million sentences closest to the newstest2009-17 data ( from the P and from the M subsets).
Note that we base our selection procedure on the source-side text of each translation direction, since references for newstest2017 are not available.
Sentences of the generic corpus are scored in terms of cross-entropy computed from two language models: an n-gram LM trained on the domain-specific data and an n-gram LM trained on a random sample taken from the generic corpus itself. Finally, sentences of the generic corpus are sorted by the difference between their domain-specific and generic scores (score & sort).
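A minimal sketch of this score-and-sort step is given below; unigram LMs with add-one smoothing stand in for the n-gram LMs of Moore and Lewis (2010), and all function names are illustrative.

```python
import math
from collections import Counter

def unigram_logprob_model(sentences):
    """Train a unigram LM with add-one smoothing; returns a log-probability
    function (a stand-in for the n-gram LMs of the actual method)."""
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    vsize = len(counts) + 1
    def logprob(sentence):
        return sum(math.log((counts[w] + 1) / (total + vsize))
                   for w in sentence.split())
    return logprob

def moore_lewis_select(generic, in_domain, k):
    """Score each generic sentence by the per-word cross-entropy difference
    H_domain(s) - H_generic(s) and keep the k lowest-scoring (most
    in-domain-like) sentences."""
    lp_dom = unigram_logprob_model(in_domain)
    lp_gen = unigram_logprob_model(generic)
    def score(s):
        n = max(len(s.split()), 1)
        return (-lp_dom(s) + lp_gen(s)) / n
    return sorted(generic, key=score)[:k]
```

Sentences the domain LM finds likely but the generic LM does not receive the lowest scores and are selected first.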
3.2.3 Hyper-specialisation on news test sets
Similar to domain adaptation, we explore a post-processing approach, which hyper-specialises a neural network to a specific domain by running additional training epochs over newly available in-domain data (Servan et al., 2016). In our context, we use all newstest sets (T) (around sentences) as in-domain data and run a single learning iteration in order to fine-tune the resulting network. Since translations are not available for newstest2017, we instead use the single-best hypotheses produced by the best-performing system in Table 3. In a similar task, Crego and Senellart (2016) report translation accuracy gains from a neural system trained over a synthetic corpus built from source reference sentences and target translation hypotheses. The authors claim that translating with an automatic engine produces simpler text than reference (human) translations, leading to higher accuracy results.
Table 4 details the hyper-specialisation training work. Note that the entire hyper-specialisation process was performed in approximately minutes. We used a fixed learning rate. Further experiments need to be conducted for a better understanding of the role of the learning rate in hyper-specialisation work.
Accuracy gains are obtained despite using automatic (noisy) translation hypotheses for hyper-specialisation, for both German→English and English→German. In order to measure the impact of using newstest2017 as training data (self-training), we repeated the hyper-specialisation experiment using the newstest sets from 2009 to 2016 as training data, i.e. excluding newstest2017 (T-2017). Slightly lower accuracy results were obtained with this second configuration (last column in Table 4), but it still outperforms the systems without hyper-specialisation, for both German→English and English→German.
4 Conclusions

We described SYSTRAN's submissions to the WMT 2017 shared news translation task for English→German and German→English. Our systems are built using OpenNMT (Klein et al., 2017). We experimented with monolingual data that was automatically back-translated. Our resulting models were successfully hyper-specialised with an adaptation technique that fine-tunes models according to the evaluation test sentences. Note that all our submitted systems are single networks; no ensemble experiments were carried out, although ensembling typically yields higher accuracy.
We would like to thank the anonymous reviewers for their careful reading of the paper and their many insightful comments and suggestions.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Demoed at NIPS 2014: http://lisa.iro.umontreal.ca/mt-demo/. http://arxiv.org/abs/1409.0473.
- Crego et al. (2016) Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran’s pure neural machine translation systems. CoRR abs/1610.05540. http://arxiv.org/abs/1610.05540.
- Crego and Senellart (2016) Josep Maria Crego and Jean Senellart. 2016. Neural machine translation from simplified translations. CoRR abs/1612.06139. http://arxiv.org/abs/1612.06139.
- Klein et al. (2017) Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Accepted to ACL 2017 Conference Demo Papers. Association for Computational Linguistics, Vancouver, Canada.
- Luong et al. (2015) Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412–1421. http://aclweb.org/anthology/D15-1166.
- Moore and Lewis (2010) Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers. Association for Computational Linguistics, Uppsala, Sweden, pages 220–224. http://www.aclweb.org/anthology/P10-2041.
- Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics pages 86–96. http://www.aclweb.org/anthology/P16-1009.
- Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1715–1725. http://www.aclweb.org/anthology/P16-1162.
- Servan et al. (2016) Christophe Servan, Josep Maria Crego, and Jean Senellart. 2016. Domain specialization: a post-training domain adaptation for neural machine translation. CoRR abs/1612.06141. http://arxiv.org/abs/1612.06141.
- Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. Technical report, Google. https://arxiv.org/abs/1609.08144.
- Zaremba et al. (2014) Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR abs/1409.2329. http://arxiv.org/abs/1409.2329.