SYSTRAN Purely Neural MT Engines for WMT2017

09/12/2017 · Yongchao Deng, et al. · SYSTRAN Software Inc

This paper describes SYSTRAN's systems submitted to the WMT 2017 shared news translation task for English-German, in both translation directions. Our systems are built using OpenNMT, an open-source neural machine translation system, implementing sequence-to-sequence models with LSTM encoder/decoders and attention. We experimented using monolingual data automatically back-translated. Our resulting models are further hyper-specialised with an adaptation technique that finely tunes models according to the evaluation test sentences.


1 Introduction

We participated in the WMT 2017 shared news translation task in two translation directions: English→German and German→English.

The paper is structured as follows: Section 2 overviews our neural MT engine. Section 3 describes the set of experiments carried out to build the English→German and German→English neural translation models and details the results. Finally, conclusions are drawn in Section 4.

2 Neural MT System

Neural machine translation (NMT) is a new methodology for machine translation that has led to remarkable improvements, particularly in terms of human evaluation, compared to rule-based and statistical machine translation (SMT) systems (Crego et al., 2016; Wu et al., 2016). NMT has now become a widely-applied technique for machine translation, as well as an effective approach for other related NLP tasks such as dialogue, parsing, and summarisation.

Our NMT system (Klein et al., 2017) follows the architecture presented in (Bahdanau et al., 2014). It is implemented as an encoder-decoder network with multiple layers of an RNN with Long Short-Term Memory (LSTM) hidden units (Zaremba et al., 2014). Figure 1 illustrates a schematic view of the MT network.

Figure 1: Schematic view of our MT network.

Source words are first mapped to word vectors and then fed into a bidirectional recurrent neural network (RNN) that reads an input sequence $x = (x_1, \dots, x_I)$. Upon seeing the <eos> symbol, the final time step initialises a target RNN. The decoder is an RNN that predicts a target sequence $y = (y_1, \dots, y_J)$, with $I$ and $J$ respectively the source and target sentence lengths. Translation is finished when the decoder predicts the <eos> symbol.
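The decoding loop described above can be sketched in plain Python. This is a toy illustration, not the OpenNMT implementation: `step` stands in for one forward pass of the decoder RNN (embedding lookup, recurrent update, attention, and output projection), and is an assumed interface.

```python
def greedy_decode(step, init_state, eos="<eos>", max_len=50):
    """Greedy decoding loop: call `step(state, prev_token)` repeatedly,
    where `step` is assumed to return (next_token, next_state), and stop
    when the decoder predicts <eos> or the length limit is reached."""
    tokens, state, prev = [], init_state, "<bos>"
    while len(tokens) < max_len:
        tok, state = step(state, prev)
        if tok == eos:
            break                      # translation is finished
        tokens.append(tok)
        prev = tok                     # feed the prediction back in
    return tokens
```

In a real system `step` would also return the attention weights, and beam search would replace the greedy argmax; the stopping criterion, however, is exactly the <eos> test shown here.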

The left-hand side of the figure illustrates the bidirectional encoder, which actually consists of two independent LSTM encoders: one encoding the normal sequence (solid lines), which calculates a forward sequence of hidden states $(\overrightarrow{h}_1, \dots, \overrightarrow{h}_I)$, and a second encoder reading the input sequence in reversed order (dotted lines), which calculates the backward sequence $(\overleftarrow{h}_1, \dots, \overleftarrow{h}_I)$. The final encoder outputs consist of the sum of both encoders' final outputs. The right-hand side of the figure illustrates the RNN decoder. Each word is predicted based on a recurrent hidden state $h_t$ and a context vector $c_t$ that aims at capturing relevant source-side information.

Figure 2 illustrates the attention layer. It implements the "general" attentional architecture from (Luong et al., 2015). The idea of a global attentional model is to consider all the hidden states of the encoder when deriving the context vector $c_t$. Hence, global alignment weights $a_t$ are derived by comparing the current target hidden state $h_t$ with each source hidden state $\bar{h}_s$:

$$a_t(s) = \frac{\exp(\mathrm{score}(h_t, \bar{h}_s))}{\sum_{s'} \exp(\mathrm{score}(h_t, \bar{h}_{s'}))}$$

with the content-based score function:

$$\mathrm{score}(h_t, \bar{h}_s) = h_t^\top W_a \bar{h}_s$$

Given the alignment vector $a_t$ as weights, the context vector $c_t$ is computed as the weighted average over all the source hidden states $\bar{h}_s$.
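The attention step above can be sketched in plain Python. This is a minimal illustration of the "general" scoring variant, not the OpenNMT code; a real system would use batched GPU tensor operations.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def general_attention(h_t, H_s, W_a):
    """Luong 'general' attention: score(h_t, h_s) = h_t^T W_a h_s.

    h_t : target hidden state (list of d floats)
    H_s : source hidden states (list of I lists of d floats)
    W_a : learned d x d weight matrix (list of lists)
    Returns (alignment weights a_t, context vector c_t).
    """
    def matvec(M, v):
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

    Wh = matvec(W_a, h_t)                      # W_a h_t, computed once
    scores = [sum(h_j * w_j for h_j, w_j in zip(h_s, Wh)) for h_s in H_s]
    a_t = softmax(scores)                      # global alignment weights
    c_t = [sum(a * h_s[k] for a, h_s in zip(a_t, H_s))
           for k in range(len(h_t))]           # weighted average of H_s
    return a_t, c_t
```

The weighted average in the last step is exactly the context vector $c_t$ defined above; the softmax turns the raw bilinear scores into a distribution over source positions.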

Figure 2: Attention layer of the MT network.

Note that for the sake of simplicity Figure 1 illustrates a two-layer LSTM encoder/decoder, while an arbitrary number of LSTM layers can be stacked. More details about our system can be found in (Crego et al., 2016).

3 Experiments

In this section we detail the corpora and training experiments used to build our English↔German neural translation models.

3.1 Corpora

We used the parallel corpora made available for the shared task: Europarl v7, the Common Crawl corpus, News Commentary v12 and the Rapid corpus of EU press releases. Both English and German texts were preprocessed with standard tokenisation tools. German words were further preprocessed to split compounds, following an algorithm similar to the one built into Moses. Additional monolingual data available for the shared task was also used for both German and English: News Crawl articles from 2016. Basic statistics of the tokenised data are available in Table 1.

                  #sents    #words    vocab.   mean length
Parallel
    En             4.6M     103.7M     627k       22.6
    De             4.6M     104.5M     836k       22.8
Monolingual
    En            20.6M     463.6M    1.18M       22.5
    De            34.7M     620.8M    3.36M       17.8

Table 1: English-German parallel and monolingual corpus statistics. The last column indicates mean sentence length. M stands for millions, k for thousands.

We used a byte pair encoding (BPE) technique (https://github.com/rsennrich/subword-nmt) to segment word forms and achieve open-vocabulary translation with a fixed vocabulary of source and target tokens. BPE was originally devised as a compression algorithm and was adapted to word segmentation (Sennrich et al., 2016b). It recursively replaces frequent consecutive byte pairs with a symbol that does not occur elsewhere. Each such replacement is called a merge, and the number of merges is a tuneable parameter. Encodings were computed over the union of the German and English training corpora after preprocessing, aiming at improving consistency between source and target segmentations.
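The merge-learning loop at the heart of BPE can be sketched as follows. This is an illustrative re-implementation of the algorithm described in (Sennrich et al., 2016b), not the subword-nmt code itself; the input vocabulary maps space-separated symbol sequences (with an end-of-word marker) to frequencies.

```python
import collections
import re

def get_pair_stats(vocab):
    """Count frequencies of adjacent symbol pairs over the vocabulary."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the symbol pair with its concatenation."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

def learn_bpe(vocab, num_merges):
    """Learn `num_merges` merge operations: repeatedly merge the most
    frequent adjacent pair. Each recorded merge is one BPE operation."""
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges, vocab
```

At application time, the learned merge list is replayed in order on each new word, so the segmentation is fully determined by the training corpora, which is why computing it over the union of both languages improves source/target consistency.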

Finally, case information was considered by the network as an additional feature. It allowed us to work with a lowercased vocabulary and treat re-casing as a separate problem (Crego et al., 2016).

3.2 Training Details

All experiments employ the NMT system detailed in Section 2. The encoder and the decoder consist of a four-layer stacked LSTM. We use a bidirectional RNN encoder. We train with stochastic gradient descent, a fixed minibatch size and dropout, and a maximum sentence length. All experiments are performed on a single NVIDIA GeForce GTX 1080 GPU per training run. Newstest2008 (2008) is employed as the validation test set and newstest from 2009 to 2016 (2009-16) as internal test sets.

3.2.1 Training on parallel data

Table 2 outlines the training runs. All parallel data (P) is used on each training epoch. Row LR indicates the learning-rate value used for each epoch. Note that the learning rate was initially kept fixed for several epochs, until little or no perplexity (PPL) reduction was measured on the validation set. Afterwards, additional epochs are performed with the learning rate decayed at each epoch. The BLEU score (averaged over the eight internal test sets) after each training epoch is also shown. Note that all BLEU scores in this paper are computed using multi-bleu.perl (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl). Training time per epoch, in hours, is shown in row Time.
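The schedule just described can be sketched as a small helper that replays the decisions over a list of validation perplexities. The decay factor and the improvement threshold are illustrative assumptions; the paper does not report the exact values used.

```python
def schedule(initial_lr, val_ppls, decay=0.5, min_improvement=0.01):
    """Replay the learning-rate schedule: keep the rate constant while
    validation perplexity still improves, then decay it exponentially
    on every subsequent epoch ("decay mode")."""
    lrs, lr, decaying = [], initial_lr, False
    for i, ppl in enumerate(val_ppls):
        lrs.append(lr)                 # rate used for this epoch
        if not decaying and i > 0 and val_ppls[i - 1] - ppl < min_improvement:
            decaying = True            # little or no PPL reduction: start decay
        if decaying:
            lr *= decay
    return lrs
```

Note that once decay mode starts it is never left, matching the two-phase training described above.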

Epoch          1   2   3   4   5   6   7   8   9   10  11  12  13

German→English
Data           P   P   P   P   P   P   P   P   P   P   P   P   P
Time (hours)   24  24  24  24  24  24  24  24  24  24  24  24  24
LR
PPL (2008)
BLEU (2009-16)

English→German
Data           P   P   P   P   P   P   P   P   P   P   P   P   P
Time (hours)   24  24  24  24  24  24  24  24  24  24  24  24  24
LR
PPL (2008)
BLEU (2009-16)

Table 2: Training on parallel data.

As expected, a perplexity reduction is observed over the initial epochs, up to the epochs (German→English and English→German) where little or no improvement is observed. The decay mode is then started, allowing accuracy to be further boosted after additional epochs.

3.2.2 Training on parallel and synthetic data

Following (Sennrich et al., 2016a), we selected a subset of the available target-side in-domain monolingual corpora, translated it into the source side (back-translation) of the respective language pair, and then used this synthetic parallel data for training. The best performing model for each translation direction (epoch 13 in Table 2 for both directions) was used to back-translate the monolingual data. (Sennrich et al., 2016a) motivate the use of monolingual data with domain adaptation, reduced overfitting, and better modelling of fluency.

The synthetic corpus was then divided into splits of equal size in million sentence pairs (except for the last split, which contains fewer sentences). Table 3 shows the continuation of the training run, using at each epoch the union of the entire parallel data and one split of the monolingual back-translated data (P+M), hence balancing the amount of reference and synthetic data used per epoch. Note that the training run described in Table 3 continues from the final model of Table 2. Table 3 also shows BLEU scores over newstest2017 for the best performing network.
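The per-epoch data mixing can be sketched as follows. This is an illustrative helper over in-memory sentence-pair lists, not the actual training pipeline, which operates on corpus files.

```python
def epoch_datasets(parallel, synthetic, split_size):
    """Yield one training set per epoch: the full parallel corpus (P)
    plus the next split of the back-translated synthetic corpus (M),
    so each epoch sees P+M with a fresh synthetic split."""
    splits = [synthetic[i:i + split_size]
              for i in range(0, len(synthetic), split_size)]
    for split in splits:               # last split may be smaller
        yield parallel + split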

Epoch          1    2    3    4    5    6    7    8    9    10   11   12   13

German→English
Data           P+M  P+M  P+M  P+M  P+M  P'+M' P'+M' P'+M' P'+M' P'+M'
Time (hours)
LR
PPL (2008)
BLEU (2009-16)
BLEU (2017)

English→German
Data           P+M  P+M  P+M  P+M  P+M  P+M  P+M  P+M  P'+M' P'+M' P'+M' P'+M' P'+M'
Time (hours)
LR
PPL (2008)
BLEU (2009-16)
BLEU (2017)

Table 3: Training on parallel and synthetic data.

As for the experiments detailed in Table 2, once all splits of the synthetic corpus had been used to train our models under a fixed learning rate (fewer epochs for German→English than for English→German), we began a decay mode. In this case, we decided to reduce the amount of training examples per epoch due to time restrictions. To select the training data we employed the algorithm detailed in (Moore and Lewis, 2010), which aims at identifying the sentences in a generic corpus that are closest to domain-specific data. Figure 3 outlines the algorithm. In our experiments, the parallel and monolingual back-translated corpora are considered the generic corpus (P+M), while all available newstest test sets, from 2009 to 2017, are considered the domain-specific data (T). Hence, we aim at selecting from P+M the sentences closest to the newstest2009-17 data, drawn from both the P and the M subsets.

Figure 3: Data selection process.

Obviously, we base our selection procedure on the source-side text of each translation direction, as references for newstest2017 are not available.

Sentences of the generic corpus are scored in terms of cross-entropy computed from two language models: an n-gram LM trained on the domain-specific data and an n-gram LM trained on a random sample taken from the generic corpus itself. Finally, sentences of the generic corpus are sorted by the difference between the domain-specific and generic scores (score & sort).
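The score-and-sort step can be sketched as follows. For readability this toy version uses add-one-smoothed unigram LMs in place of the n-gram LMs of (Moore and Lewis, 2010); the scoring logic, cross-entropy difference with lower meaning closer to the domain, is the same.

```python
import math
from collections import Counter

def unigram_logprob(sentence, counts, total, vocab_size):
    """Per-token add-one-smoothed unigram log-probability."""
    words = sentence.split()
    lp = sum(math.log((counts[w] + 1) / (total + vocab_size)) for w in words)
    return lp / max(len(words), 1)

def moore_lewis_sort(generic, in_domain):
    """Rank generic-corpus sentences by cross-entropy difference
    H_in(s) - H_gen(s); the lowest-scoring sentences are the closest
    to the domain-specific data and are selected first."""
    def build(corpus):
        c = Counter(w for s in corpus for w in s.split())
        return c, sum(c.values()), len(c)

    c_in, t_in, v_in = build(in_domain)
    c_gen, t_gen, v_gen = build(generic)

    def score(s):  # cross-entropy is the negated per-token log-prob
        return (-unigram_logprob(s, c_in, t_in, v_in)
                + unigram_logprob(s, c_gen, t_gen, v_gen))

    return sorted(generic, key=score)
```

Subtracting the generic-LM cross-entropy penalises sentences that merely contain frequent words, so the ranking favours sentences that are specifically news-like rather than generically common.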

3.2.3 Hyper-specialisation on news test sets

Similarly to domain adaptation, we explore a post-processing approach that hyper-specialises a neural network to a specific domain by running additional training epochs over newly available in-domain data (Servan et al., 2016). In our context, we use all newstest sets (T) as in-domain data and run a single learning iteration to fine-tune the resulting network. Translations are not available for newstest2017; instead, we use the single-best hypotheses produced by the best performing system in Table 3. In a similar task, (Crego and Senellart, 2016) report translation accuracy gains from a neural system trained over a synthetic corpus built from source reference sentences and target translation hypotheses. The authors claim that translating with an automatic engine yields simpler text than reference (human) translations, leading to higher accuracy results.
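Assembling the fine-tuning corpus described above amounts to the following. The function and variable names are illustrative, not from the paper's code.

```python
def build_hyperspec_corpus(newstest_pairs, newstest2017_src, best_hypotheses):
    """Build the in-domain fine-tuning corpus: past newstest sets with
    their reference translations, plus newstest2017 source sentences
    paired with the best system's own single-best hypotheses (since
    newstest2017 references are unavailable at submission time)."""
    return newstest_pairs + list(zip(newstest2017_src, best_hypotheses))
```

A single training epoch over this small corpus, at a low learning rate, is then enough to shift the model toward the test-set domain.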

Table 4 details the hyper-specialisation training runs. Note that the entire hyper-specialisation process takes only minutes. The learning rate was kept fixed. Further experiments are needed to better understand the role of the learning rate in hyper-specialisation.

Epoch          1    1

German→English
Data           T    T-2017
Time (seconds)
LR
BLEU (2017)

English→German
Data           T    T-2017
Time (seconds)
LR
BLEU (2017)

Table 4: Hyper-specialisation on news test sets.

Accuracy gains are obtained for both German→English and English→German despite using automatic (noisy) translation hypotheses for hyper-specialisation. In order to measure the impact of using newstest2017 as training data (self-training), we repeated the hyper-specialisation experiment using as training data the newstest sets from 2009 to 2016, that is, excluding newstest2017 (T-2017). Slightly lower accuracy was obtained by this second configuration (last column of Table 4), but it still outperforms the systems without hyper-specialisation in both directions.

4 Conclusions

We described SYSTRAN's submissions to the WMT 2017 shared news translation task for English-German, in both directions. Our systems are built using OpenNMT. We experimented with monolingual data automatically back-translated. Our resulting models were successfully hyper-specialised with an adaptation technique that finely tunes models according to the evaluation test sentences. Note that all our submitted systems are single networks; no ensemble experiments were carried out, although ensembling typically yields higher accuracy.

Acknowledgements

We would like to thank the anonymous reviewers for their careful reading of the paper and their many insightful comments and suggestions.

References