Domain Control for Neural Machine Translation

12/19/2016 · by Catherine Kobus, et al.

Machine translation systems are very sensitive to the domains they were trained on, and several domain adaptation techniques have been studied in depth. We propose a new technique for neural machine translation (NMT), called domain control, which is applied at runtime using a single neural network covering multiple domains. The presented approach improves quality over dedicated single-domain models when translating on any of the covered domains, and even on out-of-domain data. In addition, model parameters do not need to be re-estimated for each domain, making the approach practical for real use cases. Evaluation is carried out on English-to-French translation for two testing scenarios. We first consider the case where an end-user performs translations on a known domain. Secondly, we consider the scenario where the domain is not known and is predicted at the sentence level before translating. Results show consistent accuracy improvements under both conditions.




1 Introduction

Machine translation systems are very sensitive to the domain(s) they were trained on because each domain has its own style, sentence structure and terminology. There is often a mismatch between the domain for which training data are available and the target domain of a machine translation system. If there is a strong deviation between training and testing data, translation quality degrades dramatically. Word ambiguities are a frequent issue for machine translation systems: for instance, the English word "administer" must be translated differently depending on whether it appears in a medical or a political context. Our work is motivated by the idea that neural models could benefit from domain information to choose the most appropriate terminology and sentence structure, while using the information from all domains to improve the base translation quality. Recently, Sennrich et al. (2016) reported on the ability of neural networks to control politeness through side constraints. We extend this idea to domain control. Our goal is to allow a model built from a diverse set of training data to produce in-domain translations; that is, to extend the coverage of generic NMT models to specific domains, with their specialized terminology and style, without lowering translation quality on more generic data. We present two frameworks to feed domain meta-information to the NMT encoder.

The paper is structured as follows: Section 2 overviews related work. Details of our neural MT engine are given in Section 3. Section 4 describes the proposed approach. Experiments and results are detailed in Section 5. Finally, conclusions and further work are drawn in Section 6.

2 Related Work

A lot of work has already been done on domain adaptation for statistical machine translation. Approaches range from in-domain data selection methods (Hildebrand et al., 2005; Moore and Lewis, 2010; Sethy et al., 2006) to mixtures of in-domain models (Foster and Kuhn, 2007; Koehn and Schroeder, 2007; Schwenk and Koehn, 2008).

Regarding neural MT, Luong and Manning (2015) adapt a generic NMT network (trained on out-of-domain data) by running additional training iterations over an in-domain data set. The authors claim to obtain a domain-adapted model in a very limited training time. This differs from our work, however, since we aim at performing domain-adapted translations with a single network that covers multiple domains.

Recent work has dealt with domain adaptation for NMT by providing meta-information to the neural network, and our work follows this line. Chen et al. (2016) feed the network with topic information on the decoder side; topics are numerous and consist of human-labeled product categories. Zhang et al. (2016) include topic modelling on both encoder and decoder sides: a given number of topics are automatically inferred from the training data using LDA, and each word in a sentence is assigned its own vector of topics. In our work, we also provide domain meta-information to the network, but we introduce it at the sentence level.

3 Neural MT

Our NMT system follows the architecture presented in Bahdanau et al. (2014). It is implemented as an encoder-decoder network with multiple layers of RNNs with Long Short-Term Memory hidden units (Zaremba et al., 2014). Figure 1 illustrates a schematic view of the MT network.

Source words are first mapped to word vectors and then fed into a bidirectional recurrent neural network (RNN) that reads an input sequence x = (x_1, ..., x_J). Upon seeing the <eos> symbol, the final time step initialises a target RNN. The decoder is an RNN that predicts a target sequence y = (y_1, ..., y_I), J and I being respectively the source and target sentence lengths. Translation is finished when the decoder predicts the <eos> symbol.

The left-hand side of the figure illustrates the bidirectional encoder, which actually consists of two independent LSTM encoders: one encodes the normal sequence (solid lines) and calculates a forward sequence of hidden states (→h_1, ..., →h_J); the second reads the input sequence in reversed order (dotted lines) and calculates the backward sequence (←h_1, ..., ←h_J). The final encoder outputs consist of the sum of the final outputs of both encoders. The right-hand side of the figure illustrates the RNN decoder. Each word is predicted based on a recurrent hidden state and a context vector that aims at capturing relevant source-side information.
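As a rough sketch of the bidirectional encoder described above, the following uses a plain tanh RNN cell as a stand-in for the LSTM units, with toy random weights; only the forward/backward split and the summed outputs mirror the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_states(inputs, W, U, b):
    """Run a simple tanh RNN (a stand-in for the LSTM cells used in the
    paper) over `inputs` and return the hidden state at every time step."""
    h = np.zeros(W.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(W @ h + U @ x + b)
        states.append(h)
    return np.stack(states)

d_emb, d_hid, J = 4, 5, 3          # toy embedding size, hidden size, length
x = rng.normal(size=(J, d_emb))    # a toy source sentence, already embedded

# Two independent encoders: one reads the sequence forward, one backward.
Wf, Uf, bf = rng.normal(size=(d_hid, d_hid)), rng.normal(size=(d_hid, d_emb)), np.zeros(d_hid)
Wb, Ub, bb = rng.normal(size=(d_hid, d_hid)), rng.normal(size=(d_hid, d_emb)), np.zeros(d_hid)

fwd = rnn_states(x, Wf, Uf, bf)              # forward hidden states
bwd = rnn_states(x[::-1], Wb, Ub, bb)[::-1]  # backward states, re-aligned

# The encoder output at each position is the sum of both directions.
enc_out = fwd + bwd
```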

Figure 2 illustrates the attention layer. It implements the "general" attentional architecture from Luong et al. (2015). The idea of a global attentional model is to consider all the hidden states of the encoder when deriving the context vector c_t. Hence, global alignment weights a_t are derived by comparing the current target hidden state h_t with each source hidden state h̄_s:

a_t(s) = exp(score(h_t, h̄_s)) / Σ_{s'} exp(score(h_t, h̄_{s'}))

with the content-based score function:

score(h_t, h̄_s) = h_t^T W_a h̄_s

Given the alignment vector a_t as weights, the context vector c_t is computed as the weighted average over all the source hidden states.

Figure 2: Attention layer of the MT network.
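The alignment computation above can be sketched in a few lines of numpy; `W_a` stands in for the learned weight matrix of the "general" score (random here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5                             # toy hidden size
H_src = rng.normal(size=(4, d))   # source hidden states h_bar_s (one per word)
h_t = rng.normal(size=d)          # current target hidden state
W_a = rng.normal(size=(d, d))     # "general" score matrix (learned in practice)

# "general" score: score(h_t, h_bar_s) = h_t^T W_a h_bar_s
scores = H_src @ W_a.T @ h_t      # one score per source position

# alignment weights: softmax over all source positions
a_t = np.exp(scores - scores.max())
a_t /= a_t.sum()

# context vector: weighted average of the source hidden states
c_t = a_t @ H_src
```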

The framework is available in the open-source project seq2seq-attn. More details about our system can be found in Anonymised.

4 Domain control

Two different techniques are implemented to integrate domain control: additional token and domain feature.

4.1 Additional Token

The additional token method, inspired by the politeness control technique detailed in Sennrich et al. (2016), consists of adding an artificial token at the end of each source sentence, letting the network pay attention to the domain of each sentence pair. For instance, consider the following English-French translation:

Src: Headache may be experienced
Tgt: Des céphalées peuvent survenir

The network reads off the sentence pair with the appropriate Medical domain tag @MED@:

Src: Headache may be experienced @MED@
Tgt: Des céphalées peuvent survenir

Domain tags are selected so as to avoid overlaps with words present in the source language vocabulary. This method, though simple, has already proven effective for controlling the politeness level of a translation (Sennrich et al., 2016) and for supporting multilingual NMT models (Johnson et al., 2016).
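A minimal sketch of the additional-token preprocessing; the `tag_source` helper and the second corpus sentence are illustrative, not from the paper:

```python
def tag_source(sentence: str, domain: str) -> str:
    """Append an artificial domain token (e.g. "@MED@") to a source
    sentence, as in the additional-token approach."""
    return f"{sentence} @{domain}@"

corpus = [
    ("Headache may be experienced", "MED"),
    ("Click the Save button", "IT"),   # hypothetical IT-domain example
]
tagged = [tag_source(src, dom) for src, dom in corpus]
```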

4.2 Word Feature

We present a second technique to introduce domain control into our neural translation model. We use word-level features as described in Crego et al. (2016). The first layer of the network is the word embedding layer; we adapt this layer to extend each word embedding with an arbitrary number of cells designed to encode domain information. Notice that using additional features does not increase the source word vocabulary: there are separate vocabularies for words and domain tags. Figure 4.2 illustrates a word embedding layer extended with domain information.

Following the example of Section 4.1, the sentence pair is given to the network with the appropriate Medical domain tag attached to each source word as follows:

Src: Headache|MED may|MED be|MED experienced|MED
Tgt: Des céphalées peuvent survenir
Domain        | Train: Lines | Src words | Tgt words | Test: Lines | Src words | Tgt words
IT            | 399k         | 6.0M      | 7.3M      | 2k          | 36.8k     | 45.1k
Literature    | 35k          | 881k      | 943k      | 2k          | 50.1k     | 54.0k
Medical       | 923k         | 10.5M     | 12.3M     | 2k          | 35.6k     | 43.0k
News          | 194k         | 5.4M      | 6.7M      | 2k          | 53.5k     | 66.4k
Parliamentary | 1.6M         | 37.6M     | 43.8M     | 2k          | 40.7k     | 49.4k
Tourism       | 1.1M         | 23.3M     | 27.5M     | 2k          | 39.1k     | 45.5k
Total         | 4.3M         | 83.7M     | 98.5M     |             |           |
Table 1: Statistics for the training and test sets of each domain corpus. Note that k stands for thousands and M for millions.

Note that under this feature framework, the sentence-level domain information is added on a word-by-word basis to all the words in a sentence. We reuse an existing framework that was originally implemented to include linguistic features at the word level (Anonymised).
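The word-feature embedding layer can be sketched as follows; vocabulary contents, embedding sizes and weights are toy values, and only the concatenation of word embeddings with domain cells mirrors the approach:

```python
import numpy as np

rng = np.random.default_rng(2)

word_vocab = {"Headache": 0, "may": 1, "be": 2, "experienced": 3}
domain_vocab = {"MED": 0, "IT": 1}      # separate vocabulary for the feature

d_word, d_feat = 6, 2                   # toy sizes; the feature width is free
E_word = rng.normal(size=(len(word_vocab), d_word))
E_dom = rng.normal(size=(len(domain_vocab), d_feat))

def embed(sentence, domain):
    """Extend each word embedding with a few cells encoding the
    sentence-level domain, repeated on every word of the sentence."""
    word_vecs = E_word[[word_vocab[w] for w in sentence]]
    dom_vec = np.tile(E_dom[domain_vocab[domain]], (len(sentence), 1))
    return np.concatenate([word_vecs, dom_vec], axis=1)

X = embed(["Headache", "may", "be", "experienced"], "MED")
```

Note that the domain feature adds only `d_feat` cells per word, regardless of the number of domains, and leaves the word vocabulary untouched.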

5 Experiments

We evaluate the presented approach on English-to-French translation. Section 5.1 describes the data used for the experiments and details training configurations. Finally, Section 5.2 reports on translation accuracy results.

5.1 Training Details

We used training corpora covering six different domains: IT, Literature, Medical, News, Parliamentary and Tourism. Medical, News and Parliamentary data come from public corpora (respectively EMEA, News Commentary and Europarl), available from the OPUS repository (Tiedemann, 2012). IT, Literature and Tourism are proprietary data. Statistics of the corpora used are given in Table 1.

Domain        | Single | Join  | Token  | Feature       | Feature | Acc (%)
Constraint    | None   | None  | Oracle | Oracle        | RNN     |
IT            | 52.73  | 53.81 | 53.76  | 54.56 (+0.75) | 54.42   | 97.8
Literature    | 20.25  | 29.81 | 29.96  | 30.73 (+0.92) | 30.71   | 93.1
Medical       | 33.97  | 41.83 | 42.02  | 42.51 (+0.68) | 42.34   | 89.4
News          | 29.70  | 33.83 | 34.47  | 34.61 (+0.78) | 34.49   | 88.3
Parliamentary | 37.34  | 37.53 | 37.13  | 37.79 (+0.26) | 37.77   | 82.7
Tourism       | 37.05  | 37.46 | 37.72  | 38.30 (+0.84) | 38.01   | 90.6
Dialogs       |        | 19.25 |        |               | 19.55   |
Table 2: BLEU scores for the different systems and RNN-based domain classifier accuracy.

All experiments employ the NMT system detailed in Section 3 and are performed on an NVidia GeForce GTX 1080. We use BPE sub-word units, with source and target token vocabularies computed over the entire training corpora, and fixed-size word embeddings. During training, we use stochastic gradient descent with minibatches, dropout and a bidirectional RNN encoder. Models are trained for a fixed number of epochs, with the learning rate decaying after an initial epoch. Training the models on the complete training data set (4.3M sentence pairs) takes on the order of days.

Four different training configurations are considered. The first consists of six in-domain NMT models, each trained on its corresponding domain data set (henceforth Single models). The Join network is built using the concatenation of all the training data; note that this model does not include any information about domain. A Token network is also trained on all the available training data and includes domain information through the additional-token approach detailed in Section 4.1. Finally, the Feature network, also trained on all available training data, introduces domain information by means of the feature framework detailed in Section 4.2.

5.2 Results

Src: Your doctor’s instructions should be carefully observed .
Ref: Vous devrez respecter scrupuleusement les instructions de votre médecin .
Join: Les instructions de votre médecin doivent être soigneusement surveillées .
Feature: Les instructions de votre médecin doivent être suivies attentivement .
Src: All injections of Macugen will be administered by your doctor.
Ref: Toutes les injections de Macugen doivent être réalisées par votre médecin.
Join: Toutes les injections de Macugen seront l’ordre du jour de votre médecin.
Feature: Toutes les injections de Macugen seront effectuées par votre médecin.
Table 3: Translation examples of in-domain medical sentences with and without the domain feature.
Test domain   | Domain feature used
              | IT    | Literature | Medical | News   | Parl.  | Tourism
IT            | 54.56 | -12.76     | -10.25  | -12.43 | -13.83 | -14.18
Literature    | -5.96 | 30.73      | -5.13   | -2.89  | -3.50  | -3.03
Medical       | -4.82 | -6.23      | 42.51   | -5.06  | -5.39  | -4.74
News          | -3.36 | -1.58      | -3.04   | 34.61  | -0.81  | -2.48
Parliamentary | -4.14 | -1.92      | -3.09   | -0.39  | 37.79  | -3.01
Tourism       | -6.72 | -3.20      | -4.16   | -4.26  | -4.35  | 38.30
Table 4: BLEU score decreases when using different predefined domain tags (diagonal: BLEU with the correct tag).

Table 2 shows translation accuracy results for the different training configurations. Accuracies are measured using BLEU (multi-bleu.perl). As expected, the Join model outperforms all Single models on their corresponding test sets, showing that NMT engines benefit from additional training data. Differences in accuracy are smaller for domains with a higher representation in the Join model, such as Parliamentary and Tourism. No domain information is used in these first configurations (None).

Results for models incorporating domain information are detailed in columns Token and Feature. Oracle experiments assume that the test set domains are known in advance, thus allowing the use of the correct side constraint. The additional-token approach gives mixed results: compared to the Join model, it improves translation quality on some tasks and degrades it on others. By contrast, incorporating domain information through the Feature approach consistently improves translation quality on all tasks. Adding domain information to all source words seems to be a good technique for conveying the domain side constraint and for improving the consistency of NMT target word choices. Differences between the Feature and Join configurations are shown in parentheses. Note that an average improvement of nearly +0.8 BLEU is observed on all test sets, with the exception of Parliamentary translations, for which accuracy was only improved by +0.26. This can be explained by the fact that Parliamentary is the best represented domain in the Join training set.
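The average improvement can be checked directly from the per-domain deltas reported in parentheses in Table 2:

```python
# BLEU deltas of the Feature (Oracle) model over Join, from Table 2.
deltas = {
    "IT": 0.75, "Literature": 0.92, "Medical": 0.68,
    "News": 0.78, "Parliamentary": 0.26, "Tourism": 0.84,
}
# Average over the five domains other than Parliamentary.
others = [v for k, v in deltas.items() if k != "Parliamentary"]
avg = sum(others) / len(others)   # close to +0.8 BLEU
```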

Translation examples in a medical context are shown in Table 3. They illustrate the impact on domain adaptation introduced by the Feature approach. The first example shows the preference of the Feature model for the French translation suivies attentivement of the English carefully observed, which seems more suitable than the hypothesis soigneusement surveillées output by the Join model. A similar effect appears in the second example, where the French effectuées is clearly better adapted as a translation of administered than l’ordre du jour.

Finally, we also evaluate the ability of the presented approach (Feature) to cope with test sets for which the domain is not known in advance. Before translation, the domain tag is automatically detected using an in-house RNN-based domain classification module that disambiguates between the six domains. The tool predicts the domain on a sentence-by-sentence basis; translation is then carried out using the predicted domain value in the Feature model. The last column of Table 2 shows the accuracy of the domain classification tool on each of the predefined domains.
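The paper does not describe the in-house classifier, so the following is only a hedged stand-in for sentence-level domain prediction: a toy classifier that averages word embeddings and applies a softmax layer. All names, weights and vocabulary entries below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

vocab = {"headache": 0, "doctor": 1, "button": 2, "click": 3}
domains = ["Medical", "IT"]   # illustrative two-domain setup

d_emb = 8
E = rng.normal(size=(len(vocab), d_emb))      # word embeddings (untrained)
W = rng.normal(size=(len(domains), d_emb))    # classifier weights (untrained)
b = np.zeros(len(domains))

def predict_domain(sentence):
    """Average the word embeddings of the sentence, apply a softmax
    classifier, and return the most probable domain tag."""
    idx = [vocab[w] for w in sentence if w in vocab]
    h = E[idx].mean(axis=0)
    z = W @ h + b
    p = np.exp(z - z.max())
    p /= p.sum()
    return domains[int(p.argmax())], p

tag, probs = predict_domain(["headache", "doctor"])
```

With trained weights, `tag` would feed the domain feature of the Feature model at translation time.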

Results for this last condition are shown in column RNN. Even though the domain is wrongly predicted in some cases, translation accuracy is still improved compared to the Join model. Notice that domain classification at the sentence level is a challenging task, since only a short context is available. We also confront our approach with a final test set from a brand new domain, Dialogs, which is not present in our training data; sentences are selected from the TED Talks corpora. The RNN classifier assigns each test sentence to one of the known domains, which is enough to outperform the Join model.

In order to better understand the influence of the predicted domain, we conduct a final set of experiments. Using the Feature model, we translate each test set with every possible domain value. Results are detailed in Table 4, showing that translation quality can be significantly degraded when sentences are translated with the wrong domain tag. This is especially the case for the IT domain, where translating with the wrong tag dramatically reduces accuracy. Results also reveal proximities between domains, for example News and Parliamentary: translating the News test set with the Parliamentary domain tag (and vice versa) does not hurt translation quality as much as other domain tag mismatches. Conversely, these results indicate that the model has implicitly learned to discriminate between domains, which is expected, as the RNN and word embeddings provide powerful discriminant features.

6 Conclusions and Further Work

We have presented a method that incorporates domain information into a neural network, allowing domain-adapted translations to be performed with a single network that covers multiple domains. The presented method does not require re-estimating model parameters when translating in any of the covered domains.

We plan to further improve the feature technique detailed in this work. Rather than providing the network with a hard decision about the domain, we want to introduce a vector of distance values from the given source sentence to each domain, thus smoothing the proximity of each sentence to each domain.
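The planned soft-domain input could look like the following sketch, where per-domain distances (however obtained) are mapped to a smooth membership vector; `soft_domain_vector` and the temperature parameter are assumptions, not the paper's method:

```python
import numpy as np

def soft_domain_vector(distances, temperature=1.0):
    """Turn per-domain distances for a sentence into a smooth membership
    vector (closer domain -> higher weight) via a softmax over negative
    distances. The temperature controls how "hard" the decision is."""
    z = -np.asarray(distances, dtype=float) / temperature
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

# A sentence close to Medical, less close to News, far from IT:
p = soft_domain_vector([5.0, 0.5, 1.2])   # order: [IT, Medical, News]
```

Replacing the one-hot domain feature of Section 4.2 with such a vector would let uncertain sentences share mass across several domains instead of committing to one.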

Additionally, Table 4 indirectly shows that the neural network has learned to classify domains at the sentence level. We therefore also plan to implement a joint approach to domain classification and translation, removing the dependency on the external RNN classifier.

Finally, since domain classification is a document-level task, it would be interesting to extend the current study to document-level translation.