Machine translation systems are very sensitive to the domain(s) they were trained on, because each domain has its own style, sentence structure and terminology. There is often a mismatch between the domain for which training data are available and the target domain of a machine translation system. When training and test data diverge strongly, translation quality deteriorates dramatically. Word ambiguities are also a frequent issue: for instance, the English word "administer" must be translated differently depending on whether it appears in a medical or a political context. Our work is motivated by the idea that neural models could benefit from domain information to choose the most appropriate terminology and sentence structure, while using the information from all domains to improve the base translation quality. Recently, Sennrich et al. (2016) reported on the ability of neural networks to control politeness through side constraints. We extend this idea to domain control. Our goal is to allow a model built from a diverse set of training data to produce in-domain translations. That is, we aim to extend the coverage of generic NMT models to specific domains, with their specialized terminology and style, without lowering translation quality on more generic data. We present two frameworks to feed domain meta-information into the NMT encoder side.
2 Related Work
A lot of work has already been done on domain adaptation for Statistical Machine Translation. Approaches range from methods based on in-domain data selection (Hildebrand et al., 2005; Moore and Lewis, 2010; Sethy et al., 2006) to methods based on mixtures of in-domain models (Foster and Kuhn, 2007; Koehn and Schroeder, 2007; Schwenk and Koehn, 2008).
Regarding neural MT, Luong and Manning (2015) adapt a generic NMT network (trained on out-of-domain data) by running additional training iterations over an in-domain data set. The authors report obtaining a domain-adapted model in a very limited training time. However, this differs from our work, since we aim at performing domain-adapted translations with a single network that covers multiple domains.
Recent work has specifically addressed domain adaptation for NMT by providing meta-information to the neural network; our work is in line with this kind of approach. Chen et al. (2016) feed the neural network with topic information on the decoder side; topics are numerous and consist of human-labeled product categories. Zhang et al. (2016) include topic modelling on both the encoder and decoder sides: a given number of topics are automatically inferred from the training data using LDA, and each word in a sentence is assigned its own vector of topics. In our work, we also provide the network with meta-information about domain; however, we introduce domain information at the sentence level.
3 Neural MT
Our NMT system follows the architecture presented in Bahdanau et al. (2014). It is implemented as an encoder-decoder network with multiple layers of an RNN with Long Short-Term Memory hidden units (Zaremba et al., 2014). Figure 1 illustrates a schematic view of the MT network.
Source words are first mapped to word vectors and then fed into a bidirectional recurrent neural network (RNN) that reads an input sequence $x = (x_1, \dots, x_n)$. Upon seeing the <eos> symbol, the final time step initialises a target RNN. The decoder is an RNN that predicts a target sequence $y = (y_1, \dots, y_m)$, with $n$ and $m$ being respectively the source and target sentence lengths. Translation finishes when the decoder predicts the <eos> symbol.
The left-hand side of the figure illustrates the bidirectional encoder, which actually consists of two independent LSTM encoders: one encodes the normal sequence (solid lines) and calculates a forward sequence of hidden states $(\overrightarrow{h}_1, \dots, \overrightarrow{h}_n)$; the second reads the input sequence in reversed order (dotted lines) and calculates the backward sequence $(\overleftarrow{h}_1, \dots, \overleftarrow{h}_n)$. The final encoder outputs consist of the sum of both encoders' outputs. The right-hand side of the figure illustrates the RNN decoder. Each word $y_t$ is predicted based on a recurrent hidden state $s_t$ and a context vector $c_t$ that aims at capturing relevant source-side information.
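The two-pass wiring described above can be sketched as follows. This is a minimal NumPy illustration: a plain tanh recurrence stands in for the LSTM cell, and all weight shapes are illustrative assumptions rather than our system's actual configuration.

```python
import numpy as np

def rnn_step(x, h, W):
    # Simplified recurrent step (a plain tanh cell standing in for the
    # LSTM used in the paper) -- enough to show the bidirectional wiring.
    return np.tanh(W @ np.concatenate([x, h]))

def bidirectional_encode(embeddings, W_fwd, W_bwd, hidden_size):
    """Run two independent encoders over the source embeddings and sum
    their hidden states position-wise, as described above."""
    n = len(embeddings)
    h = np.zeros(hidden_size)
    forward = []
    for x in embeddings:                 # left-to-right pass (solid lines)
        h = rnn_step(x, h, W_fwd)
        forward.append(h)
    h = np.zeros(hidden_size)
    backward = [None] * n
    for i in range(n - 1, -1, -1):       # right-to-left pass (dotted lines)
        h = rnn_step(embeddings[i], h, W_bwd)
        backward[i] = h
    # Final encoder output: sum of the two directions at each position.
    return [f + b for f, b in zip(forward, backward)]
```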
The idea of a global attentional model is to consider all the hidden states of the encoder when deriving the context vector $c_t$. Hence, global alignment weights $a_t$ are derived by comparing the current target hidden state $s_t$ with each source hidden state $h_s$:

$$a_t(s) = \frac{\exp(\mathrm{score}(s_t, h_s))}{\sum_{s'} \exp(\mathrm{score}(s_t, h_{s'}))}$$

with the content-based score function:

$$\mathrm{score}(s_t, h_s) = s_t^\top W_a h_s$$

Given the alignment vector $a_t$ as weights, the context vector $c_t$ is computed as the weighted average over all the source hidden states:

$$c_t = \sum_{s} a_t(s)\, h_s$$
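The attention computation can be sketched as follows. This is a minimal NumPy illustration using the simple dot-product variant of the content-based score (the exact score function of the system may differ):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # shift for numerical stability
    return e / e.sum()

def global_attention(target_state, encoder_states):
    """Global attention: compare the current target hidden state with
    every source hidden state, normalise the scores into alignment
    weights, and return the weighted average as the context vector."""
    H = np.stack(encoder_states)     # (n, d): all source hidden states
    scores = H @ target_state        # dot-product score for each position
    a = softmax(scores)              # alignment weights, sum to 1
    context = a @ H                  # weighted average of source states
    return context, a
```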
The framework is available as part of the open-source project seq2seq-attn (http://nlp.seas.harvard.edu). More details about our system can be found in Anonymised.
4 Domain control
Two different techniques are implemented to integrate domain control: an additional token and a domain feature.
4.1 Additional Token
The additional token method, inspired by the politeness control technique detailed in Sennrich et al. (2016), consists of adding an artificial token at the end of each source sentence in order to let the network pay attention to the domain of each sentence pair. For instance, consider the following English-French translation:
Src: Headache may be experienced
Tgt: Des céphalées peuvent survenir
The network reads off the sentence pair with the appropriate Medical domain tag @MED@:
Src: Headache may be experienced @MED@
Tgt: Des céphalées peuvent survenir
Domain tags are appropriately selected in order to avoid overlaps with words present in the source-language vocabulary. This method, though simple, has already proven effective for controlling the politeness level of a translation (Sennrich et al., 2016) and for supporting multilingual NMT models (Johnson et al., 2016).
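A minimal sketch of the tagging step (the `add_domain_token` helper name is ours; the `@…@` tag format follows the example above):

```python
def add_domain_token(src_tokens, domain):
    """Append an artificial domain tag (e.g. '@MED@') to the source
    sentence. The surrounding '@' markers keep the tag vocabulary
    disjoint from real source words."""
    return src_tokens + ["@%s@" % domain.upper()]

# Example: tagging the medical sentence from Section 4.1.
tagged = add_domain_token("Headache may be experienced".split(), "med")
# → ['Headache', 'may', 'be', 'experienced', '@MED@']
```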
4.2 Word Feature
We present a second technique to introduce domain control in our neural translation model. We use word-level features as described in Crego et al. (2016). The first layer of the network is the word embedding layer. We adapt this layer to extend each word embedding with an arbitrary number of cells, designed to encode domain information. Notice that using additional features does not increase the vocabulary of source words; there are separate vocabularies for words and domain tags. Figure 4.2 illustrates a word embedding layer extended with domain information.
Following the example of Section 4.1, the sentence pair is given to the network with the appropriate Medical domain tag attached to each source word, as follows:
Src: Headache may be experienced
     MED      MED MED MED
Tgt: Des céphalées peuvent survenir
[Table 1: training and test corpora statistics per domain (lines, source words, target words)]
Note that under this feature framework, the sentence-level domain information is added on a word-by-word basis to all the words in a sentence. We reuse an existing framework that was originally implemented to include linguistic features at the word level (Anonymised).
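The feature mechanism can be sketched as follows. This is a minimal NumPy illustration: the lookup tables and the size of the domain vector are illustrative assumptions, and in the real system both the word embeddings and the domain cells are learned jointly during training.

```python
import numpy as np

def embed_with_domain(words, word_emb, domain_emb, domain):
    """Extend each word embedding with a few extra cells encoding the
    sentence's domain. The same domain vector is repeated on every
    word, mirroring the word-by-word tagging shown above."""
    d = domain_emb[domain]   # small domain vector (learned in practice)
    return [np.concatenate([word_emb[w], d]) for w in words]
```

Because the domain cells live in their own lookup table, the source word vocabulary is unchanged, exactly as noted above.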
5 Experiments

We evaluate the presented approach on English-to-French translation. Section 5.1 describes the data used for the experiments and details the training configurations. Section 5.2 then reports translation accuracy results.
5.1 Training Details
We used training corpora covering six different domains: IT, Literature, Medical, News, Parliamentary and Tourism. The Medical, News and Parliamentary data come from public corpora (respectively EMEA, News Commentary and Europarl), available from the OPUS repository (Tiedemann, 2012). The IT, Literature and Tourism data are proprietary. Statistics of the corpora used are given in Table 1.
[Table 2: BLEU scores for the different systems and RNN-based domain classifier accuracy]
All experiments employ the NMT system detailed in Section 3 and are performed on an NVidia GeForce GTX 1080. We use BPE (https://github.com/rsennrich/subword-nmt) with a total of source and target tokens as vocabulary, computed over the entire training corpora. Word embedding size is cells. During training, we use stochastic gradient descent, a minibatch size of , a dropout probability set to , and a bidirectional RNN. We train our models for epochs. The learning rate is set to and starts decaying after epoch by . It takes about days to train models on the complete training data set (M sentence pairs).
Four different training configurations are considered. The first consists of six in-domain NMT models, each trained on its corresponding domain data set (henceforth Single models). The Join network is built by concatenating all the training data; note that this model does not include any information about domain. A Token network is also trained on all the available training data; it includes domain information through the additional-token approach detailed in Section 4.1. Finally, the Feature network is likewise trained on all available training data; it introduces domain information into the model by means of the feature framework detailed in Section 4.2.
Src: Your doctor’s instructions should be carefully observed .
Ref: Vous devrez respecter scrupuleusement les instructions de votre médecin .
Join: Les instructions de votre médecin doivent être soigneusement surveillées .
Feature: Les instructions de votre médecin doivent être suivies attentivement .

Src: All injections of Macugen will be administered by your doctor.
Ref: Toutes les injections de Macugen doivent être réalisées par votre médecin.
Join: Toutes les injections de Macugen seront l’ordre du jour de votre médecin.
Feature: Toutes les injections de Macugen seront effectuées par votre médecin.
Table 2 shows translation accuracy results for the different training configurations. Accuracies are measured with BLEU (multi-bleu.perl). As expected, the Join model outperforms all Single models on their corresponding test sets, showing that NMT engines benefit from additional training data. Differences in accuracy are lower for domains with a higher representation in the Join model, like Parliamentary and Tourism. No domain information is used in these first configurations (none).
Results for models incorporating domain information are detailed in columns Token and Feature. Oracle experiments assume that the test set domains are known in advance, allowing the correct side constraint to be used. The additional-token approach gives mixed results: it improves translation quality on some tasks and degrades it on others compared to the Join model. In contrast, incorporating domain information through the Feature approach consistently improves translation quality on all tasks. Adding domain information on all the source words thus seems to be a good technique for conveying the domain side constraint and for improving the consistency of NMT target word choices. Differences between the Feature and Join configurations are shown in parentheses. Note that an improvement is observed on all test sets, with the smallest gain on Parliamentary translations; this can be explained by the fact that Parliamentary is the best represented domain in the Join training set.
Translation examples in a medical context are shown in Table 3. They illustrate the domain adaptation introduced by the Feature approach. The first example shows the preference of the Feature model for the French translation suivies attentivement of the English carefully observed; it seems more suitable than the hypothesis soigneusement surveillées output by the Join model. A similar effect is shown in the second example, where the French effectuées is clearly better adapted as a translation of administered than l’ordre du jour.
Finally, we also evaluate the ability of the presented approach (Feature) to handle test sets for which the domain is not known in advance. In this setting, before translation, the domain tag is automatically detected by an in-house RNN-based domain classification module that disambiguates between the six domains. The tool predicts the domain on a sentence-by-sentence basis, and translation is then carried out with the Feature model using the predicted domain value. The last row of Table 2 shows the accuracy of the domain classification tool on each of the predefined domains.
Results for this last condition are shown in column RNN. Even though the domain is wrongly predicted in some cases, translation accuracy is still improved compared to the Join model. Notice that domain classification at the sentence level is a challenging task, as only a short context is available. We also confront our approach with a final test set from a brand new domain, Dialogs, which is not present in our training data; sentences are selected from the TED Talks corpora. The RNN toolkit assigns each test sentence to one of the source domains, and the resulting translations outperform the Join model.
In order to better understand the influence of the predicted domain, we conduct a final set of experiments. Using the Feature model, we translate each test set with every possible domain value. Results are detailed in Table 4, showing that translation quality can be significantly degraded when sentences are translated with the wrong domain tag; this is especially the case for the IT domain, where a wrong tag dramatically reduces accuracy. The results also reveal proximities between domains, for example between News and Parliamentary: translating the News test set with the Parliamentary domain tag (and vice versa) does not seem to hurt translation quality as much as other domain tag mismatches. Inversely, when translation and reference are available, using BLEU as a similarity measure we observe that the model has effectively learned to discriminate between domains; this is expected, as the RNN and word embeddings provide powerful discriminant features.
6 Conclusions and Further Work
We have presented a method that incorporates domain information into a neural network. It allows domain-adapted translations to be performed with a single network that covers multiple domains, without the need to re-estimate model parameters when translating in any of the available domains.
We plan to further improve the feature technique detailed in this work. Rather than providing the network with a hard decision about the domain, we want to introduce a vector of distances between the given source sentence and each domain, thus smoothing each sentence's proximity to every domain.
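A minimal sketch of such a soft domain vector, assuming per-domain proximity scores for a sentence are available (how those scores would be computed is left abstract, and this is a proposal rather than an implemented component):

```python
import numpy as np

def soft_domain_vector(proximity_scores):
    """Normalise per-domain proximity scores into a distribution that
    could replace the one-hot (hard) domain feature, so that each
    sentence expresses a graded affinity to every domain."""
    s = np.asarray(proximity_scores, dtype=float)
    e = np.exp(s - s.max())      # softmax with a stability shift
    return e / e.sum()
```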
Additionally, Table 4 indirectly shows that the neural network has learned to classify domains at the sentence level. We therefore also plan to implement a joint approach to domain classification and translation, removing the dependency on the separate RNN classifier.
Finally, since domain classification is a document level task, it would be interesting to extend the current study to document level translation.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Demoed at NIPS 2014: http://lisa.iro.umontreal.ca/mt-demo/.
- Chen et al. (2016) Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. CoRR abs/1607.01628v1.
- Crego et al. (2016) Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran’s pure neural machine translation systems. CoRR abs/1610.05540.
- Foster and Kuhn (2007) George Foster and Roland Kuhn. 2007. Mixture-model adaptation for SMT. In Proceedings of the Second Workshop on Statistical Machine Translation. Association for Computational Linguistics, Prague, Czech Republic, pages 128–135.
- Hildebrand et al. (2005) Almut Silja Hildebrand, Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Adaptation of the translation model for statistical machine translation based on information retrieval. In Proceedings of the 10th Conference of the European Association for Machine Translation (EAMT). Budapest.
- Johnson et al. (2016) Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, and Nikhil Thorat. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. arXiv:1611.04558.
- Koehn and Schroeder (2007) Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation. Association for Computational Linguistics, Prague, Czech Republic, pages 224–227.
- Luong and Manning (2015) Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domains. In IWSLT2015. Da Nang, Vietnam.
- Luong et al. (2015) Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412–1421.
- Moore and Lewis (2010) Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers. Association for Computational Linguistics, Uppsala, Sweden, pages 220–224.
- Schwenk and Koehn (2008) Holger Schwenk and Philipp Koehn. 2008. Large and diverse language models for statistical machine translation. In Proceedings of the 3rd International Joint Conference on Natural Language Processing (IJCNLP).
- Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, USA, pages 35–40.
- Sethy et al. (2006) Abhinav Sethy, Panayiotis Georgiou, and Shrikanth Narayanan. 2006. Selecting relevant text subsets from web-data for building topic specific language models. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers. Association for Computational Linguistics, New York City, USA, pages 145–148.
- Tiedemann (2012) Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In LREC. pages 2214–2218.
- Zaremba et al. (2014) Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR abs/1409.2329.
- Zhang et al. (2016) Jian Zhang, Liangyou Li, Andy Way, and Qun Liu. 2016. Topic-informed neural machine translation. In COLING.