Neural machine translation (NMT) [1, 2, 3] achieves state-of-the-art translation performance in resource-rich scenarios. However, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios.
Domain adaptation techniques leverage out-of-domain data for in-domain translation. In the context of NMT, fine tuning based techniques have been very successful for resource-poor domain translation [5, 6, 7]. On the other hand, cross-lingual transfer learning methods (these are also fine tuning techniques; fine tuning for domain adaptation is a simpler version of cross-lingual transfer learning) have been successful in improving the performance of low resource languages such as Hausa-English using resource-rich French-English data.
Most of these methods, however, do not modify the internal structure of the model and rely on black-box approaches. They often incorporate artificial tokens to improve translation quality. As such, it is not clear how these artificial tokens affect the learning of the model. Thus, we decide to explicitly model multiple domains by making simple modifications to the decoder. In particular, we either modify the representation of the decoder states before the softmax or learn special bias vectors depending on the domain to which the sentence belongs.
There are studies where either multilingual [8, 9] or multi-domain models are trained. However, none attempts to develop a method and investigate the effect of using both multilingual and multi-domain data, which are more readily available than either alone and could be more effective. In this paper, we present the first work on multilingual and multi-domain NMT models. Our contributions are as follows:
We propose two novel domain adaptation methods that explicitly model domain information for the Transformer: domain specialization that learns domain specialized hidden state representations, and domain extremization that learns predictor biases for each domain.
We introduce multilingualism into fine tuning [10, 11, 12, 5], multi-domain, and mixed fine tuning, as well as our proposed methods for domain adaptation. We show that it not only significantly improves translation for an extremely resource-poor domain but also improves the translations for resource-rich domains. We further combine our methods with mixed fine tuning using multilingual and multi-domain data, achieving the best results.
II. Domain Adaptation for NMT
II-A. Existing Black-Box Methods
In this paper, we reproduce previously proposed methods for domain adaptation that are black-box in nature. As such, they are simple and do not need any modifications to the model architecture. In particular, we work with fine tuning, multi-domain, and mixed fine tuning, each of which uses one out-of-domain corpus to improve the translation of one in-domain corpus.
II-A1. Fine Tuning
We first train an NMT model on a resource-rich out-of-domain corpus (parent model) till convergence, and then resume training on a resource-poor in-domain corpus (child model).
II-A3. Mixed Fine Tuning
This method combines the above approaches. Instead of fine tuning the out-of-domain model on in-domain data alone, we fine tune it on a mixed in-domain and out-of-domain corpus. This prevents over-fitting and enables a smooth domain transition. Refer to the original paper for additional details.
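The data mixing step above can be sketched as follows. This is a minimal illustration; the function name and the simple integer oversampling ratio are our own choices, not the paper's actual implementation.

```python
def mixed_fine_tuning_corpus(out_of_domain, in_domain):
    """Build the corpus for the second training stage: the small in-domain
    corpus is oversampled so it is not drowned out by out-of-domain data."""
    ratio = max(len(out_of_domain) // len(in_domain), 1)
    return out_of_domain + in_domain * ratio

# Stage 1 trains the parent model on the out-of-domain data alone;
# stage 2 resumes training on this mixed corpus.
mixed = mixed_fine_tuning_corpus(["o1", "o2", "o3", "o4"], ["i1"])
```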
II-B. Proposed Methods
Note that for all the proposed methods presented in this section, the resource-poor in-domain corpus is oversampled.
II-B1. Domain Specialization (domspec)
The motivation of domain specialization is to learn both specialized and common representations for different domains in a single NMT model. To achieve this, we modify the vanilla NMT model according to the feature replication idea proposed for easy domain adaptation. Figure 1 describes the modification. Accordingly, we perform a simple modification to the decoder state before computing the softmax and call the resultant model the domain specialization model. Assuming that there are 2 domains, if
$h_i$ is the state of the decoder for the $i$'th word to be predicted, the new state passed to the softmax layer is $[h_i; h_i; \mathbf{0}]$ for words belonging to sentences from the first domain. For the second domain the new state is $[h_i; \mathbf{0}; h_i]$, where $\mathbf{0}$ represents a zero vector of the same size as $h_i$. By doing so, we expect the decoder to use the first block to learn features common to both domains and the remaining blocks for domain specific features. The resultant decoder state is 3 times the size of the original, and thus the softmax layer would contain 3 times the number of parameters. In order to avoid this parameter explosion, we down-project $h_i$ by a factor of 3 and then perform the replication. For down-projection, we simply apply a linear projection with a learned weight matrix. This leads to an insignificant increase in the number of parameters. Note that the input sentences are not pre-pended with an artificial token indicating the domain; we leave it to feature replication to determine the domain.
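The state construction above can be sketched as a toy numpy function. This is an illustration of the feature replication idea under the notation just described, not the actual Transformer implementation; the function name and the explicit down-projection matrix are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def domain_specialized_state(h, W_down, domain, num_domains=2):
    """Down-project the decoder state, then build the replicated
    [common; domain-1; domain-2] state, zeroing the other domain's block."""
    h_small = W_down @ h                  # shared low-dimensional state
    zero = np.zeros_like(h_small)
    blocks = [h_small]                    # first block: features common to all domains
    for d_idx in range(num_domains):      # one extra block per domain
        blocks.append(h_small if d_idx == domain else zero)
    return np.concatenate(blocks)

# Toy 6-dimensional decoder state, down-projected by a factor of 3
h = rng.standard_normal(6)
W_down = rng.standard_normal((2, 6))      # 6 // (2 + 1) = 2
s0 = domain_specialized_state(h, W_down, domain=0)
s1 = domain_specialized_state(h, W_down, domain=1)
```

Note that the output has the same dimensionality as the original state, which is why the down-projection keeps the parameter increase insignificant.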
II-B2. Domain Extremization (domextr)
Figure 2 shows how domain extremization can be performed. The extremization refers to the fact that the softmax decision is guided not by learning special features but by a bias that can generate extremely different probability distributions. Again, assuming that there are 2 domains, we create two extremization vectors $b_1$ and $b_2$, each of the size of the target sequence vocabulary. The probability distribution to predict the current target word is now computed as:

$$P(y_i \mid X, y_{<i}) = \mathrm{softmax}(W h_i + b_d), \qquad (1)$$

where $X$ is the source sequence, $y_{<i}$ are the previously predicted target words, $h_i$ is the current decoder hidden state, $W$ is a matrix that maps $h_i$ to a vector of the size of the target vocabulary, and $b_d$ denotes either of the two domain bias vectors used for domain extremization.
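A toy sketch of this computation (function name and shapes are illustrative):

```python
import numpy as np

def domain_extremized_probs(h, W, domain_biases, domain):
    """softmax(W h + b_d): one learned bias vector per domain steers
    the output distribution toward that domain's vocabulary usage."""
    logits = W @ h + domain_biases[domain]
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
vocab, d = 8, 4
W = rng.standard_normal((vocab, d))
h = rng.standard_normal(d)
biases = [np.zeros(vocab), rng.standard_normal(vocab)]
p0 = domain_extremized_probs(h, W, biases, 0)
p1 = domain_extremized_probs(h, W, biases, 1)
```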
II-B3. Domain Specialization with Extremization (domspecextr)
This is a simple combination of the domain specialization and extremization methods: the domain specialized hidden states of Section II-B1 are used in Equation 1 in place of $h_i$. By doing so, we hope that the differentiation of domains takes place both before and during the softmax computation.
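Putting the two pieces together, the combined computation can be sketched as follows (again an illustrative toy, not the paper's code):

```python
import numpy as np

def domspecextr_probs(h, W_down, W_vocab, domain_biases, domain, num_domains=2):
    # domspec part: down-project and build the replicated, domain-specialized state
    h_small = W_down @ h
    zero = np.zeros_like(h_small)
    state = np.concatenate(
        [h_small] + [h_small if d == domain else zero for d in range(num_domains)])
    # domextr part: output layer with a per-domain bias before the softmax
    logits = W_vocab @ state + domain_biases[domain]
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W_down = rng.standard_normal((2, 6))      # 6-dim state, down-projected by 3
W_vocab = rng.standard_normal((8, 6))     # toy vocabulary of 8 words
biases = [rng.standard_normal(8) for _ in range(2)]
p = domspecextr_probs(rng.standard_normal(6), W_down, W_vocab, biases, domain=0)
```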
II-B4. Combination with Mixed Fine Tuning (+MFT)
Mixed fine tuning is a black-box method and thus complementary to the above three models. We first train the domspec/domextr/domspecextr model on the out-of-domain data and then continue training on the combination of the out-of-domain and in-domain data. Note that artificial domain tags are not used here.
III. Multilingual Multi-Domain Adaptation
We propose to use both multilingual and multi-domain data for domain adaptation. Figure 3 gives an overview of our multilingual and multi-domain method. This is a combination of multi-domain and multilingual NMT, both of which use artificial tokens to control the target domain or language. Assume that there are multiple source languages, domains, and target languages. For simplicity, consider two language pairs, src1-tgt1 and src2-tgt2. The src1-tgt1 pair has one in-domain corpus and two out-of-domain corpora; the src2-tgt2 pair has one out-of-domain corpus.
III-A. Based on Existing Black-Box Methods
III-A1. Fine Tuning
To train a multilingual out-of-domain parent model (upper part of Figure 3), we append the target language tokens (2tgt1 and 2tgt2; note that when there is only one target language, this language tag can be removed) and the domain tokens (2d1, 2d2 and 2d3) to the respective corpora; we then merge them by oversampling the smaller corpora and feed this corpus to the NMT training pipeline. After that, we obtain the in-domain child model by fine tuning the parent model.
To train a multilingual and multi-domain model, the merged out-of-domain multilingual corpora and the in-domain corpus are further merged into a single corpus by oversampling the smaller corpus. This is then fed to the NMT training pipeline.
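The tagging, oversampling, and merging described above can be sketched as follows. The token strings follow the paper's 2tgt1/2d1 convention, but the helper itself, its name, and its signature are our own illustration.

```python
import random

def tag_and_merge(corpora, seed=0):
    """corpora: list of (lang_tag, dom_tag, [(src, tgt), ...]) tuples.

    Prepends the target-language and domain tokens to each source sentence,
    oversamples smaller corpora to the size of the largest, and shuffles."""
    rng = random.Random(seed)
    target = max(len(pairs) for _, _, pairs in corpora)
    merged = []
    for lang_tag, dom_tag, pairs in corpora:
        full, rem = divmod(target, len(pairs))
        sampled = pairs * full + rng.sample(pairs, rem)  # oversample to `target`
        merged.extend((f"{lang_tag} {dom_tag} {src}", tgt) for src, tgt in sampled)
    rng.shuffle(merged)
    return merged

data = tag_and_merge([
    ("2tgt1", "2d1", [("a", "A"), ("b", "B"), ("c", "C"), ("d", "D")]),
    ("2tgt1", "2d2", [("e", "E"), ("f", "F")]),  # smaller corpus, oversampled 2x
])
```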
III-A3. Mixed Fine Tuning
Instead of training a model from scratch, we can apply mixed fine tuning by initializing the multilingual and multi-domain child model using the previous multilingual out-of-domain parent model. This method can reap the benefits of multilingualism as well as mixed fine tuning for domain adaptation.
III-B. Based on Proposed Methods
III-B1. Domain Specialization (domspec)
Instead of using two different hidden states to represent two domains, we use multiple different hidden states to represent the multiple domains and languages.
III-B2. Domain Extremization (domextr)
Similar to domain specialization, we use multiple domain bias vectors for multiple domains and languages instead of two.
III-B3. Domain Specialization with Extremization (domspecextr)
We use multiple different hidden states and domain bias vectors simultaneously for the multiple domains and languages.
III-B4. Combination with Mixed Fine Tuning (+MFT)
The training process is the same as mixed fine tuning using multiple domains and languages, but we use multiple hidden states or domain bias vectors for the decoder.
IV. Experimental Settings
IV-A. Multilingual Multi-Domain Settings
We focused on Japanese-English Wikinews translation as the in-domain task. This task was conducted on the Japanese-English subset of the Asian Language Treebank (ALT) parallel corpus (http://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/index.html). This task contains 18088, 1000, and 1018 sentences for training, development, and testing, respectively. In order to augment the resource-poor ALT-JE in-domain data, we utilized two different out-of-domain corpora with the same source and target languages, and one out-of-domain corpus that shares only the target language with the in-domain corpus.
The first out-of-domain dataset was the Kyoto free translation task (KFTT) corpus (http://www.phontron.com/kftt/), which contains Japanese-English translations of Wikipedia articles related to the city of Kyoto. This task contains 440288, 1166, and 1160 sentences for training, development, and testing, respectively. The second out-of-domain dataset was the spoken domain IWSLT 2017 Japanese-English corpus created by the WIT project, which contains 223108 training sentences; we used the dev 2010 and test 2010 sets containing 871 and 1549 sentences for development and testing, respectively. The third out-of-domain dataset was the spoken domain IWSLT 2015 Chinese-English corpus, which contains 209491 training sentences; we used the dev 2010 and test 2010 sets containing 887 and 1570 sentences for development and testing, respectively.
IV-B. MT Systems
We used the open source implementation of the Transformer model in tensor2tensor (https://github.com/tensorflow/tensor2tensor) for all our NMT experiments. We used the Transformer because it is the current state-of-the-art NMT model. For training, we used the default model settings corresponding to transformer_base_single_gpu in the implementation. (Note that we used the optimizer's default learning rate adjustment strategy for all models for fair comparison and replicability, leaving per-model tuning as future work. The number of parameters is also the same for all methods as for transformer_base_single_gpu, because we used the same hyper-parameters and vocabulary sizes.) For domain adaptation development, we used the in-domain development data for the fine tuning method, while for all the other methods we used a mix of the in-domain and out-of-domain development data. We trained the models until convergence, i.e., until there was no change of more than 0.05 BLEU over several thousand batches on the development data. For decoding, we averaged the last 20 checkpoints and used a beam size of 4 with length penalty. We also compared with phrase based SMT (PBSMT) using Moses (http://www.statmt.org/moses/) as a baseline for the tasks without domain adaptation, because vanilla SMT has been reported to perform better than vanilla NMT in resource-poor translation. We used default Moses settings for all our experiments: 5-gram KenLM language models, GIZA++ for alignment, and MERT for tuning.
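Averaging the last 20 checkpoints, as mentioned above, amounts to an element-wise mean over the saved parameter tensors. A generic sketch with numpy arrays standing in for parameter tensors (tensor2tensor ships its own checkpoint-averaging utility; this is not that code):

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Element-wise average of corresponding parameter tensors across
    a list of saved checkpoints, each a dict of name -> array."""
    n = len(checkpoints)
    return {name: sum(ckpt[name] for ckpt in checkpoints) / n
            for name in checkpoints[0]}

avg = average_checkpoints([
    {"w": np.array([1.0, 3.0]), "b": np.array([0.0])},
    {"w": np.array([3.0, 5.0]), "b": np.array([2.0])},
])
```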
For both MT systems, we preprocessed the data as follows: Japanese was segmented using JUMAN (http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN); English was tokenized and lowercased using the tokenizer.perl script in Moses; for Chinese, we used KyotoMorph (https://bitbucket.org/msmoshen/kyotomorph-beta) for segmentation. In order to reduce the number of out-of-vocabulary words in NMT, we pre-processed the corpora using the default sub-word segmentation mechanism that is part of tensor2tensor. For all our NMT experiments, we set the source and target vocabulary sizes to 32000 sub-words. We followed the vocabulary acquisition methods of prior work for all the domain adaptation methods, but used a random vocabulary mapping from Chinese to Japanese, following earlier transfer learning work, when fine tuning on the ALT-JE data using the IWSLT-CE data.
[Table I (excerpt): ALT-JE SMT — 11.03 BLEU; mismatched-domain/language scores in parentheses: (2.16), (1.93), (0.20)]
[Table II (excerpt): mixed fine tuning, rows 4, 14, 24 — 21.74, 19.76, 19.10 BLEU]
V-A. Without Domain Adaptation
Table I shows the vanilla PBSMT and NMT results. Each system was trained for a particular MT task without any domain adaptation. We can see that SMT performs better for the in-domain translation, and that using out-of-domain models for in-domain translation yields poor performance. It is clear that a domain or language mismatch leads to poor translation quality (i.e., the scores in parentheses in Table I), and thus we do not report BLEU scores when there is a mismatch in the domain adaptation experiments (Sections V-B, V-C, and V-D).
V-B. Adaptation with One Out-of-Domain Corpus
Table II shows the in-domain results for domain adaptation using only one out-of-domain corpus. (Refer to Table V in the appendix for the out-of-domain results.) We also conducted NMT experiments that simply concatenated the corpora, denoted as "concat" in the table. We can see that using a single out-of-domain corpus improves the in-domain translation. Although the KFTT-JE corpus is twice the size of IWSLT-JE, its performance is not better except when using mixed fine tuning. This indicates that the size of the out-of-domain corpus is not the only decisive factor for domain adaptation; the method also matters, as the multilingual and multi-domain adaptation results below also indicate.
Unfortunately, domspec, domextr and domspecextr are not always significantly better than previous methods. We suspect the reason is the small amount of in-domain data, which makes it difficult to learn either the domain specific hidden states or the biases. The proposed methods learn models from scratch, and the small in-domain data makes it difficult to learn the in-domain models, while both fine tuning and mixed fine tuning depend on transfer learning. (Given a sufficiently large in-domain corpus, our methods could possibly beat fine tuning and mixed fine tuning, but this would not provide a solution for small in-domain translation.) However, combining them with mixed fine tuning significantly improves BLEU scores, and also achieves the best in-domain performance using the KFTT-JE data. We believe the reason is the robustness of the mixed fine tuning model, which is pre-trained on the out-of-domain data only, allowing our methods to learn better domain representations. However, unlike KFTT-JE, for the IWSLT-JE and IWSLT-CE data, combining the proposed methods with mixed fine tuning does not outperform the existing black-box methods. We believe that the main reason is the size difference of the out-of-domain corpora.
An interesting observation is that cross-lingual transfer across domains shows results comparable to using an out-of-domain corpus from the same language pair when the two corpora have similar characteristics, i.e., IWSLT-CE vs. IWSLT-JE. This means that, in cases where out-of-domain corpora for the same language pair are not available, out-of-domain corpora that share only the target language are also useful.
[Table III (excerpt): mixed fine tuning BLEU — multiple out-of-domain corpora: 24.29; multilingual single out-of-domain corpora: 19.35; multilingual multi-domain adaptation: 24.04]
V-C. Multilingual Multi-Domain Adaptation
Table III shows the in-domain results for multilingual multi-domain adaptation. (Refer to Table VI in the appendix for the out-of-domain results.) The top sub-table contains results for using two out-of-domain corpora in the same language pair; the middle sub-table contains results for using multilingual single-domain corpora; the bottom sub-table is for multilingual multi-domain adaptation, where multiple out-of-domain corpora with different source languages are used. Again, "concat" denotes the NMT baselines that simply concatenate the corpora.
We can see that increasing the number of domains further boosts the in-domain performance. Although mixing data from different domains increases the difficulty of training a single NMT model (due to the increased diversity of vocabularies and styles, which requires learning domain specialized representations), the increase of data size leads to better parent models and consequently improves the in-domain translation. The combination of our proposed methods with mixed fine tuning again performs the best.
Domain adaptation with multilingual single out-of-domain corpora also performs better than using one out-of-domain corpus, with large improvements. Although the source languages differ, the decoder is boosted by mixing IWSLT-JE and IWSLT-CE, which we think is the main reason for the improvement. Similarly, the combination of our proposed methods with mixed fine tuning outperforms the other methods.
For multilingual multi-domain adaptation, we can see that using multilingualism together with multi-domain data shows the best results. From Table III, we can see that domextr+MFT always favors ALT-JE, for which the least amount of data is available. As such, future methods for improving the translation quality for the language pairs with the smallest datasets should incorporate domain extremization methods.
Table IV (excerpt; BLEU scores):

| Line | Adaptation data (method) | ALT-EJ | KFTT-EJ | IWSLT-EJ | IWSLT-EC |
|---|---|---|---|---|---|
| 3 | IWSLT-EC (multi-domain, cross-lingual) | 14.9 | - | - | 11.2 |
| 4 | IWSLT-EC (mixed fine tuning, cross-lingual) | 19.5 | - | - | 11.1 |
| 6 | IWSLT-EJ_IWSLT-EC (mixed fine tuning) | 24.7 | - | 11.4 | 12.7 |
| 8 | KFTT-EJ_IWSLT-EJ_IWSLT-EC (mixed fine tuning) | 27.4 | 31.1 | 12.0 | 13.3 |
It turns out that when we use multiple out-of-domain corpora, domextr works consistently better than domspec for in-domain translation. We think the reason is that as the number of out-of-domain corpora increases, the in-domain representation ability of domspec decreases due to down-projection, while for domextr it remains the same. Unfortunately, domspecextr, which combines the domain specialization and extremization approaches, fails to improve beyond the individual methods. Combining both significantly increases the number of new parameters, making the model harder to learn.
We randomly investigated 50 ALT-JE translations produced by domextr+MFT. We found that the translation quality reaches a practical level after our adaptation. The use of multi-domain data improves the translation of not only common words (23/50 sentences) but also domain specific terminology (7/50 sentences). Multilingualism is mainly helpful for common word translation (11/50 sentences), but it also introduces some noise (4/50 sentences) for terminology translation due to the vocabulary mismatch of low-frequency terms. Using both multilingual and multi-domain data improves common word translation more than using only multi-domain data (31/50 sentences).
V-D. Feasibility in Multiple Target Language Scenarios
We also conducted experiments with multiple target languages. We flipped the translation directions and trained multilingual, multi-domain models for ALT-EJ, KFTT-EJ, IWSLT-EJ and IWSLT-EC. As the fine tuning, domain specialization, and extremization methods were not designed for multiple target languages, we only experimented with the basic multilingual multi-domain models (see Section III-A2) in combination with mixed fine tuning. The datasets and model training settings were the same as in the experimental settings section. Table IV shows the results. Lines 1 and 2 give the scores of the vanilla SMT and NMT models for the tasks. Lines 3 to 8 contain the results of 3 data settings for adaptation (IWSLT-EC for ALT-EJ, IWSLT-EJ_IWSLT-EC for ALT-EJ, and KFTT-EJ_IWSLT-EJ_IWSLT-EC for ALT-EJ) where 2 target languages are available. We skipped the other data settings, because they do not involve multiple target languages.
We can see that the results are similar to those with one target language: IWSLT-EC also helps improve ALT-EJ although it is cross-lingual; multilingual and multi-domain adaptation further significantly improves the in-domain ALT-EJ translation; and using more data performs better (KFTT-EJ_IWSLT-EJ_IWSLT-EC vs. IWSLT-EJ_IWSLT-EC). Multilingual and multi-domain adaptation also improves the translation quality of the out-of-domain IWSLT-EJ and IWSLT-EC, but not KFTT-EJ; we believe this is because KFTT-EJ already has enough training data. Mixed fine tuning performs significantly better than multi-domain, which is a new finding in multiple target language translation settings.
VI. Related Work
Kim et al. extended the feature replication idea for neural domain adaptation on slot tagging tasks, using one RNN layer for common representations and additional RNN layers for domain specific representations. In contrast, our domspec method implements the feature replication idea in the decoding state of NMT. Michel and Neubig conducted adaptation for each speaker in the TED tasks by learning a speaker bias vector, while our domextr method learns a bias for each domain. Thompson et al. analyzed the effect of each component in fine tuning based NMT adaptation. Britz et al. proposed using a feed-forward network as a domain discriminator for NMT domain adaptation, jointly optimized with NMT.
Fine tuning has also been explored for domain adaptation in other NLP tasks using neural networks (NN). Mou et al. used fine tuning both for equivalent/similar tasks with different data sets and for different tasks sharing the same NN architecture; they found that the effectiveness of fine tuning depends on the relatedness of the tasks. Tag based NMT has also been shown to be effective for other sub-tasks of NMT. Sennrich et al. controlled the politeness of translations by appending a politeness tag to the source side of a language pair whose target language uses honorifics. Johnson et al. mixed different language pairs by appending a target language tag to the source text of each language to train a multilingual NMT system.
Monolingual corpora are widely used for SMT, where they are used for training an LM that serves as a feature for the decoder in a log-linear model. In-domain monolingual data has been used for NMT in other ways. Currey et al. copied the target monolingual data to the source side and used the copied data for training NMT. Domhan and Hieber proposed using target monolingual data for the decoder with LM and NMT multitask learning. Zhang and Zong used source side monolingual data to strengthen the NMT encoder. Cheng et al. used both source and target monolingual data for NMT through reconstructing the monolingual data with an autoencoder. We leave the comparison with these recently proposed methods as a topic for future work.
VII. Conclusion

In this paper, we proposed two novel domain adaptation methods that explicitly model domain information in the decoder. Combined with mixed fine tuning, our methods achieved the best translation performance. Furthermore, we proposed to use both multilingual and multi-domain data for improving in-domain NMT, and explored the feasibility of mixed fine tuning in a multiple target language scenario. Experiments on the Transformer showed the effectiveness of multilingual and multi-domain adaptation. As future work, we plan to experiment with more domains and language pairs, and with much larger datasets, to compare with state-of-the-art results.
Appendix A Out-of-domain Translation Results
Table V shows the out-of-domain results after domain adaptation with only one out-of-domain corpus. We can see that after combining with mixed fine tuning, our proposed methods also improve out-of-domain translation, with the exception of IWSLT-CE. The reason IWSLT-CE does not improve as much as the JE language pairs is that this translation direction does not benefit from an additional source-side Chinese corpus.
[Table V (excerpt): mixed fine tuning, rows 4, 14, 24 — 25.08, 11.23, 16.28 BLEU]
Table VI shows the out-of-domain results after multilingual multi-domain adaptation. We can see that the combination of our proposed methods with mixed fine tuning performs the best. For multilingual multi-domain adaptation, using multilingualism together with multi-domain data shows the best results for two out-of-domain translations, i.e., IWSLT-JE and IWSLT-CE. We also observe that combining our proposed methods with MFT has a positive impact on the relatively resource-rich IWSLT-JE and IWSLT-CE translation directions, whereas vanilla MFT does not achieve this kind of improvement. We believe our methods learn better specialized representations or biases when provided with additional types of domains. Our results indicate that it is possible to package multiple language pairs and domains into a single NMT model with significant improvements for both in-domain and out-of-domain translations.
[Table VI (excerpt): mixed fine tuning BLEU — multiple out-of-domain corpora: 25.33, 12.33, -; multilingual single out-of-domain corpora: -, 10.91, 14.89; multilingual multi-domain adaptation: 26.00, 11.77, 16.40]
This work was supported by Grant-in-Aid for Research Activity Start-up #17H06822, JSPS.
-  K. Cho, B. van Merriënboer, Ç. Gülçehre, D. Bahdanau,
F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations
using rnn encoder–decoder for statistical machine translation,” in
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics, Oct. 2014, pp. 1724–1734. [Online]. Available: http://www.aclweb.org/anthology/D14-1179
-  I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, ser. NIPS’14. Cambridge, MA, USA: MIT Press, 2014, pp. 3104–3112. [Online]. Available: http://dl.acm.org/citation.cfm?id=2969033.2969173
-  D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” in In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). San Diego, USA: International Conference on Learning Representations, May 2015.
-  B. Zoph, D. Yuret, J. May, and K. Knight, “Transfer learning for low-resource neural machine translation,” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, 2016, pp. 1568–1575. [Online]. Available: http://aclweb.org/anthology/D/D16/D16-1163.pdf
-  M. Freitag and Y. Al-Onaizan, “Fast domain adaptation for neural machine translation,” arXiv preprint arXiv:1612.06897, 2016.
-  C. Chu, R. Dabre, and S. Kurohashi, “An empirical comparison of domain adaptation methods for neural machine translation,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada: Association for Computational Linguistics, July 2017, pp. 385–391. [Online]. Available: http://aclweb.org/anthology/P17-2061
-  H. Sajjad, N. Durrani, F. Dalvi, Y. Belinkov, and S. Vogel, “Neural machine translation training in a multi-domain scenario,” in Proceedings of the Twelfth International Workshop on Spoken Language Translation (IWSLT), Tokyo, Japan, 2017. [Online]. Available: http://www.aclweb.org/anthology/D14-1179
-  O. Firat, K. Cho, and Y. Bengio, “Multi-way, multilingual neural machine translation with a shared attention mechanism,” in NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, 2016, pp. 866–875. [Online]. Available: http://aclweb.org/anthology/N/N16/N16-1101.pdf
-  M. Johnson, M. Schuster, Q. Le, M. Krikun, Y. Wu, Z. Chen, N. Thorat, F. Viégas, M. Wattenberg, G. Corrado, M. Hughes, and J. Dean, “Google’s multilingual neural machine translation system: Enabling zero-shot translation,” Transactions of the Association for Computational Linguistics, vol. 5, pp. 339–351, 2017. [Online]. Available: https://transacl.org/ojs/index.php/tacl/article/view/1081
-  M.-T. Luong and C. D. Manning, “Stanford neural machine translation systems for spoken language domains,” in Proceedings of the 12th International Workshop on Spoken Language Translation, Da Nang, Vietnam, December 2015, pp. 76–79.
-  R. Sennrich, B. Haddow, and A. Birch, “Improving neural machine translation models with monolingual data,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics, August 2016, pp. 86–96. [Online]. Available: http://www.aclweb.org/anthology/P16-1009
-  C. Servan, J. Crego, and J. Senellart, “Domain specialization: a post-training domain adaptation for neural machine translation,” arXiv preprint arXiv:1612.06141, 2016.
-  C. Kobus, J. Crego, and J. Senellart, “Domain control for neural machine translation,” arXiv preprint arXiv:1612.06140, 2016.
-  A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 5998–6008. [Online]. Available: http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
-  R. Sennrich, B. Haddow, and A. Birch, “Controlling politeness in neural machine translation via side constraints,” in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, California: Association for Computational Linguistics, June 2016, pp. 35–40. [Online]. Available: http://www.aclweb.org/anthology/N16-1005
-  H. Daume III, “Frustratingly easy domain adaptation,” in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Prague, Czech Republic: Association for Computational Linguistics, June 2007, pp. 256–263. [Online]. Available: http://www.aclweb.org/anthology/P/P07/P07-1033
-  P. Michel and G. Neubig, “Extreme adaptation for personalized neural machine translation,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Melbourne, Australia: Association for Computational Linguistics, Jul. 2018, pp. 312–318. [Online]. Available: https://www.aclweb.org/anthology/P18-2050
-  Y. K. Thu, W. P. Pa, M. Utiyama, A. Finch, and E. Sumita, “Introducing the Asian language treebank (ALT),” in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), N. Calzolari (Conference Chair), K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, and S. Piperidis, Eds. Paris, France: European Language Resources Association (ELRA), May 2016.
-  G. Neubig, “The Kyoto free translation task,” http://www.phontron.com/kftt, 2011.
-  M. Cettolo, C. Girardi, and M. Federico, “WIT3: Web inventory of transcribed and translated talks,” in Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), Trento, Italy, May 2012, pp. 261–268.
-  M. Cettolo, J. Niehues, S. Stüker, L. Bentivogli, R. Cattoni, and M. Federico, “The IWSLT 2015 evaluation campaign,” in Proceedings of the Twelfth International Workshop on Spoken Language Translation (IWSLT), 2015.
-  P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst, “Moses: Open source toolkit for statistical machine translation,” in Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. Prague, Czech Republic: Association for Computational Linguistics, June 2007, pp. 177–180. [Online]. Available: http://www.aclweb.org/anthology/P/P07/P07-2045
-  F. J. Och, “Minimum error rate training in statistical machine translation,” in Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Sapporo, Japan: Association for Computational Linguistics, July 2003, pp. 160–167. [Online]. Available: http://www.aclweb.org/anthology/P03-1021
-  S. Kurohashi, T. Nakamura, Y. Matsumoto, and M. Nagao, “Improvements of Japanese morphological analyzer JUMAN,” in Proceedings of the International Workshop on Sharable Natural Language, 1994, pp. 22–28.
-  Y.-B. Kim, K. Stratos, and R. Sarikaya, “Frustratingly easy neural domain adaptation,” in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Osaka, Japan: The COLING 2016 Organizing Committee, Dec. 2016, pp. 387–396. [Online]. Available: https://www.aclweb.org/anthology/C16-1038
-  B. Thompson, H. Khayrallah, A. Anastasopoulos, A. D. McCarthy, K. Duh, R. Marvin, P. McNamee, J. Gwinnup, T. Anderson, and P. Koehn, “Freezing subnetworks to analyze domain adaptation in neural machine translation,” in Proceedings of the Third Conference on Machine Translation: Research Papers. Belgium, Brussels: Association for Computational Linguistics, Oct. 2018, pp. 124–132. [Online]. Available: https://www.aclweb.org/anthology/W18-6313
-  D. Britz, Q. Le, and R. Pryzant, “Effective domain mixing for neural machine translation,” in Proceedings of the Second Conference on Machine Translation. Copenhagen, Denmark: Association for Computational Linguistics, September 2017, pp. 118–126. [Online]. Available: http://www.aclweb.org/anthology/W17-4712
-  L. Mou, Z. Meng, R. Yan, G. Li, Y. Xu, L. Zhang, and Z. Jin, “How transferable are neural networks in NLP applications?” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas: Association for Computational Linguistics, November 2016, pp. 479–489. [Online]. Available: https://aclweb.org/anthology/D16-1046
-  M. Johnson et al., “Google’s multilingual neural machine translation system: Enabling zero-shot translation,” CoRR, vol. abs/1611.04558, 2016.
-  F. J. Och and H. Ney, “Discriminative training and maximum entropy models for statistical machine translation,” in Proceedings of 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia, Pennsylvania, USA: Association for Computational Linguistics, July 2002, pp. 295–302.
-  C. Chu and R. Wang, “A survey of domain adaptation for neural machine translation,” in Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics, 2018, pp. 1304–1319. [Online]. Available: http://aclweb.org/anthology/C18-1111
-  A. Currey, A. V. Miceli Barone, and K. Heafield, “Copied monolingual data improves low-resource neural machine translation,” in Proceedings of the Second Conference on Machine Translation. Copenhagen, Denmark: Association for Computational Linguistics, September 2017, pp. 148–156. [Online]. Available: http://www.aclweb.org/anthology/W17-4715
-  T. Domhan and F. Hieber, “Using target-side monolingual data for neural machine translation through multi-task learning,” in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen, Denmark: Association for Computational Linguistics, September 2017, pp. 1500–1505.
-  J. Zhang and C. Zong, “Exploiting source-side monolingual data in neural machine translation,” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas: Association for Computational Linguistics, November 2016, pp. 1535–1545. [Online]. Available: https://aclweb.org/anthology/D16-1160
-  Y. Cheng, W. Xu, Z. He, W. He, H. Wu, M. Sun, and Y. Liu, “Semi-supervised learning for neural machine translation,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics, August 2016, pp. 1965–1974. [Online]. Available: http://www.aclweb.org/anthology/P16-1185