Heterogeneity is pervasive in NLP, arising from corpora being constructed from different sources, and featuring different topics, registers, writing styles, etc. An important, yet elusive, goal is to produce NLP tools that are capable of handling all types of text, such that we can have, e.g., text classifiers that work well on everything from newswire to wikis to micro-blogs. A key roadblock is application to new domains, unseen in training. Accordingly, training needs to be robust to domain variation, such that domain-general concepts are learned in preference to domain-specific phenomena, which will not transfer well to out-of-domain evaluation. To illustrate, Bitvai and Cohn (2015) report learning formatting quirks of specific reviewers in a review text regression task, which are unlikely to prove useful on other texts.
This classic problem in NLP has been tackled under the guise of “domain adaptation”, also known as unsupervised transfer learning, using feature-based methods to support knowledge transfer over multiple domains Blitzer et al. (2007); Daumé III (2007); Joshi et al. (2012); Williams (2013); Kim et al. (2016). More recently, Ganin and Lempitsky (2015) proposed a method to encourage domain-general text representations, which transfer better to new domains.
Inspired by the above methods, in this paper we propose a novel technique for multitask learning of domain-general representations. (Code, data and evaluation scripts are available at https://github.com/lrank/Domain_Robust_Text_Representation.git.)
Specifically, we propose deep learning architectures for multi-domain learning, featuring a shared representation and a domain-private representation. Our approach generalises the feature augmentation method of Daumé III (2007) to convolutional neural networks, as part of a larger deep learning architecture. Additionally, we use adversarial training such that the shared representation is explicitly discouraged from learning domain-identifying information Ganin and Lempitsky (2015). We present two architectures, which differ in whether the domain is conditioned on or generated, and in terms of parameter sharing in forming the private representations.
We primarily evaluate on the task of language identification (“LangID”; Cavnar and Trenkle, 1994), using the corpora of Lui and Baldwin (2012), which combine large training sets over a diverse range of text domains. Domain adaptation is an important problem for this task Lui and Baldwin (2014); Jurgens et al. (2017), where text resources are collected from numerous sources and exhibit a wide variety of language use. We show that while domain adversarial training overall improves over baselines, the gains are modest. The same applies to the twin shared/private architectures, but when the two methods are combined, we observe substantial improvements. Overall, our methods outperform the state of the art Lui and Baldwin (2012) in terms of out-of-domain accuracy. As a secondary evaluation, we use the Multi-Domain Sentiment Dataset Blitzer et al. (2007), where we once again observe a clear advantage for our approaches, illustrating the potential of our technique more broadly in NLP.
2 Multi-domain Learning
A primary consideration when formulating models of multi-domain data is how best to use the domain. Basic methods might learn several separate models, or simply ignore the domain and learn a single model. Neither method is ideal: the former fails to share statistics between the models to capture the general concept, while the latter discards information that can aid classification, e.g., domain-specific vocabulary or class skew.
To address these issues, we propose two architectures, as illustrated in Figure 1 (a and b), parameterised as a convolutional network (CNN) over the input instance, chosen based on the success of CNNs for text categorisation problems Kim (2014); note, however, that our method is general and can be applied with other network types. Both architectures are based on the idea of twin representations of each instance (unlike standard architectures, e.g., the ‘baseline’ in Figure 1(c), which use a single representation), denoted the shared and private representations, which are trained to capture domain-general versus domain-specific concepts, respectively. This is achieved using various loss functions, most notably an adversarial loss to discourage learning of domain-specific concepts in the shared representations. The two architectures differ in whether the domain is provided as an input (Cond) or an output (Gen). Below, we elaborate on the details of the two models.
2.1 Domain-Conditional Model (Cond)
The first model, illustrated in Figure 1(a), includes a collection of domain-specific CNNs, and for each training instance x_i, the CNN for its domain d_i is used to compute its private representation h_i^p. In this manner, the model conditions on the domain identifier. The Cond model also computes a shared representation, h_i^s, directly from x_i, using a shared CNN, and the two representations are concatenated together to form the input to a linear softmax classification function for predicting the class label y_i. Thus far, the approach resembles Daumé III (2007), a method for multitask learning based on feature augmentation in a linear model, which works by replicating the input features to create both general shared features and domain-specific features. Note that the approaches differ in that our method uses deep learning to form the two representations, in place of feature replication.
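For reference, the feature augmentation scheme our method generalises can be sketched in a few lines. This is a toy illustration of Daumé III (2007)-style replication with hypothetical feature names, not part of our implementation:

```python
def augment(features, domain, domains):
    """Replicate each feature into a shared copy plus one copy per domain;
    the domain-specific copy is active only for the instance's own domain."""
    out = {'shared:' + f: v for f, v in features.items()}
    for d in domains:
        for f, v in features.items():
            out[d + ':' + f] = v if d == domain else 0.0
    return out
```

A linear model over the augmented features can then learn a shared weight for each feature, plus per-domain corrections; our architectures replace this replication with learned shared and private representations.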
A key challenge for the Cond model is that the ‘shared’ representation can be contaminated by domain-specific concepts. To address this, we borrow ideas from adversarial learning Goodfellow et al. (2014); Ganin et al. (2016). The central idea is to learn a good general representation (suitable for the shared component) that maximises end task performance, yet obscures the domain information, as modelled by a domain discriminator over the shared representation. This reduces the domain-specific information in the shared representation; note, however, that important domain-specific components can still be captured in the private representation.
Overall, this results in the training objective:

  min_Θ max_{Θ_a} [ L(y; h^s ⊕ h^p) − λ_a L(d; h^s) ]

where L denotes the cross-entropy classification loss, h^s = {h_i^s} are the shared representations for the training set of instances, and likewise h^p = {h_i^p} are the private representations, computed using the shared and domain-specific CNNs, respectively. Note the negative sign of the adversarial loss L(d; h^s) (referred to as adv), and the maximisation with respect to the discriminator parameters Θ_a. This has the effect of learning a maximally accurate discriminator wrt its own parameters, while making it maximally inaccurate wrt the shared representation, and is implemented using a gradient reversal step during backpropagation Ganin et al. (2016).
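The gradient reversal step can be sketched framework-agnostically: the layer is the identity in the forward pass, and negates (and scales) the gradient in the backward pass, so the encoder is updated to *hurt* the discriminator. A minimal NumPy illustration (class and parameter names are ours, not from our released code):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; flips and scales gradients backward."""
    def __init__(self, lam=1.0):
        self.lam = lam  # adversarial trade-off factor

    def forward(self, x):
        return x  # pass-through: representation is unchanged

    def backward(self, grad_output):
        # The reversed gradient flows back into the shared encoder,
        # pushing it away from domain-discriminative features.
        return -self.lam * grad_output
```

In an automatic-differentiation framework this is typically implemented as a custom op with these exact forward/backward semantics.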
Minimum Entropy Inference
As Cond conditions on the domain, it imposes the requirement that the domain of the test data is known (and covered in training), which is incompatible with our goal of unsupervised adaptation. To deal with this situation, we treat each domain in the test set as belonging to one of the training domains, selecting the training domain which gives rise to the minimum entropy classification distribution. This is based on the assumption that a closely matching domain should support confident predictions. (The minimum entropy method is quite effective, trailing oracle domain selection by only a small margin in accuracy.)
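The selection rule above can be sketched directly: condition the classifier on each candidate training domain in turn, and keep the domain whose predictive distributions have the lowest mean entropy. A hedged NumPy sketch (function name is ours):

```python
import numpy as np

def select_domain(probs_per_domain):
    """probs_per_domain: list of (n_instances, n_classes) arrays, one per
    candidate training domain, giving the classifier's predictive
    distributions when conditioning on that domain.
    Returns the index of the domain with minimum mean prediction entropy."""
    mean_entropies = []
    for probs in probs_per_domain:
        ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # per instance
        mean_entropies.append(ent.mean())
    return int(np.argmin(mean_entropies))
```

A near-uniform distribution signals a poor domain match; a peaked one signals confidence, hence the argmin.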
2.2 Domain-Generative Model (Gen)
The second model is based on generation of, rather than conditioning on, the domain, which allows the model to learn domain signals that transfer across some, but not all, domains. Most components are common with the Cond model as described in §2.1, including the use of private and shared representations, their use in the classification output, and the adversarial loss based on discriminating the domain from the shared representation. There are two key differences: (1) the private representation, h^p, is computed using a single CNN, rather than several domain-specific CNNs, which confers the benefits of domain generalisation, a more compact model, and simpler test inference (the domain need not be known for test examples, so the model can be used directly); and (2) the private representation is used to positively predict the domain, which further encourages the split between domain-general and domain-specific aspects of the representation.
Gen has the following training objective:

  min_Θ max_{Θ_a} [ L(y; h^s ⊕ h^p) − λ_a L(d; h^s) + λ_g L(d; h^p) ]

where notation follows that used in §2.1, with the exception that h^p is redefined as a function of the single shared private CNN, and the addition of the last term, capturing the generation loss (referred to as gen) with weight λ_g. The same gradient reversal method from §2.1 is used during training for the adversarial component.
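The combined objective can be sketched numerically as a sum of three cross-entropy terms. This is an illustration of the loss arithmetic only (in practice the sign flip on the adversarial term is realised by gradient reversal, and λ_a, λ_g below are illustrative defaults, not the tuned values):

```python
import numpy as np

def xent(probs, labels):
    """Mean cross-entropy of gold labels under predicted distributions."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def gen_objective(y_probs, y, d_probs_shared, d_probs_private, d,
                  lam_a=1.0, lam_g=1.0):
    """Gen objective (sketch): task loss, minus the adversarial domain
    loss on the shared representation, plus the generative domain loss
    on the private representation."""
    return (xent(y_probs, y)
            - lam_a * xent(d_probs_shared, d)
            + lam_g * xent(d_probs_private, d))
```

Dropping the final term recovers the Cond-style objective of §2.1.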
3.1 Language Identification
To evaluate our approach, we first consider the language identification task.
We follow the settings of Lui and Baldwin (2012), training over five domains — Debian, JRC-Acquis, Wikipedia, ClueWeb and RCV2 — derived from Lui and Baldwin (2011), covering a large set of languages. (As the ClueWeb data of Lui and Baldwin (2012) is not publicly accessible, we used a slightly different set of languages but a comparable number of documents for training.) We evaluate accuracy on seven holdout benchmarks: EuroGov, TCL and Wikipedia2 (all from Baldwin and Lui (2010); note that the two Wikipedia datasets have no overlap), EMEA Tiedemann (2009), EuroPARL Koehn (2005), T-BE Tromp and Pechenizkiy (2011), and T-SC Carter et al. (2013).
Documents are tokenised as byte sequences (consistent with Lui and Baldwin (2012)), and truncated or padded to a length of 1k bytes. (We also tried different document length limits, such as 10k, but observed no substantial change in performance.)
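The byte-level preprocessing amounts to a few lines; a minimal sketch of the truncate-or-pad step described above (function name is ours, zero-padding is an assumption):

```python
def to_byte_sequence(text, max_len=1000):
    """Tokenise a document as a byte sequence, truncated or zero-padded
    to a fixed length (the 1k-byte limit used in our experiments)."""
    b = list(text.encode('utf-8'))[:max_len]  # truncate long documents
    return b + [0] * (max_len - len(b))       # pad short ones
```

Operating on raw bytes avoids any language-specific tokenisation, which matters for a task spanning many scripts and encodings.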
We perform a grid search over the hyper-parameters, selecting the settings that optimise accuracy over heldout data from each of the training domains. All byte tokens are mapped to randomly initialised byte embeddings. We use convolutional filters of several different widths, with a fixed number of filters of each width, to capture n-gram features of different lengths. Dropout is applied to all the representation layers, and the trade-off factors λ_a and λ_g are set to the same fixed value. All models are optimised using the Adam optimizer Kingma and Ba (2015) with a fixed learning rate.
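The multi-width convolution-plus-max-pooling feature layer used by each CNN can be sketched as follows. This is a single-filter-per-width NumPy illustration of the standard text-CNN pattern (Kim, 2014), not our actual training code:

```python
import numpy as np

def conv_maxpool(emb, filters):
    """emb: (seq_len, emb_dim) embedded document.
    filters: list of (width, emb_dim) filter matrices of varying widths,
    capturing n-gram features of different lengths.
    Returns one max-pooled activation per filter."""
    feats = []
    for w in filters:
        width = w.shape[0]
        acts = [float(np.sum(emb[i:i + width] * w))     # one window score
                for i in range(emb.shape[0] - width + 1)]
        feats.append(max(acts))                          # max over positions
    return np.array(feats)
```

The pooled features from all widths are concatenated to form the shared or private representation.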
3.1.1 Results and Analysis
Baseline and comparisons
For comparison, we implement a baseline trained using all the data without domain knowledge (i.e., the simple union of the different training datasets). The baseline uses a double-capacity hidden representation, in order to better match the increased expressivity of the shared/private models. We also apply adversarial learning (adv) and generation (gen) of domain to the baseline model, to better understand the utility of these methods. Note that the baseline with adv is a multi-domain variant of Ganin and Lempitsky (2015), albeit trained without any text from the testing domains. For our models, we report results of configurations both with and without the adv and gen components. We also report results for two state-of-the-art off-the-shelf LangID tools: (1) langid.py (https://github.com/saffsd/langid.py) Lui and Baldwin (2012); and (2) Google’s cld2 (https://github.com/CLD2Owners/cld2).
Our primary concern, in terms of evaluating the ability of the different models to generalise, is out-of-domain performance. Table 1 provides a breakdown of out-of-domain results over the holdout domains. The accuracy varies greatly between test domains, depending on the mix of languages, the length of the test documents, etc. Both our models, Cond and Gen, achieve competitive performance, and are further improved by the adv and gen components.
For the baseline, applying either adv or gen results in only mild improvements, which is surprising, as the two forms of supervision work in opposite directions. Overall, the small change in performance means neither method on its own appears to be a viable technique for domain adaptation.
Overall, the raw Cond and Gen models perform better than the baseline. Specifically, for Cond, we observe performance gains on EuroPARL, T-BE and T-SC. These three datasets are notable in containing shorter documents, which benefit the most from shared learning. However, as discussed earlier, multi-domain data can introduce noise into the shared representation, causing performance to drop over TCL, Wikipedia2 and EMEA. This observation demonstrates the necessity of applying adversarial learning to Cond. The story is different for Gen: the vanilla Gen model achieves accuracy gains relative to the baseline over most domains, but falls slightly below Cond on others, a result of parameter sharing over the private representation.
In terms of adversarial learning, we see that by adding an adversarial component to Cond or Gen, both models realise substantial improvements out of domain, with the exception of EMEA. As motivated earlier, the domain adversarial component can obscure the domain-specific information in the shared representation, which helps Cond generalise better to other domains. Additionally, applying adv to Gen helps the private representation to generalise better. These results demonstrate that both adv and gen are necessary components of multi-domain models. EMEA is noteworthy in that its pattern of results differs from the other domains, in that applying adv hurts performance. For this domain, the baseline performs very well, and Gen does much better than Cond. We believe the reason is that, as a medical domain, EMEA is very much an outlier and does not align with any single training domain. Moreover, there is extensive verbatim borrowing of terms such as drug and disease names between languages, further complicating the task.
Overall, our best models (Cond and Gen with the adversarial component) outperform both langid.py and cld2 in terms of average out-of-domain accuracy.
Table 2 reports the in-domain performance over the training domains, using cross-validation, as well as the macro-averaged accuracy. Our proposed methods consistently achieve better performance than the baseline. Both Cond and Gen achieve competitive performance with the state-of-the-art langid.py in the in-domain scenario. Although langid.py performs slightly better on average accuracy, our best model outperforms langid.py on three of the five datasets.
3.2 Product Reviews
To evaluate the generalisation of our methods to other tasks, we experiment with the Multi-Domain Sentiment Dataset Blitzer et al. (2007). (Obtained from https://www.cs.jhu.edu/~mdredze/datasets/sentiment/, using the positive and negative files from the unprocessed version, with up to 2,000 instances per domain. For the four test domains, we automatically aligned the reviews in the processed and unprocessed versions, so that we can compare results directly against prior work.) We select the 20 domains with the most review instances, and discard the remaining 5 domains.
For model parameterisation, we adopt the same basic hyper-parameter settings and training process as for LangID in §3.1, but change the filter sizes, use word-based tokenisation, and truncate documents to a shorter token length, for better compatibility with the shorter documents in this dataset.
We perform an out-of-domain evaluation over four target domains — “book” (B), “dvd” (D), “electronics” (E) and “kitchen & housewares” (K) — as used by Blitzer et al. (2007). Our experimental setup differs from theirs, in that they train on a single domain and then evaluate on another, while we train over multiple domains and then evaluate on the four test domains.
Table 3 presents the results. Overall, our proposed methods consistently outperform the baselines, with the Gen approach a consistent winner over all other techniques. Note also the lacklustre performance when the baseline is trained with the adversarial loss, mirroring our findings for language identification in §3.1. For comparison, we also report the best results of SCL-MI and DANN, in both cases using an oracle selection of the source domain. Our methods consistently outperform these approaches, despite having no test oracle, although note that we use more diverse data sources for training.
We have proposed a novel deep learning method for multi-domain learning, based on joint learning of domain-specific and domain-general components, using either domain conditioning or domain generation. Based on our evaluation over multi-domain language identification and multi-domain sentiment analysis, we show our models to substantially outperform a baseline deep learning method, and set a new benchmark for state-of-the-art cross-domain LangID. Our approach has potential to benefit other NLP applications involving multi-domain data.
We thank the anonymous reviewers for their helpful feedback and suggestions, and the National Computational Infrastructure Australia for computational resources. This work was supported by the Australian Research Council (FT130101105).
- Baldwin and Lui (2010) Timothy Baldwin and Marco Lui. 2010. Language identification: The long and the short of the matter. In Proceedings of Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics. pages 229–237.
- Bitvai and Cohn (2015) Zsolt Bitvai and Trevor Cohn. 2015. Non-linear text regression with a deep convolutional neural network. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers).
- Blitzer et al. (2007) John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. pages 440–447.
- Carter et al. (2013) Simon Carter, Wouter Weerkamp, and Manos Tsagkias. 2013. Microblog language identification: Overcoming the limitations of short, unedited and idiomatic text. Language Resources and Evaluation 47(1):195–215.
- Cavnar and Trenkle (1994) William B Cavnar and John M Trenkle. 1994. N-gram-based text categorization. In Proceedings of the Third Symposium on Document Analysis and Information Retrieval.
- Daumé III (2007) Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. pages 256–263.
- Ganin and Lempitsky (2015) Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning. pages 1180–1189.
- Ganin et al. (2016) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research 17:59:1–59:35.
- Goodfellow et al. (2014) Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems 27. pages 2672–2680.
- Joshi et al. (2012) Mahesh Joshi, Mark Dredze, William W. Cohen, and Carolyn Penstein Rosé. 2012. Multi-domain learning: When do domains matter? In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 1302–1312.
- Jurgens et al. (2017) David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating dialectal variability for socially equitable language identification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. pages 51–57.
- Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 1746–1751.
- Kim et al. (2016) Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 387–396.
- Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations.
- Koehn (2005) Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit 2005. pages 79–86.
- Lui and Baldwin (2011) Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Proceedings of the Fifth International Joint Conference on Natural Language Processing. pages 553–561.
- Lui and Baldwin (2012) Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of ACL 2012 System Demonstrations. pages 25–30.
- Lui and Baldwin (2014) Marco Lui and Timothy Baldwin. 2014. Accurate language identification of Twitter messages. In Proceedings of the 5th workshop on language analysis for social media. pages 17–25.
- Tiedemann (2009) Jörg Tiedemann. 2009. News from OPUS – a collection of multilingual parallel corpora with tools and interfaces. In Recent Advances in Natural Language Processing. volume 5, pages 237–248.
- Tromp and Pechenizkiy (2011) Erik Tromp and Mykola Pechenizkiy. 2011. Graph-based n-gram language identification on short texts. In Proceedings of the 20th Machine Learning Conference of Belgium and The Netherlands. pages 27–34.
- Williams (2013) Jason Williams. 2013. Multi-domain learning and generalization in dialog state tracking. In Proceedings of the SIGDIAL 2013 Conference. pages 433–441.