Modular Domain Adaptation

04/26/2022
by Junshen K. Chen, et al.

Off-the-shelf models are widely used by computational social science researchers to measure properties of text, such as sentiment. However, without access to source data it is difficult to account for domain shift, which represents a threat to validity. Here, we treat domain adaptation as a modular process that involves separate model producers and model consumers, and show how they can independently cooperate to facilitate more accurate measurements of text. We introduce two lightweight techniques for this scenario, and demonstrate that they reliably increase out-of-domain accuracy on four multi-domain text classification datasets when used with linear and contextual embedding models. We conclude with recommendations for model producers and consumers, and release models and replication code to accompany this paper.
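
The abstract does not spell out the two techniques, so the following is a minimal sketch, assuming scikit-learn and toy data, of the producer/consumer split it describes: an off-the-shelf linear model is trained on one domain and applied to another, with a generic, hypothetical consumer-side score re-centering step standing in for the adaptation the paper proposes.

```python
# Illustrative sketch only, not the paper's actual techniques: a generic
# example of the model-producer / model-consumer split and of why domain
# shift threatens measurement validity. Assumes scikit-learn is installed;
# the toy texts and the re-centering step are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# --- Model producer: trains a sentiment classifier on source-domain data ---
source_texts = ["great product, works exactly as advertised",
                "terrible quality, broke after a day"]
source_labels = [1, 0]

vectorizer = TfidfVectorizer()
X_source = vectorizer.fit_transform(source_texts)
producer_model = LogisticRegression().fit(X_source, source_labels)

# --- Model consumer: applies the off-the-shelf model in a new domain ---
target_texts = ["the senator's speech was warmly received",
                "the proposed bill drew sharp criticism"]
X_target = vectorizer.transform(target_texts)

# Raw scores can be systematically skewed under domain shift.
raw_scores = producer_model.decision_function(X_target)

# One generic, consumer-side adjustment (hypothetical, not from the paper):
# re-center scores on unlabeled target-domain data so that downstream
# measurements are not dominated by a shifted decision boundary.
adapted_scores = raw_scores - raw_scores.mean()
print(raw_scores, adapted_scores)
```

The point of the sketch is only the division of labor: the producer never sees the target domain, and the consumer adjusts the released model without access to the source data.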

Related research:

- A Label Proportions Estimation technique for Adversarial Domain Adaptation in Text Classification (03/16/2020): "Many text classification tasks are domain-dependent, and various domain ..."
- Predicting the Success of Domain Adaptation in Text Similarity (06/08/2021): "Transfer learning methods, and in particular domain adaptation, help exp..."
- Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers (01/28/2022): "We present a novel multiple-source unsupervised model for text classific..."
- ReMask: A Robust Information-Masking Approach for Domain Counterfactual Generation (05/04/2023): "Domain shift is a big challenge in NLP, thus, many approaches resort to ..."
- Lightweight Unsupervised Domain Adaptation by Convolutional Filter Reconstruction (03/23/2016): "End-to-end learning methods have achieved impressive results in many are..."
