
Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers

by Payam Karisani, et al.

We present a novel multiple-source unsupervised model for text classification under domain shift. Our model exploits the update rates of document representations to dynamically integrate domain encoders. It also employs a probabilistic heuristic to infer the error rate in the target domain in order to pair source classifiers. The heuristic exploits the data transformation cost and the classifier accuracy in the target feature space. We evaluate the efficacy of our algorithm in real-world domain adaptation scenarios. We also use pretrained multi-layer transformers as the document encoder in the experiments, to investigate whether the improvements achieved by domain adaptation models can already be delivered by out-of-the-box language model pretraining. The experiments show that our model is the top-performing approach in this setting.
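To make the high-level idea concrete, here is a minimal sketch of combining multiple source classifiers on a target domain by weighting each one by its estimated target-domain reliability. This is a generic multi-source ensemble under assumed inputs (per-source class probabilities and estimated error rates), not the paper's exact pairing heuristic; the function and variable names are illustrative.

```python
import numpy as np

def combine_source_classifiers(probs_per_source, est_error_rates):
    """Weight each source classifier's class probabilities by its
    estimated reliability on the target domain (1 - error rate),
    then average. A generic multi-source ensemble sketch, not the
    paper's method."""
    probs = np.asarray(probs_per_source, dtype=float)   # (n_sources, n_docs, n_classes)
    weights = 1.0 - np.asarray(est_error_rates, dtype=float)
    weights = weights / weights.sum()                   # normalize weights to sum to 1
    # weighted average of probabilities over the source axis
    combined = np.tensordot(weights, probs, axes=(0, 0))
    return combined                                     # (n_docs, n_classes)

# toy example: two source classifiers, one document, two classes;
# classifier 0 has a lower estimated error on the target, so it dominates
p = [[[0.9, 0.1]], [[0.4, 0.6]]]
errs = [0.1, 0.4]
print(combine_source_classifiers(p, errs))  # → [[0.7 0.3]]
```

A key difficulty, and the focus of the paper's heuristic, is that the target-domain error rates are not observable without labels and must be inferred, here from transformation cost and accuracy in the target feature space.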

