
Unsupervised Domain Adaptation with Adapter

by   Rongsheng Zhang, et al.

Unsupervised domain adaptation (UDA) with pre-trained language models (PrLMs) has achieved promising results, since these pre-trained models embed generic knowledge learned from various domains. However, fine-tuning all the parameters of a PrLM on a small domain-specific corpus distorts the learned generic knowledge, and it is also expensive to deploy a whole fine-tuned PrLM for each domain. This paper explores an adapter-based fine-tuning approach for unsupervised domain adaptation. Specifically, several trainable adapter modules are inserted into a PrLM, and the embedded generic knowledge is preserved by fixing the parameters of the original PrLM during fine-tuning. A domain-fusion scheme is introduced to train these adapters on a mixed-domain corpus to better capture transferable features. Extensive experiments on two benchmark datasets show that our approach is effective across different tasks, dataset sizes, and domain similarities.
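The core mechanism the abstract describes, inserting small trainable adapter modules while freezing the original PrLM parameters, can be sketched as follows. This is a minimal illustration in PyTorch of a standard bottleneck adapter (down-projection, nonlinearity, up-projection, residual connection), not the paper's exact architecture; the class name `Adapter`, the bottleneck size, and the `freeze_except_adapters` helper are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as an identity map,
        # leaving the frozen PrLM's behavior unchanged at the start of training.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


def freeze_except_adapters(model: nn.Module) -> None:
    """Freeze all original PrLM parameters; only adapter parameters train."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name
```

In practice such an adapter would be inserted after the attention and feed-forward sublayers of each Transformer block; during fine-tuning, only the adapters' parameters receive gradient updates, so one copy of the PrLM can serve many domains, each with its own small set of adapter weights.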

