Unsupervised Domain Adaptation with Adapter

11/01/2021
by Rongsheng Zhang et al.

Unsupervised domain adaptation (UDA) with pre-trained language models (PrLM) has achieved promising results, since these pre-trained models embed generic knowledge learned from various domains. However, fine-tuning all the parameters of the PrLM on a small domain-specific corpus distorts the learned generic knowledge, and it is also expensive to deploy a whole fine-tuned PrLM for each domain. This paper explores an adapter-based fine-tuning approach for unsupervised domain adaptation. Specifically, several trainable adapter modules are inserted into a PrLM, and the embedded generic knowledge is preserved by fixing the parameters of the original PrLM during fine-tuning. A domain-fusion scheme is introduced to train these adapters on a mixed-domain corpus to better capture transferable features. Extensive experiments on two benchmark datasets demonstrate that the approach is effective across different tasks, dataset sizes, and domain similarities.
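To make the adapter idea concrete, here is a minimal sketch (pure Python, no deep-learning framework) of the standard bottleneck-adapter pattern the abstract refers to: a small trainable down-projection, a nonlinearity, an up-projection, and a residual connection, inserted alongside a frozen pre-trained layer. All names and dimensions below are illustrative assumptions, not the paper's actual code.

```python
import random

random.seed(0)

HIDDEN, BOTTLENECK = 8, 2  # assumed sizes, for illustration only


def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]


def relu(x):
    return [max(0.0, v) for v in x]


def init(rows, cols, scale=0.1):
    """Small random weights; near-identity behavior at initialization."""
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]


# Trainable adapter parameters: the only weights updated at fine-tuning,
# while the original PrLM layers stay frozen.
W_down = init(BOTTLENECK, HIDDEN)
W_up = init(HIDDEN, BOTTLENECK)


def adapter(h):
    """Bottleneck adapter with residual: h + up(relu(down(h)))."""
    z = relu(matvec(W_down, h))
    delta = matvec(W_up, z)
    return [h_i + d_i for h_i, d_i in zip(h, delta)]


# Stand-in for the hidden state produced by a frozen transformer layer.
h = [1.0] * HIDDEN
h_adapted = adapter(h)
print(len(h_adapted))  # → 8; the adapter preserves the hidden dimension
```

Because of the residual connection and small initial weights, the adapter starts close to the identity, so inserting it does not disturb the frozen PrLM's behavior before training; only the adapter weights are then updated on the mixed-domain corpus.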

Related research

10/25/2022 · On Fine-Tuned Deep Features for Unsupervised Domain Adaptation
Prior feature transformation based approaches to Unsupervised Domain Ada...

06/02/2021 · Unsupervised Out-of-Domain Detection via Pre-trained Transformers
Deployed real-world machine learning applications are often subject to u...

03/01/2023 · UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers
Many information retrieval tasks require large labeled datasets for fine...

03/08/2022 · Adapt𝒪r: Objective-Centric Adaptation Framework for Language Models
Progress in natural language processing research is catalyzed by the pos...

04/07/2019 · Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging
Fine-tuning neural networks is widely used to transfer valuable knowledg...

08/07/2022 · Vernacular Search Query Translation with Unsupervised Domain Adaptation
With the democratization of e-commerce platforms, an increasingly divers...

09/12/2023 · AstroLLaMA: Towards Specialized Foundation Models in Astronomy
Large language models excel in many human-language tasks but often falte...
