Multi-Prompt Alignment for Multi-source Unsupervised Domain Adaptation

09/30/2022
by Haoran Chen et al.

Most existing methods for multi-source unsupervised domain adaptation (UDA) rely on a common feature encoder to extract domain-invariant features. However, learning such an encoder involves updating the parameters of the entire network, which makes the optimization computationally expensive, particularly when coupled with min-max objectives. Inspired by recent advances in prompt learning, which adapts high-capacity deep models to downstream tasks in a computationally economical way, we introduce Multi-Prompt Alignment (MPA), a simple yet efficient two-stage framework for multi-source UDA. Given a source and target domain pair, MPA first trains an individual prompt to minimize the domain gap through a contrastive loss, while tuning only a small set of parameters. Then, MPA derives a low-dimensional latent space through an auto-encoding process that maximizes the agreement of the multiple learned prompts. The resulting embedding further facilitates generalization to unseen domains. Extensive experiments show that our method achieves state-of-the-art results on popular benchmark datasets while requiring substantially fewer tunable parameters. To the best of our knowledge, we are the first to apply prompt learning to the multi-source UDA problem, and our method achieves the highest reported average accuracy of 54.1% to date, with only 15.9M parameters trained. More importantly, we demonstrate that the learned embedding space can be easily adapted to novel unseen domains.
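The two-stage recipe described above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the authors' implementation: stage 1 "trains" one small prompt vector per source/target pair by pulling it toward the inter-domain feature gap (a crude proxy for contrastive prompt tuning), and stage 2 auto-encodes the stack of learned prompts into a shared low-dimensional latent space via SVD, keeping the directions the prompts agree on.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_learn_prompt(source_feats, target_feats, dim=8, steps=100, lr=0.1):
    """Toy prompt tuning: only the small prompt vector is updated.
    Here the 'domain-gap loss' is 0.5 * ||prompt - gap||^2, so gradient
    steps pull the prompt toward the mean feature gap between domains."""
    prompt = rng.normal(size=dim)
    gap = target_feats.mean(axis=0) - source_feats.mean(axis=0)
    for _ in range(steps):
        prompt -= lr * (prompt - gap)  # gradient descent on the toy loss
    return prompt

def stage2_align_prompts(prompts, latent_dim=2):
    """Toy auto-encoding: embed all learned prompts into a shared
    low-dimensional latent space via truncated SVD, which keeps the
    components the prompts have most in common."""
    P = np.stack(prompts)                       # (n_pairs, dim)
    mean = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - mean, full_matrices=False)
    encode = lambda p: (p - mean) @ Vt[:latent_dim].T
    decode = lambda z: z @ Vt[:latent_dim] + mean
    return encode, decode

# Three source domains and one target domain with 8-dim features.
target = rng.normal(loc=1.0, size=(50, 8))
sources = [rng.normal(loc=m, size=(50, 8)) for m in (-1.0, 0.0, 2.0)]

prompts = [stage1_learn_prompt(s, target) for s in sources]  # stage 1
encode, decode = stage2_align_prompts(prompts, latent_dim=2) # stage 2

z = encode(prompts[0])       # 2-dim latent code for the first pair's prompt
recon = decode(z)            # back in the original prompt space
```

Note the parameter-efficiency argument carries over even in the sketch: stage 1 touches only the 8-dim prompt per pair, never the (frozen) feature extractor, and stage 2 operates on just three prompt vectors.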


