Open-Set Domain Adaptation with Visual-Language Foundation Models

07/30/2023
by   Qing Yu, et al.

Unsupervised domain adaptation (UDA) has proven very effective in transferring knowledge from a source domain with labeled data to a target domain with unlabeled data. Owing to the lack of labels in the target domain and the possible presence of unknown classes, open-set domain adaptation (ODA) has emerged as a way to identify these unknown classes during training. Although existing ODA approaches aim to address the distribution shift between the source and target domains, most of them fine-tune ImageNet pre-trained models on the source domain and then adapt them to the target domain. Recent visual-language foundation models (VLFMs), such as Contrastive Language-Image Pre-Training (CLIP), are robust to many distribution shifts and should therefore substantially improve ODA performance. In this work, we explore generic ways to apply CLIP, a popular VLFM, to ODA. We first investigate the performance of zero-shot prediction with CLIP, and then propose an entropy optimization strategy that assists ODA models using the outputs of CLIP. The proposed approach achieves state-of-the-art results on various benchmarks, demonstrating its effectiveness in addressing the ODA problem.
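The abstract mentions two ingredients: zero-shot prediction with CLIP and an entropy-based signal for handling unknown classes. As a rough illustration of that idea only (not the paper's actual method, which trains the ODA model jointly with CLIP's outputs), the sketch below runs standard CLIP zero-shot classification over the known source classes and flags a sample as "unknown" when the entropy of its prediction is high. The class names, prompt template, and entropy_threshold value are hypothetical placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a public pre-trained CLIP checkpoint.
model, preprocess = clip.load("ViT-B/32", device=device)

# Known (source-domain) class names; hypothetical label set for illustration.
known_classes = ["bicycle", "backpack", "calculator"]
prompts = clip.tokenize([f"a photo of a {c}" for c in known_classes]).to(device)


def zero_shot_with_entropy(image_path, entropy_threshold=0.8):
    """CLIP zero-shot prediction plus an entropy-based unknown check.

    entropy_threshold is a hypothetical hyperparameter; the paper's actual
    entropy optimization strategy is integrated into ODA training rather
    than applied as a fixed post-hoc threshold.
    """
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)

    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(prompts)

        # Cosine similarities scaled as in CLIP's zero-shot protocol.
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        logits = 100.0 * image_features @ text_features.T
        probs = logits.softmax(dim=-1).squeeze(0)

    # Entropy is low for confident known-class predictions and high when
    # the image matches none of the known classes well.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()

    if entropy > entropy_threshold:
        return "unknown", probs
    return known_classes[probs.argmax().item()], probs
```

In this sketch the entropy check is a simple fixed threshold; the appeal of using CLIP here is that its zero-shot probabilities remain comparatively reliable under domain shift, which is what makes them a useful auxiliary signal for an ODA model.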


