Few-shot Fine-tuning is All You Need for Source-free Domain Adaptation

04/03/2023
by Suho Lee, et al.

Recently, source-free unsupervised domain adaptation (SFUDA) has emerged as a more practical and feasible approach than unsupervised domain adaptation (UDA), which assumes that labeled source data are always accessible. However, SFUDA approaches come with significant limitations that are often overlooked and that restrict their practicality in real-world applications. These include the lack of a principled way to determine optimal hyperparameters, and performance degradation when the unlabeled target data fail to meet certain requirements, such as a closed-set label space and a label distribution identical to that of the source data. All of these limitations stem from the fact that SFUDA relies entirely on unlabeled target data. We empirically demonstrate the limitations of existing SFUDA methods in realistic scenarios involving out-of-distribution samples and label distribution shifts in the target data, and verify that none of these methods can be safely applied to real-world settings. Based on our experimental results, we argue that fine-tuning a source-pretrained model with a few labeled samples (e.g., 1- or 3-shot) is a practical and reliable way to circumvent the limitations of SFUDA. Contrary to common belief, we find that carefully fine-tuned models do not suffer from overfitting even when trained with only a few labeled samples, and that their performance changes little under sampling bias. Our experimental results on various domain adaptation benchmarks demonstrate that this few-shot fine-tuning approach performs comparably to existing methods under standard SFUDA settings and outperforms them under realistic scenarios. Our code is available at https://github.com/daintlab/fewshot-SFDA.
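To make the recipe concrete, the following is a minimal sketch, in PyTorch, of few-shot fine-tuning of a source-pretrained classifier on a handful of labeled target samples. The function name, optimizer choice, and hyperparameters (learning rate, step count, batch size) are illustrative assumptions, not the authors' exact configuration; their official implementation is in the repository linked above.

    # Minimal sketch: adapt a source-pretrained model using only a handful
    # of labeled target samples (e.g., 1- or 3-shot). Hyperparameters here
    # are placeholders, not values from the paper.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    def few_shot_finetune(model: nn.Module,
                          support_x: torch.Tensor,  # few labeled target inputs
                          support_y: torch.Tensor,  # their class labels
                          lr: float = 1e-4,
                          steps: int = 100) -> nn.Module:
        """Fine-tune a source-pretrained model on a few labeled target samples."""
        loader = DataLoader(TensorDataset(support_x, support_y),
                            batch_size=min(16, len(support_x)), shuffle=True)
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        criterion = nn.CrossEntropyLoss()

        model.train()
        step = 0
        while step < steps:
            for x, y in loader:
                optimizer.zero_grad()
                # Standard supervised cross-entropy on the tiny support set;
                # no unlabeled target data are used, unlike SFUDA methods.
                loss = criterion(model(x), y)
                loss.backward()
                optimizer.step()
                step += 1
                if step >= steps:
                    break
        return model

The simplicity is the point: per the abstract, carefully fine-tuned models of this kind avoid overfitting despite the tiny support set and are largely insensitive to which few samples are drawn.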


Related research

02/10/2023 | Key Design Choices for Double-Transfer in Source-Free Unsupervised Domain Adaptation
Fine-tuning and Domain Adaptation emerged as effective strategies for ef...

07/30/2023 | Open-Set Domain Adaptation with Visual-Language Foundation Models
Unsupervised domain adaptation (UDA) has proven to be very effective in ...

03/01/2023 | UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers
Many information retrieval tasks require large labeled datasets for fine...

07/14/2023 | PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation
Unsupervised domain adaptation (UDA) has witnessed remarkable advancemen...

07/31/2023 | UDAMA: Unsupervised Domain Adaptation through Multi-discriminator Adversarial Training with Noisy Labels Improves Cardio-fitness Prediction
Deep learning models have shown great promise in various healthcare moni...

09/16/2020 | Similarity-based data mining for online domain adaptation of a sonar ATR system
Due to the expensive nature of field data gathering, the lack of trainin...

01/07/2022 | Improved Input Reprogramming for GAN Conditioning
We study the GAN conditioning problem, whose goal is to convert a pretra...
