Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data

10/07/2021
by Jiaxing Huang, et al.

Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability and data transmission efficiency. We study unsupervised model adaptation (UMA), also known as Unsupervised Domain Adaptation without Source Data, an alternative setting that aims to adapt source-trained models towards target distributions without accessing source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits historical source hypotheses to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID), which learns from target samples by contrasting their embeddings generated by the currently adapted model and by historical models. With the source-trained and earlier-epoch models as the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD), which pseudo-labels target samples to learn category-discriminative target representations. Instead of thresholding pseudo labels globally, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms and complements state-of-the-art methods consistently across a variety of visual tasks (e.g., segmentation, classification and detection) and setups (e.g., closed-set, open-set and partial adaptation).
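A minimal sketch of how the two objectives described above might look in PyTorch is given below. The function names, the temperature `tau`, and the consistency-based pseudo-label weighting are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
# Sketch of the two HCL objectives from the abstract (illustrative only).
import torch
import torch.nn.functional as F

def hcid_loss(current_model, historical_model, target_images, tau=0.07):
    """Historical contrastive instance discrimination (HCID).

    Each target image embedded by the *current* model is pulled towards the
    embedding of the same image produced by a *historical* model (e.g. the
    frozen source-trained model or an earlier-epoch snapshot), and pushed
    away from the historical embeddings of other images in the batch.
    """
    q = F.normalize(current_model(target_images), dim=1)          # queries (B, D)
    with torch.no_grad():                                         # historical keys are fixed
        k = F.normalize(historical_model(target_images), dim=1)   # keys (B, D)
    logits = q @ k.t() / tau                                      # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)             # positives on the diagonal
    return F.cross_entropy(logits, labels)                        # InfoNCE-style loss

def hccd_pseudo_label_loss(current_logits, historical_logits):
    """Historical contrastive category discrimination (HCCD).

    Pseudo labels come from the current model; instead of a global
    confidence threshold, each pseudo label is re-weighted by how
    consistently the current and historical models predict it (here
    approximated by the historical probability of the current model's
    predicted class -- an assumed, simplified weighting).
    """
    p_his = F.softmax(historical_logits, dim=1)
    pseudo = current_logits.argmax(dim=1)                          # category pseudo labels
    weight = p_his.gather(1, pseudo.unsqueeze(1)).squeeze(1)       # consistency-based weight
    loss = F.cross_entropy(current_logits, pseudo, reduction="none")
    return (weight * loss).mean()
```

In this sketch, a UMA training step would sum the two losses on an unlabeled target batch while the source-trained and earlier-epoch snapshots are kept frozen, so no source data is ever accessed.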

