Cycle Label-Consistent Networks for Unsupervised Domain Adaptation

05/27/2022
by Mei Wang et al.

Domain adaptation aims to leverage a labeled source domain to learn a classifier for an unlabeled target domain with a different distribution. Previous methods mostly match the distributions of the two domains by global or class-wise alignment. However, global alignment methods cannot achieve fine-grained class-to-class overlap, and class alignment methods supervised by pseudo-labels cannot guarantee the reliability of those labels. In this paper, we propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels: dual cross-domain nearest-centroid classification procedures generate a reliable self-supervised signal for discrimination in the target domain. The cycle label-consistent loss reinforces the consistency between the ground-truth labels and the cycled pseudo-labels of source samples, leading to statistically similar latent representations across the source and target domains. This new loss can easily be added to any existing classification network with almost no computational overhead. We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks. Results validate that the proposed method alleviates the negative influence of falsely labeled samples and learns more discriminative features, yielding absolute improvements over the source-only model of 9.4% on Office-31 and 6.3% on a second benchmark.
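As a rough illustration of the mechanism summarized above, the following is a minimal PyTorch-style sketch of a cycle label-consistent loss built from dual cross-domain nearest-centroid classification. It is not the authors' implementation: the function name, the use of Euclidean distance to centroids, and the soft-assignment temperature are assumptions made for the example.

```python
# Hypothetical sketch of a cycle label-consistent loss (not the authors' code).
# Forward step: source labels -> source centroids -> soft pseudo-labels for target.
# Cycle step: pseudo-labeled target centroids -> re-classify source samples.
# The loss penalizes disagreement between the cycled predictions and the
# ground-truth source labels.
import torch
import torch.nn.functional as F

def cycle_label_consistent_loss(src_feats, src_labels, tgt_feats,
                                num_classes, temperature=1.0):
    # 1. Source class centroids from ground-truth labels (src_labels: LongTensor [Ns]).
    onehot = F.one_hot(src_labels, num_classes).float()                    # [Ns, C]
    src_centroids = onehot.t() @ src_feats / (onehot.sum(0).unsqueeze(1) + 1e-8)   # [C, D]

    # 2. Soft pseudo-labels for target samples: nearest source centroid, softened.
    tgt_logits = -torch.cdist(tgt_feats, src_centroids) / temperature      # [Nt, C]
    tgt_soft = F.softmax(tgt_logits, dim=1)                                # soft assignments

    # 3. Target class centroids weighted by the soft pseudo-labels.
    tgt_centroids = tgt_soft.t() @ tgt_feats / (tgt_soft.sum(0).unsqueeze(1) + 1e-8)  # [C, D]

    # 4. Classify source samples against the target centroids (the "cycle").
    src_cycle_logits = -torch.cdist(src_feats, tgt_centroids) / temperature  # [Ns, C]

    # 5. Cycle consistency: cycled predictions should recover the source labels.
    return F.cross_entropy(src_cycle_logits, src_labels)
```

In training, such a term would typically be added to the standard cross-entropy loss on labeled source samples, so the shared feature extractor is shaped by both the supervised signal and the cycle-consistency signal.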


Related research

Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels (03/18/2020)
Existing unsupervised domain adaptation methods aim to transfer knowledg...

Dynamic Instance Domain Adaptation (03/09/2022)
Most existing studies on unsupervised domain adaptation (UDA) assume tha...

Hard Class Rectification for Domain Adaptation (08/08/2020)
Domain adaptation (DA) aims to transfer knowledge from a label-rich and ...

Deep Adversarial Attention Alignment for Unsupervised Domain Adaptation: the Benefit of Target Expectation Maximization (01/30/2018)
In this paper we make two contributions to unsupervised domain adaptatio...

Learning Condensed and Aligned Features for Unsupervised Domain Adaptation Using Label Propagation (03/12/2019)
Unsupervised domain adaptation aiming to learn a specific task for one d...

Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation (12/04/2020)
Semi-supervised domain adaptation (SSDA) methods have demonstrated great...

ProxyMix: Proxy-based Mixup Training with Label Refinery for Source-Free Domain Adaptation (05/29/2022)
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a l...
