Category Contrast for Unsupervised Domain Adaptation in Visual Tasks

by Jiaxing Huang et al.

Instance contrast for unsupervised representation learning has achieved great success in recent years. In this work, we explore the idea of instance contrastive learning in unsupervised domain adaptation (UDA) and propose a novel Category Contrast technique (CaCo) that introduces semantic priors on top of instance discrimination for visual UDA tasks. By viewing instance contrastive learning as a dictionary look-up operation, we construct a semantics-aware dictionary with samples from both source and target domains, where each target sample is assigned a (pseudo) category label based on the category priors of source samples. This enables category contrastive learning (between target queries and the category-level dictionary) for category-discriminative yet domain-invariant feature representations: samples of the same category (from either source or target domain) are pulled closer while those of different categories are simultaneously pushed apart. Extensive UDA experiments on multiple visual tasks (e.g., segmentation, classification, and detection) show that a simple implementation of CaCo achieves superior performance compared with highly optimized state-of-the-art methods. Analytically and empirically, the experiments also demonstrate that CaCo is complementary to existing UDA methods and generalizable to other learning setups such as semi-supervised learning and unsupervised model adaptation.
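In spirit, the category contrast objective described above amounts to an InfoNCE-style cross-entropy over a category-level dictionary: the query is pulled toward the key of its (pseudo) category and pushed away from the keys of all other categories. The following pure-Python sketch illustrates that idea only; it is not the authors' implementation, and all names (e.g., category_contrast_loss, category_keys) are hypothetical.

```python
import math

def category_contrast_loss(query, category_keys, label, temperature=0.07):
    """Illustrative category-level contrastive (InfoNCE-style) loss.

    query         -- feature vector of a target sample (assumed L2-normalized).
    category_keys -- one representative key vector per category, i.e. the
                     semantics-aware dictionary; index = category id.
    label         -- (pseudo) category label assigned to the query.
    Returns the negative log-likelihood of the correct category.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Similarity of the query to every category key, scaled by temperature.
    logits = [dot(query, key) / temperature for key in category_keys]

    # Softmax cross-entropy: minimizing it pulls the query toward its
    # (pseudo) category key and pushes it away from all other category
    # keys simultaneously.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]
```

With two orthogonal unit keys, a query aligned with its labeled category key yields a near-zero loss, while the same query labeled with the other category yields a large loss, reflecting the pull/push behavior described in the abstract.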
