Semantic-aware Message Broadcasting for Efficient Unsupervised Domain Adaptation

12/06/2022
by Xin Li, et al.

Vision transformers have demonstrated great potential in a wide range of vision tasks. However, they inevitably suffer from poor generalization when a distribution shift occurs at test time (i.e., on out-of-distribution data). To mitigate this issue, we propose a novel method, Semantic-aware Message Broadcasting (SAMB), which enables more informative and flexible feature alignment for unsupervised domain adaptation (UDA). Specifically, we study the attention module of the vision transformer and observe that aligning features through a single global class token lacks flexibility: the class token interacts with all image tokens in the same manner and ignores the rich semantics of different regions. In this paper, we aim to enrich the alignment features by enabling semantic-aware adaptive message broadcasting. To this end, we introduce a set of learned group tokens as nodes that aggregate global information from all image tokens, while encouraging different group tokens to adaptively focus their message broadcasting on different semantic regions. In this way, the group tokens learn more informative and diverse features for effective domain alignment. Moreover, we systematically study the effects of adversarial-based feature alignment (ADA) and pseudo-label based self-training (PST) on UDA. We find that a simple two-stage training strategy that combines ADA and PST further improves the adaptation capability of the vision transformer. Extensive experiments on DomainNet, OfficeHome, and VisDA-2017 demonstrate the effectiveness of our methods for UDA.
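
Below is a minimal PyTorch sketch of the group-token aggregation-and-broadcasting idea described in the abstract. The module name (GroupTokenBroadcast), the number of group tokens, and the use of standard multi-head attention for both the aggregation and broadcasting steps are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class GroupTokenBroadcast(nn.Module):
    """Sketch of semantic-aware message broadcasting: learned group tokens first
    aggregate global information from image tokens, then broadcast their messages
    back so that different regions can attend to different group tokens."""

    def __init__(self, dim: int, num_groups: int = 8, num_heads: int = 8):
        super().__init__()
        # Learnable group tokens, shared across the batch (shape: 1 x G x dim).
        self.group_tokens = nn.Parameter(torch.randn(1, num_groups, dim) * 0.02)
        # Aggregation: group tokens are queries, image tokens are keys/values.
        self.aggregate = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Broadcasting: image tokens are queries, group tokens are keys/values.
        self.broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens: torch.Tensor):
        # image_tokens: (B, N, dim) patch embeddings from a vision transformer block.
        B = image_tokens.size(0)
        groups = self.group_tokens.expand(B, -1, -1)

        # 1) Each group token aggregates global information from all image tokens.
        groups, _ = self.aggregate(groups, image_tokens, image_tokens)

        # 2) Group messages are broadcast back to the image tokens; the attention
        #    weights let different regions listen to different group tokens.
        messages, attn = self.broadcast(image_tokens, groups, groups)
        image_tokens = self.norm(image_tokens + messages)

        # The updated group tokens could serve as the features used for domain alignment.
        return image_tokens, groups, attn


if __name__ == "__main__":
    x = torch.randn(2, 196, 768)             # e.g. ViT-B/16 patch tokens
    block = GroupTokenBroadcast(dim=768)
    tokens, group_feats, attn = block(x)
    print(tokens.shape, group_feats.shape)   # (2, 196, 768) (2, 8, 768)
```

In this sketch, the updated group tokens play the role that the abstract assigns to the alignment features: because each group token can specialize to different semantic regions, the set of group features is richer than a single global class token.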
