Transformer-Based Source-Free Domain Adaptation

05/28/2021
by Guanglei Yang, et al.

In this paper, we study source-free domain adaptation (SFDA), in which the source data are unavailable during target adaptation. Previous SFDA works focus mainly on aligning the cross-domain distributions, but they ignore the generalization ability of the pretrained source model, which largely determines the initial target outputs that are vital to the target adaptation stage. To address this, we make the interesting observation that model accuracy is highly correlated with whether attention is focused on the objects in an image. Accordingly, we propose a generic and effective Transformer-based framework, named TransDA, for learning a generalized model for SFDA. Specifically, we apply a Transformer as the attention module and inject it into a convolutional network. The model is thereby encouraged to turn its attention towards object regions, which effectively improves its generalization ability on the target domain. Moreover, a novel self-supervised knowledge distillation approach is proposed to adapt the Transformer with target pseudo-labels, further encouraging the network to focus on object regions. Experiments on three domain adaptation settings, including closed-set, partial-set, and open-set adaptation, demonstrate that TransDA greatly improves adaptation accuracy and produces state-of-the-art results. The source code and trained models are available at https://github.com/ygjwd12345/TransDA.
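To make the two ingredients of the abstract concrete, the sketch below illustrates (a) a Transformer encoder injected on top of a convolutional backbone's spatial feature map and (b) a pseudo-label distillation loss. This is a rough PyTorch assumption on my part, not the authors' released implementation (see the linked repository for that): the ResNet-50 backbone, the single encoder layer, and the `self_distillation_loss` helper with its `alpha`/`T` parameters are illustrative choices.

```python
# Minimal sketch of the TransDA idea (assumptions noted above, not the
# authors' exact code): Transformer attention over CNN features, plus a
# self-distillation loss driven by target pseudo-labels.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class TransDASketch(nn.Module):
    def __init__(self, num_classes: int, d_model: int = 2048,
                 nhead: int = 8, num_layers: int = 1):
        super().__init__()
        # Convolutional backbone; in practice the pretrained source model
        # would be loaded here instead of random weights.
        backbone = models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Transformer encoder over the flattened spatial tokens, encouraging
        # attention to concentrate on object regions.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)                   # (B, C, H, W)
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W, C) spatial tokens
        tokens = self.transformer(tokens)      # self-attention over locations
        pooled = tokens.mean(dim=1)            # (B, C) global descriptor
        return self.classifier(pooled)


def self_distillation_loss(student_logits, teacher_logits, pseudo_labels,
                           alpha: float = 0.5, T: float = 2.0):
    """Hypothetical loss: cross-entropy on target pseudo-labels mixed with a
    KL term distilling the teacher's softened predictions."""
    ce = F.cross_entropy(student_logits, pseudo_labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kd
```

In a full pipeline the teacher logits would come from a slowly updated copy of the network (e.g. an EMA teacher) and the pseudo-labels from clustering or thresholding target predictions; the paper's exact pseudo-labeling and distillation scheme may differ, so both are left abstract here.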

