Safe Self-Refinement for Transformer-based Domain Adaptation

04/16/2022
by Tao Sun, et al.

Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain. It is a challenging problem, especially when a large domain gap lies between the source and target domains. In this paper we propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects. First, encouraged by the success of vision transformers in various vision tasks, we arm SSRT with a vision transformer backbone. We find that the combination of a vision transformer with simple adversarial adaptation surpasses the best reported Convolutional Neural Network (CNN)-based results on the challenging DomainNet benchmark, showing its strong transferable feature representation. Second, to reduce the risk of model collapse and improve the effectiveness of knowledge transfer between domains with large gaps, we propose a Safe Self-Refinement strategy. Specifically, SSRT utilizes predictions on perturbed target-domain data to refine the model. Since the model capacity of a vision transformer is large and predictions in such challenging tasks can be noisy, a safe training mechanism is designed to adaptively adjust the learning configuration. Extensive evaluations are conducted on several widely tested UDA benchmarks, and SSRT consistently achieves the best performance, including 85.43% on Office-Home, 88.76% on VisDA-2017, and 45.2% on DomainNet.
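To make the two ingredients concrete, below is a minimal PyTorch sketch of (a) a self-refinement objective on perturbed target data and (b) a simplified safe-training guard. All names, the noise-based latent perturbation, the linear ramp, and the entropy-based collapse check are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
# Minimal sketch of self-refinement on unlabeled target data with a safe-training
# guard. Module names and hyperparameters are hypothetical, chosen for clarity.
import copy
import torch
import torch.nn.functional as F

def self_refinement_loss(backbone, classifier, x_target, noise_scale=0.1):
    """KL divergence between predictions on clean and perturbed target features."""
    feat = backbone(x_target)                         # latent features from the ViT backbone
    with torch.no_grad():
        p_clean = F.softmax(classifier(feat), dim=1)  # "teacher" prediction, no gradient
    perturbed = feat + noise_scale * torch.randn_like(feat)  # assumed random perturbation
    log_p_pert = F.log_softmax(classifier(perturbed), dim=1)
    return F.kl_div(log_p_pert, p_clean, reduction="batchmean")

class SafeTrainer:
    """Toy safe-training mechanism: ramp the refinement weight up slowly and,
    if predictions collapse (low diversity), restore a snapshot and restart."""
    def __init__(self, model, max_weight=1.0, ramp_steps=1000, diversity_floor=0.1):
        self.model = model
        self.max_weight = max_weight
        self.ramp_steps = ramp_steps
        self.diversity_floor = diversity_floor
        self.step_in_ramp = 0
        self.snapshot = copy.deepcopy(model.state_dict())

    def weight(self):
        # Linear ramp from 0 to max_weight; restarted after every detected collapse.
        return self.max_weight * min(1.0, self.step_in_ramp / self.ramp_steps)

    def update(self, probs):
        # Diversity proxy: entropy of the mean prediction over the batch.
        mean_p = probs.mean(dim=0)
        diversity = -(mean_p * mean_p.clamp_min(1e-8).log()).sum().item()
        if diversity < self.diversity_floor:
            self.model.load_state_dict(self.snapshot)  # roll back to the last safe state
            self.step_in_ramp = 0                      # restart the ramp conservatively
        else:
            self.snapshot = copy.deepcopy(self.model.state_dict())
            self.step_in_ramp += 1
```

In a full training loop under these assumptions, the total objective would combine the usual source-domain cross-entropy and an adversarial domain-alignment loss with `trainer.weight() * self_refinement_loss(...)`, calling `trainer.update(probs)` once per step so that a detected collapse rolls the model back and re-ramps the refinement weight.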

Related research

04/03/2020 · Unsupervised Domain Adaptation with Progressive Domain Augmentation
Domain adaptation aims to exploit a label-rich source domain for learnin...

08/12/2021 · TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to transfer the knowledge lear...

04/17/2023 · Heterogeneous Domain Adaptation with Positive and Unlabeled Data
Heterogeneous unsupervised domain adaptation (HUDA) is the most challeng...

01/15/2022 · Domain Adaptation via Bidirectional Cross-Attention Transformer
Domain Adaptation (DA) aims to leverage the knowledge learned from a sou...

09/16/2020 · Transformer Based Multi-Source Domain Adaptation
In practical machine learning settings, the data on which a model must m...

07/16/2023 · Domain Generalisation with Bidirectional Encoder Representations from Vision Transformers
Domain generalisation involves pooling knowledge from source domain(s) i...

03/23/2023 · Patch-Mix Transformer for Unsupervised Domain Adaptation: A Game Perspective
Endeavors have been recently made to leverage the vision transformer (Vi...
