SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation

12/12/2022
by Wanqing Zhu, et al.

As acquiring manual labels on data can be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a label-rich source dataset to an unlabeled target dataset, is gaining increasing popularity. While extensive studies have been devoted to improving model accuracy on the target domain, the important issue of model robustness is neglected. Worse still, conventional adversarial training (AT) methods for improving model robustness are inapplicable under the UDA scenario, since they train models on adversarial examples generated with a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Following the self-training paradigm, SRoUDA starts by pre-training a source model, applying a UDA baseline to the labeled source data and the unlabeled target data together with a newly developed random masked augmentation (RMA), and then alternates between adversarial training of the target model on pseudo-labeled target data and fine-tuning the source model via a meta step. While self-training allows the direct incorporation of AT into UDA, the meta step in SRoUDA further helps mitigate error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, which achieves significant robustness improvements without harming clean accuracy. Code is available at https://github.com/Vision.
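The abstract describes the pipeline only at a high level; the PyTorch sketch below illustrates one plausible reading of it, under stated assumptions. The helper names (random_masked_augmentation, pgd_attack, srouda_step), the PGD hyperparameters, the patch-masking form of RMA, and the simplified scalar feedback used in the meta step are all illustrative assumptions, not the authors' exact formulation.

# Hedged sketch of an SRoUDA-style alternating loop, assuming a PGD-based
# AT inner step and a simplified meta feedback signal. Not the authors' code.
import torch
import torch.nn.functional as F


def random_masked_augmentation(x, patch=16, ratio=0.3):
    """Assumed form of random masked augmentation (RMA): zero out a random
    subset of patches of the input images."""
    b, c, h, w = x.shape
    mask = (torch.rand(b, 1, h // patch, w // patch, device=x.device) > ratio).float()
    mask = F.interpolate(mask, size=(h, w), mode="nearest")
    return x * mask


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard PGD on the cross-entropy loss (assumed AT inner maximization)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def srouda_step(source_model, target_model, src_opt, tgt_opt, x_tgt, x_src, y_src):
    """One alternation: adversarial target-model update on pseudo labels,
    followed by a (simplified) meta step that fine-tunes the source model."""
    # 1) Pseudo-label unlabeled target data with the source model.
    with torch.no_grad():
        pseudo = source_model(x_tgt).argmax(dim=1)

    # 2) Adversarial training of the target model on pseudo-labeled target data.
    x_adv = pgd_attack(target_model, x_tgt, pseudo)
    tgt_loss = F.cross_entropy(target_model(x_adv), pseudo)
    tgt_opt.zero_grad()
    tgt_loss.backward()
    tgt_opt.step()

    # 3) Meta step (simplified stand-in): use the updated target model's loss on
    #    labeled source data as a scalar feedback weight when fine-tuning the
    #    source model, so that future pseudo labels become less noisy.
    with torch.no_grad():
        feedback = F.cross_entropy(target_model(x_src), y_src)
    src_loss = feedback * F.cross_entropy(source_model(x_src), y_src)
    src_opt.zero_grad()
    src_loss.backward()
    src_opt.step()

In the full pipeline described in the abstract, RMA would be applied during the UDA pre-training stage of the source model, and srouda_step would then be iterated over target and source mini-batches.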

Related research

09/02/2021: Adversarial Robustness for Unsupervised Domain Adaptation
Extensive Unsupervised Domain Adaptation (UDA) studies have shown great ...

11/23/2021: A self-training framework for glaucoma grading in OCT B-scans
In this paper, we present a self-training-based framework for glaucoma g...

03/09/2021: ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection
We present a new domain adaptive self-training pipeline, named ST3D, for...

08/08/2023: Unsupervised Camouflaged Object Segmentation as Domain Adaptation
Deep learning for unsupervised image segmentation remains challenging du...

08/04/2023: ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation
Large-scale Pre-Training Vision-Language Model such as CLIP has demonstr...

07/18/2022: Prior Knowledge Guided Unsupervised Domain Adaptation
The waive of labels in the target domain makes Unsupervised Domain Adapt...

03/19/2023: AdaptGuard: Defending Against Universal Attacks for Model Adaptation
Model adaptation aims at solving the domain transfer problem under the c...
