Two-phase Pseudo Label Densification for Self-training based Domain Adaptation

12/09/2020
by Inkyu Shin, et al.

Recently, deep self-training approaches have emerged as a powerful solution to unsupervised domain adaptation. The self-training scheme iteratively processes the target data: it generates target pseudo labels and retrains the network on them. However, since only confident predictions are taken as pseudo labels, existing self-training approaches inevitably produce sparse pseudo labels in practice. We argue this is critical because the resulting insufficient training signal leads to a suboptimal, error-prone model. To tackle this problem, we propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD. In the first phase, we use sliding-window voting to propagate the confident predictions, exploiting the intrinsic spatial correlations in the images. In the second phase, we perform a confidence-based easy-hard classification. For the easy samples, we employ their full pseudo labels; for the hard ones, we instead adopt adversarial learning to enforce hard-to-easy feature alignment. To ease the training process and avoid noisy predictions, we introduce a bootstrapping mechanism into the original self-training loss. We show that the proposed TPLD can be easily integrated into existing self-training based approaches and improves their performance significantly. Combined with the recently proposed CRST self-training framework, we achieve new state-of-the-art results on two standard UDA benchmarks.
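The phase-one idea of propagating confident predictions by sliding-window voting can be illustrated with a minimal sketch. This is not the paper's exact procedure: the function name, the `IGNORE` sentinel, the window size, and the `min_votes` threshold are all illustrative assumptions. Each unlabeled pixel receives the confidence-weighted majority label of the already-confident pixels in its local window, which densifies the pseudo-label map using spatial correlation.

```python
import numpy as np

IGNORE = -1  # illustrative sentinel for pixels left unlabeled by thresholding


def densify_by_voting(labels, conf, window=3, min_votes=2):
    """Sketch of sliding-window voting for pseudo-label densification.

    labels: (H, W) int array of pseudo labels, IGNORE where the prediction
            was not confident enough to keep.
    conf:   (H, W) float array of prediction confidences.
    An IGNORE pixel takes the confidence-weighted majority label of the
    confident pixels inside its (window x window) neighbourhood, provided
    at least `min_votes` confident neighbours exist.
    """
    h, w = labels.shape
    r = window // 2
    out = labels.copy()
    for i in range(h):
        for j in range(w):
            if labels[i, j] != IGNORE:
                continue  # already has a confident pseudo label
            patch = labels[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            cpatch = conf[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            mask = patch != IGNORE
            if mask.sum() < min_votes:
                continue  # too few confident neighbours: stay unlabeled
            votes = {}
            for lab, c in zip(patch[mask], cpatch[mask]):
                votes[lab] = votes.get(lab, 0.0) + c
            out[i, j] = max(votes, key=votes.get)
    return out
```

On a toy 3x3 map where the centre pixel is unlabeled, the voting fills it with the surrounding majority class; in practice one would run this over the network's soft predictions per class rather than a hard label map.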


Related research

- 08/26/2019 · Confidence Regularized Self-Training
- 08/26/2022 · Constraining Pseudo-label in Self-training Unsupervised Domain Adaptation with Energy-based Model
- 10/18/2018 · Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training
- 01/01/2021 · Energy-constrained Self-training for Unsupervised Domain Adaptation
- 04/23/2021 · STRUDEL: Self-Training with Uncertainty Dependent Label Refinement across Domains
- 10/21/2021 · RefRec: Pseudo-labels Refinement via Shape Reconstruction for Unsupervised 3D Domain Adaptation
- 06/10/2022 · Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion
