Combating Domain Shift with Self-Taught Labeling

07/08/2020
by Jian Liang, et al.

We present a novel method to combat domain shift when adapting classification models trained on one domain to new domains with few or no target labels. A prevailing paradigm in the existing literature is to learn domain-invariant feature representations so that a classifier learned on the source features generalizes well to the target features. However, such a classifier is inevitably biased toward the source domain because it overlooks the structure of the target data. Instead, we propose Self-Taught Labeling (SeTL), a new regularization approach that learns an auxiliary target-specific classifier for unlabeled data. During adaptation, this classifier teaches the target domain itself by providing unbiased, accurate pseudo labels. In particular, for each target sample we employ a memory bank to store its feature along with its soft label from the domain-shared classifier. We then develop a non-parametric neighborhood aggregation strategy to generate new pseudo labels as well as confidence weights for the unlabeled data. Although it uses only the standard classification objective, SeTL significantly outperforms existing domain alignment techniques on a wide variety of domain adaptation benchmarks. We expect SeTL to provide a new perspective on addressing domain shift and to inspire future research in domain adaptation and transfer learning.
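
The two ingredients described in the abstract, a memory bank of target features with soft predictions and a non-parametric nearest-neighbor aggregation that yields pseudo labels plus confidence weights, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' released implementation; the class name MemoryBank, the momentum value, and the choice of k nearest neighbors are assumptions made for illustration only.

import torch
import torch.nn.functional as F

class MemoryBank:
    """Stores one feature vector and one soft label per target sample."""

    def __init__(self, num_samples, feature_dim, num_classes, momentum=0.9):
        self.features = torch.zeros(num_samples, feature_dim)
        self.soft_labels = torch.ones(num_samples, num_classes) / num_classes
        self.momentum = momentum

    def update(self, indices, feats, probs):
        # Exponential moving average keeps the bank stable across iterations.
        feats = F.normalize(feats, dim=1)
        self.features[indices] = (
            self.momentum * self.features[indices] + (1 - self.momentum) * feats
        )
        self.soft_labels[indices] = (
            self.momentum * self.soft_labels[indices] + (1 - self.momentum) * probs
        )

    def aggregate(self, indices, feats, k=5):
        """Non-parametric neighborhood aggregation: each sample's pseudo label
        is the average soft label of its k nearest neighbors in the bank."""
        feats = F.normalize(feats, dim=1)
        sims = feats @ self.features.t()                    # cosine similarity to all stored features
        sims[torch.arange(len(indices)), indices] = -1.0    # exclude each sample itself
        _, nn_idx = sims.topk(k, dim=1)                     # k nearest neighbors per sample
        pseudo = self.soft_labels[nn_idx].mean(dim=1)       # aggregated soft pseudo label
        conf, hard = pseudo.max(dim=1)                      # confidence weight and hard pseudo label
        return hard, conf

In a training loop, the aggregated hard labels and confidence weights would then weight a standard cross-entropy loss on each target batch, consistent with the paper's claim that only the classification objective is needed.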


Related research

07/17/2019 · Multi-Purposing Domain Adaptation Discriminators for Pseudo Labeling Confidence
Often domain adaptation is performed using a discriminator (domain class...

06/17/2020 · Self-training Avoids Using Spurious Features Under Domain Shift
In unsupervised domain adaptation, existing theory focuses on situations...

03/20/2021 · Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task kn...

08/28/2019 · Heterogeneous Domain Adaptation via Soft Transfer Network
Heterogeneous domain adaptation (HDA) aims to facilitate the learning ta...

11/23/2021 · A self-training framework for glaucoma grading in OCT B-scans
In this paper, we present a self-training-based framework for glaucoma g...

02/10/2023 · Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features
Conventional approaches to robustness try to learn a model based on caus...

03/05/2021 · Cycle Self-Training for Domain Adaptation
Mainstream approaches for unsupervised domain adaptation (UDA) learn dom...
