When Source-Free Domain Adaptation Meets Learning with Noisy Labels

01/31/2023
by Li Yi, et al.

Recent state-of-the-art source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, succeeding in adapting knowledge from a source domain to an unlabeled target domain without access to the private source data. However, existing methods rely on pseudo-labels generated by source models, which can be noisy due to domain shift. In this paper, we study SFDA from the perspective of learning with label noise (LLN). Unlike the label noise in the conventional LLN scenario, we prove that the label noise in SFDA follows a different distribution assumption. We further prove that this difference renders existing LLN methods, which rely on their distribution assumptions, unable to address the label noise in SFDA. Empirical evidence confirms that applying existing LLN methods to the SFDA problem yields only marginal improvements. On the other hand, despite this fundamental difference between the label noise in the two scenarios, we demonstrate theoretically that the early-time training phenomenon (ETP), previously observed in conventional label-noise settings, can also be observed in the SFDA problem. Extensive experiments demonstrate significant improvements over existing SFDA algorithms by leveraging ETP to address the label noise in SFDA.
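The abstract gives no implementation details, but the idea of leveraging the early-time training phenomenon is commonly realized in the LLN literature with an early-learning-style regularizer: a momentum average of past predictions serves as a target, and the loss penalizes drift away from those early-time predictions (which tend to agree with clean labels before the network memorizes noise). Below is a minimal NumPy sketch of that generic mechanism; the class and parameter names (`EarlyLearningRegularizer`, `beta`, `lam`) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class EarlyLearningRegularizer:
    """Keeps a momentum-averaged target of past predictions per sample and
    returns a penalty that pulls current predictions toward those
    early-time targets (in the spirit of early-learning regularization)."""

    def __init__(self, num_samples, num_classes, beta=0.7, lam=3.0):
        # Start from uniform targets; beta is the momentum, lam the weight.
        self.targets = np.full((num_samples, num_classes), 1.0 / num_classes)
        self.beta, self.lam = beta, lam

    def __call__(self, idx, logits):
        p = softmax(logits)
        # Update the running targets for the samples in this batch.
        self.targets[idx] = self.beta * self.targets[idx] + (1 - self.beta) * p
        # log(1 - <p, target>) decreases as the inner product grows, so
        # minimizing it keeps predictions aligned with early-time targets.
        inner = np.clip((p * self.targets[idx]).sum(axis=1), 1e-8, 1 - 1e-8)
        return self.lam * np.mean(np.log(1.0 - inner))
```

In practice such a term would be added to the (noisy) pseudo-label cross-entropy loss; its gradient counteracts memorization of wrong pseudo-labels once predictions start to drift from the early-time consensus.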

Related research:

- Divergence Optimization for Noisy Universal Domain Adaptation (04/01/2021): Universal domain adaptation (UniDA) has been proposed to transfer knowle...
- Transfer Learning with Label Noise (07/31/2017): Transfer learning aims to improve learning in the target domain with lim...
- C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation (03/30/2023): Unsupervised domain adaptation (UDA) approaches focus on adapting models...
- Robust Target Training for Multi-Source Domain Adaptation (10/04/2022): Given multiple labeled source domains and a single target domain, most e...
- Towards Accurate and Robust Domain Adaptation under Noisy Environments (04/27/2020): In non-stationary environments, learning machines usually confront the d...
- Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors (05/28/2022): Domain Adaptation of Black-box Predictors (DABP) aims to learn a model o...
- Visual Domain Adaptation with Manifold Embedded Distribution Alignment (07/19/2018): Visual domain adaptation aims to learn robust classifiers for the target...
