Learning Transferable Parameters for Unsupervised Domain Adaptation

08/13/2021
by Zhongyi Han et al.

Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled target domain under distribution shift. Thanks to the strong representation ability of deep neural networks, recent remarkable achievements in UDA resort to learning domain-invariant features. Intuitively, the hope is that a good feature representation, together with the hypothesis learned from the source domain, can generalize well to the target domain. However, the learning processes of the domain-invariant features and the source hypothesis inevitably involve domain-specific information that degrades the generalizability of UDA models on the target domain. In this paper, motivated by the lottery ticket hypothesis, which shows that only some of a network's parameters are essential for generalization, we find that only a subset of parameters is essential for learning domain-invariant information and generalizing well in UDA. We term such parameters transferable parameters. The remaining parameters tend to fit domain-specific details and often fail to generalize; we term them untransferable parameters. Driven by this insight, we propose Transferable Parameter Learning (TransPar) to reduce the side effects of domain-specific information in the learning process and thus enhance the memorization of domain-invariant information. Specifically, in each training iteration, we divide all parameters into transferable and untransferable ones according to the degree of distribution discrepancy, and then apply separate update rules to the two types of parameters. Extensive experiments on image classification and regression (keypoint detection) tasks show that TransPar outperforms prior art by non-trivial margins. Moreover, experiments demonstrate that TransPar can be integrated into the most popular deep UDA networks and easily extended to handle other data distribution shift scenarios.
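The two-step procedure sketched in the abstract (partition parameters by distribution discrepancy, then apply separate update rules) can be made concrete with a short example. The PyTorch snippet below is a minimal illustration of that idea, not the authors' exact TransPar algorithm: the scoring rule (magnitude of the domain-discrepancy gradient), the `transfer_ratio` hyperparameter, and the choice to zero out updates for untransferable parameters are all assumptions made for illustration.

```python
# Minimal sketch (NOT the authors' exact TransPar algorithm) of splitting
# parameters into transferable / untransferable groups each iteration and
# applying separate update rules. `transfer_ratio` and the gradient-magnitude
# scoring rule are illustrative assumptions.
import torch


def transferable_update(model, task_loss, discrepancy_loss,
                        lr=1e-3, transfer_ratio=0.5):
    """One training step: full updates for transferable parameters,
    suppressed (zeroed) updates for untransferable ones."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients of the domain-discrepancy term indicate how strongly each
    # parameter is tied to domain information.
    disc_grads = torch.autograd.grad(
        discrepancy_loss, params, retain_graph=True, allow_unused=True)
    task_grads = torch.autograd.grad(task_loss, params, allow_unused=True)
    disc_grads = [g if g is not None else torch.zeros_like(p)
                  for g, p in zip(disc_grads, params)]
    task_grads = [g if g is not None else torch.zeros_like(p)
                  for g, p in zip(task_grads, params)]

    # Global top-k cutoff on discrepancy-gradient magnitude: the top
    # `transfer_ratio` fraction of weights is treated as transferable.
    scores = torch.cat([g.abs().flatten() for g in disc_grads])
    k = max(1, int(transfer_ratio * scores.numel()))
    threshold = scores.kthvalue(scores.numel() - k + 1).values

    with torch.no_grad():
        for p, dg, tg in zip(params, disc_grads, task_grads):
            transferable = (dg.abs() >= threshold).float()  # elementwise mask
            # Separate update rules: transferable weights take the usual
            # gradient step; untransferable weights are left unchanged.
            p -= lr * (tg + dg) * transferable
```

The function would be called once per iteration, after computing both losses from the same forward pass, e.g. `transferable_update(model, task_loss, disc_loss)`. Note that the cutoff here is global over all parameters; a per-layer ratio would be an equally plausible reading of "distribution discrepancy degree" in the abstract.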

Related research:

- Exploiting Both Domain-specific and Invariant Knowledge via a Win-win Transformer for Unsupervised Domain Adaptation (11/25/2021)
- Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources (01/04/2022)
- Unsupervised Domain Adaptation via Domain-Adaptive Diffusion (08/26/2023)
- The Role of Embedding Complexity in Domain-invariant Representations (10/13/2019)
- Domain Adaptation via Prompt Learning (02/14/2022)
- Improving Unsupervised Domain Adaptation by Reducing Bi-level Feature Redundancy (12/28/2020)
- Mitigating domain shift in AI-based tuberculosis screening with unsupervised domain adaptation (11/09/2021)