Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent

06/10/2021
by Samira Abnar, et al.

We focus on domain adaptation in the setting where the goal is to shift the model toward the target distribution, rather than to learn domain-invariant representations. It has been shown that, under two assumptions, (a) access to samples from intermediate distributions, and (b) annotation of those samples with the amount of shift from the source distribution, self-training can be successfully applied to gradually shifted samples to adapt the model toward the target distribution. We hypothesize that (a) alone is enough to enable iterative self-training to slowly adapt the model to the target distribution, by making use of an implicit curriculum. When (a) does not hold, we observe that iterative self-training falls short. We propose GIFT, a method that creates virtual samples from intermediate distributions by interpolating representations of examples from the source and target domains. We evaluate iterative self-training on datasets with natural distribution shifts and show that, when applied on top of other domain adaptation methods, it improves the model's performance on the target dataset. An analysis on a synthetic dataset shows that, in the presence of (a), iterative self-training naturally forms a curriculum of samples. Furthermore, we show that when (a) does not hold, GIFT performs better than iterative self-training.
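The core idea, interpolating source and target representations to fabricate intermediate samples and self-training along that path, can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: the nearest-centroid classifier, the nearest-target pairing of examples, and the linear interpolation schedule are all simplifying assumptions made here for brevity.

```python
import numpy as np

def fit_centroids(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class label."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict(centroids, X):
    """Assign each row of X to the index of its closest centroid."""
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def gift_self_train(Xs, ys, Xt, steps=5):
    """Toy GIFT-style loop: at step i, build virtual samples by
    interpolating source representations toward paired target
    representations with coefficient lam = i / steps, pseudo-label
    them with the current classifier, and refit on the result.

    Assumes every class survives pseudo-labeling at each step.
    Pairing each source point with its nearest target point is an
    illustrative simplification, not the paper's procedure.
    """
    # Deterministic pairing: nearest target for each source example.
    pair = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(axis=-1).argmin(axis=1)
    centroids = fit_centroids(Xs, ys)
    for i in range(1, steps + 1):
        lam = i / steps  # curriculum: shift from source (lam=0) to target (lam=1)
        X_virtual = (1 - lam) * Xs + lam * Xt[pair]   # interpolated representations
        pseudo = predict(centroids, X_virtual)        # pseudo-labels from current model
        centroids = fit_centroids(X_virtual, pseudo)  # self-training update
    return centroids
```

By the final step lam = 1, the virtual samples coincide with target examples, so the classifier has been refit on (pseudo-labeled) target data while each intermediate step only asked it to extrapolate a small distance, which is the gradual-adaptation intuition the abstract describes.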

Related research:
- Cost-effective Framework for Gradual Domain Adaptation with Multifidelity (02/09/2022)
- Gradual Domain Adaptation via Normalizing Flows (06/23/2022)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models (06/18/2021)
- Curriculum Manager for Source Selection in Multi-Source Domain Adaptation (07/02/2020)
- Curriculum Reinforcement Learning using Optimal Transport via Gradual Domain Adaptation (10/18/2022)
- Curriculum based Dropout Discriminator for Domain Adaptation (07/24/2019)
- Understanding Self-Training for Gradual Domain Adaptation (02/26/2020)
