Algorithms and Theory for Supervised Gradual Domain Adaptation

04/25/2022
by   Jing Dong, et al.

The phenomenon of data distributions evolving over time has been observed in a range of applications, calling for adaptive learning algorithms. We thus study the problem of supervised gradual domain adaptation, where labeled data from shifting distributions are available to the learner along the trajectory, and the goal is to learn a classifier on a target data distribution of interest. Under this setting, we provide the first generalization upper bound on the learning error under mild assumptions. Our results are algorithm-agnostic, hold for a range of loss functions, and depend only linearly on the averaged learning error across the trajectory. This is a significant improvement over the previous upper bound for unsupervised gradual domain adaptation, where the learning error on the target domain depends exponentially on the initial error on the source domain. Compared with the offline setting of learning from multiple domains, our results also suggest the potential benefit of the temporal structure among the domains for adapting to the target one. Our theoretical results further imply that learning proper representations across the domains effectively mitigates the learning error. Motivated by these insights, we propose a min-max learning objective that learns the representation and the classifier simultaneously. Experimental results on both semi-synthetic and large-scale real datasets corroborate our findings and demonstrate the effectiveness of our objective.
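To make the supervised gradual setting concrete, here is a minimal NumPy-only sketch: a semi-synthetic task of two Gaussian classes whose means rotate across a trajectory of domains, with a linear classifier fine-tuned sequentially on each domain's labeled data. The rotating-Gaussians construction and the warm-started training loop are illustrative assumptions for this sketch, not the paper's actual min-max objective or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(theta, n=200, noise=0.3):
    # Two Gaussian classes whose means rotate with angle theta (radians).
    mean = np.array([np.cos(theta), np.sin(theta)])
    X = np.vstack([rng.normal(0, noise, (n, 2)) + mean,
                   rng.normal(0, noise, (n, 2)) - mean])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def fit_logreg(X, y, w, lr=0.1, epochs=50):
    # Full-batch gradient descent on logistic loss, warm-started at w.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

T = 10
thetas = np.linspace(0.0, np.pi / 2, T)  # gradual shift: 0 -> 90 degrees
w = np.zeros(2)
for theta in thetas:                     # labeled data at every step
    X, y = make_domain(theta)
    w = fit_logreg(X, y, w)              # adapt to the current domain

X_tgt, y_tgt = make_domain(thetas[-1])   # fresh sample from the target domain
acc = np.mean((X_tgt @ w > 0) == (y_tgt == 1))
```

Because each intermediate domain is labeled, the per-step error stays small, and the final target error accumulates only linearly along the trajectory, mirroring the bound stated above.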


