A Theory of Output-Side Unsupervised Domain Adaptation

03/05/2017
by Tomer Galanti et al.
When learning a mapping from an input space to an output space, the assumption that the sample distribution of the training data matches that of the test data is often violated. Unsupervised domain adaptation methods adapt the learned function to correct for this shift. Previous work has focused on utilizing unlabeled samples from the target distribution. We consider the complementary problem in which the unlabeled samples are given post mapping, i.e., we are given the outputs of the mapping applied to unknown samples from the shifted domain. Two other variants are also studied: the two-sided version, in which unlabeled samples are given from both the input and the output spaces, and the recently formalized Domain Transfer problem. In all cases, we derive generalization bounds that employ discrepancy terms.
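In the domain adaptation literature, discrepancy terms of the kind the abstract refers to typically measure how differently two distributions score pairs of hypotheses: disc(A, B) = sup over (h, h') of |E_A[loss(h(x), h'(x))] − E_B[loss(h(x), h'(x))]|. The sketch below is not the paper's construction, just a minimal empirical illustration over a small finite hypothesis class with 0-1 loss; all function and variable names are hypothetical.

```python
import numpy as np

def empirical_discrepancy(xs_a, xs_b, hypotheses, loss):
    """Empirical discrepancy between samples xs_a and xs_b:
    max over hypothesis pairs (h, h2) of the absolute gap between
    the mean pairwise loss on each sample."""
    best = 0.0
    for h in hypotheses:
        for h2 in hypotheses:
            la = np.mean([loss(h(x), h2(x)) for x in xs_a])
            lb = np.mean([loss(h(x), h2(x)) for x in xs_b])
            best = max(best, abs(la - lb))
    return best

# Toy setup: 1-D threshold classifiers, 0-1 loss, a mean-shifted "target".
rng = np.random.default_rng(0)
xs_src = rng.normal(0.0, 1.0, size=200)   # source-like sample
xs_tgt = rng.normal(1.0, 1.0, size=200)   # shifted sample
hyps = [lambda x, t=t: float(x > t) for t in np.linspace(-2.0, 2.0, 9)]
zero_one = lambda a, b: float(a != b)

d = empirical_discrepancy(xs_src, xs_tgt, hyps, zero_one)
```

With 0-1 loss the quantity lies in [0, 1], is zero when both samples coincide, and grows with the shift between the two distributions, which is what makes it usable as a data-dependent term in a generalization bound.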
