Multiple Source Adaptation and the Rényi Divergence

05/09/2012
by Yishay Mansour, et al.

This paper presents a novel theoretical study of the general problem of multiple source adaptation using the notion of Rényi divergence. Our results build on our previous work [12], but significantly broaden the scope of that work in several directions. We extend previous multiple source loss guarantees based on distribution weighted combinations to arbitrary target distributions P, not necessarily mixtures of the source distributions, analyze both known and unknown target distribution cases, and prove a lower bound. We further extend our bounds to deal with the case where the learner receives an approximate distribution for each source instead of the exact one, and show that similar loss guarantees can be achieved depending on the divergence between the approximate and true distributions. We also analyze the case where the labeling functions of the source domains are somewhat different. Finally, we report the results of experiments with both an artificial data set and a sentiment analysis task, showing the performance benefits of the distribution weighted combinations and the quality of our bounds based on the Rényi divergence.
