Theoretical Guarantees of Transfer Learning

by   Zirui Wang, et al.
Carnegie Mellon University

Transfer learning has proven effective when labeled data in the target domain is scarce. Many works have developed successful algorithms and empirically observed a positive transfer effect, in which source knowledge improves the target generalization error. However, theoretical analysis of transfer learning is more challenging due to the nature of the problem, and it is consequently less studied. In this report, we survey theoretical work on transfer learning and summarize key guarantees that establish its effectiveness. The bounds are derived using model complexity and learning-algorithm stability. As we shall see, these works exhibit a trade-off between tight bounds and restrictive assumptions. Moreover, we prove a new generalization bound for the multi-source transfer learning problem using VC-theory, which is more informative than the one proved in previous work.
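To give a flavor of the VC-style guarantees the abstract refers to, the following is a sketch of the classic multi-source domain adaptation bound in the style of Ben-David et al. (2010); it is an illustrative standard result, not the new bound proved in this report, and the symbols (source weights α_j, divergence d_{HΔH}, joint-optimal error λ_α) are assumptions of this sketch:

```latex
% Sketch of a standard weighted multi-source generalization bound
% (Ben-David et al.-style; illustrative, not the report's new result).
% For hypothesis class H, k source domains with weights
% \alpha_j \ge 0, \sum_j \alpha_j = 1, and target domain T:
\begin{equation}
  \epsilon_T(h) \;\le\; \sum_{j=1}^{k} \alpha_j \,\epsilon_j(h)
  \;+\; \frac{1}{2}\, d_{H\Delta H}\!\big(D_\alpha, D_T\big)
  \;+\; \lambda_\alpha ,
\end{equation}
% where \epsilon_j(h) is the error of h on source j,
% D_\alpha = \sum_j \alpha_j D_j is the \alpha-weighted source mixture,
% d_{H\Delta H} measures the divergence between the mixture and the
% target distribution, and \lambda_\alpha is the error of the best
% hypothesis jointly on the weighted sources and the target.
```

Bounds of this form make the trade-off mentioned above concrete: the first term is controlled by source sample size and model complexity, while the divergence and λ_α terms encode assumptions about how related the domains are.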
