On the Theory of Transfer Learning: The Importance of Task Diversity

06/20/2020
by Nilesh Tripuraneni, et al.

We provide new statistical guarantees for transfer learning via representation learning, in which transfer is achieved by learning a feature representation shared across different tasks. This shared representation enables learning new tasks with far less data than is required to learn them in isolation. Formally, we consider t+1 tasks parameterized by functions of the form f_j ∘ h in a general function class F∘H, where each f_j is a task-specific function in F and h is the shared representation in H. Letting C(·) denote the complexity measure of a function class, we show that for diverse training tasks: (1) the sample complexity needed to learn the shared representation across the first t training tasks scales as C(H) + t C(F), despite no explicit access to a signal from the feature representation; and (2) given an accurate estimate of the representation, the sample complexity needed to learn a new task scales only with C(F). Our results rest on a new general notion of task diversity, applicable to models with general tasks, features, and losses, as well as a novel chain rule for Gaussian complexities. Finally, we demonstrate the utility of our general framework in several models of importance in the literature.
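The two-phase picture in the abstract can be illustrated in the simplest instance of the framework: linear tasks f_j(v) = w_j^T v over a shared linear representation h(x) = B^T x. The sketch below is an assumption-laden toy, not the estimator analyzed in the paper: it recovers the shared subspace by a common SVD heuristic (top singular subspace of stacked per-task regression coefficients), then fits a new task using only O(k) samples in the learned k-dimensional feature space, mirroring the C(H) + t C(F) versus C(F) split. All variable names (`B_hat`, `w_hat`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, t, n = 30, 3, 20, 100  # ambient dim, feature dim, training tasks, samples per task

# Ground truth: shared representation h(x) = B^T x with orthonormal B in R^{d x k},
# and task-specific linear heads w_j in R^k.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
W = rng.standard_normal((k, t))

# Phase 1: learn the representation from the t training tasks.
# Fit an ordinary least-squares regression per task and take the top-k left
# singular subspace of the stacked coefficient matrix -- a common heuristic
# for linear multi-task models (not the paper's estimator).
coefs = np.empty((d, t))
for j in range(t):
    X = rng.standard_normal((n, d))
    y = X @ (B @ W[:, j]) + 0.1 * rng.standard_normal(n)
    coefs[:, j] = np.linalg.lstsq(X, y, rcond=None)[0]
B_hat = np.linalg.svd(coefs, full_matrices=False)[0][:, :k]

# Phase 2: a new task only needs ~C(F) samples -- regress in the learned
# k-dimensional feature space rather than the full d-dimensional input space.
w_new = rng.standard_normal(k)
X_new = rng.standard_normal((2 * k, d))      # few-shot: O(k) samples, far fewer than d
y_new = X_new @ (B @ w_new)
Z = X_new @ B_hat                            # learned features h_hat(x)
w_hat = np.linalg.lstsq(Z, y_new, rcond=None)[0]

# Subspace alignment: smallest singular value of B_hat^T B is near 1
# when the learned and true representations span nearly the same subspace.
alignment = np.linalg.svd(B_hat.T @ B, compute_uv=False).min()
print(f"subspace alignment: {alignment:.3f}")
```

Note that the diversity of the training heads matters here exactly as in the abstract: if the columns of `W` spanned fewer than k directions, the stacked coefficients would not expose all of col(B), and the SVD step could not recover the full representation.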


