Provable Lifelong Learning of Representations

10/27/2021
by Xinyuan Cao, et al.

In lifelong learning, the tasks (or classes) to be learned arrive sequentially over time in arbitrary order. During training, knowledge from earlier tasks can be captured and transferred to later ones to improve sample efficiency. We consider the setting in which every target task can be represented in the span of a small number of unknown linear or nonlinear features of the input data. We propose a provable lifelong learning algorithm that maintains and refines an internal feature representation. We prove that, for any desired accuracy on all tasks, the dimension of this representation remains close to that of the underlying representation, and the resulting sample complexity improves significantly on existing bounds. In the setting of linear features, our algorithm is provably efficient: for input dimension d, m tasks, and k features, the sample complexity needed to reach error ϵ on every task is Õ(dk^{1.5}/ϵ + km/ϵ). We also prove a matching lower bound for any lifelong learning algorithm that uses a single-task learner as a black box. Finally, we complement our analysis with an empirical study.
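
The abstract describes the algorithm only at a high level, so the following is a minimal illustrative sketch of the linear-feature setting under stated assumptions, not the authors' actual method. It uses least squares as the black-box single-task learner and grows a shared feature basis only when a new task's weight vector falls substantially outside the current span, which is the mechanism by which the representation's dimension can stay close to the true number of features k. All names here (LifelongLinearLearner, fit_task, tol) are hypothetical.

```python
import numpy as np

class LifelongLinearLearner:
    """Illustrative sketch: maintain a shared linear feature basis and
    refine (grow) it only when a task cannot be fit within its span."""

    def __init__(self, dim, tol):
        self.dim = dim                    # input dimension d
        self.tol = tol                    # residual tolerance (tied to target error)
        self.basis = np.zeros((0, dim))   # learned feature directions, one per row

    def _project(self, w):
        """Component of w lying in the span of the current basis."""
        if self.basis.shape[0] == 0:
            return np.zeros_like(w)
        coeffs, *_ = np.linalg.lstsq(self.basis.T, w, rcond=None)
        return self.basis.T @ coeffs

    def fit_task(self, X, y):
        """Fit one task with a black-box single-task learner (least squares),
        then refine the shared representation if the task lies outside it."""
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        residual = w - self._project(w)
        r = np.linalg.norm(residual)
        if r > self.tol:
            # Task is not (approximately) in the current span: add a direction.
            self.basis = np.vstack([self.basis, residual / r])
        if self.basis.shape[0] == 0:
            return np.zeros(0)
        # Express the task as coefficients over the (possibly refined) basis.
        coeffs, *_ = np.linalg.lstsq(self.basis.T, w, rcond=None)
        return coeffs
```

A quick synthetic check of the intended behavior: when tasks arrive sequentially and each lies in the span of k unknown features, the maintained basis should stop growing once it reaches roughly k directions.

```python
rng = np.random.default_rng(0)
d, k, n = 50, 5, 200
U = rng.standard_normal((k, d))            # unknown ground-truth feature matrix
learner = LifelongLinearLearner(dim=d, tol=1e-6)
for _ in range(20):                        # tasks arrive sequentially
    w_star = rng.standard_normal(k) @ U    # each task lies in the span of k features
    X = rng.standard_normal((n, d))
    y = X @ w_star                         # noiseless labels, for simplicity
    learner.fit_task(X, y)
print(learner.basis.shape[0])              # stays close to k (here, exactly 5)
```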

