Incremental Learning-to-Learn with Statistical Guarantees

03/21/2018
by Giulia Denevi et al.

In learning-to-learn the goal is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta distribution. In contrast to previous work on batch learning-to-learn, we consider a scenario where tasks are presented sequentially and the algorithm must adapt incrementally to improve its performance on future tasks. Key to this setting is that the algorithm rapidly incorporate new observations into the model as they arrive, without keeping them in memory. We focus on the case where the underlying algorithm is ridge regression parameterized by a positive semidefinite matrix. We propose to learn this matrix by applying a stochastic strategy to minimize the empirical error incurred by ridge regression on future tasks sampled from the meta distribution. We study the statistical properties of the proposed algorithm and prove non-asymptotic bounds on its excess transfer risk, that is, the generalization performance on new tasks from the same meta distribution. We compare our online learning-to-learn approach with a state-of-the-art batch method, both theoretically and empirically.
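The incremental scheme the abstract describes can be sketched in a toy form: each task arrives once, the inner algorithm is ridge regression with a learned regularization matrix, and a single stochastic gradient step updates the meta-parameter before the task is discarded. This sketch is illustrative and deliberately simplified: it uses a diagonal regularizer (positive semidefinite by construction via an exponential parameterization) and a finite-difference gradient, not the paper's full matrix parameterization or exact stochastic update; all function names and the synthetic task generator are assumptions for the example.

```python
import numpy as np

def ridge_with_matrix_reg(X, y, reg_diag):
    """Inner algorithm: ridge regression with a diagonal PSD regularizer.
    Solves min_w ||Xw - y||^2 + w^T diag(reg_diag) w in closed form."""
    A = X.T @ X + np.diag(reg_diag)
    return np.linalg.solve(A, X.T @ y)

def sample_task(rng, d=8, n=40, signal_dims=4, noise=0.1):
    """Illustrative synthetic task stream: only the first `signal_dims`
    coordinates carry signal, so tasks share structure the meta-learner
    can exploit."""
    w = np.zeros(d)
    w[:signal_dims] = rng.normal(size=signal_dims)
    X = rng.normal(size=(n, d))
    y = X @ w + noise * rng.normal(size=n)
    return X, y

def incremental_meta_train(n_tasks=200, d=8, lr=0.05, eps=1e-3, seed=0):
    """One pass over the task stream: each task is used for a single
    stochastic meta-gradient step on log-regularization weights theta,
    then discarded -- no task data is kept in memory."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)  # regularizer = exp(theta) > 0, hence PSD
    losses = []
    for _ in range(n_tasks):
        X, y = sample_task(rng, d=d)
        Xtr, ytr, Xva, yva = X[:20], y[:20], X[20:], y[20:]

        def val_loss(th):
            w = ridge_with_matrix_reg(Xtr, ytr, np.exp(th))
            return np.mean((Xva @ w - yva) ** 2)

        losses.append(val_loss(theta))
        # finite-difference stochastic gradient from this single task
        grad = np.array([
            (val_loss(theta + eps * e) - val_loss(theta - eps * e)) / (2 * eps)
            for e in np.eye(d)
        ])
        theta -= lr * grad
    return theta, np.array(losses)

theta, losses = incremental_meta_train()
```

The per-task memory footprint is constant in the number of tasks seen, which is the defining constraint of the incremental setting; only the meta-parameter `theta` persists across tasks.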

Related research

Meta-learning with Stochastic Linear Bandits (05/18/2020)
We investigate meta-learning procedures in the setting of stochastic lin...

Learning-to-Learn Stochastic Gradient Descent with Biased Regularization (03/25/2019)
We study the problem of learning-to-learn: inferring a learning algorith...

Model-Agnostic Learning to Meta-Learn (12/04/2020)
In this paper, we propose a learning algorithm that enables a model to q...

Generalization Properties of Learning with Random Features (02/14/2016)
We study the generalization properties of ridge regression with random f...

An Information-Theoretic Analysis of the Impact of Task Similarity on Meta-Learning (01/21/2021)
Meta-learning aims at optimizing the hyperparameters of a model class or...

Task Attended Meta-Learning for Few-Shot Learning (06/20/2021)
Meta-learning (ML) has emerged as a promising direction in learning mode...

Future Gradient Descent for Adapting the Temporal Shifting Data Distribution in Online Recommendation Systems (09/02/2022)
One of the key challenges of learning an online recommendation model is ...
