Adaptive Gradient-Based Meta-Learning Methods

06/06/2019
by Mikhail Khodak, et al.

We build a theoretical framework for understanding practical meta-learning methods that enables the integration of sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve their training and meta-test-time performance on standard problems in few-shot and federated deep learning.
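
The abstract describes learning the task-similarity adaptively inside a gradient-based meta-learning loop. As a rough illustration of that idea (a minimal sketch, not the paper's algorithm), the code below runs gradient descent within each task from a shared initialization, moves that initialization toward each task's solution (a Reptile-style meta-update), and rescales the within-task step size from the observed spread of task solutions. The Task class, the update rules, and all constants are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's exact algorithm): gradient-based meta-learning
# in which both the shared initialization and the within-task step size are
# adapted across tasks. Everything here is an illustrative assumption.
import numpy as np

class Task:
    """Toy task: minimize ||w - w_star||^2 for a task-specific optimum w_star."""
    def __init__(self, w_star):
        self.w_star = np.asarray(w_star, dtype=float)

    def grad(self, w):
        return 2.0 * (w - self.w_star)

def run_task(phi, eta, task, inner_steps=5):
    """Within-task gradient descent starting from the meta-initialization phi."""
    w = phi.copy()
    for _ in range(inner_steps):
        w -= eta * task.grad(w)
    return w

def meta_train(tasks, dim, meta_lr=0.1, eta0=0.5):
    phi = np.zeros(dim)      # shared initialization, updated after every task
    eta = eta0               # within-task step size, adapted from task data
    sq_dist_sum = 0.0
    for t, task in enumerate(tasks, start=1):
        w = run_task(phi, eta, task)
        # Reptile-style meta-update: move the initialization toward the task solution.
        phi += meta_lr * (w - phi)
        # Rescale the step size from the average displacement of task solutions,
        # a crude stand-in for a learned task-similarity scale.
        sq_dist_sum += float(np.sum((w - phi) ** 2))
        eta = eta0 * np.sqrt(sq_dist_sum / t) + 1e-3
    return phi, eta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Tasks whose optima cluster around a common point, i.e. high task similarity.
    optima = 1.0 + 0.1 * rng.standard_normal((20, 3))
    phi, eta = meta_train([Task(w) for w in optima], dim=3)
    print("learned initialization:", phi)
    print("adapted step size:", eta)
```

Running the example prints an initialization near the common task optimum together with the adapted step size; the paper's guarantees come from treating such meta-updates as an online learning problem over tasks, which this sketch does not attempt to reproduce.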

Related research

Provable Guarantees for Gradient-Based Meta-Learning (02/27/2019)
We study the problem of meta-learning through the lens of online convex ...

Variable-Shot Adaptation for Online Meta-Learning (12/14/2020)
Few-shot meta-learning methods consider the problem of learning new task...

On Data Efficiency of Meta-learning (01/30/2021)
Meta-learning has enabled learning statistical models that can be quickl...

META-SMGO-Δ: similarity as a prior in black-box optimization (04/30/2023)
When solving global optimization problems in practice, one often ends up...

Online Meta-Learning on Non-convex Setting (10/22/2019)
The online meta-learning framework is designed for the continual lifelon...

Meta-Learning Parameterized First-Order Optimizers using Differentiable Convex Optimization (03/29/2023)
Conventional optimization methods in machine learning and controls rely ...

Learning not to learn: Nature versus nurture in silico (10/09/2020)
Animals are equipped with a rich innate repertoire of sensory, behaviora...
