Meta-Learning with Latent Embedding Optimization

07/16/2018
by Andrei A. Rusu, et al.

Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a low-dimensional latent generative representation of model parameters and performing gradient-based meta-learning in this space with latent embedding optimization (LEO), effectively decoupling the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive 5-way 1-shot miniImageNet classification task.
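The core idea — adapting a low-dimensional latent code and decoding it into high-dimensional classifier weights, rather than taking gradient steps on the weights directly — can be sketched in a few lines. The following is a minimal toy illustration, not the paper's method: it assumes a fixed random linear decoder, random stand-in features, and a latent code initialized at random (in LEO the code comes from a learned encoder/relation network, and the decoder is meta-trained in an outer loop, both of which are omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 5-way task, feature dim 16, latent dim 8.
N, D, Z = 5, 16, 8
# Stand-in linear decoder mapping a latent code z to classifier weights (N x D).
decoder = rng.normal(scale=0.1, size=(N * D, Z))

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(z, x, y):
    """Cross-entropy of the decoded classifier and its gradient w.r.t. z."""
    W = (decoder @ z).reshape(N, D)            # decode latent code -> weights
    p = softmax(x @ W.T)                       # class probabilities per example
    loss = -np.log((p * y).sum(axis=1)).mean()
    grad_W = (p - y).T @ x / len(x)            # dL/dW for softmax regression
    grad_z = decoder.T @ grad_W.reshape(-1)    # chain rule through the decoder
    return loss, grad_z

# A toy 1-shot task: one random "feature vector" per class.
x = rng.normal(size=(N, D))
y = np.eye(N)

z = rng.normal(size=Z)       # in LEO this initial code comes from an encoder
losses = []
for _ in range(100):         # inner-loop adaptation happens in latent space
    loss, g = loss_and_grad(z, x, y)
    losses.append(loss)
    z -= 0.05 * g            # gradient step on z, never directly on W
```

Because the loss is convex in `W` and `W` is linear in `z`, the inner loop reliably reduces the training loss while only ever searching an 8-dimensional space, which is the decoupling the abstract describes.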

