Meta-Learning with Latent Embedding Optimization

by Andrei A. Rusu et al.

Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, in practice they suffer from the difficulty of operating in high-dimensional parameter spaces under extreme low-data regimes. We show that it is possible to bypass these limitations by learning a low-dimensional latent generative representation of model parameters and performing gradient-based meta-learning in this space with latent embedding optimization (LEO), effectively decoupling the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive 5-way 1-shot miniImageNet classification task.
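The core idea of adapting in latent space can be illustrated with a minimal NumPy sketch. This is not the paper's architecture (LEO uses learned, relation-network-based encoders and a stochastic decoder trained in an outer loop); here the decoder is just a fixed random linear map, and the dimensions and step size are illustrative assumptions. The point it demonstrates is that the inner-loop gradient steps act on a low-dimensional code z, with gradients pulled back through the decoder, rather than on the high-dimensional classifier weights directly.

```python
import numpy as np

# Illustrative LEO-style latent adaptation (assumptions: fixed linear
# decoder, random support embeddings; not the authors' implementation).

rng = np.random.default_rng(0)
n_way, n_shot, feat_dim, latent_dim = 5, 1, 64, 8

# Support set: one embedded example per class (5-way 1-shot).
x_support = rng.normal(size=(n_way * n_shot, feat_dim))
y_support = np.repeat(np.arange(n_way), n_shot)

# Decoder: latent code -> flattened classifier weights. In LEO this map
# is learned in the outer loop; here it is random, for illustration only.
decoder = rng.normal(size=(feat_dim * n_way, latent_dim)) / np.sqrt(latent_dim)

def loss_and_grad_z(z):
    """Softmax cross-entropy on the support set, differentiated w.r.t. z."""
    w = (decoder @ z).reshape(feat_dim, n_way)       # decode weights from z
    logits = x_support @ w
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(y_support)), y_support]).mean()
    dlogits = probs.copy()
    dlogits[np.arange(len(y_support)), y_support] -= 1.0
    dlogits /= len(y_support)
    dw = x_support.T @ dlogits                       # gradient w.r.t. weights
    dz = decoder.T @ dw.reshape(-1)                  # chain rule back to z
    return loss, dz

# Inner-loop adaptation entirely in the 8-dimensional latent space.
z = np.zeros(latent_dim)
losses = []
for _ in range(100):
    loss, dz = loss_and_grad_z(z)
    losses.append(loss)
    z -= 0.01 * dz                                   # latent gradient step

print(f"support loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note that only 8 latent coordinates are optimized in the inner loop, even though the decoded classifier has 320 weights; this decoupling is what makes few gradient steps on very little data tractable.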


Decoder Choice Network for Meta-Learning

Meta-learning has been widely used for implementing few-shot learning an...

On the Subspace Structure of Gradient-Based Meta-Learning

In this work we provide an analysis of the distribution of the post-adap...

HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks

We propose HyperDynamics, a dynamics meta-learning framework that condit...

On Enforcing Better Conditioned Meta-Learning for Rapid Few-Shot Adaptation

Inspired by the concept of preconditioning, we propose a novel method to...

Gradient-based Competitive Learning: Theory

Deep learning has been widely used for supervised learning and classific...

Is Fast Adaptation All You Need?

Gradient-based meta-learning has proven to be highly effective at learni...

Accelerating numerical methods by gradient-based meta-solving

In science and engineering applications, it is often required to solve s...