Model-Agnostic Meta-Learning using Runge-Kutta Methods

10/16/2019
by Daniel Jiwoong Im, et al.

Meta-learning has emerged as an important framework for learning new tasks from just a few examples. The success of any meta-learning model depends on (i) fast adaptation to new tasks and (ii) a shared representation across similar tasks. Here we extend the model-agnostic meta-learning (MAML) framework introduced by Finn et al. (2017) to achieve improved performance by analyzing the temporal dynamics of the optimization procedure via the Runge-Kutta method. This analysis gives us fine-grained control over the optimization and helps us achieve both the adaptation and representation goals across tasks. By leveraging this refined control, we demonstrate that there are multiple principled ways to update MAML, and we show that the classic MAML update is simply a special case of a second-order Runge-Kutta method that mainly focuses on fast adaptation. Experiments on benchmark classification, regression, and reinforcement learning tasks show that this refined control helps attain improved results.
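To make the connection concrete: viewing task adaptation as following the gradient flow θ'(t) = -∇L(θ), MAML's standard inner-loop update is one explicit Euler step, while a second-order Runge-Kutta scheme such as Heun's method averages gradients at two evaluation points. The sketch below illustrates this on a toy quadratic loss; it is an illustrative reading of the abstract, not the authors' exact formulation, and the loss (`A`, `b`) and step size are made-up assumptions.

```python
import numpy as np

# Toy quadratic task loss: L(theta) = 0.5 * ||A @ theta - b||^2,
# standing in for a per-task adaptation objective.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])

def grad(theta):
    """Gradient of the toy loss: A^T (A theta - b)."""
    return A.T @ (A @ theta - b)

def euler_step(theta, lr):
    # Plain gradient descent = explicit Euler on theta' = -grad L(theta).
    # This is the standard MAML inner-loop update.
    return theta - lr * grad(theta)

def heun_step(theta, lr):
    # Heun's method, a second-order Runge-Kutta scheme: average the
    # gradient at theta with the gradient at the Euler-predicted point.
    k1 = grad(theta)
    k2 = grad(theta - lr * k1)
    return theta - lr * 0.5 * (k1 + k2)

theta0 = np.zeros(2)
print(euler_step(theta0, 0.1))  # one MAML-style inner step
print(heun_step(theta0, 0.1))   # one RK2-style inner step
```

With the Heun update, the two gradient evaluations (`k1`, `k2`) can in principle be weighted differently, which is one way to read the paper's claim of "multiple principled ways to update MAML": different Runge-Kutta tableaus give different, equally principled inner-loop updates.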


