Meta-Learning with Adjoint Methods

10/16/2021
by Shibo Li, et al.

Model-Agnostic Meta-Learning (MAML) is widely used to find a good initialization for a family of tasks. Despite its success, a critical challenge in MAML is computing the gradient with respect to the initialization through a long training trajectory for the sampled tasks, because the computation graph can rapidly explode and the computational cost becomes prohibitive. To address this problem, we propose Adjoint MAML (A-MAML). We view gradient descent in the inner optimization as the evolution of an ordinary differential equation (ODE). To efficiently compute the gradient of the validation loss with respect to the initialization, we use the adjoint method to construct a companion, backward ODE. Obtaining the gradient then only requires running a standard ODE solver twice: once forward in time, evolving a long trajectory of gradient flow for the sampled task, and once backward, solving the adjoint ODE. We need not create or expand any intermediate computational graphs, adopt aggressive approximations, or impose proximal regularizers in the training loss. Our approach is cheap, accurate, and adaptable to different trajectory lengths. We demonstrate the advantage of our approach on both synthetic and real-world meta-learning tasks.
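To make the forward/backward structure concrete, here is a minimal sketch of the idea in JAX. This is not the authors' code: the toy linear-regression task, the loss functions, and the integration horizon T = 10 are all illustrative assumptions. It relies on the fact that reverse-mode differentiation through jax.experimental.ode.odeint is implemented with the adjoint method, so jax.grad of the validation loss at theta(T) with respect to theta(0) triggers exactly one forward ODE solve and one backward adjoint solve, with no stored inner computation graph.

```python
# Minimal A-MAML-style sketch (illustrative, not the paper's implementation).
# Inner gradient descent is treated as the gradient-flow ODE
#     dtheta/dt = -grad_theta L_train(theta),
# and the meta-gradient d L_val(theta(T)) / d theta(0) comes from the
# adjoint method built into JAX's odeint.

import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

def train_loss(theta, x, y):
    # Toy inner task: least-squares regression (an assumed stand-in task).
    return jnp.mean((x @ theta - y) ** 2)

def val_loss(theta, x, y):
    return jnp.mean((x @ theta - y) ** 2)

def gradient_flow(theta, t, x, y):
    # Right-hand side of the inner ODE: the negative training-loss gradient.
    return -jax.grad(train_loss)(theta, x, y)

def meta_loss(theta0, task):
    x_tr, y_tr, x_val, y_val = task
    ts = jnp.array([0.0, 10.0])  # integrate the gradient flow up to T = 10
    theta_T = odeint(gradient_flow, theta0, ts, x_tr, y_tr)[-1]
    return val_loss(theta_T, x_val, y_val)

# Synthetic task data (hypothetical, for demonstration only).
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
x_tr = jax.random.normal(k1, (32, 5))
true_w = jax.random.normal(k2, (5,))
y_tr = x_tr @ true_w
x_val = jax.random.normal(k3, (16, 5))
y_val = x_val @ true_w

theta0 = jnp.zeros(5)
# One forward solve plus one backward (adjoint) solve yields the meta-gradient.
meta_grad = jax.grad(meta_loss)(theta0, (x_tr, y_tr, x_val, y_val))
print(meta_grad)  # gradient of the validation loss w.r.t. the initialization
```

In a full meta-learning loop, meta_grad would be averaged over sampled tasks and used to update theta0; the memory cost stays constant in the trajectory length because only the two ODE solves are needed, which is the advantage the abstract emphasizes over backpropagating through an unrolled inner loop.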


Related Research

01/01/2021
B-SMALL: A Bayesian Neural Network approach to Sparse Model-Agnostic Meta-Learning
There is a growing interest in the learning-to-learn paradigm, also known...

02/14/2021
Large-Scale Meta-Learning with Continual Trajectory Shifting
Meta-learning of shared initialization parameters has shown to be highly...

04/04/2023
Meta-Learning with a Geometry-Adaptive Preconditioner
Model-agnostic meta-learning (MAML) is one of the most successful meta-l...

03/02/2022
Continuous-Time Meta-Learning with Forward Mode Differentiation
Drawing inspiration from gradient-based meta-learning methods with infin...

06/19/2020
Meta Learning in the Continuous Time Limit
In this paper, we establish the ordinary differential equation (ODE) tha...

09/30/2019
Chameleon: Learning Model Initializations Across Tasks With Different Schemas
Parametric models, and particularly neural networks, require weight init...

10/25/2018
Truncated Back-propagation for Bilevel Optimization
Bilevel optimization has been recently revisited for designing and analy...
