Meta-Learning with Warped Gradient Descent

08/30/2019
by Sebastian Flennerhag, et al.

A versatile and effective approach to meta-learning is to infer a gradient-based update rule directly from data that promotes rapid learning of new tasks from the same distribution. Current methods rely on backpropagating through the learning process, limiting their scope to few-shot learning. In this work, we introduce Warped Gradient Descent (WarpGrad), a family of modular optimisers that can scale to arbitrary adaptation processes. WarpGrad methods meta-learn to warp task loss surfaces across the joint task-parameter distribution to facilitate gradient descent, which is achieved by a reparametrisation of neural networks that interleaves warp layers in the architecture. These layers are shared across task learners and held fixed during adaptation; they represent a projection of task parameters into a meta-learned space that is conducive to task adaptation, and standard backpropagation through them induces a form of gradient preconditioning. WarpGrad methods are computationally efficient and easy to implement, as they rely only on parameter sharing and backpropagation. They are readily combined with other meta-learners and scale both in model size and in the length of adaptation trajectories, since meta-learning the warp parameters does not require differentiating through the task adaptation process. We show empirically that WarpGrad optimisers meta-learn a warped space where gradient descent is well behaved, with faster convergence and better performance in a variety of settings, including few-shot, standard supervised, continual, and reinforcement learning.
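To make the reparametrisation concrete, here is a minimal sketch in PyTorch of how warp layers might interleave with task layers and stay fixed during inner-loop adaptation. The names (WarpedNet, adapt, task_parameters, warp_parameters) and the alternating linear-layer layout are illustrative assumptions for this sketch, not the authors' released implementation.

import torch
import torch.nn as nn

class WarpedNet(nn.Module):
    """Sketch: task layers (adapted per task) interleaved with warp layers
    (shared across tasks, held fixed during adaptation)."""

    def __init__(self, dim=64, n_blocks=3):
        super().__init__()
        blocks = []
        for _ in range(n_blocks):
            blocks.append(nn.Linear(dim, dim))  # task layer: updated in the inner loop
            blocks.append(nn.Linear(dim, dim))  # warp layer: fixed during adaptation
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        for layer in self.blocks:
            x = torch.relu(layer(x))
        return x

    def task_parameters(self):
        # Even-indexed blocks are task layers in this layout.
        return [p for i, b in enumerate(self.blocks) if i % 2 == 0
                for p in b.parameters()]

    def warp_parameters(self):
        return [p for i, b in enumerate(self.blocks) if i % 2 == 1
                for p in b.parameters()]

def adapt(net, task_loader, loss_fn, lr=1e-2, steps=10):
    """Inner-loop adaptation: only task parameters are updated.

    Gradients still flow *through* the fixed warp layers on their way to the
    task parameters, which is what induces the gradient preconditioning."""
    opt = torch.optim.SGD(net.task_parameters(), lr=lr)
    for _ in range(steps):
        for x, y in task_loader:
            opt.zero_grad()
            loss = loss_fn(net(x), y)
            loss.backward()  # warp layers shape the gradient but are not stepped
            opt.step()

The key property this sketch illustrates: because only task parameters are passed to the optimiser, each adaptation step is ordinary gradient descent seen through the meta-learned warp. The warp parameters themselves would be updated in a separate outer loop across the task distribution, which is why no differentiation through the adaptation trajectory is needed.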

