
Generalized Inner Loop Meta-Learning

10/03/2019
by Edward Grefenstette, et al.
University of Southern California · Facebook · New York University

Many (but not all) approaches self-qualifying as "meta-learning" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem. In this paper, we give a formalization of this shared pattern, which we call GIMLI, prove its general requirements, and derive a general-purpose algorithm for implementing similar approaches. Based on this analysis and algorithm, we describe a library of our design, higher, which we share with the community to assist and enable future research into these kinds of meta-learning approaches. We end the paper by showcasing the practical applications of this framework and library through illustrative experiments and ablation studies which they facilitate.
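Concretely, the shared pattern the abstract refers to can be sketched as a bilevel program; the notation below is illustrative rather than the paper's own formalization. Outer ("meta") parameters θ are chosen so that the result of an inner optimization over φ performs well on an outer objective:

    % Schematic nested optimization; in practice the inner arg min
    % is approximated by K unrolled gradient steps.
    \min_{\theta} \; \mathcal{L}_{\mathrm{outer}}\big(\phi_K(\theta),\, \theta\big),
    \qquad
    \phi_K(\theta) \approx \operatorname*{arg\,min}_{\phi} \; \mathcal{L}_{\mathrm{inner}}(\phi,\, \theta),

where φ_K(θ) is produced by K steps of gradient descent on the inner loss. Differentiating the outer loss through those unrolled inner steps is the higher-order gradient computation that the higher library (see Code Repositories below) automates.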


Related Research

10/24/2020 · Modeling and Optimization Trade-off in Meta-learning
By searching for shared inductive biases across tasks, meta-learning pro...

08/27/2020 · learn2learn: A Library for Meta-Learning Research
Meta-learning researchers face two fundamental issues in their empirical...

09/28/2020 · BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning
Meta-learning (a.k.a. learning to learn) has recently emerged as a promi...

03/16/2023 · Arbitrary Order Meta-Learning with Simple Population-Based Evolution
Meta-learning, the notion of learning to learn, enables learning systems...

03/31/2023 · Scalable Bayesian Meta-Learning through Generalized Implicit Gradients
Meta-learning owns unique effectiveness and swiftness in tackling emergi...

07/14/2022 · A Meta-learning Formulation of the Autoencoder Problem
A rapidly growing area of research is the use of machine learning approa...

05/05/2022 · Meta-learning Feature Representations for Adaptive Gaussian Processes via Implicit Differentiation
We propose Adaptive Deep Kernel Fitting (ADKF), a general framework for ...

Code Repositories

higher

higher is a PyTorch library that lets users obtain higher-order gradients over losses spanning entire training loops, rather than individual training steps.
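As a minimal sketch of that usage pattern (the model, data, and hyperparameters below are placeholders; higher.innerloop_ctx and diffopt.step are the library's actual entry points):

    import torch
    import torch.nn.functional as F
    import higher  # pip install higher

    # Placeholder model, optimizers, and data: illustrative only.
    model = torch.nn.Linear(10, 2)
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    inner_opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x_support, y_support = torch.randn(32, 10), torch.randint(0, 2, (32,))
    x_query, y_query = torch.randn(32, 10), torch.randint(0, 2, (32,))

    meta_opt.zero_grad()
    # innerloop_ctx yields a functional copy of the model (fmodel) and a
    # differentiable optimizer (diffopt) whose updates stay on the autograd
    # graph. copy_initial_weights=False keeps the initial fast weights tied
    # to model's parameters so meta-gradients reach them.
    with higher.innerloop_ctx(model, inner_opt,
                              copy_initial_weights=False) as (fmodel, diffopt):
        for _ in range(5):  # unrolled inner-loop adaptation steps
            inner_loss = F.cross_entropy(fmodel(x_support), y_support)
            diffopt.step(inner_loss)
        # Outer loss after adaptation; backward() differentiates through
        # the whole unrolled inner loop, not a single step.
        outer_loss = F.cross_entropy(fmodel(x_query), y_query)
        outer_loss.backward()
    meta_opt.step()  # meta-update of the original model's parameters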



learning-to-distill-trajectories

Code for the ICLR 2021 paper "A teacher-student framework to distill future trajectories"

