Modular Meta-Learning with Shrinkage

09/12/2019
by Yutian Chen, et al.

Most gradient-based approaches to meta-learning do not explicitly account for the fact that different parts of the underlying model adapt by different amounts when applied to a new task. For example, the input layers of an image classification convnet typically adapt very little, while the output layers can change significantly. This can cause parts of the model to begin to overfit while others underfit. To address this, we introduce a hierarchical Bayesian model with per-module shrinkage parameters, which we propose to learn by maximizing an approximation of the predictive likelihood using implicit differentiation. Our algorithm subsumes Reptile and outperforms variants of MAML on two synthetic few-shot meta-learning problems.
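To make the per-module shrinkage idea concrete, here is a minimal first-order sketch, not the authors' implementation: each module's task parameters theta_m are adapted under a quadratic penalty that pulls them toward the meta-learned values phi_m, with a per-module shrinkage scale sigma2_m controlling how far each module may move. The names (`adapt_with_shrinkage`, `task_loss`, `sigma2`) are illustrative assumptions; the paper additionally learns the shrinkage parameters by implicitly differentiating an approximation of the predictive likelihood, which this sketch omits.

```python
import torch

def adapt_with_shrinkage(meta_params, sigma2, task_loss, steps=5, lr=0.01):
    """First-order sketch of per-module shrinkage adaptation.

    Each module's task parameters theta_m are pulled toward the
    meta-learned values phi_m by the penalty
    (1 / (2 * sigma2[m])) * ||theta_m - phi_m||^2.
    A small sigma2[m] effectively freezes module m (e.g. input layers);
    a large sigma2[m] lets it adapt freely (e.g. output layers).
    All names here are illustrative, not from the paper's code.
    """
    # Start adaptation from the meta-learned parameters phi_m.
    theta = {m: phi.detach().clone().requires_grad_(True)
             for m, phi in meta_params.items()}
    for _ in range(steps):
        loss = task_loss(theta)
        # Add the per-module shrinkage penalty toward phi_m.
        for m, phi in meta_params.items():
            loss = loss + (0.5 / sigma2[m]) * (theta[m] - phi).pow(2).sum()
        grads = torch.autograd.grad(loss, list(theta.values()))
        # Plain (first-order) gradient step, Reptile-style.
        theta = {m: (t - lr * g).detach().requires_grad_(True)
                 for (m, t), g in zip(theta.items(), grads)}
    return theta

# Illustrative usage on a toy two-module linear "network" (hypothetical):
meta = {"input_layer": torch.randn(4, 8), "output_layer": torch.randn(8, 1)}
sigma2 = {"input_layer": 1e-3, "output_layer": 1e1}  # freeze input, adapt output
x, y = torch.randn(16, 4), torch.randn(16, 1)
loss_fn = lambda th: ((x @ th["input_layer"] @ th["output_layer"] - y) ** 2).mean()
adapted = adapt_with_shrinkage(meta, sigma2, loss_fn)
```

With `sigma2` fixed and large this reduces to ordinary fine-tuning from the meta-parameters; the modular structure enters only through the choice of one shrinkage scale per module rather than per parameter.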


Related research

Meta-Learning without Memorization (12/09/2019): The ability to learn new concepts with small amounts of data is a critic...

Hierarchically Structured Meta-learning (05/13/2019): In order to learn quickly with few samples, meta-learning utilizes prior...

Adaptive Meta-Learning for Identification of Rover-Terrain Dynamics (09/21/2020): Rovers require knowledge of terrain to plan trajectories that maximize s...

Learning Prototype-oriented Set Representations for Meta-Learning (10/18/2021): Learning from set-structured data is a fundamental problem that has rece...

Meta-Learning with Warped Gradient Descent (08/30/2019): A versatile and effective approach to meta-learning is to infer a gradie...

Scalable Bayesian Meta-Learning through Generalized Implicit Gradients (03/31/2023): Meta-learning owns unique effectiveness and swiftness in tackling emergi...

Hierarchical Expert Networks for Meta-Learning (10/31/2019): The goal of meta-learning is to train a model on a variety of learning t...
