Is Fast Adaptation All You Need?

10/03/2019
by Khurram Javed, et al.

Gradient-based meta-learning has proven to be highly effective at learning model initializations, representations, and update rules that allow fast adaptation from a few samples. The core idea behind these approaches is to use fast adaptation and generalization, two second-order metrics, as training signals on a meta-training dataset. However, little attention has been given to other possible second-order metrics. In this paper, we investigate a different training signal, robustness to catastrophic interference, and demonstrate that representations learned by directly minimizing interference are more conducive to incremental learning than those learned by maximizing fast adaptation alone.
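To make the contrast concrete, here is a minimal sketch, not the authors' implementation, of how the two second-order signals can be computed by differentiating through inner-loop SGD updates: a MAML-style fast-adaptation loss (post-adaptation error on held-out data from the same task) versus an interference loss (error on an earlier task after a later task has been learned). The sine-regression tasks, network size, and learning rates are illustrative assumptions.

```python
# A minimal sketch, not the authors' code: it contrasts the two second-order
# training signals described in the abstract, using toy sine-regression tasks.
# All names, hyperparameters, and the task distribution are assumptions.
import math
import torch
import torch.nn as nn
from torch.func import functional_call

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
params = dict(net.named_parameters())
loss_fn = nn.MSELoss()

def sample_task(n=10):
    """Toy task: regress a sine wave with a random phase."""
    phase = torch.rand(1) * 2 * math.pi
    f = lambda x: torch.sin(x + phase)
    xs = torch.rand(n, 1) * 2 * math.pi  # support inputs
    xq = torch.rand(n, 1) * 2 * math.pi  # query inputs
    return (xs, f(xs)), (xq, f(xq))

def inner_step(p, x, y, lr=0.01):
    """One SGD step kept differentiable, so outer gradients flow through it."""
    loss = loss_fn(functional_call(net, p, (x,)), y)
    grads = torch.autograd.grad(loss, list(p.values()), create_graph=True)
    return {k: v - lr * g for (k, v), g in zip(p.items(), grads)}

opt = torch.optim.Adam(params.values(), lr=1e-3)
for step in range(500):
    (xs, ys), (xq, yq) = sample_task()   # task A: support and query sets
    (xb, yb), _ = sample_task()          # task B: a second task seen later

    # Signal 1, fast adaptation: adapt on A's support, evaluate on A's query.
    p_a = inner_step(params, xs, ys)
    fast_loss = loss_fn(functional_call(net, p_a, (xq,)), yq)

    # Signal 2, interference: after also adapting on B, re-evaluate task A.
    # A high value here means learning B overwrote A.
    p_ab = inner_step(p_a, xb, yb)
    interference_loss = loss_fn(functional_call(net, p_ab, (xq,)), yq)

    opt.zero_grad()
    interference_loss.backward()  # swap in fast_loss for the MAML-style signal
    opt.step()
```

Note that meta-optimizing interference_loss still differentiates through the inner adaptation steps, so it remains a second-order signal; the difference lies only in what the outer loss measures.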


Related research

10/16/2019 · Model-Agnostic Meta-Learning using Runge-Kutta Methods
Meta-learning has emerged as an important framework for learning new tas...

06/16/2021 · Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
Multi-task learning (MTL) aims to improve the generalization of several ...

12/29/2018 · Meta Reinforcement Learning with Distribution of Exploration Parameters Learned by Evolution Strategies
In this paper, we propose a novel meta-learning method in a reinforcemen...

03/02/2020 · Rapidly Adaptable Legged Robots via Evolutionary Meta-Learning
Learning adaptable policies is crucial for robots to operate autonomousl...

07/16/2018 · Meta-Learning with Latent Embedding Optimization
Gradient-based meta-learning techniques are both widely applicable and p...

06/09/2021 · Meta-Interpretive Learning as Metarule Specialisation
In Meta-Interpretive Learning (MIL) the metarules, second-order datalog ...

03/07/2023 · EscherNet 101
A deep learning model, EscherNet 101, is constructed to categorize image...
