Variable-Shot Adaptation for Online Meta-Learning

12/14/2020
by Tianhe Yu, et al.

Few-shot meta-learning methods consider the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks. However, in many real-world settings, it is more natural to view the problem as one of minimizing the total amount of supervision: both the number of examples needed to learn a new task and the amount of data needed for meta-learning. Such a formulation can be studied in a sequential learning setting, where tasks are presented in sequence. When studying meta-learning in this online setting, a critical question arises: can meta-learning improve over the sample complexity and regret of standard empirical risk minimization methods, when considering both meta-training and adaptation together? The answer is particularly non-obvious for meta-learning algorithms with complex bi-level optimizations that may demand large amounts of meta-training data. To answer this question, we extend previous meta-learning algorithms to handle the variable-shot settings that naturally arise in sequential learning: from many-shot learning at the start, to zero-shot learning towards the end. On sequential learning problems, we find that meta-learning solves the full task set with fewer overall labels and achieves greater cumulative performance, compared to standard supervised methods. These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
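To make the variable-shot setting concrete, below is a minimal sketch (not the authors' implementation) of an online meta-learning loop on synthetic sine-regression tasks. It assumes a simple decaying shot schedule, from many-shot on early tasks down to zero-shot later, and substitutes a Reptile-style first-order meta-update as a stand-in for the bi-level optimization mentioned in the abstract; the task generator, shot schedule, and helper names (make_task, sample_xy, adapt) are hypothetical and only for illustration.

import torch
import torch.nn as nn
import copy, math, random

def make_task(rng):
    # Sample a sine-regression task y = a * sin(x + p).
    a, p = rng.uniform(0.5, 2.0), rng.uniform(0.0, math.pi)
    return lambda x: a * torch.sin(x + p)

def sample_xy(task, n):
    # Draw n labelled examples from the task.
    x = torch.rand(n, 1) * 10 - 5
    return x, task(x)

def adapt(model, x, y, steps=5, inner_lr=1e-2):
    # Inner loop: copy the meta-model and take a few gradient steps on the support set.
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(fast(x), y).backward()
        opt.step()
    return fast

rng = random.Random(0)
model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-3)

for t in range(50):                  # tasks arrive one at a time
    task = make_task(rng)
    shots = max(0, 20 - t)           # variable-shot schedule: many-shot early, zero-shot late
    x_q, y_q = sample_xy(task, 20)   # query set used to track cumulative performance

    if shots > 0:
        x_s, y_s = sample_xy(task, shots)
        fast = adapt(model, x_s, y_s)
    else:
        fast = model                 # zero-shot: rely on the meta-learned prior directly

    with torch.no_grad():
        query_mse = nn.functional.mse_loss(fast(x_q), y_q).item()
    print(f"task {t:02d}  shots {shots:2d}  query MSE {query_mse:.3f}")

    if shots > 0:
        # Reptile-style first-order meta-update: nudge the meta-parameters toward
        # the task-adapted parameters (no labels, no update in the zero-shot regime).
        meta_opt.zero_grad()
        for p_meta, p_fast in zip(model.parameters(), fast.parameters()):
            p_meta.grad = p_meta.data - p_fast.data
        meta_opt.step()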

Related research

12/05/2018 · The effects of negative adaptation in Model-Agnostic Meta-Learning
The capacity of meta-learning algorithms to quickly adapt to a variety o...

04/29/2020 · Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation
Deep learning methods typically rely on large amounts of annotated data ...

06/06/2019 · Adaptive Gradient-Based Meta-Learning Methods
We build a theoretical framework for understanding practical meta-learni...

01/16/2022 · Towards Sample-efficient Overparameterized Meta-learning
An overarching goal in machine learning is to build a generalizable mode...

10/26/2021 · On sensitivity of meta-learning to support data
Meta-learning algorithms are widely used for few-shot learning. For exam...

03/04/2020 · Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding Meta-Amortization Error
The ability to learn new concepts with small amounts of data is a crucia...

02/22/2019 · Online Meta-Learning
A central capability of intelligent systems is the ability to continuous...
