Fully Online Meta-Learning Without Task Boundaries

02/01/2022
by Jathushan Rajasegaran, et al.

While deep networks can learn complex functions such as classifiers, detectors, and trackers, many applications require models that continually adapt to changing input distributions, changing tasks, and changing environmental conditions. Indeed, the ability to continuously accrue knowledge and use past experience to learn new tasks quickly in continual settings is one of the key properties of an intelligent system. For complex and high-dimensional problems, simply updating the model continually with standard learning algorithms such as gradient descent may result in slow adaptation. Meta-learning can provide a powerful tool to accelerate adaptation, yet it is conventionally studied in batch settings. In this paper, we study how meta-learning can be applied to tackle online problems of this nature, simultaneously adapting to changing tasks and input distributions and meta-training the model so that it adapts more quickly in the future. Extending meta-learning to the online setting presents its own challenges, and although several prior methods have studied related problems, they generally require a discrete notion of tasks with known ground-truth task boundaries. Such methods typically adapt to each task in sequence, resetting the model between tasks, rather than adapting continuously across tasks. In many real-world settings, such discrete boundaries are unavailable and may not even exist. To address these settings, we propose a Fully Online Meta-Learning (FOML) algorithm, which does not require any ground-truth knowledge of task boundaries and stays fully online, without resetting back to pre-trained weights. Our experiments show that FOML learns new tasks faster than state-of-the-art online learning methods on the Rainbow-MNIST, CIFAR-100, and CelebA datasets.
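To make the setting concrete, the following is a minimal illustrative sketch (not the authors' exact FOML updates) of fully online meta-learning on a boundary-free stream: fast "online" weights are adapted at every step, while slowly meta-learned weights regularize them, and the online weights are never reset. The drifting linear regression stream, the regularization strength `lam`, and the learning rates are all assumptions made for illustration.

```python
# Hypothetical sketch of online adaptation + meta-learning without task
# boundaries. Two parameter sets are maintained: w_online (fast weights,
# updated on every incoming example) and w_meta (slow meta-parameters that
# pull the online weights toward a good adaptation point). There is no
# reset between "tasks" -- the stream drifts continuously.
import random

def grad_loss(w, x, y):
    # Gradient of the squared error for a 1-D linear model y_hat = w * x.
    return 2.0 * (w * x - y) * x

def stream(num_steps):
    # Non-stationary data stream: the true slope drifts slowly over time,
    # standing in for gradually changing tasks with no discrete boundaries.
    w_true = 1.0
    for _ in range(num_steps):
        w_true += 0.001  # continuous task drift
        x = random.uniform(-1.0, 1.0)
        yield x, w_true * x

def foml_sketch(num_steps=2000, alpha=0.1, beta=0.01, lam=0.5):
    random.seed(0)
    w_online = 0.0  # fast weights, adapted at every step, never reset
    w_meta = 0.0    # slow meta-parameters
    for x, y in stream(num_steps):
        # Online update: gradient step on the current example, plus a
        # regularizing pull toward the meta-learned weights.
        g = grad_loss(w_online, x, y) + lam * (w_online - w_meta)
        w_online -= alpha * g
        # Meta update: slowly move the meta-weights toward parameters
        # that currently yield low online loss.
        w_meta -= beta * (w_meta - w_online)
    return w_online, w_meta
```

The key property this sketch illustrates is that adaptation and meta-training proceed simultaneously on the same stream: neither update requires knowing where one task ends and the next begins, and the online weights carry over across the drift rather than being reset to a pre-trained initialization.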


Related research

- Algorithm Design for Online Meta-Learning with Task Boundary Detection (02/02/2023): Online meta-learning has recently emerged as a marriage between batch me...
- Online Adaptation through Meta-Learning for Stereo Depth Estimation (04/17/2019): In this work, we tackle the problem of online adaptation for stereo dept...
- A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning (07/02/2016): One of the main obstacles to broad application of reinforcement learning...
- MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization (09/29/2020): Model-Agnostic Meta-Learning (MAML) and its variants are popular few-sho...
- Learning Neural Causal Models from Unknown Interventions (10/02/2019): Meta-learning over a set of distributions can be interpreted as learning...
- Meta Transferring for Deblurring (10/14/2022): Most previous deblurring methods were built with a generic model trained...
- Online Meta-Learning (02/22/2019): A central capability of intelligent systems is the ability to continuous...
