
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning

03/12/2020
by   Massimo Caccia, et al.

Learning from non-stationary data remains a great challenge for machine learning. Continual learning addresses this problem in scenarios where the learning agent faces a stream of changing tasks. In these scenarios, the agent is expected to retain its performance on previous tasks without revisiting them while adapting well to new tasks. Two recent continual-learning scenarios have been proposed. In meta-continual learning, the model is pre-trained to minimize catastrophic forgetting when trained on a sequence of tasks. In continual-meta learning, the goal is faster remembering, i.e., focusing on how quickly the agent recovers performance rather than measuring its performance without any adaptation. Both scenarios have the potential to propel the field forward, yet in their original formulations they each have limitations. As a remedy, we propose a more general scenario in which an agent must quickly solve new, out-of-distribution tasks while also remembering old tasks quickly. We show that current continual learning, meta learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. Accordingly, we propose a strong baseline: Continual-MAML, an online extension of the popular MAML algorithm. In our empirical experiments, we show that our method is better suited to the new scenario than the aforementioned methodologies.
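The abstract only names Continual-MAML; the sketch below illustrates the general idea of an online, first-order MAML-style loop in which the agent adapts fast weights on the incoming stream and folds them back into its initialization when a task boundary is detected. It is a minimal sketch under assumptions of our own (a supervised cross-entropy loss, PyTorch, a simple loss-spike heuristic for boundary detection, and hypothetical names such as continual_maml_sketch, inner_lr, outer_lr, and shift_threshold), not the authors' implementation; see the osaka repository below for that.

    import torch
    import torch.nn.functional as F

    def continual_maml_sketch(model, stream, inner_lr=0.1, outer_lr=0.01,
                              shift_threshold=2.0):
        # Slow weights: the initialization accumulated across tasks.
        meta_params = [p.detach().clone() for p in model.parameters()]
        # Fast weights: adapted online to the current task.
        fast_params = [mp.clone() for mp in meta_params]
        prev_loss = None

        for x, y in stream:
            # Load the current fast weights into the model, then
            # evaluate the incoming batch.
            with torch.no_grad():
                for p, fp in zip(model.parameters(), fast_params):
                    p.copy_(fp)
            loss = F.cross_entropy(model(x), y)

            # Heuristic task-boundary detection: a sudden loss spike.
            if prev_loss is not None and loss.item() > shift_threshold * prev_loss:
                # First-order meta-update: pull the initialization toward
                # the weights that worked on the task that just ended,
                # then restart adaptation from the updated initialization.
                # (For simplicity, this sketch skips adapting on the
                # boundary batch itself.)
                with torch.no_grad():
                    for mp, fp in zip(meta_params, fast_params):
                        mp.add_(outer_lr * (fp - mp))
                fast_params = [mp.clone() for mp in meta_params]
                prev_loss = None
                continue

            # Inner loop: one first-order SGD step on the fast weights.
            grads = torch.autograd.grad(loss, list(model.parameters()))
            with torch.no_grad():
                fast_params = [fp - inner_lr * g
                               for fp, g in zip(fast_params, grads)]
            prev_loss = loss.item()

        return meta_params

The key difference from vanilla MAML in this sketch is that adaptation happens online on the deployment stream itself, and the (first-order) meta-update is triggered by detected task boundaries rather than by an offline meta-training loop over a fixed task distribution.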

Related Research

06/12/2019 · Task Agnostic Continual Learning via Meta Learning
While neural networks are powerful function approximators, they suffer f...

02/21/2020 · Learning to Continually Learn
Continual lifelong learning requires an agent or model to learn many seq...

02/11/2021 · Reproducibility Report: La-MAML: Look-ahead Meta Learning for Continual Learning
The Continual Learning (CL) problem involves performing well on a sequen...

08/05/2020 · Meta Continual Learning via Dynamic Programming
Meta-continual learning algorithms seek to rapidly train a model when fa...

04/08/2022 · Learning to modulate random weights can induce task-specific contexts for economical meta and continual learning
Neural networks are vulnerable to catastrophic forgetting when data dist...

10/01/2020 · Value-based Bayesian Meta-reinforcement Learning and Traffic Signal Control
Reinforcement learning methods for traffic signal control have gained inc...

08/03/2022 · Centroids Matching: an efficient Continual Learning approach operating in the embedding space
Catastrophic forgetting (CF) occurs when a neural network loses the info...

Code Repositories

osaka

Codebase for "Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning"
