
Posterior Meta-Replay for Continual Learning

03/01/2021
by Christian Henning, et al.

Continual Learning (CL) algorithms have recently received a lot of attention as they attempt to overcome the need to train with an i.i.d. sample from some unknown target data distribution. Building on prior work, we study principled ways to tackle the CL problem by adopting a Bayesian perspective and focus on continually learning a task-specific posterior distribution via a shared meta-model, a task-conditioned hypernetwork. This approach, which we term Posterior-replay CL, is in sharp contrast to most Bayesian CL approaches, which focus on the recursive update of a single posterior distribution. The benefits of our approach are (1) an increased flexibility to model solutions in weight space, and thereby less susceptibility to task dissimilarity, (2) access to principled task-specific predictive uncertainty estimates that can be used to infer task identity at test time and to detect task boundaries during training, and (3) the ability to revisit and update task-specific posteriors in a principled manner without requiring access to past data. The proposed framework is versatile, which we demonstrate using simple posterior approximations (such as Gaussians) as well as powerful, implicit distributions modelled via a neural network. We illustrate the conceptual advance of our framework on low-dimensional problems and show performance gains on computer vision benchmarks.
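The sketch below illustrates the two core ideas under stated assumptions: a shared hypernetwork maps a learned per-task embedding to the parameters of a task-specific weight posterior (here a diagonal Gaussian, the simplest of the approximations mentioned in the abstract), and task identity is inferred at test time by picking the task whose posterior yields the least uncertain predictions. All names here (`TaskConditionedHypernetwork`, `target_net_fn`, the two-head layout) are illustrative placeholders, not the authors' implementation; see the posterior_replay_cl repository linked below for the actual code.

```python
import torch
import torch.nn as nn

class TaskConditionedHypernetwork(nn.Module):
    """Minimal sketch of a shared meta-model: a learned task embedding is
    mapped to the mean and log-variance of a diagonal Gaussian posterior
    over the weights of a separate target network. Layer sizes and the
    two-head layout are illustrative assumptions."""

    def __init__(self, num_tasks, embedding_dim, target_num_weights, hidden_dim=128):
        super().__init__()
        # One trainable embedding per task; the shared body is reused by all tasks.
        self.task_embeddings = nn.Embedding(num_tasks, embedding_dim)
        self.body = nn.Sequential(nn.Linear(embedding_dim, hidden_dim), nn.ReLU())
        self.mean_head = nn.Linear(hidden_dim, target_num_weights)
        self.logvar_head = nn.Linear(hidden_dim, target_num_weights)

    def forward(self, task_id):
        h = self.body(self.task_embeddings(task_id))
        return self.mean_head(h), self.logvar_head(h)

    def sample_weights(self, task_id):
        # Reparameterized draw from the task-specific Gaussian posterior.
        mean, logvar = self.forward(task_id)
        return mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)


def infer_task(hnet, target_net_fn, x, num_tasks, num_samples=10):
    """Task inference via predictive uncertainty: return the task whose
    posterior gives the lowest mean predictive entropy on inputs x.
    `target_net_fn(weights, x)` is an assumed helper that runs the target
    network with a flat weight vector and returns class logits."""
    entropies = []
    for t in range(num_tasks):
        task_id = torch.tensor(t)
        # Monte Carlo estimate of the posterior predictive distribution.
        probs = torch.stack([
            torch.softmax(target_net_fn(hnet.sample_weights(task_id), x), dim=-1)
            for _ in range(num_samples)
        ]).mean(dim=0)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
        entropies.append(entropy)
    return int(torch.stack(entropies).argmin())
```

One upside of this layout is that each task adds only a small embedding vector; in the task-conditioned hypernetwork line of work this paper builds on, forgetting is typically counteracted by regularizing the hypernetwork's outputs for earlier task embeddings, which is consistent with the abstract's claim of updating task-specific posteriors without access to past data.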


Related Research

06/12/2019

Task Agnostic Continual Learning via Meta Learning

While neural networks are powerful function approximators, they suffer f...
02/18/2019

A Unifying Bayesian View of Continual Learning

Some machine learning applications require continual learning - where da...
04/08/2022

Learning to modulate random weights can induce task-specific contexts for economical meta and continual learning

Neural networks are vulnerable to catastrophic forgetting when data dist...
07/12/2021

Kernel Continual Learning

This paper introduces kernel continual learning, a simple but effective ...
11/12/2019

Learning from the Past: Continual Meta-Learning via Bayesian Graph Modeling

Meta-learning for few-shot learning allows a machine to leverage previou...
07/15/2022

Improving Task-free Continual Learning by Distributionally Robust Memory Evolution

Task-free continual learning (CL) aims to learn a non-stationary data st...
07/28/2021

Task-Specific Normalization for Continual Learning of Blind Image Quality Models

The computational vision community has recently paid attention to contin...

Code Repositories

posterior_replay_cl

Continual learning of task-specific approximations of the parameter posterior distribution via a shared hypernetwork.
