Continual Learning with Adaptive Weights (CLAW)

11/21/2019
by Tameem Adel, et al.

Approaches to continual learning aim to learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed that enable deep learning to be deployed in this setting. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, modelling each task separately avoids catastrophic forgetting, but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits model size, but it is vulnerable to catastrophic forgetting and restricts the forms of task transfer that can occur. Ideally, the network should adaptively identify which of its parts to share in a data-driven way. Here we introduce such an approach, called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks, both in overall continual learning performance as measured by classification accuracy and in addressing catastrophic forgetting.
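The adaptive-sharing idea can be pictured as per-neuron gates that decide how much each unit is shared across tasks versus adapted to one task. Below is a minimal sketch of that mechanism, not the authors' code: the `gate_logits` and `strength` parameters are hypothetical point-estimate stand-ins for the adaptation variables that the paper treats with variational inference.

```python
import torch
import torch.nn as nn

class AdaptiveLinear(nn.Module):
    """Linear layer with shared weights plus per-task, per-neuron adaptation.

    Hypothetical sketch: a gate near 0 keeps a neuron fully shared across
    tasks; a gate near 1 lets the current task rescale that neuron.
    """

    def __init__(self, in_features: int, out_features: int, num_tasks: int):
        super().__init__()
        self.shared = nn.Linear(in_features, out_features)
        # One gate logit per (task, neuron): sigmoid(logit) in (0, 1)
        # controls how strongly the neuron is adapted for that task.
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks, out_features))
        # Per-(task, neuron) multiplicative adaptation strength.
        self.strength = nn.Parameter(torch.zeros(num_tasks, out_features))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        h = self.shared(x)
        gate = torch.sigmoid(self.gate_logits[task_id])
        # gate ~ 0: output of the shared neuron unchanged (pure sharing);
        # gate ~ 1: output rescaled by the task-specific strength.
        return h * (1.0 + gate * self.strength[task_id])

# Usage: layer = AdaptiveLinear(128, 64, num_tasks=5); y = layer(x, task_id=2)
```

Under this reading, sharing (small gates) keeps the model compact and supports transfer, while adaptation (large gates) shields task-specific behaviour from interference; learning the gates from data, rather than fixing the shared/task-specific split by hand, is the point of the approach.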


Related research:

02/22/2021 | Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping
Catastrophic forgetting in neural networks is a significant problem for ...

08/18/2023 | On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers
State-of-the-art rehearsal-free continual learning methods exploit the p...

06/01/2022 | Transfer without Forgetting
This work investigates the entanglement between Continual Learning (CL) ...

07/12/2021 | Kernel Continual Learning
This paper introduces kernel continual learning, a simple but effective ...

06/09/2021 | Optimizing Reusable Knowledge for Continual Learning via Metalearning
When learning tasks over time, artificial neural networks suffer from a ...

06/26/2023 | Parameter-Level Soft-Masking for Continual Learning
Existing research on task incremental learning in continual learning has...

03/27/2023 | Forget-free Continual Learning with Soft-Winning SubNetworks
Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which states t...
