Continual Learning with Neuron Activation Importance

07/27/2021
by Sohee Kim, et al.

Continual learning is a form of online learning over multiple sequential tasks. One of the critical barriers to continual learning is that a network must learn a new task while retaining the knowledge of old tasks, without access to any data from those old tasks. In this paper, we propose a neuron activation importance-based regularization method for stable continual learning regardless of the order of tasks. We conduct comprehensive experiments on existing benchmark data sets to evaluate not only the stability and plasticity of our method, with improved classification accuracy, but also the robustness of its performance under changes in task order.
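To make the idea of importance-based regularization concrete, below is a minimal, hypothetical sketch of how an activation-derived importance weight could be used to penalize drift in parameters that mattered for previous tasks. It assumes PyTorch; the helper names (compute_activation_importance, regularized_loss, lambda_reg) and the specific importance estimate (mean absolute activation per layer) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of activation-importance-weighted regularization for
# continual learning. The importance estimate and helper names are
# illustrative; the paper's actual method may differ.
import torch
import torch.nn as nn


def compute_activation_importance(model, data_loader, device="cpu"):
    """Estimate per-parameter importance from neuron activations.

    Here importance is approximated by the mean absolute output activation
    of each linear layer, broadcast to that layer's parameters. This is only
    one possible definition, used for illustration.
    """
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    activations = {}

    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = output.detach().abs().mean().item()
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.Linear)]

    model.eval()
    n_batches = 0
    with torch.no_grad():
        for x, _ in data_loader:
            model(x.to(device))
            n_batches += 1
            for name, scale in activations.items():
                for suffix in (".weight", ".bias"):
                    if name + suffix in importance:
                        importance[name + suffix] += scale

    for h in handles:
        h.remove()
    for name in importance:
        importance[name] /= max(n_batches, 1)
    return importance


def regularized_loss(model, task_loss, old_params, importance, lambda_reg=1.0):
    """Task loss plus an importance-weighted quadratic penalty that keeps
    parameters important to previous tasks close to their old values."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in old_params:
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return task_loss + lambda_reg * penalty
```

In this sketch, old_params is a snapshot of the parameters taken after training on the previous task, and lambda_reg trades off plasticity on the new task against stability on the old ones.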


Related research:
- Continual Learning via Inter-Task Synaptic Mapping (06/26/2021): Learning from streaming tasks leads a model to catastrophically erase un...
- ScrollNet: Dynamic Weight Importance for Continual Learning (08/31/2023): The principle underlying most existing continual learning (CL) methods i...
- Continual Learning in Linear Classification on Separable Data (06/06/2023): We analyze continual learning on a sequence of separable linear classifi...
- Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding (01/05/2022): Domain classification is the fundamental task in natural language unders...
- Target Layer Regularization for Continual Learning Using Cramer-Wold Generator (11/15/2021): We propose an effective regularization strategy (CW-TaLaR) for solving c...
- Continual Learning for Text Classification with Information Disentanglement Based Regularization (04/12/2021): Continual learning has become increasingly important as it enables NLP m...
- New metrics for analyzing continual learners (09/01/2023): Deep neural networks have shown remarkable performance when trained on i...
