Continual Prune-and-Select: Class-incremental learning with specialized subnetworks

08/09/2022
by   Aleksandr Dekhovich, et al.

The human brain is capable of learning tasks sequentially, mostly without forgetting. However, deep neural networks (DNNs) suffer from catastrophic forgetting when learning one task after another. We address this challenge in a class-incremental learning scenario, where the DNN sees test data without knowing the task from which this data originates. During training, Continual-Prune-and-Select (CP&S) finds a subnetwork within the DNN that is responsible for solving a given task. Then, during inference, CP&S selects the correct subnetwork to make predictions for that task. A new task is learned by training the available neuronal connections of the DNN (those not trained previously) to create a new subnetwork by pruning; the new subnetwork can contain connections belonging to other subnetwork(s) because CP&S never updates shared connections. This eliminates catastrophic forgetting by creating specialized regions in the DNN that do not conflict with each other while still allowing knowledge transfer across them. The CP&S strategy is implemented with different subnetwork selection strategies, revealing superior performance to state-of-the-art continual learning methods on various datasets (CIFAR-100, CUB-200-2011, ImageNet-100 and ImageNet-1000). In particular, CP&S is capable of sequentially learning 10 tasks from ImageNet-1000 while keeping an accuracy of around 94% in class-incremental learning. To the best of the authors' knowledge, this represents an improvement in accuracy of above 20% over the best alternative method.
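The mechanism described above (train only connections that no earlier task has claimed, prune to carve out a per-task mask, and never update shared connections) can be sketched in a few lines. The snippet below is a hypothetical PyTorch illustration of the idea, not the authors' implementation: a single weight matrix stands in for a full DNN, plain magnitude pruning stands in for the paper's pruning criterion, and names such as `train_task`, `infer` and `task_masks` are invented for this example.

```python
import torch

torch.manual_seed(0)

# One weight matrix of a toy linear model, standing in for a DNN.
weight = torch.zeros(8, 8, requires_grad=True)

task_masks = []                             # task_masks[t]: binary mask defining task t's subnetwork
used = torch.zeros(8, 8, dtype=torch.bool)  # union of connections claimed by earlier tasks (frozen)

def train_task(x, y, steps=100, keep_ratio=0.5):
    """Learn a new task: update only never-claimed weights, then prune a subnetwork."""
    global used
    opt = torch.optim.SGD([weight], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        # The forward pass may use all connections, including shared ones...
        loss = torch.nn.functional.mse_loss(x @ weight.T, y)
        loss.backward()
        # ...but connections owned by earlier subnetworks receive no update.
        weight.grad[used] = 0.0
        opt.step()
    # Prune: keep the largest-magnitude connections as this task's subnetwork.
    # The kept set may overlap earlier subnetworks (knowledge transfer),
    # but those shared weights were never modified above.
    with torch.no_grad():
        k = int(keep_ratio * weight.numel())
        thresh = weight.abs().flatten().kthvalue(weight.numel() - k).values
        mask = weight.abs() > thresh
    task_masks.append(mask)
    used |= mask

def infer(x, task_id):
    """Predict with the selected subnetwork: all other connections are masked out."""
    with torch.no_grad():
        return x @ (weight * task_masks[task_id]).T

# Learn two toy tasks sequentially, then query each specialized subnetwork.
x = torch.randn(32, 8)
train_task(x, x.flip(1))   # task 0
train_task(x, -x)          # task 1
print(infer(x, 0).shape, infer(x, 1).shape)
```

Because task 1's gradient is zeroed on every connection claimed by task 0, rerunning `infer(x, 0)` after training task 1 returns exactly the same predictions; this is the sense in which the specialized subnetworks do not conflict.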


research · 05/06/2023
Active Continual Learning: Labelling Queries in a Sequence of Tasks
Acquiring new knowledge without forgetting what has been learned in a se...

research · 07/15/2020
SpaceNet: Make Free Space For Continual Learning
The continual learning (CL) paradigm aims to enable neural networks to l...

research · 09/22/2021
Neural network relief: a pruning algorithm based on neural activity
Current deep neural networks (DNNs) are overparameterized and use most o...

research · 08/25/2023
GRASP: A Rehearsal Policy for Efficient Online Continual Learning
Continual learning (CL) in deep neural networks (DNNs) involves incremen...

research · 07/28/2022
Progressive Voronoi Diagram Subdivision: Towards A Holistic Geometric Framework for Exemplar-free Class-Incremental Learning
Exemplar-free Class-incremental Learning (CIL) is a challenging problem ...

research · 05/03/2020
Continuous Learning in a Single-Incremental-Task Scenario with Spike Features
Deep Neural Networks (DNNs) have two key deficiencies, their dependence ...

research · 02/16/2022
Diagnosing Batch Normalization in Class Incremental Learning
Extensive researches have applied deep neural networks (DNNs) in class i...
