Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network

07/03/2021
by   Jong-Yeong Kim, et al.

Continual learning has been a major problem in the deep learning community, where the main challenge is to effectively learn a series of newly arriving tasks without forgetting the knowledge of previous tasks. Following Learning without Forgetting (LwF), many existing works report that knowledge distillation is effective for preserving previous knowledge, and hence they commonly combine a soft label for the old task, namely a knowledge distillation (KD) loss, with a class label for the new task, namely a cross-entropy (CE) loss, into a composite loss for a single neural network. However, this approach struggles to learn new knowledge through the CE loss, because the KD loss often dominates the objective when the two losses compete within a single network. This is a critical problem particularly in a class-incremental scenario, where the knowledge across tasks as well as within the new task, both of which can only be acquired through the CE loss, must be learned because a unified classifier is used. In this paper, we propose a novel continual learning method, called Split-and-Bridge, which addresses this problem by partially splitting a neural network into two partitions so that the new task is trained separately from the old task, and then re-connecting the partitions to learn the knowledge across tasks. In a thorough experimental analysis, Split-and-Bridge outperforms state-of-the-art competitors in KD-based continual learning.
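As background, the LwF-style composite objective mentioned above can be written as a CE term on the new task plus a KD term that matches the previous model's softened old-class outputs. The sketch below is a minimal PyTorch illustration of that objective; the temperature `T` and the balancing weight `lambda_kd` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def composite_loss(new_logits, old_logits_teacher, targets,
                   n_old_classes, T=2.0, lambda_kd=1.0):
    """LwF-style composite loss: CE on the new task + KD on the old classes.

    new_logits:         current model outputs over all (old + new) classes
    old_logits_teacher: frozen previous model's outputs over the old classes
    targets:            ground-truth labels of the current task's samples
    """
    # CE loss: learn the new task and, through the unified classifier,
    # the discrimination between new and old classes.
    ce = F.cross_entropy(new_logits, targets)

    # KD loss: keep old-class responses close to the previous model's
    # softened predictions (soft labels at temperature T).
    old_log_probs = F.log_softmax(new_logits[:, :n_old_classes] / T, dim=1)
    old_soft_targets = F.softmax(old_logits_teacher / T, dim=1)
    kd = F.kl_div(old_log_probs, old_soft_targets,
                  reduction="batchmean") * (T * T)

    return ce + lambda_kd * kd
```

The following is a speculative sketch of the "split" idea, assuming the split is realized by masking the connections between an old-task partition and a new-task partition of a fully connected layer; the paper's actual partitioning and re-connection mechanism may differ.

```python
import torch
import torch.nn as nn

class SplittableLinear(nn.Linear):
    """Linear layer whose units are divided into an 'old' block and a 'new' block."""

    def __init__(self, in_features, out_features, n_old_in, n_old_out):
        super().__init__(in_features, out_features)
        mask = torch.ones(out_features, in_features)
        # Split phase: sever cross-partition connections so the new task is
        # trained in its own sub-network, isolated from the old one.
        mask[:n_old_out, n_old_in:] = 0.0   # new inputs -> old outputs
        mask[n_old_out:, :n_old_in] = 0.0   # old inputs -> new outputs
        self.register_buffer("split_mask", mask)
        self.split = True

    def bridge(self):
        # Bridge phase: re-connect the partitions so knowledge across tasks
        # can be learned through the unified classifier.
        self.split = False

    def forward(self, x):
        weight = self.weight * self.split_mask if self.split else self.weight
        return nn.functional.linear(x, weight, self.bias)
```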


