Effects of Auxiliary Knowledge on Continual Learning

06/03/2022
by Giovanni Bellitto, et al.

In Continual Learning (CL), a neural network is trained on a stream of data whose distribution changes over time. In this setting, the central problem is how to learn new information without forgetting old knowledge (i.e., Catastrophic Forgetting). Most existing CL approaches focus on preserving acquired knowledge, thus working on the model's past. However, we argue that since the model must continually learn new tasks, it is equally important to focus on present knowledge that could improve the learning of future tasks. In this paper we propose a new, simple CL algorithm that focuses on solving the current task in a way that might facilitate the learning of the next ones. More specifically, our approach combines the main data stream with a secondary, diverse and uncorrelated stream from which the network can draw auxiliary knowledge. This helps the model in several ways: the auxiliary data may contain features that are useful for both the current and future tasks, and incoming task classes can be mapped onto auxiliary classes. Furthermore, adding data to the current task implicitly makes the classifier more robust, since it forces the extraction of more discriminative features. Our method can outperform existing state-of-the-art models on the most common CL image classification benchmarks.
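To make the dual-stream idea concrete, below is a minimal PyTorch-style sketch of one way a training step could combine the main task stream with an auxiliary stream. Everything here is an illustrative assumption, not the authors' exact recipe: the SimpleNet backbone, the separate auxiliary head, and the aux_weight factor are hypothetical choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    # Hypothetical tiny network: shared backbone, two classification heads.
    def __init__(self, num_main_classes, num_aux_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.main_head = nn.Linear(32, num_main_classes)  # current task classes
        self.aux_head = nn.Linear(32, num_aux_classes)    # auxiliary-stream classes

    def forward(self, x):
        feats = self.backbone(x)
        return self.main_head(feats), self.aux_head(feats)

def train_step(model, optimizer, main_batch, aux_batch, aux_weight=0.5):
    # One update on the current task, regularized by a batch drawn from
    # the secondary, uncorrelated auxiliary stream.
    (x_main, y_main), (x_aux, y_aux) = main_batch, aux_batch
    logits_main, _ = model(x_main)
    _, logits_aux = model(x_aux)
    # Task loss plus an auxiliary term that pushes the shared backbone
    # toward more general, discriminative features.
    loss = (F.cross_entropy(logits_main, y_main)
            + aux_weight * F.cross_entropy(logits_aux, y_aux))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with hypothetical loaders main_loader / aux_loader:
# model = SimpleNet(num_main_classes=10, num_aux_classes=100)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# for main_batch, aux_batch in zip(main_loader, aux_loader):
#     train_step(model, optimizer, main_batch, aux_batch)

Keeping a separate head for the auxiliary classes is one plausible design choice: the shared backbone absorbs the extra supervision while the auxiliary labels do not compete directly with the current task's classifier.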

