Online Continual Learning via the Knowledge Invariant and Spread-out Properties

02/02/2023
by   Ya-nan Han, et al.

Continual learning aims to provide intelligent agents capable of learning a sequence of tasks, leveraging the knowledge obtained from earlier tasks while continuing to perform well on them. A key challenge in this paradigm is catastrophic forgetting: adapting a model to new tasks often severely degrades performance on prior ones. Current memory-based approaches alleviate catastrophic forgetting by replaying examples from past tasks while new tasks are learned. However, these methods cannot transfer the structural knowledge of previous tasks, i.e., the similarities and dissimilarities between different instances. Furthermore, the learning bias between the current and prior tasks is another pressing problem to be solved. In this work, we propose a new method, Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP), which constrains the evolution of the embedding features via the Knowledge Invariant and Spread-out Properties (KISP). This allows us to transfer the inter-instance structural knowledge of previous tasks while alleviating the forgetting caused by learning bias. We empirically evaluate the proposed method on four popular continual learning benchmarks: Split CIFAR-100, Split SVHN, Split CUB-200 and Split Tiny-ImageNet. The experimental results show the efficacy of our method compared to state-of-the-art continual learning algorithms.
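As background for the memory-based approaches the abstract builds on: such methods keep a small fixed-size buffer of past examples and interleave them with each new task's batches during training. A minimal sketch of one common buffer strategy, reservoir sampling, is shown below; the class and method names are illustrative and not taken from the paper.

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory filled by reservoir sampling, so every
    example seen so far has an equal probability of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.num_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.num_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Overwrite a random slot with probability capacity / num_seen.
            j = self.rng.randrange(self.num_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size):
        # Draw a replay batch to mix with the current task's batch.
        k = min(batch_size, len(self.buffer))
        return self.rng.sample(self.buffer, k)

buf = ReplayBuffer(capacity=100)
for x in range(1000):
    buf.add(x)
replay_batch = buf.sample(32)
```

Replaying such a batch alongside current-task data counters forgetting at the level of individual examples; the abstract's point is that this alone does not preserve the relational structure between instances.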

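To make the "spread-out" idea concrete: spread-out regularizers in metric learning generally penalize high pairwise similarity between embeddings of distinct instances, pushing them apart on the unit hypersphere. The sketch below is a generic example of such a penalty, not the paper's exact KISP formulation.

```python
import numpy as np

def spread_out_penalty(embeddings):
    """Mean squared pairwise cosine similarity between distinct
    instances; lower values mean embeddings are more spread out."""
    # L2-normalize each row so dot products become cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                          # pairwise cosine similarities
    n = sim.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]  # exclude self-similarity
    return float(np.mean(off_diag ** 2))
```

For mutually orthogonal embeddings the penalty is 0; for identical embeddings it is 1, so minimizing it encourages dissimilar instances to stay separated as the feature space evolves across tasks.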
