Selecting Related Knowledge via Efficient Channel Attention for Online Continual Learning

09/09/2022
by Ya-nan Han, et al.

Continual learning aims to learn a sequence of tasks in an online manner, leveraging knowledge acquired in the past while continuing to perform well on all previous tasks. This ability is crucial for artificial intelligence (AI) systems, which makes continual learning better suited than traditional learning paradigms to most real-world, complex application scenarios. However, current models usually learn a generic representation based on the class labels of each task and then apply some strategy to avoid catastrophic forgetting. We postulate that selecting only the related and useful parts of the acquired knowledge for each task is more effective than utilizing the whole of it. Based on this observation, in this paper we propose a new framework, named Selecting Related Knowledge for Online Continual Learning (SRKOCL), which incorporates an efficient channel attention mechanism to pick out the knowledge relevant to each task. Our model also combines experience replay and knowledge distillation to circumvent catastrophic forgetting. Finally, extensive experiments conducted on different benchmarks show that the proposed SRKOCL is a promising approach compared with the state-of-the-art.

Related research

- Bilevel Continual Learning (07/30/2020)
- Online Prototype Learning for Online Continual Learning (08/01/2023)
- Online Continual Learning under Extreme Memory Constraints (08/04/2020)
- Defeating Catastrophic Forgetting via Enhanced Orthogonal Weights Modification (11/19/2021)
- CoReD: Generalizing Fake Media Detection with Continual Representation using Distillation (07/06/2021)
- Navigating Out-of-Distribution Electricity Load Forecasting during COVID-19: A Continual Learning Approach Leveraging Human Mobility (09/08/2023)
- Class-Incremental Continual Learning into the eXtended DER-verse (01/03/2022)
