Online Continual Learning via the Meta-learning Update with Multi-scale Knowledge Distillation and Data Augmentation

09/12/2022
by Ya-nan Han, et al.

Continual learning aims to learn the current task rapidly and continually from a sequence of tasks. Among existing approaches, methods based on experience replay have shown clear advantages in overcoming catastrophic forgetting. A common limitation of these methods is the data imbalance between previous and current tasks, which further aggravates forgetting. Moreover, how to effectively address the stability-plasticity dilemma in this setting remains an open problem. In this paper, we address these challenges by proposing a novel framework called Meta-learning update via Multi-scale Knowledge Distillation and Data Augmentation (MMKDDA). Specifically, we apply multi-scale knowledge distillation to capture the evolution of long-range and short-range spatial relationships at different feature levels, alleviating the problem of data imbalance. In addition, our method mixes samples from the episodic memory with samples from the current task during online continual training, which alleviates the side effects of shifts in the data distribution. Moreover, we optimize the model via a meta-learning update that takes into account the number of tasks seen so far, which helps maintain a better balance between stability and plasticity. Finally, experimental evaluation on four benchmark datasets shows the effectiveness of the proposed MMKDDA framework against other popular baselines, and ablation studies further analyze the role of each component in our framework.
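As a rough illustration of the replay-with-distillation idea described above, the sketch below combines two ingredients the abstract spells out: mixing current-task samples with samples drawn from the episodic memory, and distilling feature maps at several levels from a frozen snapshot of the model. This is a minimal sketch under assumptions, not the authors' implementation: the names model, old_model and mix_batches, the mixup-style interpolation with a Beta coefficient, and the specific distillation term are illustrative choices, and the model is assumed to return its logits together with a list of intermediate feature maps. The meta-learning update conditioned on the number of tasks seen is not reproduced here.

import torch
import torch.nn.functional as F


def mix_batches(x_cur, y_cur, x_mem, y_mem, alpha=0.2):
    # Interpolate current-task and memory inputs (mixup-style; illustrative only).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_cur + (1.0 - lam) * x_mem
    return x_mix, y_cur, y_mem, lam


def multiscale_kd_loss(student_feats, teacher_feats, T=2.0):
    # Softened KL term averaged over feature levels (one term per scale).
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        ps = F.log_softmax(fs.flatten(1) / T, dim=1)
        pt = F.softmax(ft.detach().flatten(1) / T, dim=1)
        loss = loss + F.kl_div(ps, pt, reduction="batchmean") * (T * T)
    return loss / len(student_feats)


def train_step(model, old_model, optimizer,
               x_cur, y_cur, x_mem, y_mem, kd_weight=1.0):
    # One online replay step: mixed classification loss + multi-scale distillation.
    x_mix, y_a, y_b, lam = mix_batches(x_cur, y_cur, x_mem, y_mem)

    logits, feats = model(x_mix)          # assumed to expose intermediate feature maps
    with torch.no_grad():
        _, old_feats = old_model(x_mix)   # frozen snapshot from before the current task

    # Classification on the mixed batch, weighted by the mixing coefficient.
    ce = lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
    # Distill spatial relations at several feature levels from the old model.
    kd = multiscale_kd_loss(feats, old_feats)

    loss = ce + kd_weight * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The sketch assumes equally sized current and memory mini-batches and matching feature-map shapes between the current model and its snapshot; in practice the memory batch would be sampled from the episodic buffer at each step.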

