Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning

10/24/2019
by Xin Yao, et al.

Human beings can master a wide variety of knowledge and skills through ongoing learning. By contrast, neural networks suffer a dramatic drop in performance on previously learned tasks when new tasks are added to an existing model. This phenomenon, termed catastrophic forgetting, is one of the major roadblocks preventing deep neural networks from achieving human-level artificial intelligence. Several lines of research, e.g. lifelong or continual learning algorithms, have been proposed to tackle this problem. However, they either suffer an accumulating performance drop as the task sequence grows longer, require storing an excessive number of model parameters as historical memory, or fail to reach competitive performance on new tasks. In this paper, we focus on the incremental multi-task image classification scenario. Inspired by how human students decompose complex tasks into easier subgoals, we propose an adversarial feature alignment method to avoid catastrophic forgetting. In our design, both low-level visual features and high-level semantic features serve as soft targets that guide training in multiple stages, providing sufficient supervision about the old tasks and helping to reduce forgetting. Owing to the knowledge-distillation and regularization effects, the proposed method even outperforms fine-tuning on new tasks, which sets it apart from other methods. Extensive experiments in several typical lifelong learning scenarios demonstrate that our method outperforms the state of the art in both accuracy on new tasks and performance preservation on old tasks.
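To make the soft-target idea concrete, below is a minimal sketch (not the authors' released code) of using a frozen copy of the pre-update model's low-level and high-level features as distillation targets while training on a new task. It shows only the multi-stage feature-distillation ingredient described in the abstract; the adversarial component that aligns feature distributions is omitted, and all names here (SplitBackbone, alignment_losses, the loss weights) are illustrative assumptions rather than the paper's API.

```python
# Sketch of multi-level feature distillation for incremental learning:
# low-level visual features and high-level semantic features of a frozen
# copy of the old model serve as soft targets for the updated model.
# All class/function names are hypothetical, not from the paper.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitBackbone(nn.Module):
    """Toy CNN exposing a low-level and a high-level feature map."""

    def __init__(self, num_classes: int):
        super().__init__()
        # Early conv block -> low-level visual features.
        self.low = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Later block -> high-level semantic features.
        self.high = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        f_low = self.low(x)
        f_high = self.high(f_low)
        return f_low, f_high, self.head(f_high)


def alignment_losses(new_model, old_model, x, y, w_low=1.0, w_high=1.0):
    """New-task cross-entropy plus penalties that keep the new model's
    features close to the old model's soft targets at two levels."""
    f_low, f_high, logits = new_model(x)
    with torch.no_grad():  # old model is frozen; its features are targets
        t_low, t_high, _ = old_model(x)
    task_loss = F.cross_entropy(logits, y)
    align_low = F.mse_loss(f_low, t_low)     # low-level visual alignment
    align_high = F.mse_loss(f_high, t_high)  # high-level semantic alignment
    return task_loss + w_low * align_low + w_high * align_high


# Usage: freeze a copy of the model before learning the new task.
new_model = SplitBackbone(num_classes=10)
old_model = copy.deepcopy(new_model).eval()
for p in old_model.parameters():
    p.requires_grad_(False)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = alignment_losses(new_model, old_model, x, y)
loss.backward()
```

In the full method, the "adversarial" part presumably replaces or complements these per-sample distances with a discriminator trained to make new-task features indistinguishable from the old model's, aligning feature distributions rather than individual activations; the sketch above covers only the distillation baseline.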


Related research

Knowledge Distillation for Incremental Learning in Semantic Segmentation (11/08/2019)
Although deep learning architectures have shown remarkable results in sc...

Self-Supervised Learning Aided Class-Incremental Lifelong Learning (06/10/2020)
Lifelong or continual learning remains to be a challenge for artificial ...

Representation Memorization for Fast Learning New Knowledge without Forgetting (08/28/2021)
The ability to quickly learn new knowledge (e.g. new classes or data dis...

Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network (07/03/2021)
Continual learning has been a major problem in the deep learning communi...

Association: Remind Your GAN not to Forget (11/27/2020)
Neural networks are susceptible to catastrophic forgetting. They fail to...

The alignment problem from a deep learning perspective (08/30/2022)
Within the coming decades, artificial general intelligence (AGI) may sur...

Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision (09/03/2021)
General Continual Learning (GCL) aims at learning from non independent a...
