Defeating Catastrophic Forgetting via Enhanced Orthogonal Weights Modification

11/19/2021
by Yanni Li, et al.

The ability of neural networks (NNs) to learn and remember multiple tasks sequentially is a key challenge on the path to general artificial intelligence because of catastrophic forgetting (CF). Fortunately, the recent OWM (Orthogonal Weights Modification) method and several other continual learning (CL) methods suggest promising ways to overcome CF. However, none of the existing CL methods explores the following three crucial questions for effectively overcoming CF: what knowledge contributes to effective weight modification of the NN during sequential task learning? When the data distribution of a new task shifts relative to the previously learned tasks, should a uniform or a task-specific weight modification strategy be adopted? What is the upper bound on the number of tasks a given CL method can learn sequentially? To address these questions, we first show that the weight gradient of a new learning task is determined by both the input space of the new task and the weight space of the previously learned tasks. Building on this observation and the recursive least squares (RLS) optimization method, we propose EOWM, an efficient and effective continual learning method based on enhanced OWM. We also derive a theoretical upper bound on the number of tasks EOWM can learn sequentially. Extensive experiments on standard benchmarks demonstrate that EOWM is effective and outperforms all state-of-the-art CL baselines.
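
For readers unfamiliar with the baseline the paper builds on: OWM modifies weights only in directions orthogonal to the input space of previously learned tasks, with the orthogonal projector maintained by an RLS-style recursion. The following NumPy sketch illustrates that projector update and the projected gradient step for a single linear layer. It is a minimal illustration of the OWM idea under assumed names and toy dimensions (P, alpha, lr, in_dim, out_dim), not the paper's EOWM implementation.

import numpy as np

def make_projector(dim):
    # Start with the identity: no previous-task inputs constrained yet.
    return np.eye(dim)

def update_projector(P, x, alpha=1e-3):
    # RLS-style recursion: shrink P along the direction of input x so that
    # later weight updates become (approximately) orthogonal to x.
    x = x.reshape(-1, 1)
    k = P @ x / (alpha + x.T @ P @ x)
    return P - k @ (x.T @ P)

def owm_step(W, grad, P, lr=0.1):
    # Project the raw gradient (shape: out_dim x in_dim) onto the space
    # orthogonal to the inputs of previously learned tasks, then apply it.
    return W - lr * (grad @ P)

# Toy usage with assumed dimensions.
rng = np.random.default_rng(0)
in_dim, out_dim = 4, 2
W = rng.standard_normal((out_dim, in_dim))
P = make_projector(in_dim)

# After finishing a task, fold its layer inputs into the projector.
for x in rng.standard_normal((10, in_dim)):
    P = update_projector(P, x)

# On the next task, gradients are projected before each weight update.
grad = rng.standard_normal((out_dim, in_dim))
W = owm_step(W, grad, P)

The paper's contribution, per the abstract, is to enhance this scheme by also accounting for the weight space of previously learned tasks when shaping the update, and to bound how many tasks can be learned this way.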


research
05/07/2020

Generative Feature Replay with Orthogonal Weight Modification for Continual Learning

The ability of intelligent agents to learn and remember multiple tasks s...
research
09/09/2022

Selecting Related Knowledge via Efficient Channel Attention for Online Continual Learning

Continual learning aims to learn a sequence of tasks by leveraging the k...
research
10/24/2018

Continual Classification Learning Using Generative Models

Continual learning is the ability to sequentially learn over time by acc...
research
06/09/2021

Optimizing Reusable Knowledge for Continual Learning via Metalearning

When learning tasks over time, artificial neural networks suffer from a ...
research
06/03/2019

Continual learning with hypernetworks

Artificial neural networks suffer from catastrophic forgetting when they...
research
08/07/2023

Do You Remember? Overcoming Catastrophic Forgetting for Fake Audio Detection

Current fake audio detection algorithms have achieved promising performa...
research
10/05/2022

ImpressLearn: Continual Learning via Combined Task Impressions

This work proposes a new method to sequentially train a deep neural netw...
