Optimizing Reusable Knowledge for Continual Learning via Metalearning

06/09/2021
by Julio Hurtado, et al.

When learning tasks over time, artificial neural networks suffer from a problem known as Catastrophic Forgetting (CF). This happens when the weights of a network are overwritten during the training of a new task, causing forgetting of old information. To address this issue, we propose MetA Reusable Knowledge, or MARK, a new method that fosters weight reusability instead of overwriting when learning a new task. Specifically, MARK keeps a set of shared weights among tasks. We envision these shared weights as a common Knowledge Base (KB) that is not only used to learn new tasks, but is also enriched with new knowledge as the model learns new tasks. The key components behind MARK are two-fold. On the one hand, a metalearning approach provides the key mechanism to incrementally enrich the KB with new knowledge and to foster weight reusability among tasks. On the other hand, a set of trainable masks provides the key mechanism to selectively choose from the KB the relevant weights to solve each task. Using MARK, we achieve state-of-the-art results on several popular benchmarks, surpassing the best-performing methods in terms of average accuracy by over 10% on the 20-Split-MiniImageNet dataset, while achieving almost zero forgetting using 55% of the number of parameters. Furthermore, an ablation study provides evidence that, indeed, MARK is learning reusable knowledge that is selectively used by each task.
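
The abstract describes two mechanisms: a shared Knowledge Base (KB) of weights reused across tasks, and per-task trainable masks that select relevant weights from it, with a metalearning step that enriches the KB over time. Below is a minimal PyTorch sketch of how a mask-gated KB layer and a first-order meta-update of the shared weights could look. All names (SharedKB, TaskMask, MaskedTaskModel, meta_update_kb), shapes, and the Reptile-style update rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SharedKB(nn.Module):
    """Knowledge Base: a feature block whose weights are shared by all tasks."""

    def __init__(self, in_dim: int, kb_dim: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, kb_dim)

    def forward(self, x):
        return torch.relu(self.fc(x))


class TaskMask(nn.Module):
    """Trainable per-task mask that selects which KB units a task uses."""

    def __init__(self, in_dim: int, kb_dim: int):
        super().__init__()
        self.gate = nn.Linear(in_dim, kb_dim)

    def forward(self, x):
        return torch.sigmoid(self.gate(x))  # soft mask in [0, 1]


class MaskedTaskModel(nn.Module):
    """Combines shared KB features with a task-specific mask and head."""

    def __init__(self, kb: SharedKB, in_dim: int, kb_dim: int, n_classes: int):
        super().__init__()
        self.kb = kb                                # shared across tasks
        self.mask = TaskMask(in_dim, kb_dim)        # one per task
        self.head = nn.Linear(kb_dim, n_classes)   # one per task

    def forward(self, x):
        feats = self.kb(x)
        return self.head(feats * self.mask(x))


def meta_update_kb(kb: SharedKB, adapted_kb: SharedKB, meta_lr: float = 0.1):
    """First-order meta-step (Reptile-like sketch, an assumption rather than
    MARK's exact rule): nudge the shared KB weights toward weights adapted on
    the current task, so new knowledge is absorbed instead of overwriting."""
    with torch.no_grad():
        for p, q in zip(kb.parameters(), adapted_kb.parameters()):
            p.add_(meta_lr * (q - p))


# Usage sketch: one shared KB, a fresh mask and head per incoming task.
kb = SharedKB(in_dim=512, kb_dim=256)
task_model = MaskedTaskModel(kb, in_dim=512, kb_dim=256, n_classes=5)
logits = task_model(torch.randn(8, 512))
```

Because only the mask and head are task-specific, the bulk of the parameters lives in the shared KB; selective gating is what lets each task reuse those weights without overwriting what earlier tasks rely on.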

Related research

10/18/2022 | Exclusive Supermask Subnetwork Training for Continual Learning
Continual Learning (CL) methods mainly focus on avoiding catastrophic fo...

11/21/2019 | Continual Learning with Adaptive Weights (CLAW)
Approaches to continual learning aim to successfully learn a set of rela...

08/26/2023 | Differentiable Weight Masks for Domain Transfer
One of the major drawbacks of deep learning models for computer vision h...

05/17/2021 | Layerwise Optimization by Gradient Decomposition for Continual Learning
Deep neural networks achieve state-of-the-art and sometimes super-human ...

11/19/2021 | Defeating Catastrophic Forgetting via Enhanced Orthogonal Weights Modification
The ability of neural networks (NNs) to learn and remember multiple task...

09/29/2020 | One Person, One Model, One World: Learning Continual User Representation without Forgetting
Learning generic user representations which can then be applied to other...

11/28/2022 | Progressive Learning without Forgetting
Learning from changing tasks and sequential experience without forgettin...
