Knowledge Accumulation in Continually Learned Representations and the Issue of Feature Forgetting

04/03/2023
by Timm Hess et al.

By default, neural networks are trained on all of their data at once. When such a model is instead trained on sequential chunks of new data, it tends to catastrophically forget how to handle the old data. In this work we investigate how continual learners learn and forget representations. We observe two phenomena: knowledge accumulation, i.e. the improvement of a representation over time, and feature forgetting, i.e. the loss of task-specific representations. To better understand both phenomena, we introduce a new analysis technique called task exclusion comparison: if a model has seen a task and has not forgotten all of its task-specific features, then its representation for that task should be better than that of a model trained on similar tasks, but not that exact one. Our image classification experiments show that most task-specific features are quickly forgotten, in contrast to what has been suggested in the past. Further, we demonstrate how continual learning methods such as replay, as well as ideas from representation learning, affect a continually learned representation. We conclude by observing that representation quality is tightly correlated with continual learning performance.
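To make the task exclusion comparison concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the synthetic tasks, the small Encoder, and the helper names (make_task_data, train_sequentially, probe_accuracy) are illustrative assumptions. The idea is to train one model on a task sequence that includes task k and another on the same sequence with task k excluded, then compare linear-probe accuracy on task k over each frozen representation; a small gap indicates the task-specific features were forgotten.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_task_data(task_id, n=512, dim=32):
    """Synthetic 2-class task; class means are seeded by the task id."""
    means = torch.randn(2, dim, generator=torch.Generator().manual_seed(task_id))
    y = torch.randint(0, 2, (n,))
    x = means[y] + 0.5 * torch.randn(n, dim)
    return x, y

class Encoder(nn.Module):
    def __init__(self, dim=32, hidden=64, feat=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat))
    def forward(self, x):
        return self.net(x)

def train_sequentially(task_ids, epochs=20):
    """Finetune the shared encoder with a fresh head per task, no replay."""
    enc = Encoder()
    for t in task_ids:
        head = nn.Linear(16, 2)
        opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()),
                               lr=1e-2)
        x, y = make_task_data(t)
        for _ in range(epochs):
            opt.zero_grad()
            F.cross_entropy(head(enc(x)), y).backward()
            opt.step()
    return enc

def probe_accuracy(enc, task_id, epochs=50):
    """Linear probe on the frozen representation measures its quality for a task."""
    x, y = make_task_data(task_id)
    with torch.no_grad():
        z = enc(x)
    probe = nn.Linear(z.shape[1], 2)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(probe(z), y).backward()
        opt.step()
    xt, yt = make_task_data(task_id, n=256)  # fresh sample, same class means
    with torch.no_grad():
        return (probe(enc(xt)).argmax(1) == yt).float().mean().item()

# Task exclusion comparison for task k: if the "included" model retained
# task-specific features, its probe accuracy on task k should clearly beat
# the "excluded" model, which saw similar tasks but never task k itself.
k, tasks = 0, [0, 1, 2, 3]
acc_incl = probe_accuracy(train_sequentially(tasks), k)
acc_excl = probe_accuracy(train_sequentially([t for t in tasks if t != k]), k)
print(f"probe acc on task {k}: seen={acc_incl:.2f}  excluded={acc_excl:.2f}")

In this sketch the linear probe stands in for whatever representation-quality measure one prefers; the comparison itself only requires that both models are evaluated on the excluded task with their feature extractors frozen.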


Related research

03/21/2020
Adversarial Continual Learning
Continual learning aims to learn new tasks without forgetting previously...

12/12/2020
Knowledge Capture and Replay for Continual Learning
Deep neural networks have shown promise in several domains, and the lear...

06/14/2023
POP: Prompt Of Prompts for Continual Learning
Continual learning (CL) has attracted increasing attention in the recent...

09/29/2020
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Learning generic user representations which can then be applied to other...

06/18/2023
IF2Net: Innately Forgetting-Free Networks for Continual Learning
Continual learning can incrementally absorb new concepts without interfe...

06/22/2020
Automatic Recall Machines: Internal Replay, Continual Learning and the Brain
Replay in neural networks involves training on sequential data with memo...

06/24/2020
Improving task-specific representation via 1M unlabelled images without any extra knowledge
We present a case-study to improve the task-specific representation by l...
