Knowledge Capture and Replay for Continual Learning

Deep neural networks have shown promise in several domains, and the task-specific information they learn is implicitly stored in the network parameters. Leveraging the representations captured in these parameters for downstream tasks such as continual learning is therefore valuable. In this paper, we introduce flashcards: visual representations, constructed as a function of random image patterns, that capture the encoded knowledge of a network. We demonstrate that flashcards are effective at capturing representations and serve as an efficient replay method for the general, task-agnostic continual learning setting. While adapting to a new task, a limited number of constructed flashcards helps prevent catastrophic forgetting of previously learned tasks. Notably, flashcards require no external memory storage and need not be accumulated across tasks: they are constructed just before learning each subsequent task, irrespective of how many tasks were trained before, and are hence task agnostic. We first demonstrate the efficacy of flashcards in capturing the knowledge representation of a trained network, and then empirically validate them on a variety of continual learning tasks, namely continual unsupervised reconstruction, continual denoising, and new-instance learning for classification, using several heterogeneous benchmark datasets. These studies indicate that continual learning algorithms using flashcards as the replay strategy outperform other state-of-the-art replay methods and perform on par with the strongest baseline, coreset sampling, while incurring the least additional computational and storage overhead.
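
For concreteness, below is a minimal sketch of flashcard construction, assuming a trained PyTorch autoencoder. It follows the recipe sketched above: random image patterns are passed recursively through the frozen network so their outputs settle into patterns reflecting what the network has learned. The function name construct_flashcards, the uniform initialization, and the number of recursive passes are illustrative assumptions, not values prescribed by the paper.

    import torch

    @torch.no_grad()
    def construct_flashcards(autoencoder, num_cards, image_shape,
                             num_passes=5, device="cpu"):
        """Capture a trained autoencoder's knowledge as flashcards.

        Random image patterns are repeatedly passed through the frozen
        network; after a few recursive passes the outputs converge to
        patterns that reflect the network's learned representation.
        (num_passes and the uniform init are illustrative choices.)
        """
        autoencoder.eval()
        # Start from random image patterns.
        x = torch.rand(num_cards, *image_shape, device=device)
        for _ in range(num_passes):
            x = autoencoder(x)  # recursive reconstruction pass
        return x.detach()       # flashcards: replay data for old tasks

In a continual learning loop, these flashcards (with the previous model's outputs as reconstruction targets) would be mixed into each new-task batch as replay data, then discarded and rebuilt from the updated network before the next task arrives, so no replay buffer accumulates over time.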


