Brain-inspired feature exaggeration in generative replay for continual learning

10/26/2021
by Jack Millichamp, et al.

The catastrophic forgetting of previously learnt classes is one of the main obstacles to developing a reliable and accurate generative continual learning model. When a model learns new classes, its internal representations of previously learnt ones are often overwritten, so the model's "memory" of earlier classes degrades over time. Recent developments in neuroscience have uncovered a mechanism through which the brain avoids its own form of memory interference: by applying a targeted exaggeration of the differences between the features of similar yet competing memories, the brain can more easily distinguish and recall them. This paper explores the application of such exaggeration via the repulsion of replayed samples belonging to competing classes. Through the development of a 'reconstruction repulsion' loss, this paper presents a new state-of-the-art performance on the classification of early classes in the class-incremental learning benchmark CIFAR100.
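The abstract does not give the exact form of the 'reconstruction repulsion' loss, but the idea of pushing apart replayed samples from competing classes can be sketched with a simple margin-based hinge term. Everything below is an illustrative assumption, not the paper's actual formulation: `reconstruction_repulsion_loss` penalises pairs of reconstructions from competing classes whose Euclidean distance falls below a chosen margin.

```python
import numpy as np

def reconstruction_repulsion_loss(recon_a, recon_b, margin=1.0):
    """Hinge-style repulsion between replayed reconstructions of two
    competing classes (illustrative sketch, not the paper's exact loss).

    recon_a, recon_b: arrays of shape (batch, ...) with paired samples
    from two classes whose representations should be pushed apart.
    """
    # Per-sample Euclidean distance between flattened reconstructions
    diff = recon_a.reshape(len(recon_a), -1) - recon_b.reshape(len(recon_b), -1)
    dist = np.linalg.norm(diff, axis=1)
    # Only pairs closer than the margin incur a penalty, so minimising
    # this term repels similar pairs without affecting distant ones
    return np.maximum(margin - dist, 0.0).mean()
```

Minimising such a term alongside the usual generative-replay objectives would exaggerate the differences between easily confused classes, mirroring the brain-inspired mechanism the abstract describes.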


