Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning

04/05/2019, by Oleksiy Ostapenko, et al.

Models trained in the context of continual learning (CL) should be able to learn from a stream of data over an undefined period of time. The main challenges here are: 1) maintaining old knowledge while simultaneously benefiting from it when learning new tasks, and 2) guaranteeing model scalability as the amount of data to learn from grows. To tackle these challenges, we introduce Dynamic Generative Memory (DGM), a synaptic-plasticity-driven framework for continual learning. DGM relies on conditional generative adversarial networks with learnable connection plasticity realized through neural masking. Specifically, we evaluate two variants of neural masking, applied (i) to layer activations and (ii) to connection weights directly. Furthermore, we propose a dynamic network expansion mechanism that ensures sufficient model capacity to accommodate continually incoming tasks; the amount of added capacity is determined dynamically from the learned binary mask. We evaluate DGM in the continual class-incremental setup on visual classification tasks.
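The two masking variants mentioned in the abstract can be illustrated with a toy dense layer. The sketch below is a minimal assumption-laden illustration (variable names and the hard binary masks are illustrative, not the paper's actual implementation, which learns the masks during GAN training): variant (ii) multiplies a binary mask into the weight matrix itself, variant (i) gates the layer's output activations, and the count of still-unused units hints at when the expansion mechanism would grow the layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense layer: 4 inputs -> 3 units (shapes chosen arbitrarily).
W = rng.standard_normal((4, 3))

# Hypothetical learned binary masks (in DGM these would be learned;
# here they are fixed for illustration).
m_weight = (rng.random(W.shape) > 0.5).astype(W.dtype)  # per-connection mask
m_act = np.array([1.0, 0.0, 1.0])                       # per-unit mask

x = rng.standard_normal(4)

# Variant (ii): mask applied to connection weights directly.
h_weight_masked = x @ (W * m_weight)

# Variant (i): mask applied to the layer's output activations.
h_act_masked = (x @ W) * m_act

# Expansion heuristic: units with a zero activation mask are still free;
# the layer would only be grown once few free units remain.
free_units = int((m_act == 0).sum())
print(free_units)  # 1 free unit in this toy mask
```

Masking weights preserves per-connection granularity (a unit can stay partially reusable), while masking activations switches whole units on or off, which is what makes the free-unit count a natural capacity signal.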


Related research:

- 10/23/2021, AFEC: Active Forgetting of Negative Transfer in Continual Learning. Continual learning aims to learn a sequence of tasks from dynamic data d...
- 02/26/2020, Metaplasticity in Multistate Memristor Synaptic Networks. Recent studies have shown that metaplastic synapses can retain informati...
- 06/18/2022, NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks. The goal of continual learning (CL) is to learn different tasks over tim...
- 03/13/2017, Continual Learning Through Synaptic Intelligence. While deep learning has led to remarkable advances across diverse applic...
- 08/29/2023, Continual Learning for Generative Retrieval over Dynamic Corpora. Generative retrieval (GR) directly predicts the identifiers of relevant ...
- 08/08/2022, A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring. Continual Learning aims to learn from a stream of tasks, being able to r...
- 03/27/2023, Forget-free Continual Learning with Soft-Winning SubNetworks. Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which states t...
