Graph-Based Continual Learning

07/09/2020
by Binh Tang, et al.

Despite significant advances, continual learning models still suffer from catastrophic forgetting when exposed to incrementally available data from non-stationary distributions. Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and use it not only to learn new tasks but also to guard against forgetting. Empirical results on several benchmark datasets show that our model consistently outperforms recently proposed baselines for task-free continual learning.
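
As a rough illustration of the idea, the sketch below pairs a rehearsal buffer with a learnable matrix of pairwise edge logits and adds a graph-smoothness penalty to the replay loss, so that memory slots joined by a likely edge are pushed toward similar predictions. This is a minimal sketch in PyTorch, not the authors' implementation; the names GraphMemory, edge_logits, replay_loss, and the regularizer weight lam are illustrative assumptions.

import torch
import torch.nn.functional as F

class GraphMemory:
    """Episodic memory whose slots are connected by a learnable random graph."""
    def __init__(self, n_slots, feat_dim):
        self.x = torch.zeros(n_slots, feat_dim)          # stored inputs
        self.y = torch.zeros(n_slots, dtype=torch.long)  # stored labels
        self.filled = 0
        # One logit per slot pair; sigmoid(edge_logits) is the edge probability.
        # Pass mem.edge_logits to the optimizer alongside the model parameters.
        self.edge_logits = torch.nn.Parameter(torch.zeros(n_slots, n_slots))

    def write(self, x, y):
        """Cyclic write of a single example into the memory."""
        i = self.filled % self.x.size(0)
        self.x[i], self.y[i] = x, y
        self.filled += 1

    def adjacency(self):
        """Symmetric edge probabilities between memory slots."""
        p = torch.sigmoid(self.edge_logits)
        return 0.5 * (p + p.t())

def replay_loss(model, mem, lam=0.1):
    """Cross-entropy on replayed slots plus a graph-smoothness penalty:
    slots joined by a likely edge should receive similar predictions."""
    n = min(mem.filled, mem.x.size(0))
    if n == 0:
        return torch.tensor(0.0)
    logits = model(mem.x[:n])
    ce = F.cross_entropy(logits, mem.y[:n])
    a = mem.adjacency()[:n, :n]
    probs = F.softmax(logits, dim=-1)
    # Pairwise squared distances between predictive distributions,
    # weighted by the learned edge probabilities.
    d = torch.cdist(probs, probs).pow(2)
    smooth = (a * d).sum() / (n * n)
    return ce + lam * smooth

Per the abstract, the graph is used both to learn new tasks and to guard against forgetting; the sketch only shows the replay-side regularizer, the simpler half of that loop.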


Related research

05/17/2021 · Continual Learning with Echo State Networks
Continual Learning (CL) refers to a learning setup where data is non sta...

04/15/2021 · Rehearsal revealed: The limits and merits of revisiting samples in continual learning
Learning from non-stationary data streams and overcoming catastrophic fo...

05/10/2021 · Continual Learning via Bit-Level Information Preserving
Continual learning tackles the setting of learning different tasks seque...

04/22/2020 · Continual Learning of Object Instances
We propose continual instance learning - a method that applies the conce...

12/19/2022 · DSI++: Updating Transformer Memory with New Documents
Differentiable Search Indices (DSIs) encode a corpus of documents in the...

09/28/2022 · A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal
Online continual learning (OCL) aims to train neural networks incrementa...

08/14/2023 · CBA: Improving Online Continual Learning via Continual Bias Adaptor
Online continual learning (CL) aims to learn new knowledge and consolida...
