Online Continual Learning under Extreme Memory Constraints

08/04/2020
by Enrico Fini, et al.

Continual Learning (CL) aims to develop agents that emulate the human ability to learn new tasks sequentially while retaining knowledge from past experiences. In this paper, we introduce the novel problem of Memory-Constrained Online Continual Learning (MC-OCL), which imposes strict limits on the memory overhead an algorithm may use to avoid catastrophic forgetting. Since most, if not all, previous CL methods violate these constraints, we propose an algorithmic solution to MC-OCL: Batch-level Distillation (BLD), a regularization-based CL approach that balances stability and plasticity in order to learn from data streams while preserving the ability to solve old tasks through distillation. Our extensive experimental evaluation, conducted on three publicly available benchmarks, empirically demonstrates that our approach successfully addresses the MC-OCL problem and achieves accuracy comparable to prior distillation methods that require higher memory overhead.
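To illustrate the distillation-based regularization the abstract refers to, here is a minimal sketch of a generic Hinton-style distillation penalty for continual learning: the current model's softened outputs are kept close to those of a frozen copy of the previous model, and this penalty is added to the new-task loss. The function names, the temperature `T`, and the weight `lam` are illustrative assumptions; the exact batch-level formulation of BLD is defined in the paper itself.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a 1-D array of logits."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the old (frozen) model's softened outputs
    and the current model's softened outputs. Minimized when the new
    model reproduces the old model's predictive distribution."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return float(-(p_old * np.log(p_new + 1e-12)).sum())

def total_loss(task_loss, old_logits, new_logits, lam=1.0, T=2.0):
    """New-task loss plus a distillation penalty against forgetting;
    lam trades off plasticity (new task) vs. stability (old tasks)."""
    return task_loss + lam * distillation_loss(old_logits, new_logits, T)
```

With `lam = 0` the objective reduces to plain new-task training (maximal plasticity); increasing `lam` anchors the network to its previous behavior (more stability). MC-OCL's point is that even this regularizer must be computed under a strict memory budget, e.g. without storing past data.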

