Continual Learning in Low-rank Orthogonal Subspaces

10/22/2020
by Arslan Chaudhry, et al.

In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished. The prior art in CL uses episodic memory, parameter regularization, or extensible network structures to reduce interference among tasks, but in the end all of these approaches learn different tasks in a joint vector space. We believe this invariably leads to interference among tasks. We propose instead to learn tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference. Further, to keep the gradients of different tasks coming from these subspaces orthogonal to each other, we learn isometric mappings by posing network training as an optimization problem over the Stiefel manifold. To the best of our knowledge, we report, for the first time, strong results over the experience-replay baseline, with and without memory, on standard classification benchmarks in continual learning. The code is publicly available.
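The abstract combines two ideas that can be stated concretely: (1) give each task its own low-rank subspace, with the subspaces mutually orthogonal, and (2) keep the network's mapping isometric by constraining weights to the Stiefel manifold. The sketch below is a minimal, hypothetical illustration of both ingredients, not the authors' released code: `make_task_subspaces`, `project`, and `stiefel_retract` are names invented here, and the QR-based retraction is just one standard way of staying on the Stiefel manifold.

```python
import torch

def make_task_subspaces(dim, rank, num_tasks, seed=0):
    # Orthonormalize one random matrix, then hand each task a disjoint
    # block of its columns, so the task subspaces are mutually orthogonal.
    g = torch.Generator().manual_seed(seed)
    q, _ = torch.linalg.qr(torch.randn(dim, rank * num_tasks, generator=g))
    return [q[:, t * rank:(t + 1) * rank] for t in range(num_tasks)]

def project(x, basis):
    # Project features x of shape (batch, dim) onto one task's subspace.
    return (x @ basis) @ basis.T

def stiefel_retract(w, grad, lr=0.1):
    # Gradient step followed by a QR retraction, which maps the updated
    # matrix back onto the Stiefel manifold (columns stay orthonormal).
    q, r = torch.linalg.qr(w - lr * grad)
    # Fix QR's sign ambiguity so the retraction is deterministic.
    return q * torch.sign(torch.diagonal(r)).unsqueeze(0)

if __name__ == "__main__":
    dim, rank, num_tasks = 64, 4, 3
    bases = make_task_subspaces(dim, rank, num_tasks)
    # Pairwise orthogonality: B_i^T B_j = 0 for i != j.
    print(torch.allclose(bases[0].T @ bases[1],
                         torch.zeros(rank, rank), atol=1e-6))
    # After a retraction step, W^T W is still the identity.
    w = stiefel_retract(bases[0], torch.randn(dim, rank))
    print(torch.allclose(w.T @ w, torch.eye(rank), atol=1e-5))
```

Because the task bases are disjoint column blocks of a single orthonormal matrix, and an isometric (Stiefel-constrained) mapping preserves inner products, updates computed in one task's subspace remain orthogonal to the other tasks' subspaces, which is the interference-reduction mechanism the abstract argues for.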


