Memory Bounds for Continual Learning

04/22/2022
by Xi Chen et al.

Continual learning, or lifelong learning, is a formidable current challenge for machine learning. It requires the learner to solve a sequence of k different learning tasks, one after the other, while retaining its aptitude for earlier tasks; the continual learner should scale better than the obvious solution of developing and maintaining a separate learner for each of the k tasks. We embark on a complexity-theoretic study of continual learning in the PAC framework. We make novel uses of communication complexity to establish that any continual learner, even an improper one, needs memory that grows linearly with k, strongly suggesting that the problem is intractable. When logarithmically many passes over the learning tasks are allowed, we provide an algorithm based on multiplicative weights update whose memory requirement scales well; we also establish that improper learning is necessary for such performance. We conjecture that these results may lead to promising new approaches to continual learning.
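The abstract's multi-pass algorithm is based on the multiplicative weights update (MWU) method. As a point of reference, here is a minimal generic MWU sketch, not the paper's algorithm: the function name, the fixed learning rate eta, and the losses-in-[0, 1] setup are illustrative assumptions. Each round, every expert's weight is multiplied by a factor that decays exponentially with the loss it just incurred, so weight concentrates on experts with low cumulative loss.

```python
import math

def multiplicative_weights(losses, eta=0.5):
    """Generic multiplicative weights update (illustrative sketch, not the paper's algorithm).

    losses: a list of rounds; each round is a list of per-expert losses in [0, 1].
    eta: learning rate (assumed fixed here for simplicity).
    Returns the final normalized weight distribution over experts.
    """
    n = len(losses[0])          # number of experts
    w = [1.0] * n               # start with uniform weights
    for round_losses in losses:
        # Exponentially penalize each expert by the loss it incurred this round.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]
```

For example, if expert 0 always incurs loss 0 and expert 1 always incurs loss 1, then after a few rounds nearly all weight sits on expert 0; the same reweighting principle underlies the memory-efficient multi-pass learner described above.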


Related research

- 03/20/2023: Sparse Distributed Memory is a Continual Learner
- 04/29/2023: The Ideal Continual Learner: An Agent That Never Forgets
- 02/25/2021: On continual single index learning
- 03/02/2022: Continual Learning of Multi-modal Dynamics with External Memory
- 02/27/2019: Continual Learning with Tiny Episodic Memories
- 04/18/2019: Continual Learning for Sentence Representations Using Conceptors
- 10/22/2020: Continual Learning in Low-rank Orthogonal Subspaces
