Online continual learning with no task boundaries

03/20/2019
by Rahaf Aljundi et al.

Continual learning is the ability of an agent to learn online from a non-stationary and never-ending stream of data. A key component of such a never-ending learning process is overcoming catastrophic forgetting of previously seen data, a problem that neural networks are well known to suffer from. The solutions developed so far often relax continual learning to the easier task-incremental setting, where the stream of data is divided into tasks with clear boundaries. In this paper, we go beyond these limits and move to the more challenging online setting, where we assume no task information is available in the data stream. We start from the idea that each learning step should not increase the losses of previously learned examples, which we enforce by constraining the optimization process. This means that the number of constraints grows linearly with the number of examples, which is a serious limitation. We develop a solution that selects a fixed number of constraints to approximate the feasible region defined by the original constraints. We compare our approach against methods that rely on task boundaries to select a fixed set of examples, and show comparable or even better results, especially when the boundaries are blurry or when the data distributions are imbalanced.
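As a rough illustration of the gradient-constraint idea described above (not the authors' implementation), the sketch below enforces a single averaged constraint in the style of A-GEM: if the gradient on new data conflicts with the gradient on a small memory of past examples, it is projected so the memory loss does not increase. The paper instead keeps per-example constraints and selects a fixed subset of them to approximate the feasible region; that selection step is not reproduced here. The names `model`, `loss_fn`, `memory_x`, and `memory_y` are hypothetical placeholders.

```python
import torch

def flat_grad(loss, params):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def constrained_step(model, loss_fn, x, y, memory_x, memory_y, lr=0.1):
    """One SGD step on (x, y) constrained not to increase the memory loss.

    A minimal sketch: one averaged memory constraint stands in for the
    paper's per-example constraint set.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    g = flat_grad(loss_fn(model(x), y), params)                    # new-data gradient
    g_mem = flat_grad(loss_fn(model(memory_x), memory_y), params)  # memory gradient

    # A negative inner product means the update would increase the
    # memory loss; project g onto the half-space <g, g_mem> >= 0.
    dot = torch.dot(g, g_mem)
    if dot < 0:
        g = g - (dot / torch.dot(g_mem, g_mem)) * g_mem

    # Manual SGD step with the (possibly projected) gradient.
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p -= lr * g[offset:offset + n].view_as(p)
            offset += n
```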

Related research

10/07/2019 · Continual Learning in Neural Networks
Artificial neural networks have exceeded human-level performance in acco...

10/01/2020 · Task Agnostic Continual Learning Using Online Variational Bayes with Fixed-Point Updates
Background: Catastrophic forgetting is the notorious vulnerability of ne...

05/18/2021 · DRILL: Dynamic Representations for Imbalanced Lifelong Learning
Continual or lifelong learning has been a long-standing challenge in mac...

06/06/2023 · Learning Representations on the Unit Sphere: Application to Online Continual Learning
We use the maximum a posteriori estimation principle for learning repres...

06/29/2023 · Improving Online Continual Learning Performance and Stability with Temporal Ensembles
Neural networks are very effective when trained on large datasets for a ...

01/04/2021 · CLeaR: An Adaptive Continual Learning Framework for Regression Tasks
Catastrophic forgetting means that a trained neural network model gradua...

04/26/2022 · Theoretical Understanding of the Information Flow on Continual Learning Performance
Continual learning (CL) is a setting in which an agent has to learn from...
