
Online continual learning with no task boundaries

by   Rahaf Aljundi, et al.

Continual learning is the ability of an agent to learn online from a non-stationary and never-ending stream of data. A key component of such a never-ending learning process is overcoming catastrophic forgetting of previously seen data, a problem neural networks are well known to suffer from. The solutions developed so far often relax continual learning to the easier task-incremental setting, where the stream of data is divided into tasks with clear boundaries. In this paper, we break these limits and move to the more challenging online setting, where we assume no information about tasks in the data stream. We start from the idea that each learning step should not increase the losses of previously learned examples, which we enforce by constraining the optimization process. This means the number of constraints grows linearly with the number of examples, a serious limitation. We develop a solution that selects a fixed number of constraints to approximate the feasible region defined by the original constraints. We compare our approach against methods that rely on task boundaries to select a fixed set of examples, and show comparable or even better results, especially when the boundaries are blurry or when the data distributions are imbalanced.
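The constraint described above — no learning step may increase the loss of a previously learned example — is commonly enforced through gradient inner products: if the proposed update direction has a negative inner product with a stored example's gradient, the update would raise that example's loss. The sketch below illustrates this idea with a sequential projection onto each violated half-space; this is a simplified, illustrative approximation (GEM-style methods solve a quadratic program instead), and all names here are hypothetical, not the paper's implementation.

```python
import numpy as np

def project_gradient(g, memory_grads):
    """Adjust the current gradient g so it satisfies <g, g_i> >= 0 for
    every stored gradient g_i, i.e. so the update does not (to first
    order) increase the loss on any remembered example.

    Illustrative sketch: projects sequentially onto each violated
    half-space rather than solving the exact constrained problem."""
    g = np.asarray(g, dtype=float).copy()
    for g_i in memory_grads:
        dot = g @ g_i
        if dot < 0:  # constraint violated: g would increase this example's loss
            # remove the conflicting component of g along g_i
            g = g - (dot / (g_i @ g_i)) * g_i
    return g
```

For instance, with a current gradient [1, -1] and a single memory gradient [0, 1], the inner product is -1, so the conflicting component along [0, 1] is removed and the projected gradient becomes [1, 0]. Because the number of such constraints grows with the number of seen examples, the paper's contribution is selecting a fixed subset of them that approximates the full feasible region.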




Related research

Continual Learning in Neural Networks

Task Agnostic Continual Learning Using Online Variational Bayes with Fixed-Point Updates

DRILL: Dynamic Representations for Imbalanced Lifelong Learning

Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data

CLeaR: An Adaptive Continual Learning Framework for Regression Tasks

Theoretical Understanding of the Information Flow on Continual Learning Performance

Continual Learning for Image-Based Camera Localization

Code Repositories


A PyTorch implementation of the ECCV 2018 publication "Memory Aware Synapses: Learning what (not) to forget"
