
The Effect of Task Ordering in Continual Learning

by Samuel J. Bell et al.

We investigate the effect of task ordering on continual learning performance. We conduct an extensive series of empirical experiments on synthetic and naturalistic datasets and show that reordering tasks significantly affects the amount of catastrophic forgetting. Connecting to the field of curriculum learning, we show that the effect of task ordering can be exploited to modify continual learning performance, and present a simple approach for doing so. Our method computes the distance between all pairs of tasks, where distance is defined as the source task curvature of a gradient step toward the target task. Using statistically rigorous methods and sound experimental design, we show that task ordering is an important aspect of continual learning that can be modified for improved performance.
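The distance measure described above can be sketched in a few lines: for each ordered pair of tasks, take a gradient step direction on the target task's loss and estimate the curvature (second directional derivative) of the source task's loss along that direction. The sketch below is a minimal, hypothetical illustration, not the authors' implementation; the finite-difference estimators, step convention, and toy quadratic losses are all assumptions for clarity.

```python
import numpy as np

def directional_curvature(loss, theta, direction, eps=1e-3):
    """Finite-difference estimate of the second directional derivative
    of `loss` at `theta` along the (normalized) `direction`."""
    d = direction / np.linalg.norm(direction)
    return (loss(theta + eps * d) - 2 * loss(theta) + loss(theta - eps * d)) / eps**2

def grad(loss, theta, eps=1e-5):
    """Central-difference gradient of a scalar-valued loss (illustrative only;
    a real implementation would use autodiff)."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (loss(theta + e) - loss(theta - e)) / (2 * eps)
    return g

def task_distance(source_loss, target_loss, theta):
    """Distance from source task to target task: curvature of the source
    loss along a gradient step toward the target task. Note this is
    asymmetric, so d(A, B) need not equal d(B, A)."""
    step = -grad(target_loss, theta)  # direction of a gradient step on the target
    return directional_curvature(source_loss, theta, step)

# Toy quadratic "tasks" with different curvatures and minima (assumptions).
task_a = lambda th: float(th @ np.diag([1.0, 4.0]) @ th)   # Hessian diag(2, 8)
task_b = lambda th: float((th - 1.0) @ (th - 1.0))         # Hessian 2*I

theta = np.array([0.5, 0.5])
d_ab = task_distance(task_a, task_b, theta)  # curvature of A toward B
d_ba = task_distance(task_b, task_a, theta)  # curvature of B toward A
```

For these quadratics the estimator is exact: `d_ab` is the unit direction (1, 1)/√2 contracted with diag(2, 8), giving 5, while `d_ba` is 2 since task B's Hessian is isotropic. The asymmetry is the point: ordering tasks by such pairwise distances is what lets task ordering be exploited, rather than treated as fixed.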



