
Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent

06/21/2020
by Mehdi Abbana Bennani, et al.

In continual learning settings, deep neural networks are prone to catastrophic forgetting. Orthogonal Gradient Descent (OGD; Farajtabar et al., 2019) achieves state-of-the-art results in practice for continual learning, but no theoretical guarantees had been proven for it. We derive the first generalisation guarantees for OGD in continual learning with overparameterised neural networks. We find that OGD is provably robust to catastrophic forgetting only across a single task. We propose OGD+, prove that it is robust to catastrophic forgetting across an arbitrary number of tasks, and show that it satisfies tighter generalisation bounds. Our experiments show that OGD+ achieves state-of-the-art results in settings with a large number of tasks, even when the models are not overparameterised. We also derive a closed-form expression for the models learned across tasks, as a recursive kernel regression relation, which captures the transfer of knowledge from one task to the next. Finally, we theoretically quantify the impact of task ordering on the generalisation error, which highlights the importance of the curriculum for lifelong learning.
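The mechanism underlying OGD is a projection step: gradient directions computed on earlier tasks are stored, and each new gradient update is projected onto their orthogonal complement so that it does not interfere with what was already learned. The sketch below is a minimal NumPy illustration of that projection idea only; it is not the authors' implementation, the function names and toy data are assumptions, and it omits the part that distinguishes OGD from OGD+ (which directions are stored, and for how many past tasks).

```python
import numpy as np

def orthonormalise(vectors, eps=1e-10):
    """Gram-Schmidt: turn stored gradient directions into an orthonormal basis."""
    basis = []
    for v in vectors:
        w = v.copy()
        for b in basis:
            w -= np.dot(w, b) * b
        norm = np.linalg.norm(w)
        if norm > eps:
            basis.append(w / norm)
    return basis

def project_orthogonal(grad, basis):
    """Remove from `grad` its components along the stored directions,
    so the resulting update is orthogonal to them (the core OGD step)."""
    g = grad.copy()
    for b in basis:
        g -= np.dot(g, b) * b
    return g

# Toy usage with flattened parameter gradients (illustrative only).
rng = np.random.default_rng(0)
stored = [rng.normal(size=5) for _ in range(2)]   # directions kept from earlier tasks
basis = orthonormalise(stored)
new_grad = rng.normal(size=5)                     # gradient on the current task
update = project_orthogonal(new_grad, basis)
# The projected update is (numerically) orthogonal to every stored direction.
print([float(np.dot(update, b)) for b in basis])
```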

