Linear Mode Connectivity in Multitask and Continual Learning

10/09/2020
by Seyed-Iman Mirzadeh, et al.

Continual (sequential) training and multitask (simultaneous) training often attempt to solve the same overall objective: finding a solution that performs well on all considered tasks. The main difference lies in the training regimes, where continual learning can access only one task at a time, which for neural networks typically leads to catastrophic forgetting. That is, the solution found for a subsequent task no longer performs well on the previous ones. However, the relationship between the minima that the two training regimes arrive at is not well understood. What sets them apart? Is there a local structure that could explain the difference in performance achieved by the two schemes? Motivated by recent work showing that different minima of the same task are typically connected by very simple curves of low error, we investigate whether multitask and continual solutions are similarly connected. We empirically find that such connectivity can indeed be reliably achieved and, more interestingly, that it can be achieved by a linear path, conditioned on having the same initialization for both. We thoroughly analyze this observation and discuss its significance for the continual learning process. Furthermore, we exploit this finding to propose an effective algorithm that constrains the sequentially learned minima to behave as the multitask solution. We show that our method outperforms several state-of-the-art continual learning algorithms on various vision benchmarks.
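As a rough illustration of the linear connectivity check described above, the sketch below (not the authors' code; `model`, `loader`, and the state dicts `state_a`/`state_b` are hypothetical placeholders) evaluates a network's loss along the straight line in weight space between two solutions, e.g. a continually trained solution and a multitask solution trained from the same initialization. A low, roughly flat loss along the whole path is what linear mode connectivity looks like.

```python
# Minimal sketch (assumed setup, not the paper's implementation).
import torch
import torch.nn.functional as F

def interpolate_state(state_a, state_b, alpha):
    """Return (1 - alpha) * state_a + alpha * state_b, parameter by parameter.
    Assumes both state dicts share keys and hold floating-point tensors."""
    return {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}

@torch.no_grad()
def loss_along_path(model, state_a, state_b, loader, steps=11):
    """Average cross-entropy loss at evenly spaced points on the linear path."""
    losses = []
    for alpha in torch.linspace(0.0, 1.0, steps).tolist():
        model.load_state_dict(interpolate_state(state_a, state_b, alpha))
        model.eval()
        total, n = 0.0, 0
        for x, y in loader:
            total += F.cross_entropy(model(x), y, reduction="sum").item()
            n += y.numel()
        losses.append(total / n)
    return losses  # flat, low values along the path suggest linear connectivity
```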


Related research

02/20/2022 - Efficient Continual Learning Ensembles in Neural Network Subspaces
A growing body of research in continual learning focuses on the catastro...

05/06/2019 - Improving and Understanding Variational Continual Learning
In the continual learning setting, tasks are encountered sequentially. T...

06/12/2020 - Understanding the Role of Training Regimes in Continual Learning
Catastrophic forgetting affects the training of neural networks, limitin...

08/02/2019 - Toward Understanding Catastrophic Forgetting in Continual Learning
We study the relationship between catastrophic forgetting and properties...

06/12/2020 - CPR: Classifier-Projection Regularization for Continual Learning
We propose a general, yet simple patch that can be applied to existing r...

03/12/2021 - Training Networks in Null Space of Feature Covariance for Continual Learning
In the setting of continual learning, a network is trained on a sequence...

10/06/2020 - Sequential Changepoint Detection in Neural Networks with Checkpoints
We introduce a framework for online changepoint detection and simultaneo...
