
Linear Mode Connectivity in Multitask and Continual Learning

by Seyed-Iman Mirzadeh, et al.

Continual (sequential) training and multitask (simultaneous) training often pursue the same overall objective: finding a solution that performs well on all considered tasks. The main difference lies in the training regimes: continual learning has access to only one task at a time, which for neural networks typically leads to catastrophic forgetting; that is, the solution found for a subsequent task no longer performs well on the previous ones. However, the relationship between the different minima that the two training regimes arrive at is not well understood. What sets them apart? Is there a local structure that could explain the difference in performance achieved by the two schemes? Motivated by recent work showing that different minima of the same task are typically connected by very simple curves of low error, we investigate whether multitask and continual solutions are similarly connected. We empirically find that such connectivity can indeed be reliably achieved and, more interestingly, that it can be achieved by a linear path, provided both solutions share the same initialization. We analyze this observation thoroughly and discuss its significance for the continual learning process. Furthermore, we exploit this finding to propose an effective algorithm that constrains the sequentially learned minima to behave like the multitask solution. We show that our method outperforms several state-of-the-art continual learning algorithms on various vision benchmarks.
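The core empirical tool behind the claim above is evaluating the loss along the straight line between two sets of weights. Below is a minimal sketch of such a linear mode connectivity check in PyTorch; the names `model`, `sd_a`, `sd_b` (state dicts of two solutions trained from the same initialization), `data_loader`, and `loss_fn` are illustrative placeholders assumed for this sketch, not the paper's code.

```python
import copy
import torch

@torch.no_grad()
def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Parameter-wise interpolation (1 - alpha) * w_a + alpha * w_b.
    Non-float entries (e.g. BatchNorm batch counters) are copied from
    the first solution."""
    return {k: torch.lerp(sd_a[k], sd_b[k], alpha)
               if torch.is_floating_point(sd_a[k]) else sd_a[k]
            for k in sd_a}

@torch.no_grad()
def loss_along_linear_path(model, sd_a, sd_b, data_loader, loss_fn, steps=20):
    """Evaluate the average loss at evenly spaced points on the line
    segment between two solutions. A flat, low-loss profile indicates
    the solutions are linearly mode connected; a bump in the middle
    indicates a loss barrier."""
    model = copy.deepcopy(model)  # avoid clobbering the caller's weights
    model.eval()
    losses = []
    for i in range(steps + 1):
        alpha = i / steps
        model.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha))
        total, n = 0.0, 0
        for x, y in data_loader:
            total += loss_fn(model(x), y).item() * y.shape[0]
            n += y.shape[0]
        losses.append(total / n)
    return losses
```

Plotting the returned losses against alpha in [0, 1] makes the comparison concrete: a roughly flat curve means the two minima lie in a linearly connected low-error region, while a pronounced peak between the endpoints means they do not.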


Efficient Continual Learning Ensembles in Neural Network Subspaces

A growing body of research in continual learning focuses on the catastro...

Improving and Understanding Variational Continual Learning

In the continual learning setting, tasks are encountered sequentially. T...

Understanding the Role of Training Regimes in Continual Learning

Catastrophic forgetting affects the training of neural networks, limitin...

Toward Understanding Catastrophic Forgetting in Continual Learning

We study the relationship between catastrophic forgetting and properties...

CPR: Classifier-Projection Regularization for Continual Learning

We propose a general, yet simple patch that can be applied to existing r...

An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems

Multitask learning assumes that models capable of learning from multiple...

Sequential Changepoint Detection in Neural Networks with Checkpoints

We introduce a framework for online changepoint detection and simultaneo...

Code Repositories


Linear Mode Connectivity in Multitask and Continual Learning