Continual Learning with Distributed Optimization: Does CoCoA Forget?

11/30/2022
by Martin Hellkvist, et al.

We focus on the continual learning problem, where tasks arrive sequentially and the aim is to perform well on a newly arrived task without performance degradation on previously seen tasks. In contrast to the continual learning literature, which focuses on the centralized setting, we investigate the distributed estimation framework. We consider the well-established distributed learning algorithm CoCoA and derive closed-form expressions for its iterations in the overparametrized case. We illustrate the convergence and error performance of the algorithm as a function of the over-/under-parametrization of the problem. Our results show that, depending on the problem dimensions and the data generation assumptions, CoCoA can perform continual learning over a sequence of tasks, i.e., it can learn a new task without forgetting previously learned tasks, while having access to only one task at a time.
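
To make the setting concrete, below is a minimal NumPy sketch of CoCoA-style iterations for distributed least squares, with the features partitioned column-wise across nodes and the estimate warm-started from task to task. The function name cocoa_task, the aggregation scaling (updates summed with factor 1/num_nodes, i.e. gamma = 1 and sigma' = K in one common CoCoA parametrization), and the toy data are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def cocoa_task(A, y, w_init, num_nodes, num_iters):
    """One pass of CoCoA-style iterations on a single task (sketch).

    The feature matrix A (n x p) is partitioned column-wise across
    num_nodes nodes; each node solves its local least-squares
    subproblem exactly via the pseudoinverse.
    """
    n, p = A.shape
    blocks = np.array_split(np.arange(p), num_nodes)
    w = w_init.copy()
    for _ in range(num_iters):
        # global residual, shared with all nodes at the start of the round
        r = y - A @ w
        for idx in blocks:
            # each node updates only its own coordinate block, using the
            # same residual r (synchronous round); the 1/num_nodes scaling
            # keeps the summed updates from overshooting
            A_k = A[:, idx]
            w[idx] += np.linalg.pinv(A_k) @ r / num_nodes
    return w

# sequential tasks: warm-start each task from the previous estimate
rng = np.random.default_rng(0)
p, n, T = 100, 20, 3                      # overparametrized: p >> n per task
w_star = rng.standard_normal(p)
w = np.zeros(p)
for t in range(T):
    A_t = rng.standard_normal((n, p))
    y_t = A_t @ w_star                    # noiseless toy data, for illustration
    w = cocoa_task(A_t, y_t, w, num_nodes=4, num_iters=50)
    print(f"task {t}: residual {np.linalg.norm(y_t - A_t @ w):.2e}")
```

Since each exact local solve uses the pseudoinverse, the update to each block lies in the row space of that node's portion of the current task's data, so warm-starting moves the global estimate only in directions spanned by the new task. This is the kind of structure that, under suitable dimension and data-generation assumptions, can limit forgetting in the overparametrized regime.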

Related research

I2I: Initializing Adapters with Improvised Knowledge (04/04/2023)
Adapters present a promising solution to the catastrophic forgetting pro...

CoDeC: Communication-Efficient Decentralized Continual Learning (03/27/2023)
Training at the edge utilizes continuously evolving data generated at di...

Continual Learning for Sentence Representations Using Conceptors (04/18/2019)
Distributed representations of sentences have become ubiquitous in natur...

Continual Learning in Linear Classification on Separable Data (06/06/2023)
We analyze continual learning on a sequence of separable linear classifi...

A study on the plasticity of neural networks (05/31/2021)
One aim shared by multiple settings, such as continual learning or trans...

Continual Learning for Generative Retrieval over Dynamic Corpora (08/29/2023)
Generative retrieval (GR) directly predicts the identifiers of relevant ...

Learning Representations on the Unit Sphere: Application to Online Continual Learning (06/06/2023)
We use the maximum a posteriori estimation principle for learning repres...
