Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning

09/12/2023
by Alex Gomez-Villa, et al.

Continual unsupervised representation learning (CURL) research has greatly benefited from improvements in self-supervised learning (SSL) techniques. As a result, existing CURL methods using SSL can learn high-quality representations without any labels, but they suffer a notable performance drop when learning on a many-task data stream. We hypothesize that this is caused by the regularization losses imposed to prevent forgetting, which lead to a suboptimal plasticity-stability trade-off: the networks either do not adapt fully to the incoming data (low plasticity) or incur significant forgetting when allowed to fully adapt to a new SSL pretext task (low stability). In this work, we propose to train an expert network that is relieved of the duty of retaining previous knowledge and can focus on performing optimally on the new task (optimizing plasticity). In a second phase, we combine this new knowledge with the previous network in an adaptation-retrospection phase to avoid forgetting, and we initialize a new expert with the knowledge of the old network. We perform several experiments showing that our proposed approach outperforms other exemplar-free CURL methods in few- and many-task split settings. Furthermore, we show how to adapt our approach to semi-supervised continual learning (Semi-SCL), where it surpasses the accuracy of other exemplar-free Semi-SCL methods and matches the results of several methods that use exemplars.
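To make the two-phase structure concrete, below is a minimal PyTorch-style sketch of the training loop the abstract describes. Everything specific in it is an assumption for illustration: the function names (train_expert, adaptation_retrospection, continual_step), the placeholder SSL objective, the MSE feature distillation, and the alpha weighting are not taken from the paper, which should be consulted for the actual losses and schedules.

```python
import copy

import torch
import torch.nn.functional as F


def simsiam_like_loss(z1, z2):
    # Placeholder SSL objective (negative cosine similarity between views);
    # the pretext task actually used by the paper may differ.
    return -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()


def train_expert(expert, loader, optimizer, epochs=1):
    # Phase 1: the expert adapts freely to the current task. No
    # anti-forgetting regularizer is applied, so plasticity is maximal.
    expert.train()
    for _ in range(epochs):
        for x1, x2 in loader:  # two augmented views per image
            loss = simsiam_like_loss(expert(x1), expert(x2))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


def adaptation_retrospection(main_net, expert, old_net, loader,
                             optimizer, alpha=0.5, epochs=1):
    # Phase 2: consolidation. The main network distills the expert's
    # features (adaptation) while also matching a frozen copy of its
    # previous self (retrospection) to avoid forgetting. The MSE losses
    # and the alpha weighting are illustrative assumptions.
    expert.eval()
    old_net.eval()
    main_net.train()
    for _ in range(epochs):
        for x1, _ in loader:
            z = main_net(x1)
            with torch.no_grad():
                z_new = expert(x1)   # knowledge of the current task
                z_old = old_net(x1)  # knowledge of previous tasks
            loss = (alpha * F.mse_loss(z, z_new)
                    + (1 - alpha) * F.mse_loss(z, z_old))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


def continual_step(main_net, expert, loader, make_opt):
    # One task of the stream: train the expert, consolidate, then
    # re-seed the expert from the consolidated network (all networks
    # are assumed to share one architecture).
    train_expert(expert, loader, make_opt(expert))
    old_net = copy.deepcopy(main_net)  # frozen snapshot of past knowledge
    adaptation_retrospection(main_net, expert, old_net, loader,
                             make_opt(main_net))
    expert.load_state_dict(main_net.state_dict())
```

Here make_opt is any optimizer factory, e.g. lambda net: torch.optim.SGD(net.parameters(), lr=0.05). The design point the sketch tries to capture is that the expert never sees an anti-forgetting term, so its plasticity is not throttled; stability is handled entirely in the second phase, where the consolidated network pulls toward both the expert and its own frozen past.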


Related research

- Probing Representation Forgetting in Supervised and Unsupervised Continual Learning (03/24/2022). Continual Learning research typically focuses on tackling the phenomenon...
- Hypernetworks for Continual Semi-Supervised Learning (10/05/2021). Learning from data sequentially arriving, possibly in a non-i.i.d. way, ...
- Continually Learning Self-Supervised Representations with Projected Functional Regularization (12/30/2021). Recent self-supervised learning methods are able to learn high-quality i...
- Contrastive Learning for Online Semi-Supervised General Continual Learning (07/12/2022). We study Online Continual Learning with missing labels and propose SemiC...
- Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning (03/16/2023). In contrast to the natural capabilities of humans to learn new tasks in ...
- Continual Unsupervised Representation Learning (10/31/2019). Continual learning aims to improve the ability of modern learning system...
- Continual Learning From a Stream of APIs (08/31/2023). Continual learning (CL) aims to learn new tasks without forgetting previ...
