Is Continual Learning Truly Learning Representations Continually?

06/16/2022
by   Sungmin Cha, et al.

Continual learning (CL) aims to learn from sequentially arriving tasks without forgetting previous tasks. While CL algorithms have typically been evaluated by their average test accuracy across all tasks learned so far, learning continuously useful representations is critical for successful generalization and downstream transfer. To measure representational quality, we re-train only the output layer using a small balanced dataset covering all the tasks, and evaluate the average accuracy without any prediction bias toward the current task. We also test on several downstream tasks, measuring the transfer-learning accuracy of the learned representations. By testing our new formalism on ImageNet-100 and ImageNet-1000, we find that using more exemplar memory is the only option that makes a meaningful difference in the learned representations, and that most of the regularization- or distillation-based CL algorithms that use the exemplar memory fail to learn continuously useful representations in class-incremental learning. Surprisingly, unsupervised (or self-supervised) CL with sufficient memory can achieve performance comparable to its supervised counterparts. Considering non-trivial labeling costs, we claim that finding more efficient unsupervised CL algorithms that minimally use exemplar memory would be the next promising direction for CL research.
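The evaluation protocol described above amounts to linear probing: freeze the continually trained backbone, re-train only a fresh output layer on a small class-balanced dataset covering all tasks seen so far, and report average accuracy. Below is a minimal sketch of that idea, not the authors' code; the names `backbone`, `feature_dim`, `probe_loader`, and `test_loader` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def linear_probe_accuracy(backbone, feature_dim, num_classes,
                          probe_loader, test_loader, device="cpu",
                          epochs=10, lr=1e-2):
    """Re-train only the output layer on a small balanced dataset,
    then report average accuracy over all tasks (a hedged sketch)."""
    backbone.eval().to(device)
    for p in backbone.parameters():          # freeze the representation
        p.requires_grad_(False)

    head = nn.Linear(feature_dim, num_classes).to(device)  # fresh output layer
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    # Re-train only the head on the small class-balanced probe set.
    for _ in range(epochs):
        for x, y in probe_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = backbone(x)
            opt.zero_grad()
            loss_fn(head(feats), y).backward()
            opt.step()

    # Average accuracy over the test data of all tasks seen so far,
    # free of prediction bias toward the most recent task.
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            pred = head(backbone(x)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total
```

Because the backbone is frozen, any accuracy difference between two CL algorithms under this probe reflects the quality of their learned representations rather than of their classifiers.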
