Centroids Matching: an efficient Continual Learning approach operating in the embedding space

by   Jary Pomponi, et al.

Catastrophic forgetting (CF) occurs when a neural network loses information it previously learned while training on samples drawn from a different distribution, i.e., a new task. Existing approaches have achieved remarkable results in mitigating CF, especially in the task incremental learning scenario. However, this scenario is not realistic, and limited work has been done to achieve good results in more realistic ones. In this paper, we propose a novel regularization method called Centroids Matching that, inspired by meta-learning approaches, fights CF by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint. Specifically, the approach classifies samples directly from the feature vectors produced by the network, matching those vectors against centroids that represent the classes of the current task, or of all tasks seen so far. Centroids Matching is faster than competing baselines, and it efficiently mitigates CF by preserving the distances between the embedding space the model produced when past tasks ended and the one it currently produces. This leads to high accuracy on all tasks, without an external memory in easy scenarios, or with a small one in more realistic ones. Extensive experiments demonstrate that Centroids Matching achieves accuracy gains on multiple datasets and scenarios.
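The centroid-based classification the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are invented, the embeddings stand in for a network's feature vectors, and Euclidean distance is assumed as the matching metric.

```python
import numpy as np

def class_centroids(embeddings, labels):
    """Compute one centroid (mean embedding) per class.

    embeddings: (n_samples, dim) array of feature vectors.
    labels: (n_samples,) array of class ids.
    Returns the sorted class ids and a (n_classes, dim) centroid matrix.
    """
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(embeddings, classes, centroids):
    """Assign each embedding to the class of its nearest centroid (Euclidean)."""
    # Pairwise distances: (n_samples, n_classes)
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]
```

In a continual setting, the centroids of past tasks would be kept (or recomputed from a small memory), and a regularization term would penalize changes in the distances between stored and current embeddings; the snippet above only shows the classification step.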


Prototype Reminding for Continual Learning

Continual learning is a critical ability of continually acquiring and tr...

Efficient Continual Learning in Neural Networks with Embedding Regularization

Continual learning of deep neural networks is a key requirement for scal...

Susceptibility of Continual Learning Against Adversarial Attacks

The recent advances in continual (incremental or lifelong) learning have...

Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning

In contrast to the natural capabilities of humans to learn new tasks in ...

Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation

Existing continual relation learning (CRL) methods rely on plenty of lab...

Few-Shot Unsupervised Continual Learning through Meta-Examples

In real-world applications, data do not reflect the ones commonly used f...

Memory-Efficient Incremental Learning Through Feature Adaptation

In this work we introduce an approach for incremental learning, which pr...
