Few-Shot Unsupervised Continual Learning through Meta-Examples

09/17/2020
by Alessia Bertugli, et al.

In real-world applications, data rarely resemble the datasets commonly used to train neural networks: they are often scarce, unbalanced, unlabeled, and may arrive as a stream. Hence, many existing deep learning solutions have a limited range of applications, particularly when dealing with online streaming data that evolve over time. To narrow this gap, in this work we introduce a novel and challenging setting involving unsupervised meta-continual learning with unbalanced tasks. These tasks are built through a clustering procedure applied to a fitted embedding space. We exploit a meta-learning scheme that simultaneously alleviates catastrophic forgetting and favors generalization to new tasks, even out-of-distribution ones. Moreover, to encourage feature reuse during meta-optimization, we adopt a single inner loop that exploits an aggregated representation obtained through a self-attention mechanism. Experimental results on few-shot learning benchmarks show competitive performance even compared to the supervised case. Additionally, we empirically observe that, in an unsupervised scenario, small tasks and variability in the cluster pooling play a crucial role in the network's generalization capability. Further, on complex datasets, using more clusters than the true number of classes leads to better results, even compared to those obtained with full supervision, suggesting that a predefined partitioning into classes can miss relevant structural information.
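As an illustration of the aggregation idea mentioned in the abstract, the following is a minimal sketch, not the authors' code: the embeddings of the samples assigned to one cluster (task) are combined into a single "meta-example" through a self-attention mechanism, so that a single inner-loop update per task becomes possible. All class names, shapes, and the pseudo-labeling scheme below are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of self-attentive aggregation into a meta-example.
import torch
import torch.nn as nn


class MetaExampleAggregator(nn.Module):
    """Aggregates a variable-sized set of embeddings into one meta-example."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (n, dim) embeddings of the samples assigned to one cluster (task)
        scale = z.shape[1] ** 0.5
        attn = torch.softmax(self.query(z) @ self.key(z).T / scale, dim=-1)
        attended = attn @ self.value(z)   # (n, dim) self-attended embeddings
        return attended.mean(dim=0)       # (dim,) single aggregated meta-example


# Illustrative single inner-loop update on the aggregated representation.
dim, n_samples = 64, 10
aggregator = MetaExampleAggregator(dim)
classifier = nn.Linear(dim, 1)                    # task-specific head (assumed)
optimizer = torch.optim.SGD(
    list(aggregator.parameters()) + list(classifier.parameters()), lr=0.01
)

cluster_embeddings = torch.randn(n_samples, dim)  # stand-in for a fitted encoder's output
pseudo_label = torch.ones(1)                      # pseudo-label given by the cluster assignment

meta_example = aggregator(cluster_embeddings)
loss = nn.functional.binary_cross_entropy_with_logits(
    classifier(meta_example), pseudo_label
)
loss.backward()
optimizer.step()                                  # one inner-loop step for this task
```

In this sketch the whole cluster contributes to a single gradient step, which is the mechanism the abstract credits with encouraging feature reuse while keeping the meta-optimization to one inner loop.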


