Learning Invariant Representation for Continual Learning

01/15/2021
by Ghada Sokar, et al.

Continual learning aims to provide intelligent agents capable of learning a sequence of tasks, building on previously acquired knowledge. A key challenge in this paradigm is catastrophic forgetting of previously learned tasks when the agent faces a new one. Current rehearsal-based methods successfully mitigate catastrophic forgetting by replaying samples from previous tasks while learning a new one. However, these methods are infeasible when data from previous tasks are inaccessible. In this work, we propose a new pseudo-rehearsal-based method, named Learning Invariant Representation for Continual Learning (IRCL), in which a class-invariant representation is disentangled from a conditional generative model and used jointly with a class-specific representation to learn the sequence of tasks. Disentangling the shared invariant representation helps the model learn tasks sequentially while being more robust to forgetting and transferring knowledge more effectively. We focus on class-incremental learning, where task identity is unknown during inference. We empirically evaluate our proposed method on two well-known continual learning benchmarks: split MNIST and split Fashion MNIST. The experimental results show that our method outperforms regularization-based methods by a large margin and surpasses the state-of-the-art pseudo-rehearsal-based method. Finally, we analyze the role of the shared invariant representation in mitigating forgetting, especially when the number of replayed samples per previous task is small.
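To make the abstract's pipeline concrete, below is a minimal PyTorch-style sketch of the idea: a conditional VAE whose latent code plays the role of the shared class-invariant representation, a single-head classifier that concatenates it with a class-specific feature (single head, since task identity is unknown at inference), and pseudo-rehearsal that replays generated samples for previously seen classes. All names (ConditionalVAE, IRCLClassifier, replay_batch) and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of the IRCL idea; dimensions suit flattened MNIST.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Conditional generative model; its latent code z serves as a
    class-invariant (shared) representation across tasks."""
    def __init__(self, x_dim=784, y_dim=10, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def encode(self, x, y):              # y: one-hot float labels
        h = self.enc(torch.cat([x, y], dim=1))
        return self.mu(h), self.logvar(h)

    def decode(self, z, y):
        return self.dec(torch.cat([z, y], dim=1))

    def forward(self, x, y):
        mu, logvar = self.encode(x, y)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decode(z, y), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Standard conditional-VAE objective: reconstruction + KL term."""
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

class IRCLClassifier(nn.Module):
    """Joins the shared invariant code z with a learned class-specific
    feature of x, then classifies with one shared head."""
    def __init__(self, x_dim=784, z_dim=32, s_dim=64, n_classes=10):
        super().__init__()
        self.specific = nn.Sequential(nn.Linear(x_dim, s_dim), nn.ReLU())
        self.head = nn.Linear(z_dim + s_dim, n_classes)

    def forward(self, x, z):
        return self.head(torch.cat([self.specific(x), z], dim=1))

def replay_batch(vae, classes_seen, n_per_class, z_dim=32, y_dim=10):
    """Pseudo-rehearsal: sample images for previously seen classes from
    the generative model instead of from stored data."""
    ys = torch.repeat_interleave(torch.tensor(classes_seen), n_per_class)
    y = F.one_hot(ys, y_dim).float()
    z = torch.randn(len(ys), z_dim)
    with torch.no_grad():
        x = vae.decode(z, y)
    return x, ys
```

Under these assumptions, when a new task arrives one would mix real data for the new classes with `replay_batch` samples for `classes_seen`, so neither raw data nor stored exemplars from earlier tasks are required.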

Related research

02/02/2023 · Online Continual Learning via the Knowledge Invariant and Spread-out Properties
The goal of continual learning is to provide intelligent agents that are...

07/15/2020 · SpaceNet: Make Free Space For Continual Learning
The continual learning (CL) paradigm aims to enable neural networks to l...

01/28/2021 · Self-Attention Meta-Learner for Continual Learning
Continual learning aims to provide intelligent agents capable of learnin...

09/15/2023 · Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization
The pursuit of long-term autonomy mandates that robotic agents must cont...

09/13/2023 · Continual Learning with Dirichlet Generative-based Rehearsal
Recent advancements in data-driven task-oriented dialogue systems (ToDs)...

12/03/2019 · Overcoming Catastrophic Forgetting by Generative Regularization
In this paper, we propose a new method to overcome catastrophic forgetti...

04/30/2023 · DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning
Rehearsal-based approaches are a mainstay of continual learning (CL). Th...
