A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning

01/03/2020
by   Soochan Lee, et al.
Despite the growing interest in continual learning, most contemporary work studies a rather restricted setting in which tasks are clearly distinguishable and task boundaries are known during training. However, if our goal is to develop an algorithm that learns as humans do, this setting is far from realistic, and it is essential to develop methods that work in a task-free manner. Meanwhile, among the several branches of continual learning, expansion-based methods have the advantage of eliminating catastrophic forgetting by allocating new resources to learn new data. In this work, we propose an expansion-based approach for task-free continual learning. Our model, named Continual Neural Dirichlet Process Mixture (CN-DPM), consists of a set of neural network experts, each in charge of a subset of the data. CN-DPM expands the number of experts in a principled way under the Bayesian nonparametric framework. With extensive experiments, we show that our model successfully performs task-free continual learning on both discriminative and generative problems, such as image classification and image generation.
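To make the expansion rule concrete, below is a minimal sketch of the sequential Dirichlet-process assignment idea: each incoming point is routed to the most responsible existing expert, and a new expert is spawned when the DP prior prefers one. This is not the authors' code; the names (ToyCNDPM, GaussianExpert), the unit-variance Gaussian stand-ins for the neural experts, and the N(0, tau2*I) base measure are illustrative assumptions, whereas the actual CN-DPM equips each expert with neural networks for discriminative and generative modeling.

import numpy as np

class GaussianExpert:
    """Toy stand-in for a neural expert: a unit-variance Gaussian whose
    mean is updated online from the points assigned to it."""
    def __init__(self, x):
        self.n = 1
        self.mean = x.astype(float).copy()

    def log_lik(self, x):
        d = x - self.mean
        return -0.5 * d @ d - 0.5 * len(x) * np.log(2 * np.pi)

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

class ToyCNDPM:
    """Sequential DPM assignment: each point goes to the most responsible
    expert, or a new expert is spawned when the DP prior prefers it."""
    def __init__(self, alpha=1.0, tau2=100.0):
        self.alpha = alpha  # DP concentration: larger => more experts
        self.tau2 = tau2    # variance of the base measure over expert means
        self.experts = []

    def new_expert_log_score(self, x):
        # Marginal likelihood of x under a fresh expert, with the expert
        # mean integrated out against the N(0, tau2*I) base measure.
        v = 1.0 + self.tau2
        return (np.log(self.alpha)
                - 0.5 * (x @ x) / v
                - 0.5 * len(x) * np.log(2 * np.pi * v))

    def observe(self, x):
        # Log responsibility of expert k: log n_k + log p_k(x).
        scores = [np.log(e.n) + e.log_lik(x) for e in self.experts]
        if not scores or self.new_expert_log_score(x) > max(scores):
            self.experts.append(GaussianExpert(x))
            return len(self.experts) - 1
        k = int(np.argmax(scores))
        self.experts[k].update(x)
        return k

rng = np.random.default_rng(0)
model = ToyCNDPM(alpha=1.0)
# Stream two "tasks" (clusters) with no boundary signal given to the model.
stream = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                    rng.normal(6.0, 1.0, (50, 2))])
for x in stream:
    model.observe(x)
print("experts:", len(model.experts))
print("means:", [e.mean.round(1) for e in model.experts])

Run on this stream, the model allocates a new expert when the second cluster appears, even though no task boundary is ever announced; that is the task-free expansion behavior the abstract describes, reduced to its simplest probabilistic form.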

Related research

04/07/2020: Class-Agnostic Continual Learning of Alternating Languages and Domains. "Continual Learning has been often framed as the problem of training a mo..."

04/21/2020: Bayesian Nonparametric Weight Factorization for Continual Learning. "Naively trained neural networks tend to experience catastrophic forgetti..."

07/09/2023: Class-Incremental Mixture of Gaussians for Deep Continual Learning. "Continual learning models for stationary data focus on learning and reta..."

07/11/2022: Learning an evolved mixture model for task-free continual learning. "Recently, continual learning (CL) has gained significant interest becaus..."

05/27/2021: Encoders and Ensembles for Task-Free Continual Learning. "We present an architecture that is effective for continual learning in a..."

07/10/2023: Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning. "Federated continual learning (FCL) learns incremental tasks over time fr..."

02/14/2022: Design of Explainability Module with Experts in the Loop for Visualization and Dynamic Adjustment of Continual Learning. "Continual learning can enable neural networks to evolve by learning new ..."
