Continual Learning Using Bayesian Neural Networks

10/09/2019
by Honglin Li, et al.

Continual learning models allow a system to learn and adapt to new changes and tasks over time. However, in continual and sequential learning scenarios, in which models are trained on data drawn from different distributions, neural networks tend to forget previously learned knowledge. This phenomenon is known as catastrophic forgetting, and it is an inevitable problem for continual learning models in dynamic environments. To address this issue, we propose a method, called Continual Bayesian Learning Networks (CBLN), which enables a network to allocate additional resources to adapt to new tasks without forgetting previously learned ones. Using a Bayesian Neural Network, CBLN maintains a mixture of Gaussian posterior distributions associated with different tasks. The method optimises the number of resources needed to learn each task and avoids an exponential growth in the resources required to learn multiple tasks. It does not need access to past training data and automatically selects suitable weights to classify data points at test time based on an uncertainty criterion. We have evaluated our method on the MNIST and UCR time-series datasets. The evaluation results show that our method addresses catastrophic forgetting at a promising rate compared to state-of-the-art models.
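The abstract describes two concrete mechanisms: a Bayesian Neural Network that maintains a Gaussian posterior per task, and test-time selection of weights by an uncertainty criterion. The sketch below is a minimal PyTorch illustration of those two ideas only, not the authors' implementation; it omits CBLN's resource allocation and optimisation, and all names here (BayesianLinear, CBLNSketch, finish_task, the entropy-based selection rule) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a mean-field Bayesian
# layer whose Gaussian posterior is snapshotted after each task, plus
# test-time selection of the snapshot with the least predictive uncertainty.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer with a factorised Gaussian posterior over weights."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x):
        # Reparameterisation trick: sample weights from N(mu, sigma^2),
        # with sigma = softplus(rho) to keep it positive.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)


class CBLNSketch(nn.Module):
    """Keeps one Gaussian posterior snapshot per task (hypothetical API)."""

    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.layer = BayesianLinear(in_dim, n_classes)
        self.task_posteriors = []  # one posterior snapshot per learned task

    def finish_task(self):
        # After training on a task, store a copy of its learned posterior.
        self.task_posteriors.append(
            {k: v.detach().clone() for k, v in self.layer.state_dict().items()}
        )

    @torch.no_grad()
    def predict(self, x, n_samples=20):
        # Uncertainty criterion (one plausible choice): pick the task
        # posterior whose Monte Carlo predictions have the lowest mean
        # predictive entropy on this batch. Returns None if no task stored.
        best_probs, best_entropy = None, float("inf")
        for posterior in self.task_posteriors:
            self.layer.load_state_dict(posterior)
            probs = torch.stack(
                [F.softmax(self.layer(x), dim=-1) for _ in range(n_samples)]
            ).mean(0)
            entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean().item()
            if entropy < best_entropy:
                best_entropy, best_probs = entropy, probs
        return best_probs
```

In use, one would train the Bayesian layer on each task in sequence (e.g. with a variational objective), call finish_task() after each, and rely on predict() to route test data to the right posterior; no past training data is needed at test time, matching the abstract's claim. The entropy rule here stands in for whatever uncertainty criterion the paper actually uses.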

Related Research

04/24/2019: Facilitating Bayesian Continual Learning by Natural Gradients and Stein Gradients
Continual learning aims to enable machine learning models to learn a gen...

05/08/2020: Continual Learning Using Task Conditional Neural Networks
Conventional deep learning models have limited capacity in learning mult...

06/06/2019: Uncertainty-guided Continual Learning with Bayesian Neural Networks
Continual learning aims to learn new tasks without forgetting previously...

09/07/2018: HC-Net: Memory-based Incremental Dual-Network System for Continual learning
Training a neural network for a classification task typically assumes th...

01/23/2020: Structured Compression and Sharing of Representational Space for Continual Learning
Humans are skilled at learning adaptively and efficiently throughout the...

03/06/2019: Using World Models for Pseudo-Rehearsal in Continual Learning
The utility of learning a dynamics/world model of the environment in rei...

03/16/2022: Continuous Detection, Rapidly React: Unseen Rumors Detection based on Continual Prompt-Tuning
Since open social platforms allow for a large and continuous flow of unv...
