Robust Continual Learning through a Comprehensively Progressive Bayesian Neural Network

02/27/2022
by Guo Yang, et al.

This work proposes a comprehensively progressive Bayesian neural network (BNN) for robust continual learning over a sequence of tasks. The BNN is progressively pruned and grown so that there are sufficient network resources to represent every task in the sequence, without letting the network grow unboundedly. The approach starts from the contention that similar tasks should be allotted the same total amount of network resources, to ensure fair representation of all tasks in a continual learning scenario. Thus, as the data for a new task streams in, enough neurons are added so that the total number of neurons available to each task in every layer, counting both the representations shared with previous tasks and the task-specific representations, is equal across tasks. Weights that are redundant at the end of training each task are pruned through re-initialization, so that they can be efficiently reused by the subsequent task. The network therefore grows progressively while ensuring effective utilization of its resources. We refer to the proposed method as 'Robust Continual Learning through a Comprehensively Progressive Bayesian Neural Network (RCL-CPB)' and evaluate it on the MNIST data set under three different continual learning scenarios. Further to this, we evaluate the performance of RCL-CPB on a homogeneous sequence of tasks using split CIFAR100 (20 tasks of 5 classes each), and on a heterogeneous sequence of tasks using the MNIST, SVHN and CIFAR10 data sets. The demonstrations and performance results show that the proposed strategies for a progressive BNN enable robust continual learning.
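
As a rough illustration of the growth and pruning rules described above, the sketch below shows how one layer might be grown so that each task's total neuron count matches a common budget, and how redundant Bayesian weights might be re-initialized after a task. The function names, the per-layer budget, the signal-to-noise-ratio pruning criterion and all numerical values are assumptions made for illustration, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def neurons_to_add(n_shared: int, budget_per_task: int) -> int:
    """Grow the layer: add enough task-specific neurons so that the incoming
    task's total (shared + its own) matches the common per-task budget.
    `budget_per_task` is an assumed hyper-parameter."""
    return max(budget_per_task - n_shared, 0)

def prune_by_reinit(mu, sigma, snr_threshold=0.1, prior_sigma=0.1):
    """Prune after training a task: re-initialize the posterior of weights
    whose signal-to-noise ratio |mu| / sigma falls below `snr_threshold`,
    freeing them to be learned by the next task. The threshold and prior
    scale are illustrative values."""
    redundant = np.abs(mu) / sigma < snr_threshold
    mu = np.where(redundant, rng.normal(0.0, prior_sigma, mu.shape), mu)
    sigma = np.where(redundant, prior_sigma, sigma)
    return mu, sigma

# Toy usage: a layer shares 40 neurons with earlier tasks and each task is
# entitled to 100 neurons, so 60 new neurons are added for the new task.
print(neurons_to_add(n_shared=40, budget_per_task=100))  # -> 60

# Posterior means / std-devs of a small weight matrix after training a task.
mu = rng.normal(0.0, 0.5, size=(4, 3))
sigma = np.full((4, 3), 0.05)
mu, sigma = prune_by_reinit(mu, sigma)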

