Variational Density Propagation Continual Learning

08/22/2023
by Christopher Angelini et al.

Deep Neural Networks (DNNs) deployed in the real world are regularly subject to out-of-distribution (OoD) data, various types of noise, and shifting conceptual objectives. This paper proposes a framework for adapting to the data distribution drift modeled by benchmark Continual Learning datasets. We develop and evaluate a method of Continual Learning that leverages uncertainty quantification from Bayesian Inference to mitigate catastrophic forgetting. We expand on previous approaches by removing the need for Monte Carlo sampling of the model weights to sample the predictive distribution. We optimize a closed-form Evidence Lower Bound (ELBO) objective that approximates the predictive distribution by propagating the first two moments of a distribution, i.e., mean and covariance, through all network layers. Catastrophic forgetting is mitigated by using the closed-form ELBO to approximate the Minimum Description Length (MDL) principle: changes in the model likelihood are inherently penalized by minimizing the KL divergence between the variational posterior for the current task and the previous task's variational posterior, which acts as the prior. Leveraging this approximation of the MDL principle, we aim to first learn a sparse variational posterior and then minimize the additional model complexity learned for subsequent tasks. Our approach is evaluated in the task-incremental learning scenario using density-propagated versions of fully-connected and convolutional neural networks across multiple sequential benchmark datasets with varying task sequence lengths. Ultimately, this procedure produces a minimally complex network over a series of tasks while mitigating catastrophic forgetting.
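As a rough illustration of the moment-propagation idea described in the abstract, the sketch below propagates the mean and a diagonal approximation of the covariance through a single fully-connected layer with a factorized Gaussian weight posterior, and computes the KL term that ties the current task's posterior to the previous task's posterior. This is a minimal sketch under simplifying assumptions (diagonal covariance, independence of inputs and weights); the function and variable names are illustrative and are not the authors' implementation.

```python
import numpy as np

def linear_moment_propagation(x_mu, x_var, w_mu, w_var):
    """Propagate mean and diagonal variance of the input through y = W x,
    where W has an elementwise Gaussian posterior N(w_mu, w_var) and the
    input x ~ N(x_mu, diag(x_var)) is assumed independent of W.

    Per output unit:
        E[y]   = w_mu @ x_mu
        Var[y] = w_var @ (x_var + x_mu**2) + w_mu**2 @ x_var
    """
    y_mu = w_mu @ x_mu
    y_var = w_var @ (x_var + x_mu ** 2) + (w_mu ** 2) @ x_var
    return y_mu, y_var

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between factorized Gaussians. Taking p to be the previous
    task's variational posterior gives the complexity term of the ELBO that
    penalizes drift away from previously learned weights."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_mu, x_var = rng.normal(size=4), np.full(4, 0.1)
    w_mu, w_var = rng.normal(size=(3, 4)), np.full((3, 4), 0.05)
    print(linear_moment_propagation(x_mu, x_var, w_mu, w_var))
```

In a full network, this layerwise propagation would be chained through every layer (with moment-matching approximations at nonlinearities), yielding a predictive mean and variance without Monte Carlo sampling of the weights.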

Related research

01/04/2023 · On Sequential Bayesian Inference for Continual Learning
Sequential Bayesian inference can be used for continual learning to prev...

06/26/2020 · Continual Learning from the Perspective of Compression
Connectionist models such as neural networks suffer from catastrophic fo...

01/31/2019 · Functional Regularisation for Continual Learning using Gaussian Processes
We introduce a novel approach for supervised continual learning based on...

06/02/2020 · Continual Learning of Predictive Models in Video Sequences via Variational Autoencoders
This paper proposes a method for performing continual learning of predic...

10/01/2020 · Task Agnostic Continual Learning Using Online Variational Bayes with Fixed-Point Updates
Background: Catastrophic forgetting is the notorious vulnerability of ne...

06/03/2021 · Continual Learning in Deep Networks: an Analysis of the Last Layer
We study how different output layer types of a deep neural network learn...

04/19/2021 · Overcoming Catastrophic Forgetting with Gaussian Mixture Replay
We present Gaussian Mixture Replay (GMR), a rehearsal-based approach for...