Variational Auto-Regressive Gaussian Processes for Continual Learning

06/09/2020
by Sanyam Kapoor, et al.

This paper proposes the Variational Auto-Regressive Gaussian Process (VAR-GP), a principled Bayesian updating mechanism for incorporating new data from sequential tasks in the context of continual learning. It relies on a novel auto-regressive characterization of the variational distribution, and inference is made scalable using sparse inducing-point approximations. Experiments on standard continual learning benchmarks demonstrate the ability of VAR-GPs to perform well on new tasks without compromising performance on old ones, yielding results competitive with state-of-the-art methods. In addition, we show qualitatively how VAR-GP improves its predictive entropy estimates as it is trained on new tasks. Further, we conduct a thorough ablation study to verify the effectiveness of our inferential choices.
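To make the sparse inducing-point approximation mentioned in the abstract concrete, below is a minimal NumPy sketch of the standard sparse variational GP (SVGP) predictive that such methods build on. This is not the VAR-GP model itself, only the generic single-task building block; the names rbf_kernel and svgp_predict are illustrative, and the inducing locations Z and variational parameters m and S are placeholders assumed to come from training.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    sq_dists = (
        np.sum(A ** 2, axis=1)[:, None]
        - 2.0 * A @ B.T
        + np.sum(B ** 2, axis=1)[None, :]
    )
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def svgp_predict(X_star, Z, m, S, jitter=1e-6):
    # Predictive q(f*) of a sparse variational GP with q(u) = N(m, S)
    # over inducing outputs at locations Z:
    #   mean = A m,  cov = K** - A (Kzz - S) A^T,  where A = K*z Kzz^{-1}.
    Kzz = rbf_kernel(Z, Z) + jitter * np.eye(Z.shape[0])
    Ksz = rbf_kernel(X_star, Z)
    Kss = rbf_kernel(X_star, X_star)
    # Compute A = K*z Kzz^{-1} via Cholesky solves for numerical stability.
    L = np.linalg.cholesky(Kzz)
    A = np.linalg.solve(L.T, np.linalg.solve(L, Ksz.T)).T
    mean = A @ m
    cov = Kss - A @ (Kzz - S) @ A.T
    return mean, cov

# Toy usage with untrained placeholder parameters.
rng = np.random.default_rng(0)
Z = rng.normal(size=(10, 2))      # inducing locations
m = rng.normal(size=10)           # variational mean
S = 0.1 * np.eye(10)              # variational covariance
X_star = rng.normal(size=(5, 2))  # test inputs
mu, cov = svgp_predict(X_star, Z, m, S)

In the paper's auto-regressive construction, the variational distribution for each new task is additionally conditioned on the inducing variables of earlier tasks; the sketch above covers only the single-task predictive that such a scheme extends.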

Related Research

01/31/2019
Functional Regularisation for Continual Learning using Gaussian Processes
We introduce a novel approach for supervised continual learning based on...

06/06/2023
Memory-Based Dual Gaussian Processes for Sequential Learning
Sequential learning with Gaussian processes (GPs) is challenging when ac...

10/31/2019
Continual Multi-task Gaussian Processes
We address the problem of continual learning in multi-task Gaussian proc...

12/04/2019
Indian Buffet Neural Networks for Continual Learning
We place an Indian Buffet Process (IBP) prior over the neural structure ...

04/12/2022
Out-Of-Distribution Detection In Unsupervised Continual Learning
Unsupervised continual learning aims to learn new tasks incrementally wi...

11/27/2018
Partitioned Variational Inference: A unified framework encompassing federated and continual learning
Variational inference (VI) has become the method of choice for fitting m...
