Scalable Gaussian Process Variational Autoencoders

10/26/2020
by Metod Jazbec, et al.

Conventional variational autoencoders fail to model correlations between data points because of their factorized priors. Amortized Gaussian process inference through GP-VAEs has led to significant improvements in this regard, but it is still inhibited by the cubic complexity of exact GP inference. We improve the scalability of these methods through principled sparse inference approaches. We propose a new scalable GP-VAE model that outperforms existing approaches in runtime and memory footprint, is easy to implement, and allows for joint end-to-end optimization of all components.
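To see why sparse inference helps with scalability, here is a minimal NumPy sketch of an inducing-point (subset-of-regressors) GP posterior mean, where only m x m linear systems are solved so the cost is O(nm^2) instead of the O(n^3) of exact GP inference. The toy data, kernel, and inducing locations are illustrative assumptions, not the paper's actual model or code.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-vector sets a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
n, m = 500, 20                          # n data points, m << n inducing points
X = np.linspace(0.0, 10.0, n)[:, None]  # toy 1-D inputs (illustrative)
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
Z = np.linspace(0.0, 10.0, m)[:, None]  # inducing locations (illustrative)
noise = 0.1 ** 2                        # observation noise variance

Kzz = rbf(Z, Z) + 1e-6 * np.eye(m)      # m x m, with jitter for stability
Kzx = rbf(Z, X)                         # m x n

# Subset-of-regressors posterior mean: the only matrix factorized is the
# m x m Sigma, so the dominant cost is forming Kzx @ Kzx.T, i.e. O(n m^2).
Sigma = Kzz + Kzx @ Kzx.T / noise
mean = Kzx.T @ np.linalg.solve(Sigma, Kzx @ y) / noise

rmse = float(np.sqrt(np.mean((mean - np.sin(X[:, 0])) ** 2)))
```

In a GP-VAE, an approximation of this kind replaces the exact GP prior over the latent codes, which is what makes joint end-to-end training tractable on large datasets.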


Related research

10/20/2020
Sparse Gaussian Process Variational Autoencoders
Large, multi-dimensional spatio-temporal datasets are omnipresent in mod...

05/27/2019
Scalable Training of Inference Networks for Gaussian-Process Models
Inference in Gaussian process (GP) models is computationally challenging...

11/14/2020
Factorized Gaussian Process Variational Autoencoders
Variational autoencoders often assume isotropic Gaussian priors and mean...

10/28/2018
Gaussian Process Prior Variational Autoencoders
Variational autoencoders (VAE) are a powerful and widely-used class of m...

08/09/2018
Exploiting Structure for Fast Kernel Learning
We propose two methods for exact Gaussian process (GP) inference and lea...

12/26/2015
Inverse Reinforcement Learning via Deep Gaussian Process
We propose a new approach to inverse reinforcement learning (IRL) based ...

10/29/2020
Gaussian Process Bandit Optimization of the Thermodynamic Variational Objective
Achieving the full promise of the Thermodynamic Variational Objective (T...
