Convergence of Sparse Variational Inference in Gaussian Processes Regression

08/01/2020
by David R. Burt, et al.

Gaussian processes are distributions over functions that are versatile and mathematically convenient priors in Bayesian modelling. However, their use is often impeded for data with large numbers of observations, N, due to the cubic (in N) cost of matrix operations used in exact inference. Many solutions have been proposed that rely on M ≪ N inducing variables to form an approximation at a cost of 𝒪(NM^2). While the computational cost appears linear in N, the true complexity depends on how M must scale with N to ensure a certain quality of the approximation. In this work, we investigate upper and lower bounds on how M needs to grow with N to ensure high-quality approximations. We show that we can make the KL divergence between the approximate model and the exact posterior arbitrarily small for a Gaussian-noise regression model with M ≪ N. Specifically, for the popular squared exponential kernel and D-dimensional Gaussian-distributed covariates, M = 𝒪((log N)^D) inducing variables suffice, and a method with an overall computational cost of 𝒪(N(log N)^{2D}(log log N)^2) can be used to perform inference.
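The 𝒪(NM^2) inducing-variable approximation the abstract refers to is the collapsed variational bound for sparse GP regression (Titsias, 2009), whose quality the paper's KL bounds quantify. Below is a minimal, hypothetical NumPy sketch of that bound with a squared exponential kernel; the names (se_kernel, sgpr_elbo) and the fixed hyperparameters are illustrative assumptions, not the authors' code, and the random subset used for inducing-point placement is a crude stand-in for the schemes analyzed in the paper.

```python
# Minimal sketch: collapsed ELBO for sparse variational GP regression
# (Titsias, 2009). Cost is O(N M^2) instead of the O(N^3) of exact GP
# inference. Hypothetical example code, not from the paper.
import numpy as np

def se_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared exponential kernel: k(a, b) = s^2 exp(-||a - b||^2 / (2 l^2)).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def sgpr_elbo(X, y, Z, noise=0.01, lengthscale=1.0, variance=1.0):
    # Evidence lower bound log N(y | 0, Qff + noise*I) - trace(Kff - Qff)/(2*noise),
    # where Qff = Kuf^T Kuu^{-1} Kuf is the Nystrom approximation to Kff.
    N, M = X.shape[0], Z.shape[0]
    Kuu = se_kernel(Z, Z, lengthscale, variance) + 1e-8 * np.eye(M)  # jitter
    Kuf = se_kernel(Z, X, lengthscale, variance)
    L = np.linalg.cholesky(Kuu)
    A = np.linalg.solve(L, Kuf) / np.sqrt(noise)   # M x N, so A^T A = Qff/noise
    B = A @ A.T + np.eye(M)                        # M x M; all heavy work is O(N M^2)
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A @ y) / np.sqrt(noise)
    bound = -0.5 * N * np.log(2.0 * np.pi * noise)
    bound -= np.sum(np.log(np.diag(LB)))           # -0.5 log|Qff + noise*I| (+ const above)
    bound -= 0.5 * (y @ y) / noise - 0.5 * (c @ c) # -0.5 y^T (Qff + noise*I)^{-1} y
    # Trace correction distinguishing the ELBO from the DTC log marginal likelihood:
    bound -= 0.5 * (N * variance - np.sum(A * A) * noise) / noise
    return bound

# Usage: with D-dimensional Gaussian covariates, the paper's result says
# M = O((log N)^D) inducing variables suffice for an accurate posterior.
N, D = 1000, 1
rng = np.random.default_rng(0)
X = rng.normal(size=(N, D))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=N)
M = int(np.ceil(np.log(N) ** D))                 # M grows polylogarithmically in N
Z = X[rng.choice(N, size=M, replace=False)]      # naive placement, for illustration
print(f"N={N}, M={M}, ELBO={sgpr_elbo(X, y, Z):.2f}")
```

The Cholesky-based form above is the standard numerically stable way to evaluate the bound: every cubic-cost operation involves only the M x M matrices Kuu and B, which is what makes the M = 𝒪((log N)^D) result translate into the near-linear overall cost quoted in the abstract.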


Related research

03/08/2019 ∙ Rates of Convergence for Sparse Variational Gaussian Process Regression
Excellent variational approximations to Gaussian process posteriors have...

03/05/2020 ∙ Knot Selection in Sparse Gaussian Processes with a Variational Objective
Sparse, knot-based Gaussian processes have enjoyed considerable success ...

09/01/2022 ∙ Bézier Gaussian Processes for Tall and Wide Data
Modern approximations to Gaussian processes are suitable for "tall data"...

02/08/2022 ∙ Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning
Kernel-based models such as kernel ridge regression and Gaussian process...

06/15/2016 ∙ Understanding Probabilistic Sparse Gaussian Process Approximations
Good sparse approximations are essential for practical inference in Gaus...

04/29/2020 ∙ Sparse Cholesky factorization by Kullback-Leibler minimization
We propose to compute a sparse approximate inverse Cholesky factor L of ...

05/07/2021 ∙ Laplace Matching for fast Approximate Inference in Generalized Linear Models
Bayesian inference in generalized linear models (GLMs), i.e. Gaussian re...
