Bézier Gaussian Processes for Tall and Wide Data

09/01/2022
by Martin Jørgensen, et al.

Modern approximations to Gaussian processes are suitable for "tall data", with a cost that scales well in the number of observations, but under-perform on "wide data", scaling poorly in the number of input features. That is, as the number of input features grows, good predictive performance requires the number of summarising variables, and their associated cost, to grow rapidly. We introduce a kernel that allows the number of summarising variables to grow exponentially with the number of input features, but requires only linear cost in both the number of observations and input features. This scaling is achieved through our introduction of the Bézier buttress, which allows approximate inference without computing matrix inverses or determinants. We show that our kernel has close similarities to some of the most used kernels in Gaussian process regression, and empirically demonstrate the kernel's ability to scale to both tall and wide datasets.
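The Bézier buttress construction itself is specific to the paper, but as background, the building block it is named after is the Bézier curve: a polynomial curve defined by a set of control points, commonly evaluated with de Casteljau's algorithm. The sketch below is illustrative background only, not the paper's method; the function name and scalar control points are our own choices.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t (in [0, 1]) by
    repeatedly interpolating between adjacent control points."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Each pass replaces n points with n-1 linear interpolations.
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A cubic Bézier with scalar control points; the curve starts and
# ends at the first and last control points.
ctrl = [0.0, 1.0, 1.0, 0.0]
de_casteljau(ctrl, 0.5)  # → 0.75
```

Note that evaluation is linear in the number of interpolation passes, and the number of control points grows with the polynomial degree; the paper's contribution is, roughly, organising exponentially many such summarising variables so that inference stays linear in both observations and input features.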


