SigGPDE: Scaling Sparse Gaussian Processes on Sequential Data

by Maud Lemercier, et al.

Making predictions and quantifying their uncertainty when the input data is sequential is a fundamental learning challenge that has recently attracted increasing attention. We develop SigGPDE, a new scalable sparse variational inference framework for Gaussian Processes (GPs) on sequential data. Our contribution is twofold. First, we construct inducing variables underpinning the sparse approximation so that the resulting evidence lower bound (ELBO) does not require any matrix inversion. Second, we show that the gradients of the GP signature kernel are solutions of a hyperbolic partial differential equation (PDE). This theoretical insight allows us to build an efficient back-propagation algorithm to optimize the ELBO. We showcase the significant computational gains of SigGPDE compared to existing methods, while achieving state-of-the-art performance for classification tasks on large datasets of up to 1 million multivariate time series.
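For context on the PDE viewpoint: the signature kernel itself is known to solve a Goursat-type hyperbolic PDE, ∂²k/∂s∂t = ⟨ẋ(s), ẏ(t)⟩ k(s,t) with k(0,·) = k(·,0) = 1, and the abstract's gradient result extends this idea to the kernel's derivatives. As an illustration only (not the authors' implementation), a minimal first-order finite-difference solver for the kernel PDE between two piecewise-linear paths might look like:

```python
import numpy as np

def signature_kernel(x, y):
    """Approximate the signature kernel k(x, y) by solving the Goursat PDE
    d^2 k / ds dt = <x'(s), y'(t)> k  on a grid, with boundary k = 1.

    x: array of shape (len_x, d), y: array of shape (len_y, d),
    interpreted as piecewise-linear paths in R^d.
    """
    dx = np.diff(x, axis=0)      # path increments of x, shape (m, d)
    dy = np.diff(y, axis=0)      # path increments of y, shape (n, d)
    inc = dx @ dy.T              # <dx_i, dy_j> for every grid cell
    m, n = inc.shape
    k = np.ones((m + 1, n + 1))  # boundary conditions k(0,.) = k(.,0) = 1
    # First-order explicit update sweeping the grid cell by cell.
    for i in range(m):
        for j in range(n):
            k[i + 1, j + 1] = (k[i + 1, j] + k[i, j + 1]
                               - k[i, j] + inc[i, j] * k[i, j])
    return k[-1, -1]
```

For two identical constant paths the increments vanish and the solver returns exactly 1; refining the grid improves accuracy, since the scheme is only first order. SigGPDE's contribution is that the *gradients* needed for ELBO optimization satisfy a PDE of the same hyperbolic type, so back-propagation can reuse this kind of cheap grid solver instead of differentiating through it.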




