Faster variational inducing input Gaussian process classification

11/18/2016
by Pavel Izmailov, et al.

Gaussian processes (GPs) provide a prior over functions and allow one to capture complex regularities in data. They have been used successfully for classification and regression problems as well as for dimensionality reduction; in this work we consider only the classification problem. The complexity of standard methods for GP classification scales cubically with the size of the training dataset, which makes them inapplicable to big-data problems, so a variety of methods have been introduced to overcome this limitation. In this paper we focus on methods based on so-called inducing inputs. This approach relies on variational inference and maximizes a particular lower bound on the marginal likelihood (evidence) with respect to the kernel hyperparameters of the Gaussian process, thus fitting the model to the data. The computational complexity of this method is O(nm^2), where m is the number of inducing inputs used by the model and is assumed to be substantially smaller than the dataset size n. Recently, a new evidence lower bound for the GP classification problem was introduced. It admits stochastic optimization, which makes it suitable for big-data problems. However, this new lower bound depends on O(m^2) variational parameters, which makes optimization challenging when m is large. In this work we develop a new approach to training inducing-input GP models for classification. We use a quadratic approximation of several terms in the aforementioned evidence lower bound and obtain analytical expressions for the optimal values of most of the parameters, thus substantially reducing the dimensionality of the optimization space. In our experiments we achieve results as good as or better than those of the existing method. Moreover, our method does not require the user to manually set a learning rate, which makes it more practical than the existing method.
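To make the O(nm^2) scaling concrete, here is a minimal NumPy sketch of the Nyström-style term K_nm K_mm^{-1} K_mn that inducing-input methods work with instead of the full n × n kernel matrix. This is illustrative code, not code from the paper; the RBF kernel, its hyperparameter values, and picking the inducing inputs as a random data subset are all assumptions made for the example.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel; hyperparameter values are illustrative."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

n, m, d = 1000, 20, 2                        # m inducing inputs, m << n
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))                  # training inputs
Z = X[rng.choice(n, size=m, replace=False)]  # inducing inputs (a data subset here)

Knm = rbf(X, Z)                              # n x m cross-covariances
Kmm = rbf(Z, Z) + 1e-6 * np.eye(m)           # jitter for numerical stability

# Nystrom approximation K_nn ~= Q_nn = K_nm K_mm^{-1} K_mn.
# The Cholesky factorisation costs O(m^3); the solve below costs O(n m^2),
# which dominates for m << n and is the source of the O(nm^2) complexity.
L = np.linalg.cholesky(Kmm)                  # lower-triangular factor of K_mm
A = np.linalg.solve(L, Knm.T)                # m x n, so that Q_nn = A.T @ A
q_diag = (A ** 2).sum(axis=0)                # diag(Q_nn) without the n x n matrix
```

Only the diagonal of Q_nn is needed by bounds of the kind sketched next, so the full n × n matrix is never formed.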
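The stochastically optimizable lower bound mentioned above has the following generic form in the sparse variational GP literature. This is a sketch in standard notation (u are the process values at the m inducing inputs and q(u) is the variational distribution), not necessarily the exact notation of the paper:

```latex
% Generic sparse variational bound for GP classification (standard form
% from the literature; the notation is assumed, not taken from the paper).
\log p(y \mid \theta) \;\ge\; \mathcal{L}(\mu, \Sigma, \theta)
  = \sum_{i=1}^{n} \mathbb{E}_{q(f_i)}\!\left[ \log p(y_i \mid f_i) \right]
    - \mathrm{KL}\!\left( q(u) \,\|\, p(u \mid \theta) \right),
\qquad q(u) = \mathcal{N}(u \mid \mu, \Sigma).
```

Because the first term is a sum over data points, unbiased minibatch gradients are available. The price is that \mu (m entries) and \Sigma (m(m+1)/2 free entries) must be optimized jointly with the kernel parameters \theta, which is where the O(m^2) variational parameters come from.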
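To see how a quadratic approximation can eliminate most of these parameters, consider the following generic Gaussian calculation (a sketch of a standard derivation, not a reproduction of the paper's): if each log-likelihood term is approximated by a quadratic in f_i, say log p(y_i | f_i) ≈ a_i + b_i f_i - c_i f_i^2 / 2 with c_i > 0, the bound becomes a concave quadratic in the variational parameters and the optimal Gaussian q(u) is available in closed form:

```latex
% Closed-form optimum of q(u) = N(mu, Sigma) under a quadratic
% log-likelihood approximation (generic result; b = (b_1, ..., b_n)^T and
% C = diag(c_1, ..., c_n) are notation introduced for this sketch).
\Sigma^{*} = \left( K_{mm}^{-1}
    + K_{mm}^{-1} K_{mn}\, C\, K_{nm} K_{mm}^{-1} \right)^{-1},
\qquad
\mu^{*} = \Sigma^{*} K_{mm}^{-1} K_{mn}\, b .
```

Substituting \mu^{*} and \Sigma^{*} back into the bound leaves an objective over the kernel parameters (and the parameters of the quadratic approximation) alone, which is the kind of reduction in the dimensionality of the optimization space that the abstract describes.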


