Direct loss minimization for sparse Gaussian processes

04/07/2020
by Yadi Wei, et al.

The Gaussian process (GP) is an attractive Bayesian model for machine learning that combines an elegant formulation with model flexibility and uncertainty quantification. Sparse Gaussian process (sGP) algorithms provide approximate solutions that mitigate the high computational complexity of GPs, and variational approximation is the current best practice for constructing such approximations. Recent theoretical work has shown that an alternative approach, direct loss minimization (DLM), which directly minimizes the predictive loss, comes with strong guarantees on the expected loss of the algorithm. In this paper we explore this approach experimentally. We develop the DLM algorithm for sGP and show that, with appropriate hyperparameter optimization, it provides a significant improvement over the variational approach. In particular, optimizing sGP for log loss yields better-calibrated predictions for regression, classification and count prediction, while optimizing sGP for square loss improves the mean square error in regression.
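To make the contrast with the variational approach concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a log-loss DLM objective for sparse GP regression. It assumes an RBF kernel, fixed inducing inputs, a Gaussian likelihood with known noise variance, and a diagonal variational covariance; names such as dlm_log_loss and the hyperparameter values are hypothetical. The key difference from the variational ELBO is that the data term is the negative log of the predictive density at each point rather than the expected negative log likelihood under q(f).

```python
# Illustrative sketch of log-loss DLM for sparse GP regression (assumptions noted above).
import jax
import jax.numpy as jnp

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between 1-D input arrays a and b.
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * jnp.exp(-0.5 * d2 / lengthscale ** 2)

def predictive_moments(params, x, z, jitter=1e-6):
    # Mean and variance of q(f(x)) given q(u) = N(m, diag(s)) at inducing inputs z.
    m, log_s = params["m"], params["log_s"]
    Kuu = rbf(z, z) + jitter * jnp.eye(z.shape[0])
    Kuf = rbf(z, x)
    A = jnp.linalg.solve(Kuu, Kuf)            # Kuu^{-1} Kuf
    mean = A.T @ m
    kff = jnp.ones(x.shape[0])                # diag of Kff for a unit-variance RBF
    var = kff - jnp.sum(Kuf * A, axis=0) + jnp.sum(A * (jnp.exp(log_s)[:, None] * A), axis=0)
    return mean, var

def kl_to_prior(params, z, jitter=1e-6):
    # KL( N(m, diag(s)) || N(0, Kuu) ), the regularizer paired with the loss term.
    m, s = params["m"], jnp.exp(params["log_s"])
    Kuu = rbf(z, z) + jitter * jnp.eye(z.shape[0])
    return 0.5 * (jnp.trace(jnp.linalg.solve(Kuu, jnp.diag(s)))
                  + m @ jnp.linalg.solve(Kuu, m) - m.size
                  + jnp.linalg.slogdet(Kuu)[1] - jnp.sum(jnp.log(s)))

def dlm_log_loss(params, x, y, z, noise=0.1):
    # DLM data term: negative log *predictive* density -log N(y | mean, var + noise).
    # The variational ELBO would instead use E_q[-log p(y | f)].
    mean, var = predictive_moments(params, x, z)
    nlpd = 0.5 * jnp.sum(jnp.log(2 * jnp.pi * (var + noise))
                         + (y - mean) ** 2 / (var + noise))
    return nlpd + kl_to_prior(params, z)

# Toy data and a few steps of plain gradient descent on the DLM objective.
key = jax.random.PRNGKey(0)
x = jnp.linspace(-3, 3, 50)
y = jnp.sin(x) + 0.1 * jax.random.normal(key, (50,))
z = jnp.linspace(-3, 3, 10)                   # inducing inputs, held fixed here
params = {"m": jnp.zeros(10), "log_s": jnp.zeros(10)}
grad_fn = jax.jit(jax.grad(dlm_log_loss))
for _ in range(500):
    g = grad_fn(params, x, y, z)
    params = jax.tree_util.tree_map(lambda p, gi: p - 1e-2 * gi, params, g)
print(dlm_log_loss(params, x, y, z))
```

In practice the paper also optimizes kernel hyperparameters and inducing locations; the sketch keeps those fixed to isolate the change of objective.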
