Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds

10/29/2018
by David Reeb, et al.

Gaussian Processes (GPs) are a generic modelling tool for supervised learning. While they have been successfully applied to large datasets, their use in safety-critical applications is hindered by the lack of good performance guarantees. To address this, we propose a method to learn GPs and their sparse approximations by directly optimizing a PAC-Bayesian bound on their generalization performance, instead of maximizing the marginal likelihood. Besides its theoretical appeal, we find in our evaluation that our learning method is robust and yields significantly better generalization guarantees than other common GP approaches on several regression benchmark datasets.
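To make the idea concrete, below is a minimal sketch of training GP hyperparameters by minimizing a PAC-Bayesian objective rather than the negative log marginal likelihood. It is not the paper's exact algorithm: the paper inverts the binary kl divergence and also covers sparse approximations, while this sketch substitutes the looser McAllester-style square-root bound and a full GP. The bounded loss (probability of an error larger than a tolerance eps), the kernel, and all names (rbf, pac_bayes_bound, eps, delta) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy 1-D regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
N, eps, delta = len(y), 0.3, 0.05  # eps: error tolerance, delta: confidence level

def rbf(A, B, ls, sf):
    """Squared-exponential kernel with lengthscale ls and signal std sf."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ls ** 2)

def pac_bayes_bound(log_params):
    """McAllester-style PAC-Bayes upper bound on the eps-tolerance risk."""
    ls, sf, sn = np.exp(log_params)  # lengthscale, signal std, noise std
    try:
        K = rbf(X, X, ls, sf)
        L = np.linalg.cholesky(K + sn ** 2 * np.eye(N))
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mu = K @ alpha                 # posterior mean at the training inputs
        V = np.linalg.solve(L, K)
        cov = K - V.T @ V              # posterior covariance at the training inputs
        sd = np.sqrt(np.clip(np.diag(cov), 0.0, None) + sn ** 2)
        # Empirical risk: predictive probability that the error exceeds eps.
        emp = np.mean(1.0 - (norm.cdf((y - mu + eps) / sd)
                             - norm.cdf((y - mu - eps) / sd)))
        # KL(posterior || prior) between the Gaussians at the training inputs.
        jitter = 1e-6 * np.eye(N)
        Lp = np.linalg.cholesky(K + jitter)
        Lq = np.linalg.cholesky(cov + jitter)
        iP = lambda M: np.linalg.solve(Lp.T, np.linalg.solve(Lp, M))
        kl = 0.5 * (np.trace(iP(cov + jitter)) + mu @ iP(mu) - N
                    + 2.0 * (np.log(np.diag(Lp)).sum() - np.log(np.diag(Lq)).sum()))
        # McAllester relaxation; holds with probability >= 1 - delta.
        return emp + np.sqrt((kl + np.log(2.0 * np.sqrt(N) / delta)) / (2.0 * N))
    except np.linalg.LinAlgError:
        return np.inf  # reject numerically unstable hyperparameter settings

res = minimize(pac_bayes_bound, np.log([1.0, 1.0, 0.1]), method="Nelder-Mead")
print("optimized bound: %.3f, hypers (ls, sf, sn): %s" % (res.fun, np.exp(res.x)))
```

The optimized objective value is itself a (looser) high-probability guarantee on out-of-sample performance, which is the practical appeal over marginal-likelihood training.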


Related research

09/22/2019 · PAC-Bayesian Bounds for Deep Gaussian Processes
Variational approximation techniques and inference for stochastic models...

04/07/2021 · Adversarial Robustness Guarantees for Gaussian Processes
Gaussian processes (GPs) enable principled computation of model uncertai...

09/06/2021 · Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications
Gaussian processes have become a promising tool for various safety-criti...

06/16/2020 · PAC-Bayesian Generalization Bounds for MultiLayer Perceptrons
We study PAC-Bayesian generalization bounds for Multilayer Perceptrons (...

06/26/2018 · Scalable Gaussian Process Inference with Finite-data Mean and Variance Guarantees
Gaussian processes (GPs) offer a flexible class of priors for nonparamet...

12/07/2020 · Generalization bounds for deep learning
Generalization in deep learning has been the topic of much recent theore...

02/21/2020 · Knot Selection in Sparse Gaussian Processes
Knot-based, sparse Gaussian processes have enjoyed considerable success ...
