Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients

02/16/2021
by   Artem Artemev, et al.

We propose a lower bound on the log marginal likelihood of Gaussian process regression models that can be computed without a matrix factorisation of the full kernel matrix. We show that approximate maximum likelihood learning of model parameters by maximising our lower bound retains many of the benefits of the sparse variational approach while reducing the bias introduced into parameter learning. The basis of our bound is a more careful analysis of the log-determinant term appearing in the log marginal likelihood, as well as using the method of conjugate gradients to derive tight lower bounds on the term involving a quadratic form. Our approach is a step forward in unifying methods relying on lower bound maximisation (e.g. variational methods) and iterative approaches based on conjugate gradients for training Gaussian processes. In experiments, we show improved predictive performance with our model for a comparable amount of training time compared to other conjugate-gradient-based approaches.
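The quadratic-form bound mentioned in the abstract can be illustrated with a short sketch. For a symmetric positive-definite kernel matrix K and any vector v, expanding (v - K⁻¹y)ᵀ K (v - K⁻¹y) ≥ 0 gives yᵀK⁻¹y ≥ 2vᵀy - vᵀKv, with equality at v = K⁻¹y; running conjugate gradients on Kv = y produces iterates whose bounds tighten monotonically. The NumPy code below is a minimal illustration of this idea under those assumptions, not the authors' implementation, and the random SPD matrix stands in for a real kernel matrix.

```python
import numpy as np

def quad_lower_bound(K, y, v):
    # For SPD K, (v - K^{-1} y)^T K (v - K^{-1} y) >= 0 rearranges to
    # y^T K^{-1} y >= 2 v^T y - v^T K v, with equality at v = K^{-1} y.
    return 2.0 * (v @ y) - v @ K @ v

def cg_bounds(K, y, num_steps):
    # Plain conjugate gradients on K v = y; CG monotonically decreases the
    # energy functional, so each iterate yields a tighter lower bound.
    v = np.zeros_like(y)
    r = y - K @ v          # residual
    p = r.copy()           # search direction
    bounds = []
    for _ in range(num_steps):
        Kp = K @ p
        alpha = (r @ r) / (p @ Kp)
        v = v + alpha * p
        r_new = r - alpha * Kp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        bounds.append(quad_lower_bound(K, y, v))
    return bounds

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
K = A @ A.T + 50.0 * np.eye(50)   # well-conditioned SPD stand-in for a kernel matrix
y = rng.standard_normal(50)

exact = y @ np.linalg.solve(K, y)  # reference value of the quadratic form
bounds = cg_bounds(K, y, 10)
# bounds increase monotonically toward `exact` from below
```

No factorisation of K is needed: each CG step costs one matrix-vector product, which is what makes this style of bound attractive at scale.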



Related research

Nested Variational Compression in Deep Gaussian Processes (12/03/2014)
Deep Gaussian processes provide a flexible approach to probabilistic mod...

Sparse inversion for derivative of log determinant (11/02/2019)
Algorithms for Gaussian process, marginal likelihood methods or restrict...

Multiple Kernel Learning: A Unifying Probabilistic Viewpoint (03/04/2011)
We present a probabilistic viewpoint to multiple kernel learning unifyin...

Barely Biased Learning for Gaussian Process Regression (09/20/2021)
Recent work in scalable approximate Gaussian process regression has disc...

Filtering Variational Objectives (05/25/2017)
When used as a surrogate objective for maximum likelihood estimation in ...

Scalable Multi-Class Gaussian Process Classification using Expectation Propagation (06/22/2017)
This paper describes an expectation propagation (EP) method for multi-cl...

Gaussian Process Bandit Optimization of the Thermodynamic Variational Objective (10/29/2020)
Achieving the full promise of the Thermodynamic Variational Objective (T...