Fantasizing with Dual GPs in Bayesian Optimization and Active Learning

by Paul E. Chang, et al.

Gaussian processes (GPs) are the main surrogate models used in sequential modelling tasks such as Bayesian optimization and active learning. Their drawbacks are poor scaling with data size and the need to run an inner optimization loop when the likelihood is non-Gaussian. In this paper, we focus on 'fantasizing' batch acquisition functions, which require the ability to condition on newly fantasized data in a computationally efficient way. By using a sparse dual GP parameterization, we gain linear scaling with batch size as well as one-step updates for non-Gaussian likelihoods, thus extending sparse models to greedy batch-fantasizing acquisition functions.
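The conditioning step that fantasizing relies on can be sketched with a plain (non-sparse) GP regression model: a candidate point is added to the training set with a hypothetical observation (here, the current posterior mean), and the posterior is re-evaluated. This is only a minimal illustration of the fantasizing idea the paper accelerates, not of the sparse dual parameterization itself; the kernel, noise level, and data below are assumptions for the example.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Standard GP regression posterior mean and variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)
    return mean, var

# Toy observed data (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))
y = np.sin(X[:, 0])

x_cand = np.array([[0.5]])            # candidate batch point
mu, var_before = gp_posterior(X, y, x_cand)

# Fantasize: condition on the candidate using its predicted mean as a
# hypothetical observation, then re-evaluate the posterior there.
X_fant = np.vstack([X, x_cand])
y_fant = np.append(y, mu)
_, var_after = gp_posterior(X_fant, y_fant, x_cand)

print(var_before[0], var_after[0])    # posterior variance shrinks
```

A greedy batch acquisition function repeats this update once per batch element; the paper's contribution is making each such update a one-step, linear-in-batch-size operation even for non-Gaussian likelihoods, where a naive implementation would rerun variational optimization.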


