Efficient Gaussian Process Bandits by Believing only Informative Actions

03/23/2020
by   Amrit Singh Bedi, et al.

Bayesian optimization is a framework for global search via maximum a posteriori updates rather than simulated annealing, and has gained prominence for decision-making under uncertainty. In this work, we cast Bayesian optimization as a multi-armed bandit problem, where the payoff function is sampled from a Gaussian process (GP). Further, we focus on action selection via the upper confidence bound (UCB) or expected improvement (EI) criteria due to their prevalent use in practice. Prior works using GPs for bandits cannot allow the iteration horizon T to be large, as the complexity of computing the posterior parameters scales cubically with the number of past observations. To circumvent this computational burden, we propose a simple statistical test: only incorporate an action into the GP posterior when its conditional entropy exceeds an ϵ threshold. Doing so permits us to derive sublinear regret bounds for GP bandit algorithms, up to factors depending on the compression parameter ϵ, for both discrete and continuous action sets. Moreover, the complexity of the GP posterior remains provably finite. Experimentally, we observe state-of-the-art accuracy and complexity tradeoffs for GP bandit algorithms applied to global optimization, suggesting the merits of compressed GPs in bandit settings.
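
The compression rule described above lends itself to a short illustration. Below is a minimal NumPy sketch of a GP-UCB loop over a discrete action set in which the selected action is appended to the GP's stored data only when its Gaussian conditional entropy, 0.5·log(2πe(σ_t² + σ_noise²)), exceeds a threshold ϵ. The RBF kernel, toy objective, exploration weight β_t, and the specific values of ϵ and the noise variance are illustrative assumptions, not the paper's exact construction; the sketch only conveys the flavor of the statistical test.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential kernel between point sets A (n, d) and B (m, d).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_posterior(query, X, y, noise_var=1e-2):
    # GP posterior mean and variance at `query`, conditioned on stored data (X, y).
    if len(X) == 0:
        return np.zeros(len(query)), np.ones(len(query))
    K_inv = np.linalg.inv(rbf_kernel(X, X) + noise_var * np.eye(len(X)))
    k = rbf_kernel(query, X)
    mean = k @ K_inv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", k, K_inv, k)
    return mean, np.maximum(var, 1e-12)

def objective(x):
    # Toy payoff function (unknown to the bandit algorithm).
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 0])

rng = np.random.default_rng(0)
actions = rng.uniform(0.0, 2.0, size=(200, 1))   # discrete action set
X, y = np.empty((0, 1)), np.empty(0)             # compressed set of "informative" actions
eps, noise_var = 1e-3, 1e-2

for t in range(1, 101):
    mu, var = gp_posterior(actions, X, y, noise_var)
    beta_t = 2.0 * np.log(len(actions) * t ** 2)     # illustrative UCB exploration weight
    x_t = actions[np.argmax(mu + np.sqrt(beta_t * var))][None, :]

    # Statistical test from the abstract: keep the action only if its conditional
    # entropy under the current posterior exceeds eps. For a Gaussian observation,
    # H = 0.5 * log(2*pi*e*(sigma_t^2 + noise_var)).
    _, var_t = gp_posterior(x_t, X, y, noise_var)
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * (var_t[0] + noise_var))
    if entropy > eps:
        reward = objective(x_t)[0] + np.sqrt(noise_var) * rng.standard_normal()
        X, y = np.vstack([X, x_t]), np.append(y, reward)

print(f"stored actions: {len(X)} over 100 rounds, best observed reward: {y.max():.3f}")
```

Because the Gaussian entropy is a monotone function of the posterior variance, the test is effectively a variance threshold: once the posterior is sufficiently certain near the actions being selected, new observations stop entering the dictionary, which is why the size of the stored data stays bounded even as the horizon T grows.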

