Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization

05/10/2022
by Shouri Hu, et al.

The expected improvement (EI) is one of the most popular acquisition functions for Bayesian optimization (BO) and has demonstrated good empirical performance in many applications for the minimization of simple regret. However, under the evaluation metric of cumulative regret, the performance of EI may not be competitive, and its existing theoretical regret upper bound still has room for improvement. To adapt EI for better performance under cumulative regret, we introduce a novel quantity called the evaluation cost, which is compared against the acquisition function value, and use it to develop the expected improvement-cost (EIC) algorithm. In each iteration of EIC, the new point with the largest acquisition function value is sampled only if that value exceeds its evaluation cost; if no point meets this criterion, the current best point is resampled. The evaluation cost quantifies the potential downside of sampling a point, which matters under the cumulative regret metric because the objective function value in every iteration affects the performance measure. We further establish in theory a tight regret upper bound of EIC for the squared-exponential covariance kernel under mild regularity conditions, and perform experiments to illustrate the improvement of EIC over several popular BO algorithms.
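
To make the sampling rule concrete, below is a minimal sketch of the EIC decision step described above, not the authors' reference implementation. It assumes a scikit-learn Gaussian process surrogate with a squared-exponential (RBF) kernel, and the eval_cost function is a hypothetical placeholder, since the abstract does not give the exact form of the evaluation cost.

# Minimal sketch of the EIC decision rule described in the abstract.
# The evaluation cost `eval_cost` is a placeholder assumption; its exact
# definition is given in the paper, not here.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF  # squared-exponential kernel

def expected_improvement(mu, sigma, best_f):
    # Standard EI for minimization, given posterior mean `mu` and std `sigma`.
    sigma = np.maximum(sigma, 1e-12)
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def eic_select(gp, candidates, X_obs, y_obs, eval_cost):
    # Pick the candidate with the largest EI; sample it only if its EI
    # exceeds its evaluation cost, otherwise resample the current best point.
    mu, sigma = gp.predict(candidates, return_std=True)
    best_idx = int(np.argmin(y_obs))
    best_f = y_obs[best_idx]
    ei = expected_improvement(mu, sigma, best_f)
    i = int(np.argmax(ei))
    if ei[i] > eval_cost(candidates[i], gp):
        return candidates[i]   # new point passes the cost test
    return X_obs[best_idx]     # otherwise resample the incumbent

# Example usage on a toy problem (X_obs, y_obs, candidates are numpy arrays):
# gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
# gp.fit(X_obs, y_obs)
# x_next = eic_select(gp, candidates, X_obs, y_obs,
#                     eval_cost=lambda x, gp: 0.01)  # placeholder constant cost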


