
Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior

by Zi Wang, et al.

Bayesian optimization usually assumes that a Bayesian prior is given. However, the strong theoretical guarantees in Bayesian optimization are often regrettably compromised in practice because of unknown parameters in the prior. In this paper, we adopt a variant of empirical Bayes and show that, by estimating the Gaussian process prior from offline data sampled from the same prior and constructing unbiased estimators of the posterior, variants of both GP-UCB and probability of improvement achieve a near-zero regret bound, which decreases to a constant proportional to the observational noise as the number of offline data points and the number of online evaluations increase. Empirically, we verify our approach on challenging simulated robotic problems featuring task and motion planning.
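The empirical-Bayes idea in the abstract can be sketched in a few lines: estimate the prior mean and covariance from offline draws of functions sampled from the same (unknown) GP prior, then run GP-UCB with the plug-in posterior. The sketch below is an illustration under simplifying assumptions (a finite grid domain, a squared-exponential ground-truth kernel, and an exploration weight `beta` chosen by hand), not the paper's exact estimators or regret-optimal schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: a finite candidate grid stands in for a compact domain.
X = np.linspace(0.0, 1.0, 30)
M = len(X)

# True (unknown to the algorithm) GP prior: zero mean, squared-exponential kernel.
def true_kernel(a, b, ell=0.15):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K_true = true_kernel(X, X) + 1e-8 * np.eye(M)
L = np.linalg.cholesky(K_true)

# Offline data: N functions drawn i.i.d. from the same prior, observed on X.
N = 200
F = (L @ rng.standard_normal((M, N))).T          # shape (N, M)

# Empirical-Bayes plug-in estimates of the prior mean and covariance.
mu_hat = F.mean(axis=0)
K_hat = np.cov(F, rowvar=False) + 1e-6 * np.eye(M)

# Online phase: GP-UCB with the estimated prior on a fresh draw from the prior.
f_new = L @ rng.standard_normal(M)
noise = 1e-3
queried, y_obs = [], []
for t in range(10):
    if queried:
        idx = np.array(queried)
        K_qq = K_hat[np.ix_(idx, idx)] + noise * np.eye(len(idx))
        K_xq = K_hat[:, idx]
        sol = np.linalg.solve(K_qq, np.array(y_obs) - mu_hat[idx])
        mean = mu_hat + K_xq @ sol
        reduction = K_xq @ np.linalg.solve(K_qq, K_xq.T)
        var = np.clip(np.diag(K_hat) - np.diag(reduction), 0.0, None)
    else:
        mean, var = mu_hat, np.diag(K_hat)
    beta = 2.0                                   # hypothetical exploration weight
    ucb = mean + np.sqrt(beta * var)
    x_t = int(np.argmax(ucb))
    queried.append(x_t)
    y_obs.append(f_new[x_t] + np.sqrt(noise) * rng.standard_normal())

best = max(f_new[i] for i in queried)
print(f"best value found: {best:.3f}, true max: {f_new.max():.3f}")
```

As the number of offline draws N grows, `mu_hat` and `K_hat` concentrate around the true prior, which is the mechanism behind the regret bound decreasing with more offline data.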



