Optimization for Gaussian Processes via Chaining

10/19/2015
by Emile Contal et al.

In this paper, we consider the problem of stochastic optimization under a bandit feedback model. We generalize the GP-UCB algorithm [Srinivas et al., 2012] to arbitrary kernels and search spaces. To do so, we use a notion of localized chaining to control the supremum of a Gaussian process, and provide a novel optimization scheme based on the computation of covering numbers. The theoretical bounds we obtain on the cumulative regret are more generic and achieve the same convergence rates as those of the GP-UCB algorithm. Finally, the algorithm is shown to be empirically more efficient than its natural competitors on both simple and complex input spaces.
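For background on the algorithm being generalized: GP-UCB maintains a Gaussian-process posterior over the unknown function and repeatedly queries the point maximizing an upper confidence bound, mean plus a scaled posterior standard deviation. The sketch below is a minimal toy illustration of that classical loop, not the paper's chaining-based method; the RBF kernel, the discrete candidate grid, and the constant beta = 2 are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * lengthscale**2))

def gp_posterior(X, y, Xs, noise=1e-3):
    """GP posterior mean and standard deviation at candidate points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))  # jitter keeps K well-conditioned
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)  # prior k(x, x) = 1
    return mu, np.sqrt(var)

def gp_ucb(f, candidates, n_iter=20, beta=2.0, seed=None):
    """Query f at the maximizer of mu + sqrt(beta) * sigma, n_iter times."""
    rng = np.random.default_rng(seed)
    X = candidates[rng.integers(len(candidates))][None, :]  # random first query
    y = np.array([f(X[0])])
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, candidates)
        x_next = candidates[np.argmax(mu + np.sqrt(beta) * sigma)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X, y

# Toy 1-D problem: maximize a smooth function over a grid on [0, 1].
f = lambda x: float(np.exp(-(x[0] - 0.3)**2 / 0.05) + 0.5 * np.sin(8 * x[0]))
grid = np.linspace(0, 1, 200)[:, None]
X, y = gp_ucb(f, grid, n_iter=15, seed=0)
print(f"best observed value: {y.max():.3f}")
```

GP-UCB's regret analysis ties the exploration constant beta to the search space; the paper's contribution is to replace that fixed bound with one computed from covering numbers of the space, which is what allows arbitrary kernels and search spaces.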


Related research

- 11/19/2013: Gaussian Process Optimization with Mutual Information
  In this paper, we analyze a generic algorithm scheme for sequential glob...
- 11/09/2021: Misspecified Gaussian Process Bandit Optimization
  We consider the problem of optimizing a black-box function based on nois...
- 02/16/2016: Stochastic Process Bandits: Upper Confidence Bounds Algorithms via Generic Chaining
  The paper considers the problem of global optimization in the setup of s...
- 08/23/2019: BdryGP: a new Gaussian process model for incorporating boundary information
  Gaussian processes (GPs) are widely used as surrogate models for emulati...
- 07/07/2021: Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian Modeling
  There is significant interest in learning and optimizing a complex syste...
- 10/25/2018: Adversarially Robust Optimization with Gaussian Processes
  In this paper, we consider the problem of Gaussian process (GP) optimiza...
- 03/15/2022: Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization
  The expected improvement (EI) algorithm is one of the most popular strat...
