Convergence rates of efficient global optimization algorithms

01/18/2011
by Adam D. Bull

Efficient global optimization is the problem of minimizing an unknown function f, using as few evaluations of f(x) as possible. It can be considered as a continuum-armed bandit problem, with noiseless data and simple regret. Expected improvement is perhaps the most popular method for solving this problem; the algorithm performs well in experiments, but little is known about its theoretical properties. Implementing expected improvement requires a choice of Gaussian process prior, which determines an associated space of functions, its reproducing-kernel Hilbert space (RKHS). When the prior is fixed, expected improvement is known to converge on the minimum of any function in the RKHS. We begin by providing convergence rates for this procedure. The rates are optimal for functions of low smoothness, and we modify the algorithm to attain optimal rates for smoother functions. For practitioners, however, these results are somewhat misleading. Priors are typically not held fixed, but depend on parameters estimated from the data. For standard estimators, we show this procedure may never discover the minimum of f. We then propose alternative estimators, chosen to minimize the constants in the rate of convergence, and show these estimators retain the convergence rates of a fixed prior.
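
The abstract describes expected improvement with a fixed Gaussian process prior. The following is a minimal illustrative sketch of that setup, not the paper's exact construction: the squared-exponential kernel, its fixed hyperparameters, and the random candidate-grid maximization of the acquisition function are all assumptions made here for brevity.

```python
# Sketch of expected improvement (EI) for noiseless global minimization
# under a fixed Gaussian-process prior. Kernel choice, hyperparameters,
# and the candidate-grid search are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from scipy.spatial.distance import cdist


def sq_exp_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of points."""
    d2 = cdist(A, B, "sqeuclidean")
    return variance * np.exp(-0.5 * d2 / lengthscale**2)


def gp_posterior(X, y, Xs, kernel, jitter=1e-10):
    """Posterior mean and std. dev. at Xs, given noiseless observations (X, y)."""
    K = kernel(X, X) + jitter * np.eye(len(X))
    Ks = kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    var = np.clip(np.diag(kernel(Xs, Xs)) - np.sum(v**2, axis=0), 0.0, None)
    return mu, np.sqrt(var)


def expected_improvement(mu, sigma, best):
    """EI for minimization: E[max(best - f(x), 0)] under the GP posterior."""
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)


def ei_minimize(f, bounds, n_init=3, n_iter=20, n_candidates=2000, seed=0):
    """Minimize f on a box by repeatedly evaluating the EI maximizer on a random grid."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        Xs = rng.uniform(lo, hi, size=(n_candidates, len(bounds)))
        mu, sigma = gp_posterior(X, y, Xs, sq_exp_kernel)
        x_next = Xs[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()


if __name__ == "__main__":
    f = lambda x: float(np.sin(3 * x[0]) + x[0] ** 2 - 0.7 * x[0])
    x_best, f_best = ei_minimize(f, bounds=[(-2.0, 2.0)])
    print(x_best, f_best)
```

In this sketch the kernel hyperparameters stay fixed throughout, matching the fixed-prior setting for which the paper proves convergence rates; the variant the abstract warns about would instead re-estimate those parameters from the data at each step.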

