Parallel Bayesian Global Optimization of Expensive Functions

02/16/2016
by Jialei Wang, et al.

We consider parallel global optimization of derivative-free, expensive-to-evaluate functions, and propose an efficient method based on stochastic approximation for implementing a conceptual Bayesian optimization algorithm proposed by Ginsbourger et al. (2007). To accomplish this, we use infinitesimal perturbation analysis (IPA) to construct a stochastic gradient estimator and show that this estimator is unbiased. We also show that stochastic gradient ascent using the constructed gradient estimator converges to a stationary point of the q-EI surface; therefore, as the number of multiple starts of the gradient ascent algorithm and the number of steps for each start grow large, the one-step Bayes-optimal set of points is recovered. We show in numerical experiments that our method for maximizing the q-EI is faster than methods based on closed-form evaluation using high-dimensional integration when considering many parallel function evaluations, and is comparable in speed when considering few. We also show that the resulting one-step Bayes-optimal algorithm for parallel global optimization finds high-quality solutions with fewer evaluations than a heuristic based on approximately maximizing the q-EI. A high-quality open-source implementation of this algorithm is available in the Metrics Optimization Engine (MOE).
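To make the approach concrete, below is a minimal sketch (not the paper's or MOE's implementation) of Monte Carlo q-EI maximization by multistart stochastic gradient ascent. It substitutes JAX reverse-mode autodiff for the paper's hand-derived IPA estimator: fixing the standard-normal draws Z and writing the batch's posterior sample as Y = mu + L Z (the reparameterization underlying IPA) makes the sampled improvement differentiable in the candidate batch X almost everywhere, so averaging per-sample gradients gives an unbiased stochastic gradient of the kind the abstract describes. The toy GP posterior, squared-exponential kernel, lengthscale, step-size schedule, and all names below are illustrative assumptions.

```python
# Hypothetical sketch: Monte Carlo q-EI and a pathwise (IPA-style) gradient,
# obtained via JAX autodiff rather than the paper's hand-derived estimator.
import jax
import jax.numpy as jnp

# --- Toy GP posterior (squared-exponential kernel, near-noiseless data) ---
def k(A, B, ls=0.3):
    d2 = jnp.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * d2 / ls**2)

X_obs = jnp.array([[0.1], [0.5], [0.9]])   # assumed observed design points
y_obs = jnp.array([0.3, -0.8, 0.2])        # assumed observed values
f_best = jnp.min(y_obs)                    # incumbent best (minimization)

def posterior(X):
    """GP posterior mean and Cholesky factor at the q candidate points X."""
    K = k(X_obs, X_obs) + 1e-8 * jnp.eye(len(X_obs))
    Ks = k(X, X_obs)
    mu = Ks @ jnp.linalg.solve(K, y_obs)
    cov = k(X, X) - Ks @ jnp.linalg.solve(K, Ks.T) + 1e-8 * jnp.eye(len(X))
    return mu, jnp.linalg.cholesky(cov)

def improvement(X, Z):
    """Improvement of the batch for one fixed standard-normal draw Z.
    The reparameterization Y = mu + L Z makes this differentiable in X."""
    mu, L = posterior(X)
    Y = mu + L @ Z
    return jnp.maximum(f_best - jnp.min(Y), 0.0)

def qei_and_grad(X, key, n_mc=256):
    """Monte Carlo q-EI estimate and an unbiased pathwise gradient estimate."""
    Z = jax.random.normal(key, (n_mc, X.shape[0]))
    vals, grads = jax.vmap(jax.value_and_grad(improvement),
                           in_axes=(None, 0))(X, Z)
    return vals.mean(), grads.mean(axis=0)

# --- Multistart stochastic gradient ascent on the q-EI surface ---
def ascend(X0, key, steps=200, a=0.1):
    X = X0
    for t in range(steps):
        key, sub = jax.random.split(key)
        _, g = qei_and_grad(X, sub)
        X = jnp.clip(X + (a / (1 + t)) * g, 0.0, 1.0)  # Robbins-Monro steps
    return X

key = jax.random.PRNGKey(0)
starts = jax.random.uniform(key, (5, 2, 1))  # 5 restarts, batch q=2, d=1
batches = [ascend(X0, jax.random.fold_in(key, i))
           for i, X0 in enumerate(starts)]
best = max(batches, key=lambda X: float(qei_and_grad(X, key)[0]))
print("proposed batch:", best)
```

Each restart follows a stochastic gradient ascent trajectory toward a stationary point of the q-EI surface, and the restart with the highest estimated q-EI is proposed, mirroring the multistart scheme whose convergence the paper analyzes; the paper's actual implementation differs in its details (hand-derived gradients and production-grade numerics in MOE).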
