The Parallel Knowledge Gradient Method for Batch Bayesian Optimization

06/14/2016
by Jian Wu, et al.

In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g. when evaluating the performance of several different neural network architectures in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm --- the parallel knowledge gradient method. By construction, this method provides the one-step Bayes-optimal batch of points to sample. We provide an efficient strategy for computing this Bayes-optimal batch of points, and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms, on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy.
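The knowledge gradient of a candidate batch is the expected improvement in the minimum of the posterior mean after (hypothetically) observing that batch. A minimal illustrative sketch of this idea, not the paper's implementation: a plain NumPy Gaussian-process model with an assumed RBF kernel, fixed length scale, and a fixed discretization grid, where the q-KG value is estimated by Monte Carlo "fantasy" observations at the batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel between the rows of A and B (assumed kernel).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-2):
    # Standard GP regression posterior mean and covariance at Xte.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ytr
    cov = rbf(Xte, Xte) - Ks.T @ sol
    return mu, cov

def qkg(Xtr, ytr, batch, grid, n_mc=64, noise=1e-2):
    # Monte Carlo estimate of the parallel knowledge gradient of `batch`
    # (for minimization), with the posterior minimum taken over `grid`.
    mu_grid, _ = gp_posterior(Xtr, ytr, grid, noise)
    best_now = mu_grid.min()
    mu_b, cov_b = gp_posterior(Xtr, ytr, batch, noise)
    L = np.linalg.cholesky(cov_b + (noise + 1e-9) * np.eye(len(batch)))
    gains = []
    for _ in range(n_mc):
        # Fantasize noisy outcomes at the batch, refit, and measure
        # how much the minimum of the posterior mean drops.
        y_fant = mu_b + L @ rng.standard_normal(len(batch))
        mu_new, _ = gp_posterior(np.vstack([Xtr, batch]),
                                 np.concatenate([ytr, y_fant]),
                                 grid, noise)
        gains.append(best_now - mu_new.min())
    return float(np.mean(gains))
```

One would then maximize `qkg` over candidate batches (the paper does this efficiently rather than by enumeration); note how the batch is scored jointly, through the fantasy samples drawn from its joint posterior covariance, rather than point by point.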


Related research

07/20/2017 · Discretization-free Knowledge Gradient Methods for Bayesian Optimization
This paper studies Bayesian ranking and selection (R&S) problems with co...

11/23/2015 · Parallel Predictive Entropy Search for Batch Global Optimization of Expensive Objective Functions
We develop parallel predictive entropy search (PPES), a novel algorithm ...

01/20/2021 · A New Knowledge Gradient-based Method for Constrained Bayesian Optimization
Black-box problems are common in real life like structural design, drug ...

08/21/2019 · A tree-based radial basis function method for noisy parallel surrogate optimization
Parallel surrogate optimization algorithms have proven to be efficient m...

02/16/2016 · Parallel Bayesian Global Optimization of Expensive Functions
We consider parallel global optimization of derivative-free expensive-to...

10/21/2020 · Batch Sequential Adaptive Designs for Global Optimization
Compared with the fixed-run designs, the sequential adaptive designs (SA...

10/18/2021 · A portfolio approach to massively parallel Bayesian optimization
One way to reduce the time of conducting optimization studies is to eval...
