Differentiating the multipoint Expected Improvement for optimal batch design

by Sébastien Marmin et al.

This work deals with parallel optimization of expensive objective functions which are modeled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. As of today, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear as the most prominent approaches for batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis' formula was recently established. Since the computational burden of this selection rule remains an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which facilitates its maximization using gradient-based ascent algorithms. Substantial computational savings are demonstrated in applications. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining starting designs relying on UCB with gradient-based EI local optimization finally appears as a sound option for batch design in distributed Gaussian process optimization.
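To make the criterion concrete: the multipoint (q-point) Expected Improvement of a batch is the expected reduction of the current best objective value when all q points are evaluated jointly. The sketch below estimates it by plain Monte Carlo from the GP posterior at the batch, rather than by the closed-form Tallis-based expansion developed in the paper; the posterior mean, covariance, and incumbent value are illustrative placeholders, not quantities from the paper.

```python
import numpy as np

def batch_expected_improvement(mean, cov, best, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the multipoint (q-point) Expected Improvement.

    mean : (q,) GP posterior mean at the q candidate points
    cov  : (q, q) GP posterior covariance at those points
    best : current best (minimum) observed objective value
    """
    rng = np.random.default_rng(seed)
    # Draw joint posterior samples Y ~ N(mean, cov) for the whole batch.
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    # The batch improves the incumbent by max(best - min_i Y_i, 0).
    improvement = np.maximum(best - samples.min(axis=1), 0.0)
    return improvement.mean()

# Toy example: two correlated candidates near the current best value.
mean = np.array([0.0, 0.1])
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
print(batch_expected_improvement(mean, cov, best=0.0))
```

A Monte Carlo estimator like this is simple but noisy, which is precisely why a closed-form expression for the criterion and its gradient, as derived in the paper, pays off when the batch design is optimized by gradient ascent.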

