Bayesian Optimization with Gradients

03/13/2017
by Jian Wu et al.

Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to decrease the number of objective function evaluations required for good performance. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (d-KG), for which we show one-step Bayes-optimality, asymptotic consistency, and greater one-step value of information than is possible in the derivative-free setting. Our procedure accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. We show that d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.
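A useful way to see how derivative information enters the surrogate model: differentiation is a linear operator, so a Gaussian process prior on f automatically induces a joint Gaussian distribution over f and its gradient, and derivative observations can be conditioned on exactly like function values. The sketch below illustrates this in one dimension with an RBF kernel; the kernel, hyperparameters, noise levels, and toy objective are illustrative assumptions, and it is not the authors' d-KG implementation (which further optimizes the knowledge-gradient acquisition over such a model).

```python
# Minimal sketch (assumed setup): a 1-D GP conditioned jointly on noisy
# function values and derivative observations. Because differentiation is
# linear, Cov(f, f') is obtained by differentiating the kernel.
import numpy as np

ell, sig2 = 0.5, 1.0           # assumed RBF lengthscale and signal variance
noise_f, noise_g = 1e-4, 1e-4  # assumed value/gradient observation noise

def k(x, y):    # Cov(f(x), f(y)): RBF kernel
    return sig2 * np.exp(-(x - y) ** 2 / (2 * ell ** 2))

def k01(x, y):  # Cov(f(x), f'(y)) = d/dy k(x, y)
    return k(x, y) * (x - y) / ell ** 2

def k11(x, y):  # Cov(f'(x), f'(y)) = d^2/(dx dy) k(x, y)
    return k(x, y) * (1.0 / ell ** 2 - (x - y) ** 2 / ell ** 4)

def posterior_mean(X, f_obs, g_obs, Xs):
    """Posterior mean of f at Xs given values f_obs and gradients g_obs at X."""
    n = len(X)
    Xr, Xc = X[:, None], X[None, :]
    # Joint covariance of the stacked observation vector [f(X), f'(X)].
    K = np.block([[k(Xr, Xc),     k01(Xr, Xc)],
                  [k01(Xr, Xc).T, k11(Xr, Xc)]])
    K += np.diag(np.r_[np.full(n, noise_f), np.full(n, noise_g)])
    # Cross-covariance between test values f(Xs) and the observations.
    Ks = np.hstack([k(Xs[:, None], Xc), k01(Xs[:, None], Xc)])
    return Ks @ np.linalg.solve(K, np.r_[f_obs, g_obs])

# Toy check: observe f and f' of sin(3x) at three points.
f_true = lambda x: np.sin(3 * x)
g_true = lambda x: 3 * np.cos(3 * x)
X = np.array([-1.0, 0.2, 0.9])
Xs = np.linspace(-1.2, 1.2, 7)
for xs, m in zip(Xs, posterior_mean(X, f_true(X), g_true(X), Xs)):
    print(f"x = {xs:+.2f}  posterior mean = {m:+.3f}  true f = {f_true(xs):+.3f}")
```

In the paper's setting derivatives may be noisy or only partially observed; in a sketch like this that corresponds to raising noise_g or dropping rows of the gradient block, which is how the noisy and incomplete derivative information mentioned in the abstract can be accommodated.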

Related research

Discretization-free Knowledge Gradient Methods for Bayesian Optimization (07/20/2017)
This paper studies Bayesian ranking and selection (R&S) problems with co...

Bayesian Optimization with Expensive Integrands (03/23/2018)
We propose a Bayesian optimization algorithm for objective functions tha...

Warm Starting Bayesian Optimization (08/11/2016)
We develop a framework for warm-starting Bayesian optimization, that red...

Bayesian optimization with improved scalability and derivative information for efficient design of nanophotonic structures (01/08/2021)
We propose the combination of forward shape derivatives and the use of a...

Stratified Bayesian Optimization (02/07/2016)
We consider derivative-free black-box global optimization of expensive n...

Modifier Adaptation Meets Bayesian Optimization and Derivative-Free Optimization (09/18/2020)
This paper investigates a new class of modifier-adaptation schemes to ov...

Enhancing Optimization Performance: A Novel Hybridization of Gaussian Crunching Search and Powell's Method for Derivative-Free Optimization (08/09/2023)
This research paper presents a novel approach to enhance optimization pe...
