Function Optimization with Posterior Gaussian Derivative Process

by Sucharita Roy et al.

In this article, we propose and develop a novel Bayesian algorithm for optimization of functions whose first and second partial derivatives are known to exist. The basic premise is the Gaussian process representation of the function, which induces a first derivative process that is also Gaussian. The Bayesian posterior solutions of the derivative process set equal to zero, given data consisting of suitable choices of input points in the function domain and their function values, emulate the stationary points of the function; these can be fine-tuned by placing restrictions on the prior in terms of the first and second derivatives of the objective function. These observations motivate us to propose a general and effective algorithm for function optimization that attempts to get closer to the true optima adaptively through in-built iterative stages. We provide a theoretical foundation for this algorithm, proving almost sure convergence to the true optima as the number of iterative stages tends to infinity. The theoretical foundation hinges upon our proofs of almost sure uniform convergence of the posteriors associated with Gaussian and Gaussian derivative processes to the underlying function and its derivatives in appropriate fixed-domain infill asymptotics setups; rates of convergence are also available. We also provide a Bayesian characterization of the number of optima using information inherent in our optimization algorithm. We illustrate our Bayesian optimization algorithm with five different examples involving maxima, minima, saddle points, and even inconclusiveness. Our examples range from simple, one-dimensional problems to challenging 50- and 100-dimensional problems.
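The core idea of the abstract can be illustrated in a minimal sketch (not the authors' algorithm, and without its adaptive stages or posterior uncertainty): fit a Gaussian process to function evaluations, form the induced posterior mean of the derivative process, which is available in closed form for a smooth kernel, and take the point where that derivative mean vanishes as an emulated stationary point. The squared-exponential kernel, its length-scale, the jitter, and the grid search are all hypothetical choices for this one-dimensional toy problem.

```python
import numpy as np

# Squared-exponential kernel and its derivative with respect to the
# first argument; differentiating the kernel yields the (Gaussian)
# derivative process used in the sketch below.
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def drbf_dx(a, b, ell=0.5):
    # d/dx k(x, x') for the squared-exponential kernel
    return -(a[:, None] - b[None, :]) / ell ** 2 * rbf(a, b, ell)

def fit_gp(X, y, ell=0.5, jitter=1e-8):
    # Interpolating GP: solve (K + jitter*I) alpha = y once.
    K = rbf(X, X, ell) + jitter * np.eye(len(X))
    return np.linalg.solve(K, y)

def posterior_deriv_mean(xs, X, alpha, ell=0.5):
    # Posterior mean of the induced derivative process at the points xs.
    return drbf_dx(xs, X, ell) @ alpha

# Toy objective: f(x) = (x - 1)^2, unique stationary point (minimum) at x = 1.
f = lambda x: (x - 1.0) ** 2
X = np.linspace(-2.0, 4.0, 13)
alpha = fit_gp(X, f(X))

# Emulate the stationary point as the grid point where the posterior
# derivative mean is closest to zero.
grid = np.linspace(-1.0, 3.0, 2001)
d = posterior_deriv_mean(grid, X, alpha)
x_star = grid[np.argmin(np.abs(d))]
print(x_star)  # close to the true minimizer 1.0
```

A second-derivative process, obtained by differentiating the kernel twice, would distinguish minima, maxima, and saddle points at the recovered root, which is the role the second-derivative restrictions play in the abstract.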



