Harnessing Low-Fidelity Data to Accelerate Bayesian Optimization via Posterior Regularization

02/11/2019
by Bin Liu, et al.

Bayesian optimization (BO) is a powerful derivative-free technique for global optimization of expensive black-box objective functions (BOFs). However, the overhead of BO can still be prohibitive when the allowed number of function evaluations falls short of what the method requires. In this paper, we investigate how to reduce the number of function evaluations required by BO without compromising solution quality. We explore the idea of posterior regularization for harnessing low-fidelity (LF) data within the Gaussian process upper confidence bound (GP-UCB) framework. The LF data are assumed to arise from previous evaluations of an LF approximation of the BOF. An extra GP expert, called LF-GP, is trained to fit the LF data. We develop a dynamic weighted product of experts (DW-POE) fusion operator, which induces the regularization on the posterior of the BOF. The impact of the LF-GP expert on the resulting regularized posterior is adaptively adjusted via a Bayesian formalism. Extensive experimental results on benchmark BOF optimization tasks demonstrate the superior performance of the proposed algorithm over state-of-the-art methods.
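To make the fusion step concrete, the Python sketch below fuses the pointwise predictive posteriors of a high-fidelity GP and the LF-GP expert through a weighted product of Gaussian experts, then scores candidates with a UCB acquisition. This is a minimal illustration under assumed notation: the function names, the fixed weight beta_lf, and the exploration parameter kappa are not the paper's own, and the actual DW-POE operator adapts the LF weight via Bayesian reasoning rather than holding it fixed.

import numpy as np

def weighted_poe_fusion(mu_hf, var_hf, mu_lf, var_lf, beta_lf):
    # Weighted product of two Gaussian experts:
    #   p(f) proportional to N(f; mu_hf, var_hf) * N(f; mu_lf, var_lf)**beta_lf
    # Precisions add, each scaled by its expert's weight (HF weight fixed at 1);
    # the fused mean is the precision-weighted average of the expert means.
    prec = 1.0 / var_hf + beta_lf / var_lf
    var = 1.0 / prec
    mu = var * (mu_hf / var_hf + beta_lf * mu_lf / var_lf)
    return mu, var

def ucb(mu, var, kappa=2.0):
    # GP-UCB acquisition for maximization: mean plus scaled standard deviation.
    return mu + kappa * np.sqrt(var)

# Usage: fuse predictions over a candidate grid, then pick the next query
# point by maximizing the regularized UCB (toy numbers for illustration).
mu_hf, var_hf = np.array([0.2, 0.5]), np.array([0.30, 0.10])
mu_lf, var_lf = np.array([0.4, 0.1]), np.array([0.05, 0.05])
mu_fused, var_fused = weighted_poe_fusion(mu_hf, var_hf, mu_lf, var_lf, beta_lf=0.5)
next_idx = int(np.argmax(ucb(mu_fused, var_fused)))

Note the design intuition: when beta_lf is driven toward zero, the fused posterior reduces to the high-fidelity GP posterior and standard GP-UCB is recovered, so the LF data can only help, not dominate, as evidence accumulates.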
