Doubly Bayesian Optimization

12/11/2018 · by Alexander Lavin, et al.

Bayesian optimization (BO) is a powerful method for optimizing complex black-box functions that are costly to evaluate directly. Although useful out of the box, complications arise when the domain exhibits non-smooth structure, noise, or more than five dimensions. Extending BO to handle these issues is non-trivial, which is why we suggest casting BO methods into the probabilistic programming paradigm. Probabilistic programming systems (PPS) enable users to encode model structure and reason naturally about uncertainties, both of which can be leveraged toward improved BO methods. Here we present a probabilistic domain-specific language in which BO is native, showing that this probabilistic approach to optimization is more naturally expressed in a PPS. We show results on standard optimization benchmarks and, more importantly, demonstrate that the framework lets the user more readily apply advanced techniques such as unscented BO and noisy expected improvement.
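To ground the discussion, here is a minimal sketch of the standard BO loop the abstract refers to: a Gaussian-process surrogate paired with an expected-improvement acquisition, maximizing a toy 1-D objective over a grid. This uses only NumPy/SciPy and is an illustrative sketch of vanilla BO, not the paper's probabilistic DSL; the function names, kernel choice, and parameter values are our own assumptions.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential covariance between two 1-D input arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP-regression posterior mean and std dev at x_test.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance of RBF kernel is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI acquisition for maximization: E[max(f - best, 0)].
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    # Basic BO loop: fit surrogate, maximize acquisition, evaluate, repeat.
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, n_init)
    y = f(x)
    grid = np.linspace(*bounds, 200)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x, y, grid)
        ei = expected_improvement(mu, sigma, y.max())
        x_next = grid[np.argmax(ei)]
        x = np.append(x, x_next)
        y = np.append(y, f(np.array([x_next]))[0])
    return x[np.argmax(y)], y.max()

# Toy objective with a known maximum at x = 0.6.
f = lambda x: -(x - 0.6) ** 2
x_best, y_best = bayes_opt(f)
```

The paper's argument is that the pieces above — the surrogate model, its hyperparameters, and the observation noise — are all probabilistic objects, so expressing the whole loop inside a PPS makes variants like noisy expected improvement a small model change rather than a reimplementation.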
