
Active Bayesian Optimization: Minimizing Minimizer Entropy

by   Il Memming Park, et al.
The University of Texas at Austin

The ultimate goal of optimization is to find the minimizer of a target function. However, typical criteria for active optimization often ignore the uncertainty about the minimizer. We propose a novel criterion for global optimization and an associated sequential active learning strategy using Gaussian processes. Our criterion is the reduction of uncertainty in the posterior distribution of the function minimizer. It can also flexibly incorporate multiple global minimizers. We implement a tractable approximation of the criterion and demonstrate that it locates the global minimizer more accurately than conventional Bayesian optimization criteria.
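The criterion described above can be illustrated with a minimal numpy sketch: fit a Gaussian process to the observations, estimate the posterior distribution of the minimizer by sampling functions on a candidate grid, and pick the next query point whose fantasized observation most reduces the entropy of that distribution. This is not the authors' implementation; the grid discretization, RBF kernel length-scale, toy objective, and one-step Monte-Carlo fantasy approximation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel on 1-D inputs (length-scale is an assumption)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and covariance on the candidate grid Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, rbf(Xs, Xs) - v.T @ v

def minimizer_entropy(X, y, Xs, n_samples=100):
    """Monte-Carlo entropy of the posterior distribution of the argmin over the grid."""
    mu, cov = gp_posterior(X, y, Xs)
    f = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(Xs)), size=n_samples)
    p = np.bincount(f.argmin(axis=1), minlength=len(Xs)) / n_samples
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def next_query(X, y, Xs, n_fantasy=8):
    """One-step lookahead: choose the candidate whose fantasized observation
    yields the lowest expected minimizer entropy."""
    mu, cov = gp_posterior(X, y, Xs)
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    best_x, best_h = Xs[0], np.inf
    for j, xq in enumerate(Xs):
        h = 0.0
        for _ in range(n_fantasy):
            yf = mu[j] + sd[j] * rng.standard_normal()  # fantasy observation at xq
            h += minimizer_entropy(np.append(X, xq), np.append(y, yf), Xs)
        if h / n_fantasy < best_h:
            best_h, best_x = h / n_fantasy, xq
    return best_x

# Toy objective with minimizer at x = 0.3 (an assumption for illustration).
X = np.array([0.0, 0.5, 1.0])
y = (X - 0.3) ** 2
Xs = np.linspace(0.0, 1.0, 30)
```

In contrast to expected-improvement style criteria, which score candidates by their predicted function value, this acquisition scores each candidate by how much observing it is expected to concentrate the posterior over the minimizer itself.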

