Scaling Bayesian Optimization With Game Theory

10/07/2021
by L. Mathesen, et al.

We introduce Bayesian Optimization with Fictitious Play (BOFiP), an algorithm for the optimization of high-dimensional black-box functions. BOFiP decomposes the original high-dimensional space into several sub-spaces defined by non-overlapping sets of dimensions. These sets are randomly generated at the start of the algorithm and form a partition of the dimensions of the original space. BOFiP searches the original space by alternating BO within the sub-spaces with information exchange among the sub-spaces, which updates each sub-space's function evaluations. The basic idea is to distribute the high-dimensional optimization across low-dimensional sub-spaces, where each sub-space is a player in an equal-interest game. At each iteration, BO produces approximate best replies that update the players' belief distributions. The belief updates and BO alternate until a stopping condition is met. High-dimensional problems are common in real applications, and several contributions in the BO literature have highlighted the difficulty of scaling to high dimensions due to the computational complexity of estimating the model hyperparameters. This complexity is exponential in the problem dimension, resulting in a substantial loss of performance for most techniques as the input dimensionality increases. We compare BOFiP to several state-of-the-art approaches in the field of high-dimensional black-box optimization. The numerical experiments report performance on three benchmark objective functions from 20 up to 1,000 dimensions, and on a neural network architecture design problem with 42 up to 911 nodes in 6 up to 92 layers, respectively, resulting in networks with 500 up to 10,000 weights. These experiments empirically show that BOFiP outperforms its competitors, with consistent performance across different problems and increasing problem dimensionality.
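The alternating structure described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the sphere objective, the [-5, 5] bounds, the player count, and all variable names are assumptions made for the example, and plain random search stands in for the per-sub-space BO step.

```python
import numpy as np

# Sketch of a BOFiP-style alternating loop (illustrative assumptions:
# sphere objective on [-5, 5]^20; random search replaces the inner BO).

def objective(x):
    """Illustrative black-box function (sphere, minimized at the origin)."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_players, lo, hi = 20, 4, -5.0, 5.0

# Randomly partition the dimensions into non-overlapping sub-spaces,
# one per player in the equal-interest game.
subspaces = np.array_split(rng.permutation(dim), n_players)

beliefs = rng.uniform(lo, hi, dim)  # running average of past plays
play = beliefs.copy()               # each player's latest best reply

for t in range(1, 101):
    for dims in subspaces:
        # Each player searches only its own coordinates; the remaining
        # coordinates are fixed at the opponents' empirical averages.
        best_val = np.inf
        for _ in range(10):         # stand-in for an inner BO loop
            cand = beliefs.copy()
            cand[dims] = rng.uniform(lo, hi, dims.size)
            val = objective(cand)
            if val < best_val:
                best_val = val
                play[dims] = cand[dims]
    # Fictitious-play belief update: fold the new joint play into the
    # running average, then repeat until the budget is exhausted.
    beliefs += (play - beliefs) / t

print("objective at averaged play:", objective(beliefs))
```

Swapping the random-search inner loop for a Gaussian-process BO step over each low-dimensional sub-space recovers the structure the abstract describes; the hyperparameter-estimation cost then scales with the sub-space dimension rather than the full problem dimension.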

