Trading Convergence Rate with Computational Budget in High Dimensional Bayesian Optimization

by Hung Tran-The, et al.

Scaling Bayesian optimisation (BO) to high-dimensional search spaces is an active and open research problem, particularly when no assumptions are made on the function's structure. The main reason is that at each iteration, BO requires finding the global maximum of an acquisition function, which is itself a non-convex optimisation problem in the original search space. As the dimension grows, the computational budget for this maximisation becomes increasingly tight, leading to inaccurate solutions. This inaccuracy adversely affects both the convergence and the efficiency of BO. We propose a novel approach in which the acquisition function only needs to be maximised on a discrete set of low-dimensional subspaces embedded in the original high-dimensional search space. Unlike many recent high-dimensional BO methods, our method is free of any low-dimensional structure assumption on the function. Optimising the acquisition function in low-dimensional subspaces allows our method to obtain accurate solutions within a limited computational budget. We show that, despite this convenience, our algorithm remains convergent. In particular, its cumulative regret grows only sub-linearly with the number of iterations. More importantly, as evident from our regret bounds, our algorithm provides a way to trade the convergence rate against the number of subspaces used in the optimisation. Finally, when the number of subspaces is "sufficiently large", our algorithm's cumulative regret is at most O^*(√(Tγ_T)), as opposed to O^*(√(DTγ_T)) for the GP-UCB of Srinivas et al. (2012), removing a crucial factor of √(D), where D is the dimension of the input space.
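To make the core idea concrete, here is a minimal sketch of maximising an acquisition function over a discrete set of random low-dimensional subspaces embedded in a high-dimensional box. This is an illustrative assumption about the construction, not the paper's exact algorithm: the embedding matrices, the candidate sampling, and the toy quadratic acquisition `acq` are all hypothetical choices made for the example.

```python
import numpy as np

def maximise_on_subspaces(acq, D, d=2, n_subspaces=5, n_candidates=200, seed=0):
    """Maximise `acq` over random d-dimensional linear subspaces embedded in
    the D-dimensional box [-1, 1]^D (a hedged sketch, not the paper's method).

    Each candidate y in [-1, 1]^d is mapped into the ambient space via a random
    embedding matrix A, so the inner maximisation stays d-dimensional and cheap.
    """
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(n_subspaces):
        # Random embedding: y in R^d maps to x = A y in R^D.
        A = rng.standard_normal((D, d)) / np.sqrt(d)
        # Cheap low-dimensional search: random candidates in the d-dim box.
        Y = rng.uniform(-1.0, 1.0, size=(n_candidates, d))
        X = np.clip(Y @ A.T, -1.0, 1.0)  # project embedded points back into the box
        vals = acq(X)
        i = int(np.argmax(vals))
        if vals[i] > best_val:
            best_val, best_x = float(vals[i]), X[i]
    return best_x, best_val

# Toy acquisition surrogate (hypothetical): smooth, peaked at the origin.
acq = lambda X: -np.sum(X ** 2, axis=1)
x, v = maximise_on_subspaces(acq, D=100)
```

Each inner maximisation here costs only a d-dimensional search regardless of D, which is the computational trade the abstract describes: more subspaces buy a better approximation of the global acquisition maximum at proportionally more budget.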


Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces

Bayesian optimization is known to be difficult to scale to high dimensio...

Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces

Bayesian optimisation is a popular method for efficient optimisation of ...

Global optimization using random embeddings

We propose a random-subspace algorithmic framework for global optimizati...

Relaxing the Additivity Constraints in Decentralized No-Regret High-Dimensional Bayesian Optimization

Bayesian Optimization (BO) is typically used to optimize an unknown func...

Exploiting Active Subspaces in Global Optimization: How Complex is your Problem?

When applying optimization method to a real-world problem, the possessio...

Good practices for Bayesian Optimization of high dimensional structured spaces

The increasing availability of structured but high dimensional data has ...

On the choice of the low-dimensional domain for global optimization via random embeddings

The challenge of taking many variables into account in optimization prob...
