Multi-Scale Zero-Order Optimization of Smooth Functions in an RKHS

05/11/2020
by   Shubhanshu Shekhar, et al.

We aim to optimize a black-box function f: X → ℝ under the assumption that f is Hölder smooth and has bounded norm in the RKHS associated with a given kernel K. This problem is known to have an agnostic Gaussian Process (GP) bandit interpretation in which an appropriately constructed GP surrogate model with kernel K is used to obtain an upper confidence bound (UCB) algorithm. In this paper, we propose a new algorithm (LP-GP-UCB) in which the usual GP surrogate model is augmented with Local Polynomial (LP) estimators of the Hölder smooth function f to construct a multi-scale UCB that guides the search for the optimizer. We analyze this algorithm and derive high-probability bounds on its simple and cumulative regret. We then prove that the elements of many common RKHSs are Hölder smooth, obtain the corresponding Hölder smoothness parameters, and hence specialize our regret bounds to several commonly used kernels. When specialized to the Squared Exponential (SE) kernel, LP-GP-UCB matches the optimal performance, while for Matérn kernels (K_ν)_{ν>0} it yields uniformly tighter regret bounds for all values of the smoothness parameter ν>0. Most notably, for certain ranges of ν, the algorithm achieves near-optimal bounds on simple and cumulative regret, matching the algorithm-independent lower bounds up to polylog factors and thus closing the large gap between the existing upper and lower bounds for these values of ν. Additionally, our analysis provides the first explicit regret bounds, in terms of the budget n, for the Rational-Quadratic (RQ) and Gamma-Exponential (GE) kernels. Finally, experiments with synthetic functions as well as a CNN hyperparameter tuning task demonstrate the practical benefits of our multi-scale partitioning approach over several existing algorithms.
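
To make the GP bandit setting concrete, here is a minimal sketch of the standard GP-UCB acquisition loop that the paper builds on (not the authors' LP-GP-UCB, which additionally maintains local polynomial estimators over a multi-scale partition). The objective f, the grid discretization, and the constant exploration parameter beta are illustrative assumptions; in practice beta is typically scheduled with the round t.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical black-box objective on [0, 1]; any bounded smooth function works here.
def f(x):
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 200).reshape(-1, 1)   # discretized search domain X
beta = 2.0                                     # UCB exploration parameter (assumed constant)

# Start with a single random evaluation.
X_obs = rng.uniform(0, 1, size=(1, 1))
y_obs = f(X_obs).ravel()

for t in range(30):
    # Fit the GP surrogate with a Matérn kernel to all observations so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    gp.fit(X_obs, y_obs)

    # Posterior mean and standard deviation define the upper confidence bound.
    mu, sigma = gp.predict(grid, return_std=True)
    ucb = mu + beta * sigma

    # Query the point maximizing the UCB and record its value.
    x_next = grid[np.argmax(ucb)].reshape(1, 1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, f(x_next).ravel())

print("best observed value:", y_obs.max())
```

LP-GP-UCB replaces the single GP-based confidence bound above with the minimum of the GP UCB and local polynomial UCBs computed at multiple scales of a partition of X, which is the source of the tighter regret bounds discussed in the abstract.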
