Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization

05/31/2017
by   Jonathan Scarlett, et al.

In this paper, we consider the problem of sequentially optimizing a black-box function f based on noisy samples and bandit feedback. We assume that f is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after T rounds, and on the cumulative regret, measuring the sum of regrets over the T chosen points. For the isotropic squared-exponential kernel in d dimensions, we find that an average simple regret of ϵ requires T = Ω((1/ϵ²)(log(1/ϵ))^(d/2)), and the average cumulative regret is at least Ω(√(T (log T)^d)), thus matching existing upper bounds up to the replacement of d/2 by d+O(1) in both cases. For the Matérn-ν kernel, we give analogous bounds of the form Ω((1/ϵ)^(2+d/ν)) and Ω(T^((ν+d)/(2ν+d))), and discuss the resulting gaps to the existing upper bounds.
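As a quick illustration of the two regret notions mentioned above (not the paper's algorithm or analysis), the sketch below evaluates a naive uniform-sampling strategy under noisy bandit feedback on a toy one-dimensional objective. The function `f`, the noise level, and the rule for reporting a final point are all hypothetical choices made for this example: cumulative regret sums f(x*) − f(x_t) over all T queries, while simple regret measures f(x*) − f(x̂) for a single reported point x̂.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy smooth objective standing in for the unknown RKHS function;
    # its maximum value is 1, attained at x = 0.3.
    return np.exp(-(x - 0.3) ** 2 / 0.02)

xs = np.linspace(0.0, 1.0, 201)
f_max = f(xs).max()  # f(x*) on the grid

# Naive strategy: query T uniformly random points, observing noisy values.
T = 500
queries = rng.uniform(0.0, 1.0, T)
noisy_values = f(queries) + rng.normal(0.0, 0.1, T)

# Cumulative regret: sum of per-round gaps f(x*) - f(x_t).
cumulative_regret = np.sum(f_max - f(queries))

# Simple regret: gap at a single reported point; here we (naively)
# report the query with the best observed noisy value.
reported = queries[np.argmax(noisy_values)]
simple_regret = f_max - f(reported)

print(f"cumulative regret after T={T}: {cumulative_regret:.2f}")
print(f"simple regret of reported point: {simple_regret:.4f}")
```

Under this uniform strategy the cumulative regret grows linearly in T (most queries land far from the peak), whereas the simple regret is small, which is exactly why the paper's two lower bounds are stated and analyzed separately.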
