Adaptive Bandit Convex Optimization with Heterogeneous Curvature
We consider the problem of adversarial bandit convex optimization, that is, online learning over a sequence of arbitrary convex loss functions with only one function evaluation per function. While all previous works assume known and homogeneous curvature for these loss functions, we study a heterogeneous setting where each function has its own curvature that is revealed only after the learner makes a decision. We develop an efficient algorithm that adapts to this curvature on the fly. Specifically, our algorithm not only recovers or even improves existing results for several homogeneous settings, but also leads to surprising results for some heterogeneous settings. For example, while Hazan and Levy (2014) showed that O(d^{3/2}√T) regret is achievable for a sequence of T smooth and strongly convex d-dimensional functions, our algorithm shows that the same bound is achievable even if T^{3/4} of them are not strongly convex, and sometimes even if a constant fraction of them are not strongly convex. Our approach is inspired by the framework of Bartlett et al. (2007), who studied a similar heterogeneous setting but with stronger gradient feedback. Extending their framework to the bandit feedback setting requires novel ideas such as lifting the feasible domain and using a logarithmically homogeneous self-concordant barrier as the regularizer.
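For concreteness, here is a minimal formal sketch of the setting the abstract describes, following the standard bandit convex optimization protocol: at each round t = 1, ..., T the learner plays a point x_t in a convex body K ⊆ R^d and observes only the scalar value f_t(x_t); in the heterogeneous setting, curvature parameters of f_t (e.g., its strong convexity modulus) are revealed only after x_t is chosen. The symbols K, x_t, f_t, Reg_T, F, and ν below are illustrative conventions, not necessarily the paper's exact notation, and the second display is the standard textbook definition of logarithmic homogeneity rather than the paper's specific construction.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Regret against the best fixed point in hindsight (standard BCO definition):
\[
  \mathrm{Reg}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x).
\]
% A \nu-logarithmically homogeneous self-concordant barrier F on the interior
% of a cone K' (the regularizer class named in the abstract, used after
% lifting the feasible domain into the cone) satisfies, by definition,
\[
  F(\tau x) \;=\; F(x) \;-\; \nu \ln \tau
  \qquad \text{for all } x \in \operatorname{int}\mathcal{K}',\ \tau > 0.
\]
\end{document}
```

The homogeneity identity is what makes such barriers compatible with the lifted (conic) domain: rescaling a point along the lifting direction shifts the regularizer only by a constant, which is the structural property the extension to bandit feedback exploits.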