On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization

09/11/2012
by Ohad Shamir, et al.

The problem of stochastic convex optimization with bandit feedback (in the learning community) or without knowledge of gradients (in the optimization community) has received much attention in recent years, in the form of algorithms and performance upper bounds. However, much less is known about the inherent complexity of these problems, and there are few lower bounds in the literature, especially for nonlinear functions. In this paper, we investigate the attainable error/regret in the bandit and derivative-free settings, as a function of the dimension d and the available number of queries T. We provide a precise characterization of the attainable performance for strongly convex and smooth functions, which also implies a non-trivial lower bound for more general problems. Moreover, we prove that in both the bandit and derivative-free settings, the required number of queries must scale at least quadratically with the dimension. Finally, we show that on the natural class of quadratic functions, it is possible to obtain a "fast" O(1/T) error rate in terms of T, under mild assumptions, even without having access to gradients. To the best of our knowledge, this is the first such rate in a derivative-free stochastic setting, and it holds despite previous results which seem to imply the contrary.
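To make the derivative-free setting concrete, here is a minimal sketch (not the paper's algorithm) of stochastic gradient descent driven by a two-point finite-difference gradient estimate on a noisy quadratic; the objective noisy_f, the smoothing schedule delta, and the step sizes eta are illustrative assumptions rather than anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 10_000

# Illustrative strongly convex quadratic f(x) = 0.5 * ||x - x_star||^2;
# we only observe noisy function values, never gradients (hypothetical setup).
x_star = rng.normal(size=d)

def noisy_f(x):
    return 0.5 * np.sum((x - x_star) ** 2) + 0.1 * rng.normal()

x = np.zeros(d)
for t in range(1, T + 1):
    delta = t ** -0.25          # smoothing radius (assumed schedule)
    eta = 1.0 / t               # 1/t step size, suited to strong convexity
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)      # random direction on the unit sphere
    # Two-point finite-difference gradient estimate along u:
    g = (d / (2 * delta)) * (noisy_f(x + delta * u) - noisy_f(x - delta * u)) * u
    x -= eta * g

print("squared distance to optimum:", np.sum((x - x_star) ** 2))
```

Note that with noisy function evaluations the variance of such an estimator grows as 1/delta^2 when the smoothing radius shrinks, which is one reason a fast O(1/T) rate without gradients, as the paper obtains for quadratic functions, is nontrivial.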


Related research

04/08/2018 · An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization
We consider smooth stochastic convex optimization problems in the contex...

05/18/2018 · Projection-Free Bandit Convex Optimization
In this paper, we propose the first computationally efficient projection...

06/05/2023 · Curvature and complexity: Better lower bounds for geodesically convex optimization
We study the query complexity of geodesically convex (g-convex) optimiza...

02/25/2018 · An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
We consider an unconstrained problem of minimization of a smooth convex ...

12/03/2019 · Online and Bandit Algorithms for Nonstationary Stochastic Saddle-Point Optimization
Saddle-point optimization problems are an important class of optimizatio...

07/12/2012 · Optimal rates for first-order stochastic convex optimization under Tsybakov noise condition
We focus on the problem of minimizing a convex function f over a convex ...

09/11/2012 · Query Complexity of Derivative-Free Optimization
This paper provides lower bounds on the convergence rate of Derivative F...
