Optimal rates for first-order stochastic convex optimization under Tsybakov noise condition

07/12/2012
by   Aaditya Ramdas, et al.

We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first-order oracle. We argue that the complexity of convex minimization is determined only by the rate of growth of the function around its minimizer x^*_{f,S}, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if f grows at least as fast as ‖x − x^*_{f,S}‖^κ around its minimum, for some κ > 1, then the optimal rate of learning f(x^*_{f,S}) is Θ(T^{−κ/(2κ−2)}). The classic rate Θ(1/√T) for convex functions and Θ(1/T) for strongly convex functions are special cases of our result for κ → ∞ and κ = 2, and even faster rates are attained for κ < 2. We also derive tight bounds for the complexity of learning x^*_{f,S}, where the optimal rate is Θ(T^{−1/(2κ−2)}). Interestingly, these precise rates for convex optimization also characterize the complexity of active learning, and our results further strengthen the connections between the two fields, both of which rely on feedback-driven queries.
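A minimal LaTeX restatement of the growth condition and rates above (here λ > 0 is a generic growth constant not named in the abstract, and x_T denotes the algorithm's output after T oracle queries):

    % Tsybakov-like growth condition around the minimizer x^*_{f,S}
    f(x) - f(x^*_{f,S}) \ge \lambda \, \| x - x^*_{f,S} \|^{\kappa}
        \quad \text{for all } x \in S, \; \kappa > 1,
    % resulting optimal rates after T queries to the stochastic first-order oracle
    f(x_T) - f(x^*_{f,S}) = \Theta\big( T^{-\kappa/(2\kappa - 2)} \big),
        \qquad
    \| x_T - x^*_{f,S} \| = \Theta\big( T^{-1/(2\kappa - 2)} \big).

As a quick check of the special cases: κ = 2 gives exponent −2/2 = −1 (the strongly convex rate 1/T), κ → ∞ gives exponent −1/2 (the classic 1/√T rate for general convex functions), and 1 < κ < 2 yields rates faster than 1/T.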


