Lower Bounds for Non-Convex Stochastic Optimization

12/05/2019
by Yossi Arjevani, et al.

We lower bound the complexity of finding ϵ-stationary points (points with gradient norm at most ϵ) using stochastic first-order methods. In a well-studied model where algorithms access smooth, potentially non-convex functions through queries to an unbiased stochastic gradient oracle with bounded variance, we prove that (in the worst case) any algorithm requires at least ϵ^-4 queries to find an ϵ-stationary point. The lower bound is tight and establishes that stochastic gradient descent is minimax optimal in this model. In a more restrictive model where the noisy gradient estimates satisfy a mean-squared smoothness property, we prove a lower bound of ϵ^-3 queries, establishing the optimality of recently proposed variance reduction techniques.
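To make the oracle model concrete, here is a minimal Python sketch of SGD querying an unbiased stochastic gradient oracle with bounded variance and stopping at an ϵ-stationary point. The objective grad_f, the noise level sigma, and the step size are illustrative assumptions, not taken from the paper; a real algorithm also sees only oracle answers, so the true-gradient stopping check is for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # True gradient of a toy smooth non-convex objective
    # f(x) = sum_i x_i^2 / (1 + x_i^2) (an assumption for illustration).
    return 2 * x / (1 + x ** 2) ** 2

def oracle(x, sigma=0.1):
    # Unbiased stochastic gradient: E[g] = grad f(x), with
    # E||g - grad f(x)||^2 = sigma^2 * dim(x) (bounded variance).
    return grad_f(x) + sigma * rng.standard_normal(x.shape)

def sgd(x0, eps=0.1, step=0.01, max_queries=10**6):
    x = x0.copy()
    for queries in range(max_queries):
        # eps-stationarity check uses the true gradient for demonstration;
        # the algorithm itself only consumes oracle answers.
        if np.linalg.norm(grad_f(x)) <= eps:
            return x, queries
        x = x - step * oracle(x)
    return x, max_queries

x, queries = sgd(rng.standard_normal(10))
print(f"gradient norm {np.linalg.norm(grad_f(x)):.3f} after {queries} oracle queries")
```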
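The ϵ^-3 regime additionally lets the algorithm query the same random sample at two nearby points, where mean-squared smoothness keeps the two answers close. The sketch below illustrates a recursive variance-reduction update in the style of STORM/SPIDER under that assumption; the momentum parameter beta and all constants are hypothetical choices for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_f(x):
    # Same toy objective as in the SGD sketch above.
    return 2 * x / (1 + x ** 2) ** 2

def oracle(x, z, sigma=0.1):
    # g(x; z) is unbiased over z ~ N(0, I) and, for fixed z, Lipschitz in x,
    # so ||g(x; z) - g(y; z)|| <= L ||x - y||: mean-squared smoothness.
    return grad_f(x) + sigma * z

def variance_reduced(x0, eps=0.1, step=0.01, beta=0.9, max_queries=10**6):
    x = x0.copy()
    d = oracle(x, rng.standard_normal(x.shape))  # initial gradient estimate
    queries = 1
    while queries < max_queries:
        if np.linalg.norm(grad_f(x)) <= eps:     # demonstration-only check
            return x, queries
        x_prev, x = x, x - step * d
        z = rng.standard_normal(x.shape)
        # Both calls share the sample z, so the correction term
        # oracle(x, z) - oracle(x_prev, z) is small by smoothness and the
        # estimator's accumulated variance shrinks over the run.
        d = oracle(x, z) + beta * (d - oracle(x_prev, z))
        queries += 2
    return x, max_queries

x, queries = variance_reduced(rng.standard_normal(10))
print(f"gradient norm {np.linalg.norm(grad_f(x)):.3f} after {queries} oracle queries")
```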
