SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator

07/04/2018
by   Cong Fang, et al.

In this paper, we propose a new technique named Stochastic Path-Integrated Differential EstimatoR (SPIDER), which can be used to track many deterministic quantities of interest with significantly reduced computational cost. Combining SPIDER with the method of normalized gradient descent, we propose two new algorithms, namely SPIDER-SFO and SPIDER-SSO, that solve non-convex stochastic optimization problems using stochastic gradients only. We provide sharp error-bound results on their convergence rates. Specifically, we prove that the SPIDER-SFO and SPIDER-SSO algorithms achieve a record-breaking Õ(ϵ^-3) gradient computation cost to find an ϵ-approximate first-order and an (ϵ, O(ϵ^0.5))-approximate second-order stationary point, respectively. In addition, we prove that SPIDER-SFO nearly matches the algorithmic lower bound for finding a stationary point under the gradient Lipschitz assumption in the finite-sum setting.
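To make the idea concrete, below is a minimal Python sketch of the SPIDER estimator combined with a normalized gradient step in the finite-sum setting. It is an illustrative reading of the abstract, not the paper's exact algorithm or constants: the helper grad_on_batch, the batch sizes, the epoch length q, and the step size eta are all assumptions introduced for the example.

```python
import numpy as np

def spider_sfo(x0, grad_on_batch, n_samples, n_iters=1000, q=50,
               big_batch=4096, small_batch=64, eta=1e-3, eps=1e-3, seed=0):
    """Minimal SPIDER-SFO-style sketch (finite-sum setting).

    grad_on_batch(x, idx) is a hypothetical helper returning the average
    gradient of the component functions indexed by `idx`, evaluated at x.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    x_prev, v = x.copy(), None
    for k in range(n_iters):
        if k % q == 0:
            # Periodically refresh the estimator with a large batch: v_k ≈ ∇f(x_k).
            idx = rng.choice(n_samples, size=min(big_batch, n_samples), replace=False)
            v = grad_on_batch(x, idx)
        else:
            # Path-integrated update, reusing the SAME small batch at both points:
            # v_k = ∇f_S(x_k) − ∇f_S(x_{k−1}) + v_{k−1}.
            idx = rng.choice(n_samples, size=small_batch, replace=True)
            v = grad_on_batch(x, idx) - grad_on_batch(x_prev, idx) + v
        # Normalized gradient descent step: move a fixed distance eta along −v.
        # (In the paper the step length is tied to the target accuracy ϵ.)
        x_prev, x = x, x - eta * v / (np.linalg.norm(v) + 1e-12)
        if np.linalg.norm(v) <= 2 * eps:  # heuristic stopping check for the sketch
            return x
    return x
```

Reusing the same small batch at x_k and x_{k-1} is the key point: by gradient Lipschitzness the per-step correction has small variance, so the accumulated (path-integrated) estimator stays accurate between the periodic large-batch refreshes.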


