Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization

by Rong Ge et al.
Duke University
Tsinghua University

Variance reduction techniques such as SVRG provide simple and fast algorithms for optimizing a convex finite-sum objective. For nonconvex objectives, these techniques can also find a first-order stationary point (with small gradient). However, in nonconvex optimization it is often crucial to find a second-order stationary point (with small gradient and almost PSD Hessian). In this paper, we show that Stabilized SVRG (a simple variant of SVRG) can find an ϵ-second-order stationary point using only O(n^(2/3)/ϵ^2 + n/ϵ^(1.5)) stochastic gradients. To the best of our knowledge, this is the first second-order guarantee for a simple variant of SVRG. The running time almost matches the known guarantees for finding ϵ-first-order stationary points.
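To make the variance-reduction idea concrete, here is a minimal sketch of plain SVRG for a finite-sum objective f(x) = (1/n) Σᵢ fᵢ(x). This illustrates only the basic SVRG estimator, not the paper's Stabilized SVRG variant or its saddle-point-escaping analysis; the function names (`grad_i`, `svrg`), the step size `eta`, and the epoch length `m` are all assumptions chosen for illustration.

```python
import numpy as np

def svrg(grad_i, x0, n, eta=0.1, epochs=20, m=None):
    """Minimal SVRG sketch (illustrative, not the paper's stabilized variant).

    grad_i(i, x) -- gradient of the i-th component f_i at x.
    """
    m = m or n                      # inner-loop length, one choice per epoch
    x_snapshot = x0.copy()
    for _ in range(epochs):
        # Full gradient at the snapshot, computed once per epoch.
        full_grad = np.mean([grad_i(i, x_snapshot) for i in range(n)], axis=0)
        x = x_snapshot.copy()
        for _ in range(m):
            i = np.random.randint(n)
            # Variance-reduced stochastic gradient: unbiased, and its
            # variance shrinks as x approaches the snapshot point.
            v = grad_i(i, x) - grad_i(i, x_snapshot) + full_grad
            x = x - eta * v
        x_snapshot = x
    return x_snapshot
```

For example, with fᵢ(x) = ½(x − aᵢ)², the minimizer is the mean of the aᵢ, and the variance-reduced steps converge to it without the step-size decay that plain SGD would need.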


