Stochastic Cubic Regularization for Fast Nonconvex Optimization

11/08/2017
by Nilesh Tripuraneni, et al.

This paper proposes a stochastic variant of a classic algorithm, the cubic-regularized Newton method [Nesterov and Polyak 2006]. The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only Õ(ϵ^-3.5) stochastic gradient and stochastic Hessian-vector product evaluations; the latter can be computed as efficiently as stochastic gradients. This improves upon the Õ(ϵ^-4) rate of stochastic gradient descent and matches the best-known rate for finding local minima, without requiring any delicate acceleration or variance-reduction techniques.
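As a rough illustration of the kind of update the abstract describes, the sketch below performs a cubic-regularized Newton step in which the stochastic Hessian is accessed only through Hessian-vector products and the cubic subproblem is minimized by plain gradient descent. The oracle names (grad_oracle, hvp_oracle), step sizes, and iteration counts are illustrative assumptions, not the paper's exact procedure, which additionally specifies minibatch sizes, the regularization parameter, and termination conditions.

```python
import numpy as np

def cubic_subproblem_gd(g, hvp, rho, lr=0.01, iters=100):
    """Approximately minimize the cubic-regularized model
        m(delta) = g^T delta + 0.5 * delta^T B delta + (rho / 6) * ||delta||^3
    by gradient descent, accessing the (stochastic) Hessian B only through
    the Hessian-vector product oracle hvp(delta) = B @ delta."""
    delta = np.zeros_like(g)
    for _ in range(iters):
        # Gradient of the cubic model: g + B*delta + (rho/2)*||delta||*delta.
        grad_m = g + hvp(delta) + 0.5 * rho * np.linalg.norm(delta) * delta
        delta = delta - lr * grad_m
    return delta

def stochastic_cubic_newton(grad_oracle, hvp_oracle, x0, rho=1.0,
                            outer_iters=50, lr=0.01):
    """Illustrative outer loop (hypothetical interface): at each iterate,
    query a stochastic gradient and a stochastic Hessian-vector product
    oracle, solve the cubic subproblem approximately, and take the step."""
    x = np.array(x0, dtype=float)
    for _ in range(outer_iters):
        g = grad_oracle(x)                     # stochastic gradient estimate
        hvp = lambda v, x=x: hvp_oracle(x, v)  # stochastic Hessian-vector product
        x = x + cubic_subproblem_gd(g, hvp, rho, lr=lr)
    return x
```

For a quick sanity check, grad_oracle and hvp_oracle could be instantiated with the exact gradient and Hessian-vector product of a small nonconvex test function, since the structure of the step does not depend on how the oracles are sampled.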


Related research

01/31/2019  Stochastic Recursive Variance-Reduced Cubic Regularization Methods
Stochastic Variance-Reduced Cubic regularization (SVRC) algorithms have ...

08/29/2017  Natasha 2: Faster Non-Convex Optimization Than SGD
We design a stochastic algorithm to train any smooth neural network to ε...

09/28/2020  Escaping Saddle-Points Faster under Interpolation-like Conditions
In this paper, we show that under over-parametrization several standard ...

10/25/2021  Faster Perturbed Stochastic Gradient Methods for Finding Local Minima
Escaping from saddle points and finding local minima is a central proble...

02/23/2023  Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods
We study the widely known Cubic-Newton method in the stochastic setting ...

05/22/2018  Cutting plane methods can be extended into nonconvex optimization
We show that it is possible to obtain an O(ϵ^-4/3) runtime --- including...

11/29/2018  Sample Efficient Stochastic Variance-Reduced Cubic Regularization Method
We propose a sample efficient stochastic variance-reduced cubic regulari...
