How to Escape Saddle Points Efficiently

03/02/2017
by Chi Jin, et al.

This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations that depends only poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.
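To make the idea concrete, here is a minimal sketch of a perturbed gradient descent loop: run ordinary gradient steps, and whenever the gradient norm is small (so the iterate may be stuck near a saddle), add a small random perturbation drawn from a ball. The function name, step size, thresholds, and perturbation radius below are illustrative placeholders, not the theoretically tuned values from the paper.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.01, g_thresh=1e-3,
                               r=1e-2, t_thresh=50, max_iter=10_000):
    """Illustrative sketch of perturbed gradient descent.

    grad      : callable returning the gradient of the objective at x
    eta       : step size
    g_thresh  : gradient-norm threshold that triggers a perturbation
    r         : radius of the perturbation ball
    t_thresh  : minimum number of iterations between perturbations
    (names and defaults are assumptions for illustration only)
    """
    x = np.asarray(x0, dtype=float)
    last_perturb = -t_thresh  # allow a perturbation right away
    for t in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_perturb >= t_thresh:
            # Near a first-order stationary point: sample a point uniformly
            # from a ball of radius r and nudge the iterate off the saddle.
            xi = np.random.randn(*x.shape)
            xi *= r * np.random.rand() ** (1.0 / x.size) / np.linalg.norm(xi)
            x = x + xi
            last_perturb = t
        else:
            x = x - eta * g  # standard gradient step
    return x
```

For example, on f(x1, x2) = (x1^2 - 1)^2 + x2^2, whose origin is a strict saddle, the perturbation lets the iterate slide off the unstable x1 = 0 direction and converge toward one of the two local minima, whereas plain gradient descent started exactly at the origin would stay put.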

