DeepAI

On the Convergence of Perturbed Distributed Asynchronous Stochastic Gradient Descent to Second Order Stationary Points in Non-convex Optimization

10/14/2019
by   Lifu Wang, et al.

In this paper, the second-order convergence of the asynchronous stochastic gradient descent (ASGD) algorithm for non-convex optimization is studied systematically. We investigate the behavior of ASGD near and away from saddle points and show that, unlike general stochastic gradient descent (SGD), ASGD may return to a saddle point after escaping it; nevertheless, after staying near a saddle point for long enough (O(T) time), ASGD eventually moves away from strict saddle points. An inequality is given to describe the process by which ASGD escapes saddle points. We show the exponential instability of the perturbed gradient dynamics near strict saddle points and use a novel Razumikhin–Lyapunov method to give a more detailed estimate of how the time-delay parameter T influences the speed of escape. In particular, we consider the optimization of smooth non-convex functions and propose a perturbed asynchronous stochastic gradient descent algorithm that is guaranteed to converge to second-order stationary points with high probability in O(1/ϵ^4) iterations. To the best of our knowledge, this is the first work on the second-order convergence of an asynchronous algorithm.
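The abstract's mechanism — gradients computed at stale iterates (delay up to T) plus an isotropic perturbation injected near stationary points so the iterate escapes strict saddles — can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the function names, step size, perturbation radius, and threshold below are illustrative assumptions.

```python
import numpy as np

def perturbed_async_sgd(grad, x0, eta=0.05, T=3, r=0.01,
                        g_thresh=0.1, n_iters=500, seed=0):
    """Sketch of perturbed asynchronous SGD (illustrative, not the
    paper's algorithm): each step uses a gradient evaluated at a stale
    iterate with random delay in {0, ..., T}; when that gradient is
    small (near a stationary point), isotropic Gaussian noise of scale
    r is added so the dynamics can leave a strict saddle point."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    history = [x.copy() for _ in range(T + 1)]   # last T+1 iterates
    for _ in range(n_iters):
        tau = rng.integers(0, T + 1)             # random staleness
        g = grad(history[-1 - tau])              # delayed gradient
        if np.linalg.norm(g) <= g_thresh:        # near a stationary point
            g = g + r * rng.standard_normal(x.shape)  # perturb to escape
        x = x - eta * g
        history.append(x.copy())
        history.pop(0)
    return x

# Example: f(x, y) = x^2 - y^2 has a strict saddle at the origin;
# the perturbation pushes the iterate onto the unstable y-direction,
# along which the (delayed) gradient dynamics are exponentially unstable.
saddle_grad = lambda v: np.array([2 * v[0], -2 * v[1]])
x_final = perturbed_async_sgd(saddle_grad, [0.0, 0.0])
```

Started exactly at the saddle, plain (A)SGD would never move; with the perturbation, the iterate drifts into the negative-curvature direction and then leaves the saddle exponentially fast, delays notwithstanding — the qualitative behavior the abstract analyzes.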

