On the Convergence of Perturbed Distributed Asynchronous Stochastic Gradient Descent to Second Order Stationary Points in Non-convex Optimization

10/14/2019
by Lifu Wang, et al.

In this paper, we systematically study the second-order convergence of the asynchronous stochastic gradient descent (ASGD) algorithm for non-convex optimization. We investigate the behavior of ASGD near and away from saddle points and show that, unlike standard stochastic gradient descent (SGD), ASGD may return to a saddle point after escaping it; nevertheless, after staying near a saddle point for a sufficiently long time (O(T)), ASGD eventually moves away from strict saddle points. We give an inequality that describes how ASGD escapes from saddle points. We establish the exponential instability of the perturbed gradient dynamics near strict saddle points and use a novel Razumikhin-Lyapunov method to give a more detailed estimate of how the time-delay parameter T influences the speed of escape. In particular, for smooth non-convex functions, we propose a perturbed asynchronous stochastic gradient descent algorithm that is guaranteed to converge to second-order stationary points with high probability in O(1/ϵ^4) iterations. To the best of our knowledge, this is the first work on the second-order convergence of an asynchronous algorithm.
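The sketch below illustrates, under stated assumptions, the kind of perturbed delayed-gradient update the abstract describes: asynchrony is simulated on a single worker by applying gradients that are up to tau steps stale, and a small random perturbation is injected when the stale gradient is small, so the iterate can leave strict saddle points. The function name and parameters (perturbed_delayed_sgd, tau, r, g_thresh) are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def perturbed_delayed_sgd(grad_fn, x0, eta=1e-2, tau=4, r=1e-3,
                          g_thresh=1e-2, n_iters=10_000, seed=0):
    """SGD with gradients up to `tau` steps stale (simulated asynchrony),
    plus a small uniform perturbation whenever the stale gradient is small,
    to help the iterate leave strict saddle points.  Hypothetical sketch."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    history = [x.copy()]                              # past iterates, read for stale gradients
    for _ in range(n_iters):
        delay = rng.integers(0, tau + 1)              # random staleness in [0, tau]
        stale_x = history[max(0, len(history) - 1 - delay)]
        g = grad_fn(stale_x)                          # delayed (stochastic) gradient
        if np.linalg.norm(g) <= g_thresh:             # near a stationary point
            g = g + rng.uniform(-r, r, size=x.shape)  # inject a small perturbation
        x = x - eta * g
        history.append(x.copy())
    return x

# Toy strict-saddle objective f(x, y) = x^2 - y^2; the iterate should leave (0, 0).
if __name__ == "__main__":
    grad = lambda z: np.array([2.0 * z[0], -2.0 * z[1]])
    print(perturbed_delayed_sgd(grad, x0=[0.1, 1e-6]))
```

The delay is drawn uniformly here only to keep the sketch self-contained; in a real asynchronous deployment the staleness is determined by worker scheduling rather than sampled.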
