Efficiently avoiding saddle points with zero order methods: No gradients required

10/29/2019
by Lampros Flokas, et al.

We consider derivative-free algorithms for non-convex optimization, also known as zero order algorithms, which use only function evaluations rather than gradients. For a wide variety of gradient approximators based on finite differences, we establish asymptotic convergence to second order stationary points via a carefully tailored application of the Stable Manifold Theorem. Regarding efficiency, we introduce a noisy zero order method that converges to second order stationary points, i.e., it avoids saddle points. Our algorithm uses only Õ(1/ϵ²) approximate gradient computations and thus matches the convergence rate guarantees of its exact-gradient counterparts up to constants. In contrast to previous work, our convergence rate analysis avoids imposing additional dimension-dependent slowdowns on the number of iterations required for non-convex zero order optimization.
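To make the setting concrete, here is a minimal sketch of the two ingredients the abstract describes: a finite-difference gradient approximator and gradient descent on those approximate gradients with small isotropic noise added, which helps the iterates escape strict saddle points. This is an illustration of the general idea only, not the paper's algorithm; the step size, noise scale, difference parameter, and the test function are all assumptions chosen for the demo.

```python
import numpy as np

def fd_gradient(f, x, h=1e-5):
    """Central finite-difference gradient approximation.

    Uses 2*d function evaluations for a d-dimensional input,
    never touching the true gradient of f.
    """
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def noisy_zero_order_descent(f, x0, step=0.1, noise=1e-3,
                             iters=500, seed=0):
    """Descent on finite-difference gradients plus isotropic noise.

    The perturbation keeps the iterates from stalling on the
    unstable manifold of a strict saddle point.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = fd_gradient(f, x)
        x = x - step * (g + noise * rng.standard_normal(x.size))
    return x

# f(z) = z0^2 + (z1^2 - 1)^2 has a strict saddle at the origin
# (Hessian eigenvalues 2 and -4) and minima at (0, +1) and (0, -1).
# Starting on the unstable axis at (1, 0), exact gradient descent
# would converge to the saddle; the noisy zero order iterates
# instead drift off the axis toward a minimum.
f = lambda z: z[0]**2 + (z[1]**2 - 1.0)**2
x_final = noisy_zero_order_descent(f, np.array([1.0, 0.0]))
```

After 500 iterations the iterate sits near one of the two minima rather than the saddle, even though the method only ever evaluated `f` itself.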

Related research

- How to Escape Saddle Points Efficiently (03/02/2017): This paper shows that a perturbed form of gradient descent converges to …
- Escaping Saddle Points for Zeroth-order Nonconvex Optimization using Estimated Gradient Descent (10/03/2019): Gradient descent and its variants are widely used in machine learning. …
- Second-order Guarantees of Distributed Gradient Algorithms (09/23/2018): We consider distributed smooth nonconvex unconstrained optimization over …
- Distributed Learning in Non-Convex Environments – Part I: Agreement at a Linear Rate (07/03/2019): Driven by the need to solve increasingly complex optimization problems …
- SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems (07/09/2019): This paper proposes low-complexity algorithms for finding approximate …
- Global Convergence of Triangularized Orthogonalization-free Method (10/12/2021): This paper proves the global convergence of a triangularized …
- First-order Methods Almost Always Avoid Saddle Points (10/20/2017): We establish that first-order methods avoid saddle points for almost all …
