Inertial Newton Algorithms Avoiding Strict Saddle Points

11/08/2021
by Camille Castera, et al.

We study the asymptotic behavior of second-order algorithms that mix Newton's method with inertial gradient descent in non-convex landscapes. We show that, despite their Newtonian behavior, these methods almost always escape strict saddle points. We also highlight the role played by the hyper-parameters of these methods in their qualitative behavior near critical points. The theoretical results are supported by numerical illustrations.
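To give a concrete sense of the kind of method analyzed, the sketch below implements one plausible INNA-style update: an explicit Euler discretization of the inertial Newton dynamics theta'' + alpha*theta' + beta*Hess f(theta)*theta' + grad f(theta) = 0, rewritten as a first-order system so that only gradients are needed. The toy landscape, the initialization, and the hyper-parameter values (alpha, beta, gamma) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def grad_f(theta):
    # Toy non-convex landscape f(x, y) = (x^2 - 1)^2 + y^2,
    # with a strict saddle at (0, 0) and minimizers at (+/-1, 0).
    x, y = theta
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def inna_step(theta, psi, alpha, beta, gamma):
    """One explicit-Euler step of an inertial Newton-type dynamics,
    written as a first-order system in (theta, psi) so that no Hessian
    evaluation is needed (only gradients). Illustrative sketch only."""
    common = (1.0 / beta - alpha) * theta - (1.0 / beta) * psi
    theta_next = theta + gamma * (common - beta * grad_f(theta))
    psi_next = psi + gamma * common
    return theta_next, psi_next

# Hyper-parameters: alpha (viscous damping), beta (Newtonian damping), gamma (step size).
# These values are assumptions chosen for the toy example.
alpha, beta, gamma = 0.5, 0.1, 0.01
theta = np.array([1e-3, 1.0])   # start near the strict saddle at the origin
psi = theta.copy()
for _ in range(20000):
    theta, psi = inna_step(theta, psi, alpha, beta, gamma)
print(theta)  # typically ends near a minimizer (+/-1, 0) rather than the saddle (0, 0)
```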


Related research:
09/23/2018 - Second-order Guarantees of Distributed Gradient Algorithms
10/18/2018 - First-order and second-order variants of the gradient descent: a unified framework
08/25/2017 - Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study
01/16/2019 - DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization
06/01/2020 - Exit Time Analysis for Approximations of Gradient Descent Trajectories Around Saddle Points
01/29/2019 - Numerically Recovering the Critical Points of a Deep Linear Autoencoder
06/25/2020 - Newton-type Methods for Minimax Optimization
