New Q-Newton's method meets Backtracking line search: good convergence guarantee, saddle points avoidance, quadratic rate of convergence, and easy implementation

08/23/2021
by Tuyen Trung Truong, et al.

In recent joint work, the author developed a modification of Newton's method, named New Q-Newton's method, which can avoid saddle points and has a quadratic rate of convergence. While a good theoretical convergence guarantee had not been established for that method, experiments on small-scale problems showed that it is very competitive against other well-known modifications of Newton's method, such as Adaptive Cubic Regularization and BFGS, as well as against first-order methods such as Unbounded Two-way Backtracking Gradient Descent. In this paper, we resolve the convergence-guarantee issue by proposing a modification, named New Q-Newton's method Backtracking, which incorporates a more sophisticated use of hyperparameters and a Backtracking line search. The new method has very good theoretical guarantees; for a Morse function it yields the following result, which is unknown for New Q-Newton's method:

Theorem. Let f: ℝ^m → ℝ be a Morse function, that is, all its critical points have invertible Hessian. Then for a sequence {x_n} constructed by New Q-Newton's method Backtracking from a random initial point x_0, one of the following two alternatives holds: i) lim_{n→∞} ||x_n|| = ∞, or ii) {x_n} converges to a point x_∞ which is a local minimum of f, and the rate of convergence is quadratic. Moreover, if f has compact sublevel sets, then only case ii) happens.

As far as we know, for Morse functions this is the best theoretical guarantee for an iterative optimization algorithm in the literature so far. We have tested some further simplified versions of New Q-Newton's method Backtracking in small-scale experiments and found that the new method significantly improves on New Q-Newton's method.
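To make the construction concrete, below is a minimal sketch in Python/NumPy of a single step of New Q-Newton's method Backtracking, assuming the construction from the earlier New Q-Newton's method paper: shift the Hessian by δ||∇f(x)||^(1+α) Id (trying δ from a fixed list until the shifted matrix is invertible), reflect the Newton direction on the negative eigenspace of the shifted Hessian, and choose the step size by Armijo Backtracking line search. This is not the authors' reference code; the function name and all hyperparameter values (the δ list, α, and the Armijo constants) are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch of one step of New Q-Newton's method Backtracking.
# Assumptions (not the paper's exact choices): delta list, alpha,
# and the Armijo constants below are illustrative.
import numpy as np

def new_q_newton_backtracking_step(f, grad, hess, x,
                                   deltas=(0.0, 1.0, 2.0), alpha=1.0,
                                   armijo_c=1e-4, shrink=0.5):
    H = hess(x)                       # (m, m) symmetric Hessian
    g = grad(x)                       # (m,) gradient
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:                  # already at a critical point
        return x
    # Try deltas in order until the shifted Hessian is invertible.
    for delta in deltas:
        A = H + delta * gnorm ** (1.0 + alpha) * np.eye(len(x))
        if np.abs(np.linalg.eigvalsh(A)).min() > 1e-12:
            break
    # Reflect the Newton direction on the negative eigenspace of A;
    # spectrally this is w = |A|^{-1} g, which is a descent direction.
    evals, evecs = np.linalg.eigh(A)
    w = evecs @ ((evecs.T @ g) / np.abs(evals))
    # Armijo Backtracking line search along -w.
    t, fx, gw = 1.0, f(x), float(np.dot(g, w))
    while f(x - t * w) > fx - armijo_c * t * gw and t > 1e-12:
        t *= shrink
    return x - t * w

# Example: f(x, y) = (x^2 - 1)^2 + y^2 has a saddle at (0, 0) and
# local minima at (+-1, 0); the iteration escapes the saddle region.
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2
grad = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])
hess = lambda x: np.array([[12.0 * x[0] ** 2 - 4.0, 0.0], [0.0, 2.0]])
x = np.array([0.1, 0.5])
for _ in range(20):
    x = new_q_newton_backtracking_step(f, grad, hess, x)
# x should now be close to a local minimum, (1, 0) or (-1, 0).
```

Near a local minimum the shift vanishes with the gradient and no eigenvalue is reflected, so the step reduces to the classical Newton step with unit step length accepted by the line search; this is what makes a quadratic rate of convergence plausible in case ii) of the theorem.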

