Newton-MR: Newton's Method Without Smoothness or Convexity

09/30/2018
by   Fred Roosta, et al.

Establishing global convergence of the classical Newton's method has long required (strong) convexity assumptions, which has limited the method's range of application in its classical form. Hence, many Newton-type variants have been proposed that aim to extend the classical Newton's method beyond (strongly) convex problems. As a common denominator, however, the analysis of almost all of these methods relies heavily on Lipschitz continuity assumptions on the gradient and Hessian. In fact, it is widely believed that, in the absence of a well-behaved and continuous Hessian, exploiting curvature can hurt more than it can help. Here, we show that two seemingly simple modifications of the classical Newton's method result in an algorithm, called Newton-MR, which can readily be applied to invex problems. Newton-MR appears almost indistinguishable from the classical Newton's method, yet it offers a diverse range of algorithmic and theoretical advantages. In particular, not only does Newton-MR's application extend far beyond convexity, it is also more suitable than the classical Newton's method for (strongly) convex problems. Furthermore, by introducing a much weaker notion of joint regularity of the Hessian and gradient, we show that global convergence of Newton-MR can be established even in the absence of continuity assumptions on the gradient and/or Hessian. We further obtain local convergence guarantees for Newton-MR and show that our local analysis indeed generalizes that of the classical Newton's method. Specifically, our analysis does not make use of the notion of an isolated minimum, which is required for the local convergence analysis of the classical Newton's method.

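The abstract does not spell out the two modifications, but in broad strokes they amount to (i) replacing the exact Newton solve with a minimum-residual (pseudo-inverse) solve of the Newton system, and (ii) performing the line search on the gradient norm rather than on the objective value. The following is a minimal, hypothetical Python sketch under those assumptions; it uses SciPy's plain MINRES as a stand-in for the solver in the paper, and all function names, tolerances, and the toy problem at the end are illustrative rather than taken from the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def newton_mr(grad, hess_vec, x0, max_iters=100, grad_tol=1e-8):
    """Sketch of a Newton-MR-style iteration.

    grad(x)        -> gradient vector at x
    hess_vec(x, v) -> Hessian-vector product H(x) @ v
    """
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= grad_tol:
            break
        # Modification 1: a minimum-residual direction. MINRES tolerates
        # symmetric indefinite (even singular) Hessians, returning an
        # approximate least-squares solution of H p = -g, in contrast to
        # CG, which requires positive definiteness.
        H = LinearOperator((n, n), matvec=lambda v, x=x: hess_vec(x, v))
        p, _ = minres(H, -g)
        # Modification 2: backtracking line search on ||grad||^2,
        # not on the objective value.
        alpha, g2 = 1.0, g @ g
        while alpha > 1e-12:
            g_new = grad(x + alpha * p)
            if g_new @ g_new <= (1.0 - 1e-4 * alpha) * g2:
                break
            alpha *= 0.5
        x = x + alpha * p
    return x

# Toy usage: f(x) = 0.25 * sum(x_i^4), whose Hessian is singular at the
# minimizer, so it is convex but not strongly convex.
g = lambda x: x**3
hv = lambda x, v: 3.0 * x**2 * v
x_star = newton_mr(g, hv, x0=np.array([2.0, -1.5, 0.5]))
```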
