Non-asymptotic Superlinear Convergence of Standard Quasi-Newton Methods

03/30/2020
by Qiujiang Jin et al.

In this paper, we study the non-asymptotic superlinear convergence rate of DFP and BFGS, two well-known quasi-Newton methods. The asymptotic superlinear convergence of these methods has been studied extensively, but an explicit finite-time local convergence rate has not yet been established. We provide a finite-time (non-asymptotic) convergence analysis for BFGS and DFP under the assumptions that the objective function is strongly convex, its gradient is Lipschitz continuous, and its Hessian is Lipschitz continuous only in the direction of the optimal solution. We show that, in a local neighborhood of the optimal solution, the iterates generated by both DFP and BFGS converge to the optimal solution at a superlinear rate of O((1/k)^(k/2)), where k is the number of iterations. In particular, for a specific choice of the local neighborhood, both DFP and BFGS converge to the optimal solution at the rate (0.85/k)^(k/2). Our theoretical guarantee is among the first results that provide a non-asymptotic superlinear convergence rate for the DFP and BFGS quasi-Newton methods.
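The abstract concerns the standard BFGS iteration taken with unit step size in a neighborhood of the solution. As a rough illustration (not the paper's analysis or code), the following Python sketch runs unit-step BFGS, in its inverse-Hessian form, on an assumed strongly convex test function f(x) = 0.5 x^T A x + sum_i log cosh(x_i), whose minimizer is x* = 0. The matrix A, the dimension, the starting point, and the initial inverse-Hessian approximation are all illustrative assumptions; the printed errors should shrink at a faster-than-linear rate, which is the qualitative local behavior the paper quantifies.

# Minimal illustrative sketch (not the paper's code): unit-step BFGS on an
# assumed strongly convex, non-quadratic test function whose minimizer is x* = 0.
import numpy as np

rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n))
A = Q @ Q.T / n + np.eye(n)              # symmetric positive definite, well conditioned

def grad(x):
    # gradient of f(x) = 0.5 x^T A x + sum_i log(cosh(x_i)); note grad(0) = 0
    return A @ x + np.tanh(x)

def hess(x):
    # Hessian of the same test function
    return A + np.diag(1.0 / np.cosh(x) ** 2)

x = 0.1 * rng.standard_normal(n)         # start in a local neighborhood of x* = 0
H = np.linalg.inv(hess(x))               # initial inverse-Hessian approximation, close to the true one
g = grad(x)
for k in range(1, 16):
    x_new = x - H @ g                    # quasi-Newton step with unit step size
    g_new = grad(x_new)
    print(f"iter {k:2d}  ||x - x*|| = {np.linalg.norm(x_new):.3e}"
          f"  ||grad f(x)|| = {np.linalg.norm(g_new):.3e}")
    if np.linalg.norm(g_new) < 1e-12:    # stop once essentially converged
        break
    s, y = x_new - x, g_new - g
    rho = 1.0 / (y @ s)                  # y^T s > 0 because f is strongly convex
    V = np.eye(n) - rho * np.outer(s, y)
    H = V @ H @ V.T + rho * np.outer(s, s)   # standard BFGS inverse-Hessian update
    x, g = x_new, g_new

The DFP method applies the analogous rank-two correction to the Hessian approximation itself (with the roles of s and y swapped); the rate stated in the abstract covers both methods.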


Related research

11/04/2021 · Quasi-Newton Methods for Saddle Point Problems
This paper studies quasi-Newton methods for solving strongly-convex-stro...

06/10/2021 · Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach
In this paper, we study the application of quasi-Newton methods for solv...

08/06/2019 · On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond
The DANE algorithm is an approximate Newton method popularly used for co...

02/17/2021 · Newton-Krylov-BDDC deluxe solvers for non-symmetric fully implicit time discretizations of the Bidomain model
A novel theoretical convergence rate estimate for a Balancing Domain Dec...

01/05/2021 · On the Local convergence of two-step Newton type Method in Banach Spaces under generalized Lipschitz Conditions
The motive of this paper is to discuss the local convergence of a two-st...

07/08/2021 · Identification and Adaptation with Binary-Valued Observations under Non-Persistent Excitation Condition
Dynamical systems with binary-valued observations are widely used in inf...

07/05/2021 · The q-Levenberg-Marquardt method for unconstrained nonlinear optimization
A q-Levenberg-Marquardt method is an iterative procedure that blends a q...