
Non-asymptotic Superlinear Convergence of Standard Quasi-Newton Methods
In this paper, we study the non-asymptotic superlinear convergence rate of DFP and BFGS, two well-known quasi-Newton methods. The asymptotic superlinear convergence of these quasi-Newton methods has been extensively studied, but their explicit finite-time local convergence rates had not been established. We provide a finite-time (non-asymptotic) convergence analysis for BFGS and DFP under the assumptions that the objective function is strongly convex, its gradient is Lipschitz continuous, and its Hessian is Lipschitz continuous only in the direction of the optimal solution. We show that in a local neighborhood of the optimal solution, the iterates generated by both DFP and BFGS converge to the optimal solution at a superlinear rate of O((1/k)^(k/2)), where k is the number of iterations. In particular, for a specific choice of the local neighborhood, both DFP and BFGS converge at the rate (0.85/k)^(k/2). Our theoretical guarantee is among the first non-asymptotic superlinear convergence results for the DFP and BFGS quasi-Newton methods.
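For readers who want to see the method the analysis covers, below is a minimal NumPy sketch of the standard BFGS inverse-Hessian update with unit step size (the local regime studied in the paper). The test function and all parameter values are hypothetical illustrations, not the paper's experiments.

```python
import numpy as np

def bfgs(grad, x0, H0, tol=1e-12, max_iter=50):
    """Standard BFGS with unit step size, returning all iterates.

    grad: gradient oracle of the objective
    H0:   initial inverse-Hessian approximation
    """
    x, H = x0.astype(float), H0.astype(float)
    xs = [x.copy()]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x_new = x - H @ g            # unit step: x_{k+1} = x_k - H_k * grad f(x_k)
        s = x_new - x                # displacement
        y = grad(x_new) - g          # gradient difference
        sy = s @ y
        if sy > 1e-16:               # curvature condition; skip update otherwise
            rho = 1.0 / sy
            I = np.eye(len(x))
            V = I - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS inverse-Hessian update
        x = x_new
        xs.append(x.copy())
    return xs

# Hypothetical strongly convex test problem with Lipschitz gradient;
# its unique minimizer is x* = 0, so the iterate norm is the error.
A = np.array([[1.2, 0.3], [0.3, 0.9]])
grad = lambda x: A @ x + 0.1 * np.tanh(x)

xs = bfgs(grad, x0=np.array([0.3, -0.2]), H0=np.eye(2))
errs = [np.linalg.norm(x) for x in xs]
```

Starting from a point in a neighborhood of the optimum, the error sequence `errs` shrinks much faster than any fixed linear rate, which is the qualitative behavior that the O((1/k)^(k/2)) bound quantifies.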