Sharpened Lazy Incremental Quasi-Newton Method

05/26/2023
by Aakash Lahoti, et al.

We consider the finite sum minimization of n strongly convex and smooth functions with Lipschitz continuous Hessians in d dimensions. In many applications where such problems arise, including maximum likelihood estimation, empirical risk minimization, and unsupervised learning, the number of observations n is large, and it becomes necessary to use incremental or stochastic algorithms whose per-iteration complexity is independent of n. Among these, the incremental and stochastic variants of the Newton method exhibit superlinear convergence, but incur a per-iteration complexity of O(d^3), which may be prohibitive in large-scale settings. On the other hand, the incremental Quasi-Newton method incurs a per-iteration complexity of O(d^2), but its superlinear convergence rate has only been characterized asymptotically. This work puts forth the Sharpened Lazy Incremental Quasi-Newton (SLIQN) method, which achieves the best of both worlds: an explicit superlinear convergence rate with a per-iteration complexity of O(d^2). Building upon the recently proposed Sharpened Quasi-Newton method, the proposed incremental variant employs a hybrid update strategy that combines classic and greedy BFGS updates. The proposed lazy update rule distributes the computational cost across iterations, enabling a per-iteration complexity of O(d^2). Numerical tests demonstrate the superiority of SLIQN over other incremental and stochastic Quasi-Newton variants.
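The abstract does not include pseudocode, so the following is a minimal NumPy sketch (not the authors' implementation) of the two ingredients the hybrid strategy combines: the classic BFGS update of a Hessian approximation along a step/curvature pair, and the greedy BFGS update that picks the coordinate direction maximizing the ratio of Rayleigh quotients between the current approximation and the true Hessian. The function names and the quadratic example are illustrative assumptions.

```python
import numpy as np

def bfgs_update(G, s, y):
    """Classic BFGS update of a Hessian approximation G along the pair (s, y),
    where y is the curvature information for direction s (e.g. y = Hessian @ s)."""
    Gs = G @ s
    return G - np.outer(Gs, Gs) / (s @ Gs) + np.outer(y, y) / (y @ s)

def greedy_bfgs_update(G, A):
    """Greedy BFGS update: among coordinate directions e_i, pick the one that
    maximizes (e_i^T G e_i) / (e_i^T A e_i), then apply the BFGS update with
    s = e_i and y = A e_i, where A is the true Hessian."""
    i = np.argmax(np.diag(G) / np.diag(A))
    u = np.zeros(G.shape[0])
    u[i] = 1.0
    return bfgs_update(G, u, A @ u)

# Illustrative hybrid step on a quadratic f(x) = 0.5 x^T A x (so y = A s):
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)                   # true Hessian (symmetric positive definite)
G = 2.0 * np.linalg.norm(A, 2) * np.eye(d)  # initial over-estimate of A
s = rng.standard_normal(d)
G = bfgs_update(G, s, A @ s)              # classic BFGS step along s
G = greedy_bfgs_update(G, A)              # greedy BFGS step
```

Both updates cost O(d^2) per application; the lazy scheduling of the more expensive bookkeeping across iterations, which is the paper's contribution, is not reproduced here.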


