Convergence Rate Improvement of Richardson and Newton-Schulz Iterations

08/26/2020
by Alexander Stotsky, et al.

Fast-convergent, accurate, computationally efficient, parallelizable, and robust matrix inversion and parameter estimation algorithms are required in many time-critical and accuracy-critical applications, such as system identification, signal and image processing, network and big data analysis, and machine learning. This paper introduces a new composite power series expansion with optionally chosen rates (which can be calculated simultaneously on parallel units with different computational capacities) for further convergence rate improvement of the high-order Newton-Schulz iteration. The new expansion is integrated into the Richardson iteration and yields a significant convergence rate improvement, which is quantified via explicit transient models for the estimation errors and confirmed by simulations. In addition, a recursive and computationally efficient version of the combined Richardson iteration and Newton-Schulz iteration with the composite expansion is developed for simultaneous calculations. Moreover, a unified factorization is developed in the form of a toolkit for power series expansion, which yields a new family of computationally efficient Newton-Schulz algorithms.
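For readers unfamiliar with the two baseline schemes the paper builds on, the sketch below shows textbook versions of the Richardson iteration and a high-order Newton-Schulz iteration for matrix inversion. This is a minimal illustration only, not the paper's composite expansion or its recursive combined algorithm: the order p, the step size alpha, and the initial guess X0 = A^T / (||A||_1 * ||A||_inf) are standard textbook choices assumed here for the example.

# Illustrative sketch only: textbook Richardson and high-order
# Newton-Schulz iterations. NOT the paper's composite expansion;
# p, alpha, and the initialization are standard assumptions.
import numpy as np

def newton_schulz(A, p=2, iters=20):
    """Order-p Newton-Schulz: X <- X * (I + E + ... + E^{p-1}),
    where E = I - A X is the residual. Each step maps E to E^p,
    so the method converges with order p to inv(A) whenever the
    spectral radius of the initial residual is below one."""
    n = A.shape[0]
    I = np.eye(n)
    # Standard initialization X0 = A^T / (||A||_1 ||A||_inf),
    # which guarantees spectral radius of (I - A X0) < 1 for
    # any nonsingular A.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        E = I - A @ X            # current residual
        S = np.eye(n)            # truncated power series in E:
        P = np.eye(n)            # S = I + E + E^2 + ... + E^{p-1}
        for _ in range(1, p):
            P = P @ E
            S = S + P
        X = X @ S
    return X

def richardson(A, b, alpha, iters=200):
    """Classical Richardson iteration x <- x + alpha*(b - A x),
    which converges when the spectral radius of (I - alpha*A) < 1."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + alpha * (b - A @ x)
    return x

# Quick check on a well-conditioned SPD test matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
print(np.allclose(newton_schulz(A, p=3) @ A, np.eye(5), atol=1e-8))
print(np.allclose(richardson(A, b, alpha=1.0 / np.linalg.norm(A, 2)),
                  np.linalg.solve(A, b), atol=1e-6))

Note the power-series structure inside the Newton-Schulz step: the order-p update multiplies X by the truncated series I + E + ... + E^{p-1}, driving the residual from E to E^p per iteration. It is this expansion that the paper's composite construction generalizes, splitting it into parts that can be evaluated concurrently on parallel units with different computational capacities.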


