Pseudospectral Shattering, the Sign Function, and Diagonalization in Nearly Matrix Multiplication Time

12/18/2019
by   Jess Banks, et al.

We exhibit a randomized algorithm which, given a square n × n complex matrix A with ‖A‖ < 1 and δ > 0, computes with high probability an invertible V and diagonal D such that ‖A − VDV^{-1}‖ < δ and ‖V‖‖V^{-1}‖ < O(n^{2.5}/δ), in O(T_MM(n) log^2(n/δ)) arithmetic operations on a floating point machine with O(log^4(n/δ) log n) bits of precision. Here T_MM(n) is the number of arithmetic operations required to multiply two n × n complex matrices numerically stably, with T_MM(n) = O(n^{ω+η}) for every η > 0, where ω is the exponent of matrix multiplication. The algorithm is a variant of the practical spectral bisection algorithm in numerical linear algebra (Beavers and Denman, 1974). This running time is optimal up to polylogarithmic factors, in the sense that verifying that a given similarity diagonalizes a matrix requires at least matrix multiplication time. It significantly improves the best previously known provable running times of O(n^9/δ^2) arithmetic operations for diagonalization of general matrices (Armentano et al., 2018) and, with respect to the dependence on n, O(n^3) arithmetic operations for Hermitian matrices (Parlett, 1998). The proof rests on two new ingredients. (1) We show that adding a small complex Gaussian perturbation to any matrix splits its pseudospectrum into n small well-separated components. In particular, this implies that the eigenvalues of the perturbed matrix have a large minimum gap, a property of independent interest in random matrix theory. (2) We rigorously analyze Roberts' 1970 Newton iteration method for computing the matrix sign function in finite arithmetic, itself an open problem in numerical analysis since at least 1986. This is achieved by controlling the evolution of the iterates' pseudospectra using a carefully chosen sequence of shrinking contour integrals in the complex plane.
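To make the two ingredients concrete, the sketch below shows (in exact-arithmetic NumPy, not the paper's finite-precision algorithm) Roberts' Newton iteration X_{k+1} = (X_k + X_k^{-1})/2 for the matrix sign function, the spectral projectors (I ± sgn(A − hI))/2 used by one step of spectral bisection, and a small complex Gaussian (Ginibre) perturbation of the kind used for pseudospectral shattering. The function names, the tolerance, and the choice gamma = δ/10 are illustrative assumptions; the paper's tuned parameters, scaling tricks, and precision analysis are omitted.

```python
import numpy as np

def matrix_sign_newton(A, tol=1e-12, max_iter=100):
    """Roberts' Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2 for sgn(A).

    Converges in exact arithmetic whenever A has no eigenvalues on the
    imaginary axis; the paper's contribution is a finite-arithmetic analysis.
    """
    X = A.astype(np.complex128)
    for _ in range(max_iter):
        X_next = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_next - X, 'fro') <= tol * np.linalg.norm(X, 'fro'):
            return X_next
        X = X_next
    return X

def split_spectrum(A, shift=0.0):
    """One spectral bisection step: use S = sgn(A - shift*I) to form spectral
    projectors onto the eigenvalues left/right of the line Re(z) = shift."""
    n = A.shape[0]
    S = matrix_sign_newton(A - shift * np.eye(n))
    P_plus = 0.5 * (np.eye(n) + S)   # projector onto Re(lambda) > shift
    P_minus = 0.5 * (np.eye(n) - S)  # projector onto Re(lambda) < shift
    return P_plus, P_minus

if __name__ == "__main__":
    n, delta = 50, 1e-3
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n)) / np.sqrt(n)   # test matrix with ||A|| = O(1)
    # Pseudospectral shattering: add gamma * G with G a normalized complex Ginibre
    # matrix (i.i.d. entries of variance 1/n); gamma = delta/10 is an illustrative
    # choice, not the constant used in the paper.
    gamma = delta / 10
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
    A_pert = A + gamma * G
    P_plus, P_minus = split_spectrum(A_pert, shift=0.0)
    print("eigenvalues in the right half-plane:", int(round(np.trace(P_plus).real)))
```

Recursing on the two invariant subspaces ranged by P_plus and P_minus (and on horizontal as well as vertical splitting lines) is what drives the bisection down to the diagonalization; the shattering step guarantees that suitable splitting lines exist and stay well separated from the pseudospectrum.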


Related research

01/14/2019 · Faster arbitrary-precision dot product and matrix multiplication
We present algorithms for real and complex dot product and matrix multip...

05/13/2022 · Global Convergence of Hessenberg Shifted QR III: Approximate Ritz Values via Shifted Inverse Iteration
We give a self-contained randomized algorithm based on shifted inverse i...

06/06/2023 · Generalized Pseudospectral Shattering and Inverse-Free Matrix Pencil Diagonalization
We present a randomized, inverse-free algorithm for producing an approxi...

11/20/2018 · A Fast Randomized Geometric Algorithm for Computing Riemann-Roch Spaces
We propose a probabilistic Las Vegas variant of Brill-Noether's algorith...

12/11/2017 · StrassenNets: Deep learning with a multiplication budget
A large fraction of the arithmetic operations required to evaluate deep ...

05/13/2022 · Global Convergence of Hessenberg Shifted QR II: Numerical Stability
We develop a framework for proving rapid convergence of shifted QR algor...

11/18/2022 · Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time
We study fundamental problems in linear algebra, such as finding a maxim...