On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation

05/04/2023
by Raphael A. Meyer, et al.

Krylov subspace methods are a ubiquitous tool for computing near-optimal rank-k approximations of large matrices. While "large block" Krylov methods with block size at least k give the best known theoretical guarantees, block size one (a single vector) or a small constant is often preferred in practice. Despite their popularity, we lack theoretical bounds on the performance of such "small block" Krylov methods for low-rank approximation. We address this gap between theory and practice by proving that small block Krylov methods essentially match all known low-rank approximation guarantees for large block methods. Via a black-box reduction we show, for example, that the standard single vector Krylov method run for t iterations obtains the same spectral norm and Frobenius norm error bounds as a Krylov method with block size ℓ ≥ k run for O(t/ℓ) iterations, up to a logarithmic dependence on the smallest gap between sequential singular values. That is, for a given number of matrix-vector products, single vector methods are essentially as effective as any choice of large block size. By combining our result with tail bounds on eigenvalue gaps in random matrices, we prove that the dependence on the smallest singular value gap can be eliminated if the input matrix is perturbed by a small random matrix. Further, we show that single vector methods match the more complex algorithm of [Bakshi et al. '22], which combines the results of multiple block sizes to achieve an improved algorithm for Schatten p-norm low-rank approximation.
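To make the setting concrete, here is a minimal sketch of the (block) Krylov method the abstract refers to, written in NumPy. The function name and structure are illustrative, not taken from the paper: it builds the Krylov subspace generated by a random starting block of `block_size` columns (block size 1 gives the single vector method), orthonormalizes it, and returns the best rank-k approximation of A within that subspace.

```python
import numpy as np

def krylov_low_rank(A, k, t, block_size=1, seed=0):
    """Illustrative (block) Krylov method for rank-k approximation.

    Builds span{A G, (A A^T) A G, ..., (A A^T)^t A G} for a random
    starting block G with `block_size` columns, orthonormalizes it,
    and projects A onto that subspace before truncating to rank k.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    G = rng.standard_normal((d, block_size))
    V = A @ G                      # first block of the Krylov subspace
    blocks = []
    for _ in range(t + 1):
        blocks.append(V)
        V = A @ (A.T @ V)          # apply one more power of A A^T
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal Krylov basis
    # Best rank-k approximation of A restricted to range(Q):
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]
```

With a fixed matrix-vector product budget, `block_size=1` with t iterations and `block_size=ℓ` with roughly t/ℓ iterations spend the same number of products; the paper's result is that, up to the stated logarithmic gap dependence, the single vector variant achieves the same error bounds.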


