Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time

by Yeshwanth Cherapanamjeri et al.

We study fundamental problems in linear algebra, such as finding a maximal linearly independent subset of rows or columns (a basis), solving linear regression, and computing a subspace embedding. For these problems, we consider input matrices 𝐀 ∈ ℝ^{n×d} with n > d. The input can be read in nnz(𝐀) time, where nnz(𝐀) denotes the number of nonzero entries of 𝐀. In this paper, we show that, beyond the time required to read the input matrix, these fundamental linear algebra problems can be solved in d^ω time, where ω ≈ 2.37 is the current matrix-multiplication exponent. To do so, we introduce a constant-factor subspace embedding with the optimal m = 𝒪(d) number of rows, which can be applied in time 𝒪(nnz(𝐀)/α) + d^{2+α} poly(log d) for any trade-off parameter α > 0, tightening a recent result by Chepurko et al. [SODA 2022] that achieves an exp(poly(log log n)) distortion with m = d · poly(log log d) rows in 𝒪(nnz(𝐀)/α + d^{2+α+o(1)}) time. Our subspace embedding uses a recently shown property of stacked Subsampled Randomized Hadamard Transforms (SRHTs), which actually increase the input dimension, to "spread" the mass of an input vector among a large number of coordinates, followed by random sampling. To control the effects of random sampling, we use fast semidefinite programming to reweight the rows. We then use our constant-factor subspace embedding to give the first optimal-runtime algorithms for finding a maximal linearly independent subset of columns, for regression, and for leverage score sampling. To do so, we also introduce a novel subroutine that iteratively grows a set of independent rows, which may be of independent interest.
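To make the basic primitive concrete, the following is a minimal sketch of a single Subsampled Randomized Hadamard Transform applied to a matrix: a random diagonal sign flip, a fast Walsh–Hadamard transform to spread each column's mass across coordinates, and uniform row subsampling. The function names (`fwht`, `srht_embed`) and parameters are illustrative, not from the paper, and the sketch omits the paper's key additions (stacked SRHTs that increase the dimension, and SDP-based row reweighting).

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform along axis 0.
    The length of axis 0 must be a power of 2."""
    n = x.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h]
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def srht_embed(A, m, seed=None):
    """Return an m x d sketch S A of the n x d matrix A, where
    S = sqrt(N/m) * P H D: D is a random diagonal sign matrix,
    H the (normalized) Hadamard transform, P uniform row sampling.
    Illustrative only; not the full construction from the paper."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    N = 1 << (n - 1).bit_length()            # pad row count to a power of 2
    X = np.zeros((N, d))
    X[:n] = A
    signs = rng.choice([-1.0, 1.0], size=N)  # diagonal D
    X = fwht(X * signs[:, None]) / np.sqrt(N)  # orthonormal H D A
    rows = rng.choice(N, size=m, replace=False)  # sampling P
    return X[rows] * np.sqrt(N / m)          # rescale so norms are preserved in expectation

# Sanity check: with m somewhat larger than d, column norms are roughly preserved.
A = np.random.default_rng(0).standard_normal((1024, 8))
SA = srht_embed(A, m=256, seed=1)
```

Because H D is orthogonal after normalization, each column's norm is exactly preserved before sampling; the rescaled uniform sample then preserves norms in expectation, and concentration kicks in once m is large enough relative to d. The paper's contribution is in driving m down to 𝒪(d) with only constant distortion.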



