Tighter Low-rank Approximation via Sampling the Leveraged Element

10/14/2014
by Srinadh Bhojanapalli, et al.

In this work, we propose a new randomized algorithm for computing a low-rank approximation to a given matrix. Taking an approach different from the existing literature, our method first performs a specific biased sampling, choosing each element based on the leverage scores of its row and column, and then runs weighted alternating minimization over the factored form of the intended low-rank matrix, minimizing error only on these samples. Our method can exploit input sparsity, yet produce approximations in spectral (as opposed to the weaker Frobenius) norm; this combines the best aspects of otherwise disparate current results, but with a dependence on the condition number κ = σ_1/σ_r. In particular, we require O(nnz(M) + nκ^2 r^5/ϵ^2) computations to generate a rank-r approximation to M in spectral norm. In contrast, the best existing method requires O(nnz(M) + nr^2/ϵ^4) time to compute an approximation in Frobenius norm. Besides the tightness in spectral norm, we have a better dependence on the error ϵ. Our method is naturally and highly parallelizable. Our new approach enables two extensions that are interesting on their own. The first is a new method to directly compute a low-rank approximation (in efficient factored form) to the product of two given matrices; it computes a small random set of entries of the product, and then executes weighted alternating minimization (as before) on these. The sampling strategy differs because we cannot access the leverage scores of the product matrix, and instead have to work with the input matrices. The second extension is an improved algorithm with smaller communication complexity for the distributed PCA setting, where each server holds a small set of rows of the matrix and the servers want to compute a low-rank approximation with a small amount of communication.
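The two-stage pipeline described above (leverage-score-biased element sampling followed by weighted alternating minimization on the sampled entries) can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: it computes exact leverage scores via a full SVD (the paper obtains them approximately in input-sparsity time) and uses plain alternating least squares on the observed entries; the function names and the sampling distribution proportional to ℓ_i + ℓ_j are illustrative simplifications.

```python
import numpy as np

def leverage_scores(M, r):
    """Row/column leverage scores of the top-r singular subspaces.

    NOTE: uses a full SVD for clarity; the paper's method avoids this cost.
    """
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    row = np.sum(U[:, :r] ** 2, axis=1)   # l_i = ||U_{i,:}||^2
    col = np.sum(Vt[:r, :] ** 2, axis=0)  # l_j = ||V_{j,:}||^2
    return row, col

def sample_entries(M, r, m, rng):
    """Sample m entries, biased toward large row+column leverage."""
    n1, n2 = M.shape
    row, col = leverage_scores(M, r)
    p = row[:, None] + col[None, :]       # bias: leverage of row i plus column j
    p = p.ravel() / p.sum()
    idx = rng.choice(n1 * n2, size=m, replace=False, p=p)
    mask = np.zeros(n1 * n2, dtype=bool)
    mask[idx] = True
    return mask.reshape(n1, n2)

def alt_min(M, mask, r, iters=50, rng=None):
    """Alternating least squares fitting M ~ U @ V.T on sampled entries only."""
    n1, n2 = M.shape
    rng = rng or np.random.default_rng(0)
    V = rng.standard_normal((n2, r))
    U = np.zeros((n1, r))
    for _ in range(iters):
        for i in range(n1):               # update each row factor
            cols = mask[i]
            if cols.any():
                U[i] = np.linalg.lstsq(V[cols], M[i, cols], rcond=None)[0]
        for j in range(n2):               # update each column factor
            rows = mask[:, j]
            if rows.any():
                V[j] = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)[0]
    return U, V
```

On an exactly rank-r matrix with enough samples, the sampled-entry alternating minimization recovers the matrix to high accuracy; on a general matrix it targets a rank-r approximation of the dominant subspace picked out by the leverage scores.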


