Improved Algorithms for Low Rank Approximation from Sparsity

11/01/2021
by   David P. Woodruff, et al.

We overcome two major bottlenecks in the study of low rank approximation by assuming the low rank factors themselves are sparse. Specifically, (1) for low rank approximation with spectral norm error, we show how to improve the best known nnz(A)·k/√ε running time to nnz(A)/√ε running time, plus low order terms depending on the sparsity of the low rank factors, and (2) for streaming algorithms with Frobenius norm error, we show how to bypass the known Ω(nk/ε) memory lower bound and obtain an sk(log n)/poly(ε) memory bound, where s is the number of non-zeros of each low rank factor. Although this algorithm is inefficient, as it must be under standard complexity-theoretic assumptions, we also present polynomial time algorithms using poly(s, k, log n, ε^{-1}) memory that output rank-k approximations supported on an O(sk/ε) × O(sk/ε) submatrix. Both the prior nnz(A)·k/√ε running time and the nk/ε memory bound were long-standing barriers; our results give a natural way of overcoming them by assuming sparsity of the low rank factors.
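For context, the rank-k approximation with the smallest Frobenius norm error is given by the truncated SVD; the abstract concerns faster, lower-memory algorithms when the optimal factors are sparse. The sketch below is purely illustrative (it is not the paper's algorithm): it builds a matrix from sparse rank-2 factors, where each factor column has s = 3 non-zeros, and recovers it exactly with a truncated SVD. All names here are hypothetical.

```python
import numpy as np

def best_rank_k(A, k):
    """Frobenius-optimal rank-k factors (U, V) with A ~= U @ V, via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]

# Construct A = L @ R where the low rank factors L and R are sparse:
# each column of L and each row of R has only s = 3 non-zero entries.
L = np.zeros((50, 2)); L[:3, 0] = 1.0; L[5:8, 1] = 1.0
R = np.zeros((2, 40)); R[0, :3] = 1.0; R[1, 10:13] = 1.0
A = L @ R

U, V = best_rank_k(A, 2)
err = np.linalg.norm(A - U @ V)   # A has exact rank 2, so the error is ~0
```

Note that the approximation U @ V is supported on the few rows and columns touched by the sparse factors, which is the structural property the paper's O(sk/ε) × O(sk/ε) submatrix result exploits.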



Related research

07/20/2020  Optimal ℓ_1 Column Subset Selection and a Fast PTAS for Low Rank Approximation
We study the problem of entrywise ℓ_1 low rank approximation. We give th...

05/24/2018  Simple and practical algorithms for ℓ_p-norm low-rank approximation
We propose practical algorithms for entrywise ℓ_p-norm low-rank approxim...

05/14/2019  Spectral Approximate Inference
Given a graphical model (GM), computing its partition function is the mo...

09/27/2019  Total Least Squares Regression in Input Sparsity Time
In the total least squares problem, one is given an m × n matrix A, and ...

05/17/2021  Learning a Latent Simplex in Input-Sparsity Time
We consider the problem of learning a latent k-vertex simplex K⊂ℝ^d, giv...

08/07/2016  Robust High-Dimensional Linear Regression
The effectiveness of supervised learning techniques has made them ubiqui...

02/10/2022  Low-Rank Approximation with 1/ε^1/3 Matrix-Vector Products
We study iterative methods based on Krylov subspaces for low-rank approx...
