LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain

by   Zeyuan Allen-Zhu, et al.

We study k-SVD, the problem of computing the top k singular vectors of a matrix A. Several recent breakthroughs have been made on k-SVD: Musco and Musco [1] proved the first gap-free convergence result using the block Krylov method, Shamir [2] discovered the first variance-reduction stochastic method, and Bhojanapalli et al. [3] provided the fastest O(nnz(A) + poly(1/ε))-time algorithm using alternating minimization. In this paper, we put forward a new and simple LazySVD framework that improves on all of the above. This framework leads to a faster gap-free method outperforming [1], and the first accelerated and stochastic method outperforming [2]. In the O(nnz(A) + poly(1/ε)) running-time regime, LazySVD outperforms [3] in certain parameter regimes without even using alternating minimization.
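To make the k-SVD problem concrete, here is a minimal NumPy sketch that computes the top k singular vectors one at a time, deflating the matrix after each. This illustrates the rank-one-at-a-time structure only; plain power iteration stands in for the faster 1-SVD subroutines the paper is concerned with, and the function name and parameters are our own for illustration.

```python
import numpy as np

def topk_svd_deflation(A, k, iters=200, seed=0):
    """Approximate the top-k right singular vectors of A by solving k
    successive 1-SVD problems with deflation. Power iteration on A^T A
    is used as a simple stand-in for a fast 1-SVD solver."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    V = np.zeros((n, 0))
    for _ in range(k):
        # Deflate: restrict A to the orthogonal complement of the
        # singular vectors found so far.
        P = np.eye(n) - V @ V.T
        M = A @ P
        # Power iteration for the leading right singular vector of M.
        v = rng.standard_normal(n)
        for _ in range(iters):
            v = M.T @ (M @ v)
            v /= np.linalg.norm(v)
        V = np.column_stack([V, v])
    sigma = np.linalg.norm(A @ V, axis=0)  # singular value estimates
    return V, sigma
```

On a matrix with a clear spectral gap this recovers the leading singular values; the number of inner iterations needed in practice depends on those gaps, which is exactly the dependence that gap-free analyses such as [1] aim to remove.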






