Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression

11/27/2018
by Neha Gupta, et al.

In this paper, we obtain improved running times for regression and top eigenvector computation for numerically sparse matrices. Given a data matrix A ∈ R^{n × d} where every row a ∈ R^d has ‖a‖_2^2 ≤ L and numerical sparsity at most s, i.e. ‖a‖_1^2 / ‖a‖_2^2 ≤ s, we provide faster algorithms for these problems in many parameter settings. For top eigenvector computation, we obtain a running time of Õ(nd + r(s + √(rs)) / gap^2), where gap > 0 is the relative gap between the top two eigenvalues of A^⊤A and r is the stable rank of A. This running time improves upon the previous best unaccelerated running time of O(nd + rd / gap^2), as it is always the case that r ≤ d and s ≤ d. For regression, we obtain a running time of Õ(nd + (nL / μ) √(snL / μ)), where μ > 0 is the smallest eigenvalue of A^⊤A. This running time improves upon the previous best unaccelerated running time of Õ(nd + nLd / μ). This result expands the regime where regression can be solved in nearly linear time from L/μ = Õ(1) to L/μ = Õ(d^{2/3} / (sn)^{1/3}). Furthermore, we obtain similar improvements even when row norms and numerical sparsities are non-uniform, and we show how to achieve even faster running times by accelerating using approximate proximal point [Frostig et al. 2015] / catalyst [Lin et al. 2015]. Our running times depend only on the size of the input and natural numerical measures of the matrix, i.e. eigenvalues and ℓ_p norms, making progress on a key open problem regarding optimal running times for efficient large-scale learning.
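The abstract's running times are stated in terms of a few matrix quantities: the per-row numerical sparsity s = ‖a‖_1^2 / ‖a‖_2^2, the stable rank r = ‖A‖_F^2 / ‖A‖_2^2, and the relative eigengap of A^⊤A. The following minimal sketch (assuming NumPy; the function names are illustrative, not from the paper) shows how these quantities can be computed for a given data matrix:

```python
# Sketch of the matrix quantities appearing in the stated running times.
# Assumes NumPy; names like numerical_sparsity are hypothetical helpers.
import numpy as np

def numerical_sparsity(a):
    """||a||_1^2 / ||a||_2^2 -- at most the number of nonzeros of a, at most d."""
    return np.linalg.norm(a, 1) ** 2 / np.linalg.norm(a, 2) ** 2

def stable_rank(A):
    """||A||_F^2 / ||A||_2^2 -- always at most rank(A) <= d."""
    return np.linalg.norm(A, "fro") ** 2 / np.linalg.norm(A, 2) ** 2

def relative_eigengap(A):
    """(lambda_1 - lambda_2) / lambda_1 for the eigenvalues of A^T A."""
    eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # descending order
    return (eigvals[0] - eigvals[1]) / eigvals[0]

A = np.random.randn(100, 20)                   # n = 100 rows, d = 20 columns
L = max(np.linalg.norm(a, 2) ** 2 for a in A)  # uniform row-norm bound
s = max(numerical_sparsity(a) for a in A)      # uniform numerical-sparsity bound
r = stable_rank(A)
gap = relative_eigengap(A)
mu = np.linalg.eigvalsh(A.T @ A)[0]            # smallest eigenvalue of A^T A
```

For dense rows s can be as large as d, while for rows whose mass concentrates on a few coordinates s is much smaller, which is the regime where the stated bounds Õ(nd + r(s + √(rs)) / gap^2) and Õ(nd + (nL / μ) √(snL / μ)) improve on the dense ones.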
