
Reduced-Rank Regression with Operator Norm Error
A common data analysis task is the reduced-rank regression problem: min_{rank-k X} ‖AX − B‖, where A ∈ ℝ^{n × c} and B ∈ ℝ^{n × d} are given large matrices and ‖·‖ is some norm. Here the unknown matrix X ∈ ℝ^{c × d} is constrained to have rank k, as this yields a significant reduction in the number of parameters of the solution when c and d are large. In the case of Frobenius norm error, there is a standard closed-form solution to this problem, as well as a fast algorithm for finding a (1+ε)-approximate solution. However, for the important case of operator norm error, no closed-form solution is known, and the fastest known algorithms take singular value decomposition time. We give the first randomized algorithms for this problem running in time (nnz(A) + nnz(B) + c^2) · k/ε^{1.5} + (n+d)k^2/ε + c^ω, up to a polylogarithmic factor involving condition numbers, matrix dimensions, and the dependence on 1/ε. Here nnz(M) denotes the number of nonzero entries of a matrix M, and ω is the exponent of matrix multiplication. As both (1) spectral low-rank approximation (A = B) and (2) linear system solving (c = n and d = 1) are special cases, our running time cannot be improved by more than a 1/ε factor (up to polylogarithmic factors) without a major breakthrough in linear algebra. Interestingly, known techniques for low-rank approximation, such as alternating minimization or sketch-and-solve, provably fail for this problem. Instead, our algorithm uses an existential characterization of a solution, together with Krylov methods, low-degree polynomial approximation, and sketching-based preconditioning.
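To make the Frobenius-norm baseline concrete, here is a minimal NumPy sketch of the standard closed-form solution mentioned above (not the paper's operator-norm algorithm): the optimal rank-k fit AX lies in the column space of A, so one projects B onto col(A), takes the best rank-k approximation of that projection (Eckart–Young), and pulls it back through the pseudoinverse of A. Function names are illustrative.

```python
import numpy as np

def rank_k_approx(M, k):
    # Best rank-k approximation of M in Frobenius norm (Eckart-Young),
    # obtained by truncating the SVD to the top k singular triples.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def reduced_rank_regression(A, B, k):
    # Closed-form minimizer of ||A X - B||_F over rank-k matrices X:
    # project B onto the column space of A, truncate that projection
    # to rank k, then pull back through the pseudoinverse of A.
    A_pinv = np.linalg.pinv(A)
    P_A_B = A @ (A_pinv @ B)  # projection of B onto col(A)
    return A_pinv @ rank_k_approx(P_A_B, k)

# Small usage example on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
B = rng.standard_normal((50, 6))
X = reduced_rank_regression(A, B, k=2)
print(np.linalg.matrix_rank(X, tol=1e-8))  # at most 2
```

Note that for operator norm error this recipe no longer gives an exact closed form, which is precisely the gap the paper's randomized algorithm addresses.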