Sparse Regression Faster than d^ω

09/23/2021
by Mehrdad Ghadiri, et al.

The best known running time for regression is nearly linear in the complexity of matrix multiplication/inversion. Here we show that algorithms for 2-norm regression, i.e., standard linear regression, as well as for p-norm regression (for 1 < p < ∞), can be improved to run below the matrix multiplication threshold for sufficiently sparse matrices. We also show that, for some values of p, the dependence on the dimension d in input-sparsity-time algorithms can be improved below d^ω for tall-and-thin row-sparse matrices.
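For concreteness, ℓ_p regression asks for x ∈ ℝ^d minimizing ‖Ax − b‖_p given A ∈ ℝ^(n×d) and b ∈ ℝ^n; the 2-norm case is standard least squares. The sketch below is only a generic illustration of exploiting sparsity in the 2-norm case, not the paper's algorithm: SciPy's iterative LSQR solver accesses A only through matrix-vector products, so each iteration costs time proportional to the number of nonzeros of A rather than dense d^ω work. The matrix sizes, density, and tolerances are arbitrary choices for the example.

# Illustration only (not the paper's algorithm): sparse 2-norm regression
# min_x ||Ax - b||_2 via an iterative solver whose per-iteration cost
# scales with nnz(A) instead of dense matrix-inversion work.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n, d = 10_000, 500
# Sparse design matrix with ~1% nonzero entries (arbitrary example sizes).
A = sp.random(n, d, density=0.01, format="csr", random_state=0)
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# LSQR touches A only through products with A and A^T, so each iteration
# performs O(nnz(A)) arithmetic operations.
x_hat = lsqr(A, b, atol=1e-10, btol=1e-10)[0]
print("residual norm:", np.linalg.norm(A @ x_hat - b))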


