The gradient complexity of linear regression

11/06/2019
by Mark Braverman, et al.

We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle. We show that for polynomial accuracy, Θ(d) calls to the oracle are necessary and sufficient even for a randomized algorithm. Our lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix. This simple distribution enables a concise proof, leveraging a few key properties of the random Wishart ensemble.
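To make the oracle model concrete, here is a minimal Python sketch (illustrative only; the class and function names are not from the paper) of least-squares gradient descent that touches the data matrix A solely through matrix-vector products, counting one oracle call per product.

import numpy as np

class MatVecOracle:
    """Wraps a data matrix A and exposes it only via products A @ v and A.T @ u."""

    def __init__(self, A):
        self._A = A
        self.calls = 0  # number of matrix-vector products issued so far

    def matvec(self, v):
        self.calls += 1
        return self._A @ v

    def rmatvec(self, u):
        self.calls += 1
        return self._A.T @ u

def least_squares_gd(oracle, b, d, steps=200, lr=1e-3):
    """Minimize ||Ax - b||^2 using only oracle calls (two per iteration)."""
    x = np.zeros(d)
    for _ in range(steps):
        residual = oracle.matvec(x) - b        # one call: A x
        grad = 2.0 * oracle.rmatvec(residual)  # one call: A^T (A x - b)
        x -= lr * grad                         # lr should stay below 1 / ||A||^2
    return x

rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
oracle = MatVecOracle(A)
x_hat = least_squares_gd(oracle, b, d)
print("oracle calls used:", oracle.calls)

Each iteration costs two oracle calls (one with A, one with A^T); the Θ(d) bound stated in the abstract concerns how many such calls any algorithm, including randomized ones, must make to reach polynomial accuracy.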


Related research

07/25/2017 · Compressed Sparse Linear Regression
High-dimensional sparse linear regression is a basic problem in machine ...

06/16/2022 · On the well-spread property and its relation to linear regression
We consider the robust linear regression model y = Xβ^* + η, where an ad...

02/06/2019 · QMA Lower Bounds for Approximate Counting
We prove a query complexity lower bound for QMA protocols that solve app...

06/29/2022 · Hardness and Algorithms for Robust and Sparse Optimization
We explore algorithms and limitations for sparse optimization problems s...

08/08/2016 · Sampling Requirements and Accelerated Schemes for Sparse Linear Regression with Orthogonal Least-Squares
The Orthogonal Least Squares (OLS) algorithm sequentially selects column...

10/23/2015 · On the complexity of switching linear regression
This technical note extends recent results on the computational complexi...

12/07/2022 · Multi-Randomized Kaczmarz for Latent Class Regression
Linear regression is effective at identifying interpretable trends in a ...
