The Fine-Grained Hardness of Sparse Linear Regression

06/06/2021
by   Aparna Gupte, et al.

Sparse linear regression is the well-studied inference problem where one is given a design matrix 𝐀 ∈ ℝ^{M × N} and a response vector 𝐛 ∈ ℝ^M, and the goal is to find a solution 𝐱 ∈ ℝ^N that is k-sparse (that is, has at most k non-zero coordinates) and minimizes the prediction error ||𝐀𝐱 - 𝐛||_2. On the one hand, the problem is known to be 𝒩𝒫-hard, which tells us that no polynomial-time algorithm exists unless 𝒫 = 𝒩𝒫. On the other hand, the best known algorithms for the problem perform a brute-force search over the roughly N^k possible supports. In this work, we show that there are no better-than-brute-force algorithms, assuming any one of a variety of popular conjectures, including the weighted k-clique conjecture from the area of fine-grained complexity and the hardness of the closest vector problem from the geometry of numbers. We also show the impossibility of better-than-brute-force algorithms when the prediction error is measured in other ℓ_p norms, assuming the strong exponential-time hypothesis.
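For concreteness, here is a minimal sketch of the brute-force baseline the abstract refers to: enumerate every size-k support, solve ordinary least squares restricted to it, and keep the best. This is not code from the paper; the function name, use of NumPy, and the synthetic demo data are illustrative assumptions.

```python
import itertools

import numpy as np


def brute_force_sparse_regression(A, b, k):
    """Return the best k-sparse x minimizing ||Ax - b||_2 by exhaustive search.

    Enumerates all (N choose k) candidate supports and solves an ordinary
    least-squares problem restricted to each one -- the ~N^k brute-force
    baseline against which the paper's lower bounds are stated.
    """
    M, N = A.shape
    best_err, best_x = np.inf, None
    for support in itertools.combinations(range(N), k):
        cols = list(support)
        A_S = A[:, cols]                                # columns of the candidate support
        coef, *_ = np.linalg.lstsq(A_S, b, rcond=None)  # least squares on this support
        err = np.linalg.norm(A_S @ coef - b)            # l2 prediction error
        if err < best_err:
            x = np.zeros(N)
            x[cols] = coef
            best_err, best_x = err, x
    return best_x, best_err


# Tiny illustrative run on synthetic data (hypothetical, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.0]
b = A @ x_true
x_hat, err = brute_force_sparse_regression(A, b, k=2)  # recovers support {1, 5}, err ~ 0
```

The point of the paper is that, under the stated conjectures, no algorithm can substantially beat the N^{Θ(k)} running time of a search like this one.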


