The Fine-Grained Hardness of Sparse Linear Regression
Sparse linear regression is a well-studied inference problem: given a design matrix 𝐀 ∈ ℝ^{M×N} and a response vector 𝐛 ∈ ℝ^M, the goal is to find a solution 𝐱 ∈ ℝ^N that is k-sparse (that is, has at most k non-zero coordinates) and minimizes the prediction error ||𝐀𝐱 - 𝐛||_2. On the one hand, the problem is known to be 𝒩𝒫-hard, so no polynomial-time algorithm exists unless 𝒫 = 𝒩𝒫. On the other hand, the best known algorithms perform a brute-force search over the roughly N^k possible supports. In this work, we show that there are no better-than-brute-force algorithms, assuming any one of a variety of popular conjectures, including the weighted k-clique conjecture from the area of fine-grained complexity and the hardness of the closest vector problem from the geometry of numbers. We also rule out better-than-brute-force algorithms when the prediction error is measured in other ℓ_p norms, assuming the strong exponential-time hypothesis.
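To make the brute-force baseline concrete, here is a minimal Python sketch (not from the paper; all names are illustrative) that enumerates every size-k support, solves an ordinary least-squares problem restricted to those columns, and keeps the best candidate, for a total of C(N, k) = O(N^k) least-squares solves.

```python
from itertools import combinations

import numpy as np


def brute_force_sparse_regression(A, b, k):
    """Return a k-sparse x minimizing ||Ax - b||_2 by exhaustive search."""
    N = A.shape[1]
    best_x = np.zeros(N)
    best_err = np.linalg.norm(b)  # error of the trivial solution x = 0
    for support in combinations(range(N), k):
        cols = list(support)
        # Unconstrained least squares restricted to the chosen k columns.
        coeffs, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        err = np.linalg.norm(A[:, cols] @ coeffs - b)
        if err < best_err:
            best_err = err
            best_x = np.zeros(N)
            best_x[cols] = coeffs
    return best_x, best_err


# Tiny usage example on a planted 2-sparse instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [3.0, -2.0]
b = A @ x_true
x_hat, err = brute_force_sparse_regression(A, b, k=2)
print(np.flatnonzero(x_hat), err)  # expect support [1 5] and err ≈ 0
```

The hardness results above assert that, under the stated conjectures, no algorithm can substantially beat this N^k-support enumeration.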