Optimal Fine-grained Hardness of Approximation of Linear Equations

06/24/2021
by Mitali Bafna, et al.

The problem of solving linear systems is one of the most fundamental in computer science: given a satisfiable linear system (A, b), with A ∈ ℝ^{n × n} and b ∈ ℝ^n, we wish to find a vector x ∈ ℝ^n such that Ax = b. The best known algorithms for solving dense linear systems reduce the problem to matrix multiplication and run in time O(n^ω), where ω ≈ 2.37 is the matrix multiplication exponent. We consider the problem of finding ε-approximate solutions to linear systems with respect to the L_2-norm: given a satisfiable linear system (A ∈ ℝ^{n × n}, b ∈ ℝ^n), find an x ∈ ℝ^n such that ||Ax - b||_2 ≤ ε ||b||_2.

Our main result is a fine-grained reduction from computing the rank of a matrix to finding ε-approximate solutions to linear systems. In particular, if the best known O(n^ω)-time algorithm for computing the rank of n × O(n) matrices is optimal (which we conjecture is true), then finding an ε-approximate solution to a dense linear system also requires Ω̃(n^ω) time, even for ε as large as 1 - 1/poly(n). We also prove, under modified conjectures for the rank-finding problem, optimal hardness of approximation for sparse linear systems, linear systems over positive semidefinite matrices, well-conditioned linear systems, and approximately solving linear systems with respect to the L_p-norm for p ≥ 1.

At the heart of our results is a novel reduction from the rank problem to a decision version of the approximate linear systems problem. This reduction preserves properties of the instance such as matrix sparsity and bit complexity.
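
To make the ε-approximation guarantee concrete, here is a minimal NumPy sketch (not from the paper; the helper name and parameters are illustrative) that checks whether a candidate vector x satisfies ||Ax - b||_2 ≤ ε ||b||_2 for a small satisfiable dense system:

    import numpy as np

    def is_eps_approx_solution(A, b, x, eps):
        """Check the paper's approximation criterion: ||Ax - b||_2 <= eps * ||b||_2."""
        return np.linalg.norm(A @ x - b) <= eps * np.linalg.norm(b)

    # A small satisfiable dense system: b is constructed to lie in the range of A.
    rng = np.random.default_rng(0)
    n = 100
    A = rng.standard_normal((n, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true

    # An exact solve (via LAPACK, i.e., n^3-type arithmetic in practice) satisfies
    # the criterion for tiny eps up to floating-point error; a perturbed vector
    # typically only satisfies a much looser eps.
    x_exact = np.linalg.solve(A, b)
    print(is_eps_approx_solution(A, b, x_exact, eps=1e-8))        # expected: True
    print(is_eps_approx_solution(A, b, x_exact + 0.1, eps=1e-8))  # likely: False

The hardness results in the paper say that, under the stated rank conjecture, even certifying such a relative-residual guarantee with ε as large as 1 - 1/poly(n) cannot be done significantly faster than Ω̃(n^ω) time.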
