Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms

03/17/2020
by   Huck Bennett, et al.

Bounded Distance Decoding, BDD_{p,α}, is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) ∖ 2ℤ (i.e., p not an even integer) and all constants C > 1 and ε > 0, there is no 2^{(1-ε)n/C}-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_1 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
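For reference, the promise problem described above can be stated formally; this is the standard textbook formulation, using λ_1^{(p)} to denote the minimum distance of the lattice in the ℓ_p norm:

```latex
\textbf{Definition (BDD}_{p,\alpha}\textbf{).}
Given a basis $B$ of a lattice $\mathcal{L} = \mathcal{L}(B) \subset \mathbb{R}^n$
and a target $t \in \mathbb{R}^n$ satisfying the promise
\[
  \operatorname{dist}_p(t, \mathcal{L}) \le \alpha \cdot \lambda_1^{(p)}(\mathcal{L}),
  \qquad \text{where } \lambda_1^{(p)}(\mathcal{L}) = \min_{v \in \mathcal{L} \setminus \{0\}} \|v\|_p,
\]
find a lattice vector $v \in \mathcal{L}$ with
$\|t - v\|_p \le \alpha \cdot \lambda_1^{(p)}(\mathcal{L})$.
```

Note that for α < 1/2 the promise guarantees that the closest lattice vector to t is unique (two distinct lattice vectors within distance α·λ_1 of t would be within λ_1 of each other), which is why α = 1/2 is referred to as the unique-decoding radius.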
