Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

04/06/2022
by   Aparna Gupte, et al.
We show direct and conceptually simple reductions between the classical learning with errors (LWE) problem and its continuous analog, CLWE (Bruna, Regev, Song and Tang, STOC 2021). This allows us to bring the powerful machinery of LWE-based cryptography to bear on the applications of CLWE. For example, we obtain the hardness of CLWE under the classical worst-case hardness of the gap shortest vector problem; previously, this was known only under the quantum worst-case hardness of lattice problems. More broadly, with our reductions between the two problems, any future developments in LWE will also apply to CLWE and its downstream applications. As a concrete application, we show an improved hardness result for density estimation for mixtures of Gaussians. In this computational problem, given sample access to a mixture of Gaussians, the goal is to output a function that estimates the density function of the mixture. Under the (plausible and widely believed) exponential hardness of the classical LWE problem, we show that Gaussian mixture density estimation in ℝ^n with roughly log n Gaussian components, given 𝗉𝗈𝗅𝗒(n) samples, requires time quasi-polynomial in n. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for n^ϵ Gaussians for any constant ϵ > 0, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least √(n) Gaussians under polynomial (quantum) hardness assumptions. Our key technical tool is a reduction from classical LWE to LWE with k-sparse secrets, where the multiplicative increase in the noise is only O(√(k)), independent of the ambient dimension n.
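To make the two problems concrete, the following sketch generates one sample of each distribution: a discrete LWE sample with a k-sparse secret (the setting of the paper's key technical tool) and a continuous CLWE sample in the style of Bruna, Regev, Song and Tang. All parameter values here (n, q, k, the noise widths sigma and beta, and the CLWE frequency gamma) are illustrative assumptions, not the paper's parameter choices.

```python
import numpy as np

def lwe_sample(s, q, sigma, rng):
    """One discrete LWE sample: a uniform in Z_q^n, b = <a, s> + e (mod q),
    with e a rounded Gaussian of width sigma."""
    n = len(s)
    a = rng.integers(0, q, size=n)
    e = int(np.round(rng.normal(0.0, sigma)))
    b = int((a @ s + e) % q)
    return a, b

def clwe_sample(w, gamma, beta, rng):
    """One continuous CLWE sample: y ~ N(0, I_n) and
    z = gamma * <y, w> + e (mod 1), with e ~ N(0, beta) and w a unit vector."""
    y = rng.normal(size=len(w))
    e = rng.normal(0.0, beta)
    z = (gamma * (y @ w) + e) % 1.0
    return y, z

rng = np.random.default_rng(0)
n, q, k = 16, 97, 4          # illustrative dimensions and modulus

# k-sparse secret: only k of the n coordinates are nonzero
s = np.zeros(n, dtype=np.int64)
support = rng.choice(n, size=k, replace=False)
s[support] = rng.integers(1, q, size=k)

a, b = lwe_sample(s, q, sigma=1.0, rng=rng)

w = rng.normal(size=n)
w /= np.linalg.norm(w)       # CLWE secret is a unit direction
y, z = clwe_sample(w, gamma=2.0, beta=0.01, rng=rng)
```

The distinguishing tasks are then to tell such samples apart from uniform (b uniform in Z_q, or z uniform in [0, 1)); the reductions in the paper relate the hardness of these two tasks.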
