Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms

03/17/2020
by   Huck Bennett, et al.
University of Michigan

Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) ∖ 2ℤ and constants C > 1, ε > 0, there is no 2^{(1-ε)n/C}-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_1 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).


1 Introduction

Lattices in ℝ^n are a rich source of computational problems with applications across computer science, and especially in cryptography and cryptanalysis. (A lattice is a discrete additive subgroup of ℝ^n, or equivalently, the set of integer linear combinations of a set of linearly independent vectors.) Many important lattice problems appear intractable, and there is a wealth of research showing that central problems like the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP) are NP-hard, even to approximate to within various factors and in various ℓ_p norms [31, 8, 7, 22, 23, 17, 16, 14, 25]. (For the sake of concision, throughout this introduction the term "NP-hard" allows for randomized reductions, which are needed in some important cases.)

Bounded Distance Decoding.

In recent years, the emergence of lattices as a powerful foundation for cryptography, including for security against quantum attacks, has increased the importance of other lattice problems. In particular, many modern lattice-based encryption schemes rely on some form of the Bounded Distance Decoding (BDD) problem, which is like the Closest Vector Problem with a promise. An instance of BDD for relative distance α is a lattice ℒ and a target point t whose distance from the lattice is guaranteed to be within an α factor of the lattice's minimum distance λ_1(ℒ), and the goal is to find a lattice vector within that distance of t; when distances are measured in the ℓ_p norm we denote the problem BDD_{p,α}. Note that for α < 1/2 there is a unique solution, but the problem is interesting and well-defined for larger relative distances as well. We also consider preprocessing variants of CVP and BDD (respectively denoted CVPP and BDDP), in which unbounded precomputation can be applied to the lattice before the target is available. For example, this can model cryptographic contexts where a fixed long-term lattice may be shared among many users.

The importance of BDD(P) to cryptography is especially highlighted by the Learning With Errors (LWE) problem of Regev [28], which is an average-case form of BDD that has been used (with inverse-polynomial relative distance α) in countless cryptosystems, including several that share a lattice among many users (see, e.g., [13]). Moreover, Regev gave a worst-case to average-case reduction from BDD to LWE, so the security of these cryptosystems is intimately related to the worst-case complexity of BDD.

Compared to problems like SVP and CVP, the BDD(P) problem has received much less attention from a complexity-theoretic perspective. We are aware of essentially only one work showing its NP-hardness: Liu, Lyubashevsky, and Micciancio [19] proved that BDD_{p,α} and even BDDP_{p,α} are NP-hard for relative distances approaching min{2^{-1/p}, 1/√2}, which is 1/√2 for all p ≥ 2. A few other works relate BDD(P) to other lattice problems (in both directions) in regimes where the problems are not believed to be NP-hard, e.g., [24, 11, 9]. (Dadush, Regev, and Stephens-Davidowitz [11] also gave a reduction that implies NP-hardness of BDD_{2,α} for some α, which is larger than the relative distance of 1/√2 achieved by [19].)

Fine-grained hardness.

An important aspect of hard lattice problems, especially for cryptography, is their quantitative hardness. That is, we want not only that a problem cannot be solved in polynomial time, but that it cannot be solved in, say, 2^{o(n)} time or even 2^{n/C} time for a certain constant C. Statements of this kind can be proven under generic complexity assumptions like the Exponential Time Hypothesis (ETH) of Impagliazzo and Paturi [15] or its variants like Strong ETH (SETH), via fine-grained reductions that are particularly efficient in the relevant parameters.

Recently, Bennett, Golovnev, and Stephens-Davidowitz [10] initiated a study of the fine-grained hardness of lattice problems, focusing on CVP; follow-up work extended to SVP and showed more for CVP(P) [5, 2]. The technical goal of these works is a reduction having good rank efficiency, i.e., a reduction from k-SAT on n variables to a lattice problem in rank C·n for some constant C ≥ 1, which we call the reduction's "rank inefficiency." (All of the lattice problems in question can be solved in 2^{O(n)} time in rank n [3, 4, 6], so C = 1 corresponds to optimal rank efficiency.) We mention that Regev's BDD-to-LWE reduction [28] has optimal rank efficiency, in that it reduces rank-n BDD to rank-n LWE. However, to date there are no fine-grained hardness results for BDD itself; the prior NP-hardness proof for BDD [19] incurs a large polynomial blowup in rank.
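To make the role of rank inefficiency concrete, here is the simple accounting (our own illustrative restatement, not a claim from [10, 5, 2]): if k-SAT on n variables reduces to a lattice problem of rank n' = C·n, then a 2^{(1-ε)n'/C}-time algorithm for the lattice problem would solve k-SAT in time roughly 2^{(1-ε)·(C·n)/C} = 2^{(1-ε)n} plus the reduction's overhead, contradicting (randomized) SETH. So the smaller the rank inefficiency C, the stronger the resulting fine-grained lower bound.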

1.1 Our Results

We show improved NP-hardness, and entirely new fine-grained hardness, for Bounded Distance Decoding (and BDD with preprocessing) in arbitrary ℓ_p norms. Our work improves upon the known hardness of BDD in two respects: the relative distance α, and the rank inefficiency C (i.e., fine-grainedness) of the reductions. As p grows, both quantities improve, simultaneously approaching the unique-decoding threshold α = 1/2 and optimal rank efficiency C = 1 as p → ∞, and achieving those quantities for p = ∞. We emphasize that these are the first fine-grained hardness results of any kind for BDD, for any ℓ_p norm.

Our main theorem summarizing the NP- and fine-grained hardness of BDD (with and without preprocessing) appears below in Theorem 1. For p ∈ [1, ∞) and C > 1, the quantities α_p and α_{p,C} appearing in the theorem statement are certain positive real numbers that are decreasing in p and C, and approaching 1/2 as p → ∞ (for any C). See Figure 1 for a plot of their behavior, Equations 8 and 7 for their formal definitions, and Lemma 5 for quite tight closed-form upper bounds.

Theorem 1

The following hold for BDD_{p,α} and BDDP_{p,α} in rank n:


  1. For every p ∈ [1, ∞) and constant α > α_p, and for p = ∞ and α ≥ 1/2, there is no polynomial-time algorithm for BDD_{p,α} (respectively, BDDP_{p,α}) unless NP ⊆ RP (resp., NP ⊆ P/poly).

  2. For every p ∈ [1, ∞) and constant α > α_p, and for p = ∞ and α ≥ 1/2, there is no 2^{o(n)}-time algorithm for BDD_{p,α} unless randomized ETH fails.

  3. For every p ∈ [1, ∞) and constant α > α_p, and for p = ∞ and α ≥ 1/2, there is no 2^{o(√n)}-time algorithm for BDDP_{p,α} unless non-uniform ETH fails.

    Moreover, for p = ∞ and every constant α ≥ 1/2, there is no 2^{o(n)}-time algorithm for BDDP_{∞,α} unless non-uniform ETH fails.

  4. For every p ∈ [1, ∞) ∖ 2ℤ and constants C > 1, ε > 0, and α > α_{p,C}, and for p = ∞, C > 1, ε > 0, and α ≥ 1/2, there is no 2^{(1-ε)n/C}-time algorithm for BDD_{p,α} (respectively, BDDP_{p,α}) unless randomized SETH (resp., non-uniform SETH) fails.

Although we do not have closed-form expressions for α_p and α_{p,C}, we do get quite tight closed-form upper bounds (see Lemma 5). Moreover, it is easy to numerically compute close approximations to them, and to the values of p at which they cross certain thresholds. For example, α_p < 1/√2 for all p > p_1 ≈ 4.2773, so Item 1 of Theorem 1 improves on the prior best relative distance of α > 1/√2 for the NP-hardness of BDD_{p,α} in such norms [19].

As a few other example consequences of Theorem 1: by Item 2, BDD in the Euclidean norm for any relative distance α > α_2 requires 2^{Ω(n)} time assuming randomized ETH. And by Item 4, for every ε > 0 there is no 2^{(1-ε)n/C}-time algorithm for BDD_{p,α} whenever p ∉ 2ℤ and α > α_{p,C}, and no 2^{(1-ε)n}-time algorithm for BDD_{∞,1/2}, assuming randomized SETH.

Figure 1: Left: bounds on the relative distances α for which BDD_{p,α} was proved to be NP-hard in the ℓ_p norm, in this work and in [19]; the crossover point is p_1 ≈ 4.2773. (The plots include results obtained by norm embeddings [27], hence they are maximized at p = 2.) Right: our bounds on the relative distances α for which there is no 2^{(1-ε)n/C}-time algorithm for BDD_{p,α} for any ε > 0, assuming randomized SETH.

1.2 Technical Overview

As in prior NP-hardness reductions for SVP and BDD (and fine-grained hardness proofs for the former) [7, 22, 16, 19, 14, 25, 5], the central component of our reductions is a family of rank-n lattices ℒ and target points t having a certain "local density" property in a desired ℓ_p norm. Informally, this means that ℒ has "large" minimum distance λ_1^{(p)}(ℒ), i.e., there are no "short" nonzero vectors, but ℒ has many vectors "close" to the target t. More precisely, we want λ_1^{(p)}(ℒ) ≥ r/α and N_p(ℒ, r, t) ≥ 2^{Ω(n)} for some relative distance α, where

    N_p(ℒ, r, t) := |{v ∈ ℒ : ‖v − t‖_p ≤ r}|

denotes the number of lattice points within distance r of t.

Micciancio [22] constructed locally dense lattices with relative distance approaching 2^{-1/p} in the ℓ_p norm (for every finite p), and used them to prove the NP-hardness of γ-approximate SVP in ℓ_p for any γ < 2^{1/p}. Subsequently, Liu, Lyubashevsky, and Micciancio [19] used these lattices to prove the NP-hardness of BDD in ℓ_p for any relative distance α > 2^{-1/p}. However, these works observed that the relative distance depends on p in the opposite way from what one might expect: as p grows, so does 2^{-1/p}, hence the associated NP-hard SVP approximation factors and BDD relative distances worsen. Yet using norm embeddings, it can be shown that ℓ_2 is essentially the "easiest" ℓ_p norm for lattice problems [27], so hardness in ℓ_2 implies hardness in ℓ_p (up to an arbitrarily small loss in approximation factor). Therefore, the locally dense lattices from [22] do not seem to provide any benefits for p > 2 over p = 2, where the relative distance approaches 1/√2. In addition, the rank of these lattices is a large polynomial in the relevant parameter, so they are not suitable for proving fine-grained hardness.¹

¹We mention that Khot [16] gave a different construction of locally dense lattices with other useful properties, but their relative distance is no smaller than that of Micciancio's construction in any ℓ_p norm, and their rank is also a large polynomial in the relevant parameter.

Local density via sparsification.

More recently, Aggarwal and Stephens-Davidowitz [5] (building on [10]) proved fine-grained hardness for exact SVP in ℓ_p norms, via locally dense lattices obtained in a different way. Because they target exact SVP, it suffices to have local density for relative distance α = 1, but for fine-grained hardness they need 2^{Ω(n)} close vectors, preferably with a large hidden constant (which determines the rank efficiency of the reduction). Following [21, 12], they start with the integer lattice ℤ^n and the all-1/2s target vector t = (1/2, …, 1/2). Clearly, there are 2^n lattice vectors, namely those with all coordinates in {0, 1}, at distance n^{1/p}/2 from t in the ℓ_p norm, but the minimum distance of the lattice is only 1, so the relative distance of the "close" vectors is n^{1/p}/2, which is far too large.
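As a quick sanity check of these counts (an illustrative script of our own, not part of the paper; the helper lp_dist is ours), one can enumerate a small box of integer vectors and verify that exactly the 2^n binary vectors lie within ℓ_p distance n^{1/p}/2 of the all-1/2s target, while the minimum distance of ℤ^n is 1:

    # Illustrative check: count integer vectors within l_p distance n^{1/p}/2
    # of the all-1/2s target, for a small n. Exactly the {0,1}-vectors qualify.
    from itertools import product

    def lp_dist(x, y, p):
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

    n, p = 8, 3
    t = [0.5] * n
    radius = n ** (1.0 / p) / 2     # the "close" distance n^{1/p}/2
    # A small box suffices here: any coordinate outside {-1, 0, 1, 2} already
    # contributes more than radius^p to the p-th power of the distance.
    count = sum(1 for v in product(range(-1, 3), repeat=n)
                if lp_dist(v, t, p) <= radius + 1e-9)
    print(count, 2 ** n)            # both equal 256 for n = 8
    # The minimum distance of Z^n is 1 (e.g., a standard unit vector), so the
    # relative distance of these close vectors is radius / 1 = n^{1/p}/2.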

To improve the relative distance, they increase the minimum distance to at least n^{1/p}/2 using the elegant technique of random sparsification, which is implicit in [12] and was first used for proving NP-hardness of approximate SVP in [17, 16]. The idea is to upper-bound the number of "short" lattice points of length at most n^{1/p}/2, by some N. Then, by taking a random sublattice ℒ' ⊆ ℤ^n of determinant (index) q slightly larger than N, with noticeable probability none of the "short" nonzero vectors will be included in ℒ', whereas roughly a 1/q fraction of the 2^n vectors "close" to t will be in ℒ'. So, as long as 2^n ≫ N, there are sufficiently many lattice vectors at the desired relative distance from t.

Bounds on the number of such short integer vectors were given by Mazo and Odlyzko [21], by a simple but powerful technique using the theta function Θ_p(τ) := Σ_{z ∈ ℤ} exp(−τ·|z|^p). They showed (see Proposition 1) that

    N_p(ℤ^n, n^{1/p}/2, 0) ≤ min_{τ > 0} exp(τ·n/2^p) · Θ_p(τ)^n = (min_{τ > 0} exp(τ/2^p) · Θ_p(τ))^n ,   (1)

where the equality is by pulling the nth power out of the minimum (the expression being minimized is the nth power of exp(τ/2^p)·Θ_p(τ)). So, Aggarwal and Stephens-Davidowitz need min_{τ > 0} exp(τ/2^p)·Θ_p(τ) < 2, and it turns out that this is the case for every p larger than approximately 2.14. (They also deal with smaller p by using a different target point t.)
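To get a feel for when this condition holds, the following short script (our own illustration; the helper names theta and mo_bound are ours, and SciPy is assumed to be available) numerically evaluates min_{τ>0} exp(τ/2^p)·Θ_p(τ) for a few values of p. Values below 2 mean that the Mazo-Odlyzko upper bound on the number of integer points of ℓ_p norm at most n^{1/p}/2 is exponentially smaller than 2^n:

    # Numerically evaluate min_{tau > 0} exp(tau / 2^p) * Theta_p(tau), where
    # Theta_p(tau) = sum_{z in Z} exp(-tau * |z|^p). Illustrative only.
    import math
    from scipy.optimize import minimize_scalar

    def theta(p, tau, terms=200):
        # The series converges extremely fast; 200 terms is far more than enough.
        return 1.0 + 2.0 * sum(math.exp(-tau * z ** p) for z in range(1, terms))

    def mo_bound(p):
        f = lambda tau: math.exp(tau / 2 ** p) * theta(p, tau)
        return minimize_scalar(f, bounds=(1e-3, 60.0), method="bounded").fun

    for p in (2, 2.5, 3, 4, 6, 10):
        print(p, round(mo_bound(p), 4))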

This work: local density for small relative distance.

For the NP- and fine-grained hardness of BDD we use the same basic approach as in [5], but with the different goal of getting local density for as small of a relative distance as we can manage. That is, we still have 2^n integral vectors all at distance n^{1/p}/2 from the target t = (1/2, …, 1/2), but we want to "sparsify away" all the nonzero integral vectors of length less than n^{1/p}/(2α). So, we want the right-hand side of the Mazo-Odlyzko bound (Equation 1) to be at most 2^{cn} for as large of a positive hidden constant c as we can manage. More specifically, for any p and C (which ultimately corresponds to the reduction's rank inefficiency) we can obtain local density of at least 2^{n/C} close vectors at any relative distance greater than the quantity α_{p,C} discussed above.

The value of α_{p,C} is strictly decreasing in both p and C, and for large p and C it drops below the relative distance of 1/√2 approached by the local-density construction of [22] for p = 2 (and also for p > 2 by norm embeddings). This is the source of our improved relative distance for the NP-hardness of BDD in high ℓ_p norms.

We also show that obtaining local density by sparsifying the integer lattice happens to yield a very simple reduction to BDD from the exact version of CVP, which is how we obtain fine-grained hardness. Given a CVP instance consisting of a lattice and a target point, we essentially just take their direct sum with the integer lattice ℤ^m and the all-1/2s target (respectively), then sparsify. (See Lemma 4 and Theorem 5 for details.) Because this results in the (sparsified) locally dense lattice having close vectors all exactly at the threshold of the BDD promise, concatenating the CVP instance either keeps the target within the (slightly weaker) BDD promise, or puts it just outside. This is in contrast to the prior reduction of [19], where the close vectors in the locally dense lattices of [22] are at various distances from the target, hence a reduction from approximate-CVP with a large constant factor is needed to put the target outside the BDD promise. While approximating CVP to within any constant factor is known to be NP-hard [8], no fine-grained hardness is known for approximate CVP, except for factors just slightly larger than one [2].
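The following schematic sketch (our own illustration; the function name direct_sum_with_integer_lattice is hypothetical, NumPy/SciPy are assumed, and the scaling, dimension, and sparsification parameters used in Lemma 4 and Theorem 5 are omitted) shows the basic shape of the transformation: the CVP instance is combined with an integer-lattice block and an all-1/2s target block, and the resulting lattice is then sparsified:

    # Schematic sketch of the direct-sum step of the CVP -> BDD reduction
    # described above (illustrative only; omits scaling and sparsification).
    import numpy as np
    from scipy.linalg import block_diag

    def direct_sum_with_integer_lattice(B, t, m):
        """Return a basis of L(B) (+) Z^m and the target (t, 1/2, ..., 1/2)."""
        B_prime = block_diag(B, np.eye(m))
        t_prime = np.concatenate([t, 0.5 * np.ones(m)])
        return B_prime, t_prime

    # Tiny example usage.
    B = np.array([[2.0, 1.0], [0.0, 3.0]])
    t = np.array([0.9, 1.1])
    B_prime, t_prime = direct_sum_with_integer_lattice(B, t, m=4)
    print(B_prime.shape, t_prime.shape)   # (6, 6) (6,)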

1.3 Discussion and Future Work

Our work raises a number of interesting issues and directions for future research. First, it highlights that there are now two incomparable approaches for obtaining local density in the ℓ_p norm—Micciancio's construction [22], and sparsifying the integer lattice [12, 5]—with each delivering a better relative distance for certain ranges of p. For p below the crossover point p_1 ≈ 4.2773, Micciancio's construction (with norm embeddings from ℓ_2, where applicable) delivers the better relative distance, which approaches min{2^{-1/p}, 1/√2}. Moreover, this is essentially optimal in ℓ_2, where relative distance below 1/√2 is unachievable due to the Rankin bound, which says that in ℓ_2 we can have at most 2n subunit vectors with pairwise distances of √2 or more.

A first question, therefore, is whether relative distance less than 1/√2 can be obtained for all p > 2. We conjecture that this is true, but can only manage to prove it via sparsification for all p > p_1 ≈ 4.2773. More generally, an important open problem is to give a unified local-density construction that subsumes both of the above-mentioned approaches in terms of relative distance, and ideally in rank efficiency as well. In the other direction, another important goal is to give lower bounds on the relative distance in general ℓ_p norms. Apart from the Rankin bound, the only bound we are aware of is the trivial one of 1/2 implied by the triangle inequality, which is essentially tight for ℓ_1 and tight for ℓ_∞ (as shown by [22] and our work, respectively).

More broadly, for the BDD relative distance parameter α there are three regimes of interest: the local-density regime (the relative distances achievable by known locally dense lattices), where we know how to prove NP-hardness; the unique-decoding regime α < 1/2; and (at least in some ℓ_p norms, including ℓ_2) the intermediate regime between them. It would be very interesting, and would seem to require new techniques, to show NP-hardness outside the local-density regime. One potential route would be to devise a gap amplification technique for BDD, analogous to how SVP has been proved to be NP-hard to approximate to within any constant factor [16, 14, 25]. Gap amplification may also be interesting in the absence of NP-hardness, e.g., for the inverse-polynomial relative distances used in cryptography. Currently, the only efficient gap amplification we are aware of is a modest one that decreases the relative distance by a small constant factor [20].

A final interesting research direction is related to the unique Shortest Vector Problem (uSVP), where the goal is to find a shortest nonzero vector v in a given lattice, under the promise that it is unique (up to sign). More generally, γ-approximate uSVP has the promise that all lattice vectors not parallel to v are at least a γ factor as long. It is known that exact uSVP is NP-hard in ℓ_2 [18], and by known reductions it is straightforward to show the NP-hardness of γ-approximate uSVP in ℓ_2 for γ = 1 + 1/poly(n). Can recent techniques help to prove NP-hardness of γ-approximate uSVP, for some constant γ > 1, in ℓ_p for some finite p, or specifically for p = 2? Do NP-hard approximation factors for uSVP grow smoothly with p?

Acknowledgments.

We thank Noah Stephens-Davidowitz for sharing his plot-generating code from [5] with us.

2 Preliminaries

For any positive integer q, we identify the quotient group ℤ_q = ℤ/qℤ with some set of distinguished representatives, e.g., {0, 1, …, q−1}. Let B^+ denote the Moore-Penrose pseudoinverse of a real-valued matrix B with full column rank. Observe that B^+ v is the unique coefficient vector c with respect to B of any v = Bc in the column span of B.

2.1 Problems with Preprocessing

In addition to ordinary computational problems, we are also interested in (promise) problems with preprocessing. In such a problem, an instance is comprised of a “preprocessing” part  and a “query” part , and an algorithm is allowed to perform unbounded computation on the preprocessing part before receiving the query part.

Formally, a preprocessing problem R ⊆ I × S is a relation of instance-solution pairs, where I is the set of problem instances and R(x) ⊆ S is the set of solutions for any particular instance x ∈ I. If every instance has exactly one solution that is either YES or NO, then R is called a decision problem.

Definition 1

A preprocessing algorithm is a pair (P, Q) where P is a (possibly randomized) function representing potentially unbounded computation, and Q is an algorithm. The execution of (P, Q) on an input (x_P, x_Q) proceeds in two phases:

  • first, in the preprocessing phase, P takes x_P as input and produces some preprocessed output w = P(x_P);

  • then, in the query phase, Q takes both w and x_Q as input and produces some ultimate output.

The running time T of the algorithm is defined to be the time used in the query phase alone, and is considered as a function of the total input length |x_P| + |x_Q|. The length of the preprocessed output is defined as |P(x_P)|, and is also considered as a function of the total input length. Note that without loss of generality, the length of the preprocessed output is at most the running time.

If (P, Q) is deterministic, we say that it solves preprocessing problem R if Q(P(x_P), x_Q) ∈ R(x_P, x_Q) for every instance (x_P, x_Q) ∈ I. If (P, Q) is potentially randomized, we say that it solves R if

    Pr[Q(P(x_P), x_Q) ∈ R(x_P, x_Q)] ≥ 2/3

for every instance (x_P, x_Q) ∈ I, where the probability is taken over the random coins of both P and Q.²

²Note that it could be the case that some preprocessed outputs fail to make the query algorithm output a correct answer on some, or even all, query inputs.

As shown below using a routine quantifier-swapping argument (as in Adleman’s Theorem [1]), it turns out that for relations and decision problems, any randomized preprocessing algorithm can be derandomized if the length of the query input  is polynomial in the length of the preprocessing input . So for convenience, in this work we allow for randomized algorithms, only switching to deterministic ones for our ultimate hardness theorems.

Lemma 1

Let preprocessing problem R be an NP relation or a decision problem for which |x_Q| ≤ poly(|x_P|) for all instances (x_P, x_Q) ∈ I. If R has a randomized T-time algorithm, then it has a deterministic poly(|x_P|)·T-time algorithm with poly(|x_P|)·T-length preprocessed output.

Proof

Let ℓ be a polynomial for which |x_Q| ≤ ℓ(|x_P|) for all instances (x_P, x_Q) ∈ I. Let (P, Q) be a randomized T-time algorithm for R, which by standard repetition techniques we can assume errs on any fixed instance with probability strictly smaller than the reciprocal of the number of possible query inputs (of which there are at most 2^{O(ℓ(|x_P|))}), with only a poly(|x_P|)-factor overhead in the running time and preprocessed output length. Fix some arbitrary x_P. Then by the union bound over all query inputs x_Q and the hypothesis, we have

    Pr[∃ x_Q : Q(P(x_P), x_Q) ∉ R(x_P, x_Q)] < 1 .

So, there exist coins for P and Q for which Q(P(x_P), x_Q) ∈ R(x_P, x_Q) for all x_Q. By fixing these coins we make P a deterministic function of x_P, and we include the coins for Q along with the preprocessed output P(x_P), thus making Q deterministic as well. The resulting deterministic algorithm solves R with the claimed resources, as needed.

Reductions for preprocessing problems.

We need the following notions of reductions for preprocessing problems. The following generalizes Turing reductions and Cook reductions (i.e., polynomial-time Turing reductions).

Definition 2

A Turing reduction from one preprocessing problem R to another one R' is a pair of oracle algorithms (P, Q) satisfying the following properties: P is a (potentially randomized) function with access to an oracle for R', whose output length is polynomial in its input length; Q is an algorithm with access to an oracle for R'; and whenever the oracle solves problem R', the pair (P, Q) solves problem R. Additionally, it is a Cook reduction if Q runs in time polynomial in the total input length of x_P and x_Q.

Similarly, the following generalizes mapping reductions and Karp reductions (i.e., polynomial-time mapping reductions) for decision problems.

Definition 3

A mapping reduction from one preprocessing decision problem R to another one R' is a pair (P, Q) satisfying the following properties: P is a deterministic function whose output length is polynomial in its input length; Q is a deterministic algorithm; and for any YES (respectively, NO) instance (x_P, x_Q) of R, the output pair (x'_P, x'_Q) is a YES (resp., NO) instance of R', where x'_P and x'_Q are defined as follows:

  • first, P takes x_P as input and outputs some (x'_P, w), where w is some "internal" preprocessed output;

  • then, Q takes (w, x_Q) as input and outputs some x'_Q.

Additionally, it is a Karp reduction if Q runs in time polynomial in the total input length of x_P and x_Q.

It is straightforward to see that if R mapping reduces to R' via (P, Q), and there is a deterministic polynomial-time preprocessing algorithm (P', Q') that solves R', then there is also one that solves R, which works as follows (see the sketch after this list):

  1. the preprocessing algorithm, given a preprocessing input x_P, first computes (x'_P, w) = P(x_P), then computes and outputs (w, w') where w' = P'(x'_P);

  2. the query algorithm, given (w, w') and a query input x_Q, computes x'_Q = Q(w, x_Q) and finally outputs Q'(w', x'_Q).
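A minimal sketch of this composition (our own illustration; the function names are hypothetical and the inputs are treated as opaque values) just mirrors the two phases described above:

    # Composing a mapping reduction (P, Q) from R to R' with a preprocessing
    # algorithm (P_prime, Q_prime) that solves R'. Illustrative sketch only.
    def compose(P, Q, P_prime, Q_prime):
        def preprocess(x_P):
            x_P_prime, w = P(x_P)             # reduction maps the preprocessing input
            w_prime = P_prime(x_P_prime)       # solver's preprocessing on the mapped instance
            return (w, w_prime)

        def query(state, x_Q):
            w, w_prime = state
            x_Q_prime = Q(w, x_Q)              # reduction maps the query input
            return Q_prime(w_prime, x_Q_prime)  # answer using the solver for R'

        return preprocess, query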

2.2 Lattices

A lattice ℒ is the set of all integer linear combinations of some linearly independent vectors b_1, …, b_n ∈ ℝ^d. It is convenient to arrange these vectors as the columns of a matrix. Accordingly, we define a basis to be a matrix B ∈ ℝ^{d×n} with linearly independent columns, and the lattice generated by basis B as

    ℒ(B) := {Bc : c ∈ ℤ^n} .
Let B_p^d := {x ∈ ℝ^d : ‖x‖_p ≤ 1} denote the centered unit ℓ_p ball in d dimensions. Given a lattice ℒ of rank n, for i = 1, …, n let

    λ_i^{(p)}(ℒ) := min{r > 0 : dim(span(ℒ ∩ r·B_p^d)) ≥ i}

denote the ith successive minimum of ℒ with respect to the ℓ_p norm.

We denote the distance of a vector t ∈ ℝ^d to a lattice ℒ as

    dist_p(t, ℒ) := min_{v ∈ ℒ} ‖t − v‖_p .
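As a concrete (if naive) illustration of these definitions (our own example, with hypothetical helper names and a brute-force search that is only feasible for tiny instances), one can approximate λ_1^{(p)} and dist_p by enumerating bounded coefficient vectors:

    # Brute-force lambda_1^{(p)} and dist_p for a small-rank lattice. This
    # enumeration over bounded coefficients is exponential and is only meant
    # to make the definitions concrete for tiny examples.
    from itertools import product
    import numpy as np

    def lp_norm(x, p):
        return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

    def lambda1(B, p, bound=5):
        return min(lp_norm(B @ np.array(c), p)
                   for c in product(range(-bound, bound + 1), repeat=B.shape[1]) if any(c))

    def dist(B, t, p, bound=5):
        return min(lp_norm(B @ np.array(c) - t, p)
                   for c in product(range(-bound, bound + 1), repeat=B.shape[1]))

    B = np.array([[2.0, 1.0], [0.0, 3.0]])   # basis of a rank-2 lattice in R^2
    t = np.array([0.9, 1.1])
    print(lambda1(B, 2), dist(B, t, 2))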

2.3 Bounded Distance Decoding (with Preprocessing)

The primary computational problem that we study in this work is the Bounded Distance Decoding Problem (BDD), which is a version of the Closest Vector Problem (CVP) in which the target vector is promised to be relatively close to the lattice.

Definition 4

For p ∈ [1, ∞] and α > 0, the α-Bounded Distance Decoding problem in the ℓ_p norm (BDD_{p,α}) is the (search) promise problem defined as follows. The input is (a basis B of) a rank-n lattice ℒ and a target vector t satisfying dist_p(t, ℒ) ≤ α · λ_1^{(p)}(ℒ). The goal is to output a lattice vector v ∈ ℒ that satisfies ‖t − v‖_p ≤ α · λ_1^{(p)}(ℒ).

The preprocessing (search) promise problem BDDP_{p,α} is defined analogously, where the preprocessing input is (a basis of) the lattice, and the query input is the target t.

We note that in some works, BDD_{p,α} is defined to have the goal of finding a lattice vector v such that ‖t − v‖_p = dist_p(t, ℒ), i.e., a closest lattice vector to t. This formulation is clearly no easier than the one defined above. So, our hardness theorems, which are proved for the definition above, immediately apply to the alternative formulation as well.

We also remark that for α < 1/2, the promise ensures that there is a unique vector v ∈ ℒ satisfying ‖t − v‖_p ≤ α · λ_1^{(p)}(ℒ). However, BDD_{p,α} is still well defined for α ≥ 1/2, i.e., above the unique-decoding radius. As in prior work, our hardness results for BDD_{p,α} are limited to this regime.

To the best of our knowledge, essentially the only previous study of the NP-hardness of BDD_{p,α} is due to [19], which showed the following result.³

³Additionally, [11] gave a reduction that implies hardness of BDD_{2,α}, but only for some relatively large constant α. Also, [26, 20] gave reductions to BDD_{p,α} from other lattice problems, but only for parameters for which those problems are not known to be NP-hard.

Theorem 2 ([19, Corollaries 1 and 2])

For any p ∈ [1, ∞) and constant α > 2^{-1/p}, there is no polynomial-time algorithm for BDD_{p,α} (respectively, BDD_{p,α} with preprocessing) unless NP ⊆ RP (resp., unless NP ⊆ P/poly).

Regev and Rosen [27] used norm embeddings to show that almost any lattice problem is at least as hard in the ℓ_p norm, for any p ∈ [1, ∞], as it is in the ℓ_2 norm, up to an arbitrarily small constant-factor loss in the approximation factor. In other words, they essentially showed that ℓ_2 is the "easiest" norm for lattice problems. (In addition, their reduction preserves the rank of the lattice.) Based on this, [19] observed the following corollary, which is an improvement on the factor 2^{-1/p} from Theorem 2 for all p > 2.

Theorem 3 ([19, Corollary 3])

For any p ∈ [1, ∞] and constant α > 1/√2, there is no polynomial-time algorithm for BDD_{p,α} (respectively, BDD_{p,α} with preprocessing) unless NP ⊆ RP (resp., unless NP ⊆ P/poly).

Figure 1 shows the bounds from Theorems 3 and 2 together with the new bounds achieved in this work, as a function of p.

2.4 Sparsification

A powerful idea, first used in the context of hardness proofs for lattice problems in [17], is that of random lattice sparsification. Given a lattice ℒ with basis B, we can construct a random sublattice ℒ' ⊆ ℒ as

    ℒ' := {x ∈ ℒ(B) : ⟨z, B^+ x⟩ ≡ 0 (mod q)}

for uniformly random z ∈ ℤ_q^n, where q is a suitably chosen prime.
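The following small script (our own illustration) mirrors this construction for ℒ = ℤ^n, where a lattice point's coefficient vector is the point itself; for a general lattice ℒ(B) one would apply the same test to B^+ x instead:

    # Illustrative sparsification of Z^n: keep x iff <z, x> = 0 (mod q),
    # for a uniformly random z in Z_q^n and a prime q.
    import random
    from itertools import product

    n, q = 10, 11
    z = [random.randrange(q) for _ in range(n)]

    def survives(x):
        return sum(zi * xi for zi, xi in zip(z, x)) % q == 0

    # Each fixed point that is nonzero mod q survives with probability exactly 1/q,
    # so roughly 2^n / q of the binary vectors survive (the zero vector always does).
    kept = sum(1 for x in product((0, 1), repeat=n) if survives(x))
    print(kept, round(2 ** n / q, 1))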

Lemma 2

Let q be a prime and let x_1, …, x_N ∈ ℤ_q^n ∖ {0} be arbitrary. Then

    Pr_{z ← ℤ_q^n}[∃ i ∈ [N] : ⟨z, x_i⟩ ≡ 0 (mod q)] ≤ N/q .
Proof

We have Pr_{z ← ℤ_q^n}[⟨z, x_i⟩ ≡ 0 (mod q)] = 1/q for each i ∈ [N], and the claim follows by the union bound.

The following corollary is immediate.

Corollary 1

Let q be a prime and ℒ be a lattice of rank n with basis B. Then for every p ∈ [1, ∞] and every r < q · λ_1^{(p)}(ℒ),

    Pr_{z ← ℤ_q^n}[λ_1^{(p)}(ℒ') > r] ≥ 1 − N_p(ℒ, r, 0)/q ,

where ℒ' := {x ∈ ℒ(B) : ⟨z, B^+ x⟩ ≡ 0 (mod q)}.

Theorem 4 ([30, Theorem 3.1])

For any lattice ℒ of rank n with basis B, prime q, and lattice vectors x, y_1, …, y_N ∈ ℒ such that B^+ x ≢ 0 (mod q) and B^+ x ≢ B^+ y_i (mod q) for all i, we have

    1/q − N/q² ≤ Pr_{z ← ℤ_q^n}[⟨z, B^+ x⟩ ≡ 0 (mod q) and ⟨z, B^+ y_i⟩ ≢ 0 (mod q) for all i] ≤ 1/q .

We will use only the lower bound from Theorem 4, but we note that the upper bound is relatively tight for N ≪ q.

Corollary 2

For any p ∈ [1, ∞] and r > 0, lattice ℒ of rank n with basis B, vector t, prime q, and lattice vectors v_1, …, v_N ∈ ℒ such that ‖v_i − t‖_p ≤ r for all i and such that all the coefficient vectors B^+ v_i mod q are distinct, we have

    Pr_{z ← ℤ_q^n}[dist_p(t, ℒ') ≤ r] ≥ N/q − N²/q² ,

where ℒ' := {x ∈ ℒ(B) : ⟨z, B^+ x⟩ ≡ 0 (mod q)}.

Proof

Observe that for each i ∈ [N], the events

    E_i := {⟨z, B^+ v_i⟩ ≡ 0 (mod q) and ⟨z, B^+ v_j⟩ ≢ 0 (mod q) for all j ≠ i}

are disjoint, and by invoking Theorem 4 with x = v_i and the y_j being the remaining v_j for j ≠ i, we have

    Pr[E_i] ≥ 1/q − (N−1)/q² ≥ 1/q − N/q² .

Also observe that if E_i occurs, then v_i ∈ ℒ' (and also v_j ∉ ℒ' for all j ≠ i, but we will not need this). Therefore,

    Pr[dist_p(t, ℒ') ≤ r] ≥ Pr[⋃_i E_i] = Σ_i Pr[E_i] .

So, the probability in the left-hand side of the claim is at least

    N · (1/q − N/q²) = N/q − N²/q² .
2.5 Counting Lattice Points in a Ball

Following [5], for any discrete set A of points (e.g., a lattice, or a subset thereof), we denote the number of points in A contained in the closed and open (respectively) ℓ_p ball of radius r centered at a point t as

    N_p(A, r, t) := |{x ∈ A : ‖x − t‖_p ≤ r}| ,   (2)
    N_p^o(A, r, t) := |{x ∈ A : ‖x − t‖_p < r}| .   (3)

Clearly, N_p^o(A, r, t) ≤ N_p(A, r, t).

For p ∈ [1, ∞) and τ > 0, define the theta function

    Θ_p(τ) := Σ_{z ∈ ℤ} exp(−τ · |z|^p) .
We use the following upper bound due to Mazo and Odlyzko [21] on the number of short vectors in the integer lattice. We include its short proof for completeness.

Proposition 1 ([21])

For any p ∈ [1, ∞), positive integer n, and r > 0,

    N_p(ℤ^n, r, 0) ≤ min_{τ > 0} exp(τ · r^p) · Θ_p(τ)^n .

Proof

For any τ > 0 we have

    Θ_p(τ)^n = Σ_{z ∈ ℤ^n} exp(−τ · ‖z‖_p^p) ≥ Σ_{z ∈ ℤ^n : ‖z‖_p ≤ r} exp(−τ · ‖z‖_p^p) ≥ N_p(ℤ^n, r, 0) · exp(−τ · r^p) .

The result follows by rearranging and taking the minimum over all τ > 0.

2.6 Hardness Assumptions

We recall the Exponential Time Hypothesis (ETH) of Impagliazzo and Paturi [15], and several of its variants. These hypotheses make stronger assumptions about the complexity of the k-SAT problem than the assumption P ≠ NP, and serve as highly useful tools for studying the fine-grained complexity of hard computational problems. Indeed, we will show that strong fine-grained hardness for BDD follows from these hypotheses.

Definition 5

The (randomized) Exponential Time Hypothesis ((randomized) ETH) asserts that there is no (randomized) 2^{o(n)}-time algorithm for 3-SAT on n variables.

Definition 6

The (randomized) Strong Exponential Time Hypothesis ((randomized) SETH) asserts that for every ε > 0 there exists k = k(ε) such that there is no (randomized) 2^{(1-ε)n}-time algorithm for k-SAT on n variables.

For proving hardness of lattice problems with preprocessing, we define (Max-)k-SAT with preprocessing as follows. The preprocessing input is a size parameter n, encoded in unary. The query input is a k-SAT formula φ with n variables and m (distinct) clauses, together with a threshold W in the case of Max-k-SAT. For k-SAT, it is a YES instance if φ is satisfiable, and is a NO instance otherwise. For Max-k-SAT, it is a YES instance if there exists an assignment to the variables of φ that satisfies at least W of its clauses, and is a NO instance otherwise.

Observe that because the preprocessing input is just n, a preprocessing algorithm for (Max-)k-SAT with preprocessing is equivalent to a (non-uniform) family of circuits for the problem without preprocessing. Also, for any fixed k, because there are only O(n^k) possible clauses on n variables, the length of the query input for (Max-)k-SAT instances having preprocessing input n is poly(n), so we get the following corollary of Lemma 1.

Corollary 3

If (Max-)k-SAT with preprocessing has a randomized T-time algorithm, then it has a deterministic poly(n)·T-time algorithm using poly(n)·T-length preprocessed output.

Following, e.g., [29, 2], we also define non-uniform variants of ETH and SETH, which deal with the complexity of k-SAT with preprocessing. More precisely, non-uniform ETH asserts that no family of 2^{o(n)}-size circuits solves 3-SAT on n variables (equivalently, 3-SAT with preprocessing does not have a 2^{o(n)}-time algorithm), and non-uniform SETH asserts that for every ε > 0 there exists k such that no family of circuits of size 2^{(1-ε)n} solves k-SAT on n variables (equivalently, k-SAT with preprocessing does not have a 2^{(1-ε)n}-time algorithm). These hypotheses are useful for analyzing the fine-grained complexity of preprocessing problems.

One might additionally consider "randomized non-uniform" versions of (S)ETH. However, Corollary 3 says that a randomized algorithm for (Max-)k-SAT with preprocessing can be derandomized with only polynomial overhead, so randomized non-uniform (S)ETH is equivalent to (deterministic) non-uniform (S)ETH, and we therefore consider only the latter.

Finally, we remark that one can define weaker versions of randomized or non-uniform (S)ETH with Max-3-SAT (respectively, Max-k-SAT) in place of 3-SAT (resp., k-SAT). Many of our results hold even under these weaker hypotheses. In particular, the derandomization result in Corollary 3 applies to both k-SAT and Max-k-SAT.

3 Hardness of BDD

In this section, we present our main result by giving a reduction from a known-hard variant of the Closest Vector Problem (CVP) to BDD_{p,α}. We perform this reduction in two main steps.

  1. First, in Section 3.1 we define a variant of BDD_{p,α} that counts "short" and "close" lattice vectors (Definition 7). Essentially, an instance of this problem is a lattice that may have up to a bounded number of "short" nonzero vectors, of ℓ_p norm at most some threshold, and a target vector that is "close" to—i.e., within an α factor of that threshold of—a much larger number of lattice vectors. (The presence of short vectors prevents this from being a true BDD_{p,α} instance.) We then give a reduction, for suitable parameters, from this variant to BDD_{p,α} itself, using sparsification.

  2. Then, in Section 3.2 we reduce from exact CVP_p to this variant for suitable parameters, whenever α is sufficiently large as a function of p (and the desired rank efficiency), based on analysis given in Section 3.3 and Lemma 5.

3.1 From the BDD Variant to BDD

We start by defining a special decision variant of BDD. Essentially, the input is a lattice and a target vector, and the problem is to distinguish between the case where there are few “short” lattice vectors but many lattice vectors “close” to the target, and the case where the target is not close to the lattice. There is a gap factor between the “close” and “short” distances, and for technical reasons we count only those “close” vectors having binary coefficients with respect to the given input basis.

Definition 7

Let , , and . An instance of the decision promise problem - is a lattice basis , a distance , and a target .

  • It is a YES instance if and .

  • It is a NO instance if dist_p(t, ℒ(B)) > d.

The search version is: given a YES instance (B, d, t), find a lattice vector v ∈ ℒ(B) such that ‖v − t‖_p ≤ d.

The preprocessing search and decision variants are defined analogously, where the preprocessing input is the basis B and the distance d, and the query input is the target t.

We stress that in the preprocessing problems, the distance d is part of the preprocessing input; this makes the problem no harder than a variant where d is part of the query input. So, our hardness results for the above definition immediately apply to that variant as well. However, our reduction from the preprocessing version of the problem (given in Lemma 3) critically relies on the fact that d is part of the preprocessing input.

Clearly, there is a trivial reduction from the decision version of this BDD variant to its search version (and similarly for the preprocessing problems): just call the oracle for the search problem and test whether it returns a lattice vector within distance d of the target. So, to obtain more general results, our reductions involving the variant will be from the search version, and to the decision version.

Reducing to BDD.

We next observe that for suitable α' ≥ α, there is almost a trivial reduction from the BDD variant to ordinary BDD_{p,α'}, because YES instances of the former satisfy the BDD_{p,α'} promise. (See below for the easy proof.) The only subtlety is that we want the BDD_{p,α'} oracle to return a lattice vector that is within distance d of the target; recall that the definition of BDD_{p,α'} only guarantees distance α' · λ_1^{(p)}(ℒ). This issue is easily resolved by modifying the lattice to upper bound its minimum distance by d/α', which increases the lattice's rank by one. (For the alternative definition of BDD described after Definition 4, the trivial reduction works, and no increase in the rank is needed.)
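A minimal sketch of the rank-increasing modification just described (our own illustration; the function name and the length bound passed in are placeholders, not the precise statement of Lemma 3): adjoin one fresh coordinate containing a single basis vector of the desired length, which caps the minimum distance while leaving the target's distance to the lattice unchanged.

    # Cap the minimum distance of L(B) by `bound` by adjoining one new basis
    # vector of l_p norm exactly `bound` in a fresh coordinate. The target gets
    # a 0 in the new coordinate, so dist_p(t, L) is unchanged. Illustrative only.
    import numpy as np

    def cap_minimum_distance(B, t, bound):
        d, n = B.shape
        B_prime = np.block([[B, np.zeros((d, 1))],
                            [np.zeros((1, n)), bound * np.ones((1, 1))]])
        t_prime = np.append(t, 0.0)
        return B_prime, t_prime     # rank increases from n to n + 1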

Lemma 3

For any ,