Fine-grained hardness of CVP(P) — Everything that we can prove (and nothing else)

11/06/2019
by Divesh Aggarwal, et al.

We show that the Closest Vector Problem in the ℓ_p norm (CVP_p) cannot be solved in 2^((1-ε)n) time for all p ∉ 2Z and ε > 0 (assuming SETH). In fact, we show that the same holds even for (1) the approximate version of the problem (assuming a gap version of SETH); and (2) CVP_p with preprocessing, in which we are allowed arbitrary advice about the lattice (assuming a non-uniform version of SETH). For "plain" CVP_p, the same hardness result was shown in [Bennett, Golovnev, and Stephens-Davidowitz FOCS 2017] for all but finitely many p ∉ 2Z, where the set of exceptions depended on ε and was not explicit. For the approximate and preprocessing problems, only very weak bounds were known prior to this work. We also show that the restriction to p ∉ 2Z is in some sense inherent. In particular, we show that no "natural" reduction can rule out even a 2^(3n/4)-time algorithm for CVP_2 under SETH. For this, we prove that the possible sets of closest lattice vectors to a target in the ℓ_2 norm have quite rigid structure, which essentially prevents them from being as expressive as 3-CNFs.


1 Introduction

A lattice L is the set of all integer linear combinations of linearly independent basis vectors b_1, …, b_n ∈ R^d,

L := { z_1 b_1 + ⋯ + z_n b_n : z_1, …, z_n ∈ Z } .

We call n the rank of the lattice and d the dimension or the ambient dimension of the lattice.

The two most important computational problems on lattices are the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Given a basis for a lattice L, SVP asks us to compute the minimal length of a non-zero vector in L, and CVP asks us to compute the distance from some target point t to the lattice. Typically, we define length and distance in terms of the ℓ_p norm for some 1 ≤ p ≤ ∞, given by

‖x‖_p := (|x_1|^p + ⋯ + |x_d|^p)^(1/p)

for finite p and

‖x‖_∞ := max_{1 ≤ i ≤ d} |x_i| .

In particular, the case where p = 2 corresponds to the Euclidean norm, which is the most important and best-studied in this context. We write SVP_p and CVP_p for the respective problems in the ℓ_p norm. CVP_p is known to be at least as hard as SVP_p (in any norm, under an efficient reduction that preserves the rank and approximation factor) [GMSS99] and appears to be significantly harder.
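To make the definitions concrete, here is a tiny brute-force illustration (not from the paper; the coefficient bound is a simplifying assumption) that computes a closest lattice vector to a target in the ℓ_p norm by exhaustive search over small integer coefficients:

    # Brute-force CVP_p for intuition only: search integer coefficient vectors
    # z in a small box and return the lattice vector Bz closest to t in l_p.
    import itertools
    import numpy as np

    def cvp_brute_force(B, t, p, coeff_bound=3):
        best_v, best_d = None, np.inf
        for z in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=B.shape[1]):
            v = B @ np.array(z)
            d = np.linalg.norm(v - t, ord=p)  # supports any p >= 1, incl. np.inf
            if d < best_d:
                best_v, best_d = v, d
        return best_v, best_d

    B = np.array([[2.0, 1.0], [0.0, 2.0]])  # basis vectors as columns
    print(cvp_brute_force(B, np.array([0.9, 1.4]), p=2))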

In the past decade, these problems have taken on still more importance, as their hardness underlies the security of most post-quantum public-key cryptography schemes, while the schemes that are currently used for most practical applications are not secure against quantum computers. Recent rapid progress in quantum computing (e.g., [A19]) has therefore created a rush to switch to lattice-based cryptography in many applications. Indeed, for this reason, lattice-based cryptography is in the process of standardization for widespread use [NIS16].

Given the obvious importance of these problems, they have been studied quite extensively. However, in spite of much effort, algorithmic progress has stalled for CVP. The fastest algorithm for CVP_2 runs in 2^(n+o(n)) time [ADS15]—even for arbitrarily large constant approximation factors—and there are fundamental reasons that our current techniques cannot do better.¹ For arbitrary p, the fastest known exact algorithm is still Kannan's n^(O(n))-time algorithm from over thirty years ago [Kan87]. For constant-factor approximation and arbitrary p, Blömer and Naewe [BN09] gave a 2^(O(d))-time algorithm, which was later improved to 2^(O(n)) time by Dadush [Dad12], and Aggarwal and Mukhopadhyay [AM18] gave a faster algorithm for the special case p = ∞.

¹ There are only two algorithms that solve CVP_2 in its exact form in 2^(O(n)) time [MV13, ADS15], and both of them involve enumeration over all cosets of L modulo 2L. (These cosets arise naturally in this context, and they play a large role in Section 6.) There are other approaches that achieve constant-factor approximation in 2^(O(n)) time, but the constant in the exponent is significantly larger. The situation for SVP_2 is far more dynamic. See, e.g., [BDGL16, AS18b].

While we have known for decades that CVP_p is NP-hard [vEB81], even to approximate [DKRS03], such coarse hardness results are insufficient to rule out, e.g., a 2^(εn)-time algorithm for an arbitrarily small constant ε > 0, or even a 2^(o(n))-time algorithm. If such algorithms were found, they would have innumerable positive applications, but they would also render current lattice-based cryptographic constructions broken in practice. Even a small improvement over the current 2^n running time would have major consequences.

In [BGS17], we therefore initiated the study of the fine-grained hardness of CVP in an effort to explain this lack of algorithmic progress and to give evidence for the quantitative security of lattice-based cryptography. We showed that there is no 2^((1−ε)n)-time algorithm for CVP_p assuming the Strong Exponential Time Hypothesis (SETH, a widely believed conjecture in complexity theory, defined in Section 2), but we were only able to prove this lower bound explicitly for odd integers p (and p = ∞). For other values of p, our result was much weaker. For every ε > 0, we showed that there are at most finitely many p with a 2^((1−ε)n)-time algorithm for CVP_p (assuming SETH). In particular, for any specific value of p, we could not rule out such an algorithm. (We did, however, rule out 2^(o(n))-time algorithms for all p, assuming ETH.)

Furthermore, our results were far weaker for two important variants of the problem. First, for the near-exact version of the problem (i.e., the problem of approximating CVP_p up to some constant factor), we were only able to rule out 2^(o(n))-time algorithms. Second, our lower bounds were quite weak for the problem of CVP_p with preprocessing (CVPP_p), an offline-online variant of CVP_p in which an unbounded-time preprocessing algorithm may perform arbitrary preprocessing on the lattice in a way that helps an online query algorithm to find a closest lattice vector to a given target t. In [BGS17], we were only able to rule out a 2^(o(√n))-time algorithm for this problem. It therefore remained plausible that much faster algorithms could exist for CVPP_p than for CVP_p, or for constant-factor approximate CVP_p than for exact CVP_p. Such algorithms would, for example, lead to very strong attacks on certain lattice-based cryptographic schemes.

In follow-up work, we used the main result of [BGS17] to prove strong lower bounds for SVP [AS18a] and also for SIVP [AC19]. However, these works inherited some of the deficiencies described above. Specifically, the strongest hardness results in both works only applied to odd integers p (and p = ∞) and some non-explicit set of additional p.

1.1 Our results

Our first main result is an extension of the main result in [BGS17] to all p except for the even integers, and to CVPP_p and approximate CVP_p. (See Table 1. In the introduction, we informally refer to an "approximate variant of SETH" as Gap-SETH. See Definition 2.7 for a formal definition due to Manurangsi [Man19].)

Theorem 1.1 (Informal).

For every p ∈ [1, ∞] with p ∉ 2Z, there is no 2^((1−ε)n)-time algorithm for CVP_p for any constant ε > 0 unless SETH is false. The same conclusion holds for CVPP_p unless non-uniform SETH is false.

Furthermore, for every p ∈ [1, ∞] with p ∉ 2Z and constant ε > 0, there is no 2^((1−ε)n)-time algorithm for γ-approximate CVP_p for some constant γ = γ(p, ε) > 1 unless Gap-SETH is false.

As in [BGS17], our result is actually a bit stronger than the above. SETH-based hardness only requires a reduction from k-SAT to CVP_p, but we show a reduction from Max-k-SAT, and even from weighted Max-k-SAT.

In fact, we also rule out 2^(o(n))-time algorithms for CVPP_p under a weaker complexity-theoretic assumption: the (non-uniform) Exponential Time Hypothesis. This weaker lower bound under a weaker assumption holds for all p—including even integers p.

Theorem 1.1 also yields immediate similar improvements to the hardness of SVP_p and SIVP_p, i.e., to the results of [AS18a, AC19]. In particular, by the main results in [AC19], the hardness for CVP_p and its approximate variant immediately extends to SIVP_p. The results for SVP_p are rather complicated, as they vary with p in complex ways [AS18a], but our results imply extensions of [AS18a] to more values of p than were known previously. See Appendix A for a complete statement of the result.

The restriction that p is not an even integer is unfortunate, especially because we are most interested in the case when p = 2. But, this seems inherent. (In fact, it is known that ℓ_2 is "the easiest norm" in a certain precise sense [RR06].) Indeed, in [BGS17], we already showed that our specific techniques are insufficient to prove hardness for p ∈ 2Z.

Here, we also rule out a far more general class of techniques for p = 2, which we call "natural reductions." These are reductions with a bijection between witnesses. Specifically, a reduction from a k-SAT formula to CVP_2 over a lattice with basis B is natural if there is a fixed (not necessarily efficient) mapping f from assignments to integer coefficient vectors such that Bf(x) is a closest lattice vector to the target if and only if x is a satisfying assignment (assuming that the formula is satisfiable). We also mention here the fact that natural reductions cannot prove better than 2^n hardness for CVP_p for any 1 < p < ∞. We include a simple proof of this fact in Section 1.3.

Theorem 1.2 (Informal).

There is no natural reduction from 2-SAT on n variables to CVP_2 on a lattice with rank less than 4n/3. In particular, no natural reduction can rule out even a 2^(3n/4)-time algorithm for CVP_2 under SETH.

Furthermore, for any 1 < p < ∞, there is no natural reduction from 2-SAT on n variables to CVP_p on a lattice with rank less than n. In particular, no natural reduction can rule out a 2^n-time algorithm for CVP_p under SETH for any 1 < p < ∞.

Notice that we even rule out reductions from 2-SAT to CVP. To prove SETH-hardness, we would need to show a reduction from k-SAT for all constant k.

Behind (the non-trivial part of) Theorem 1.2 are two new techniques. First is a new result concerning the structure of the closest lattice vectors to a target point in the ℓ_2 norm. Specifically, we show that the structure of the closest vectors is quite rigid modulo 2. Second is a new and tighter proof of Szemerédi's cube lemma for the boolean hypercube. We expect both of these results to be of independent interest.

[Table 1 entries omitted.]

Table 1: A summary of known quantitative upper and lower bounds, under various assumptions, on the complexity of exact and approximate CVP_p and CVPP_p. New results appear in blue (with a star next to the one result that is only novel for some p). Upper bounds for the approximate problems are for any constant approximation factor, while lower bounds are for some small, explicit approximation factor depending on p (and, in some cases, also on ε). The 2^((1−ε)n)-time lower bounds are based on SETH (or Gap-SETH or non-uniform SETH), while the 2^(o(n))-time and 2^(o(√n))-time lower bounds are based on ETH (or Gap-ETH or non-uniform ETH).

1.2 Our reductions

The high-level idea behind our reductions (and those of [BGS17]) is as follows. The reduction is given as input a list of k-clauses C_1, …, C_m on boolean variables x_1, …, x_n, where k is some constant. We wish to construct some basis B and target t such that for any y ∈ {0,1}^n, the distance ‖By − t‖_p is small if and only if y represents an assignment that satisfies all of the C_i.

To that end, for each clause C_i, we wish to find a matrix V_i and target t_i such that ‖V_i y − t_i‖_p is small if and only if y represents an assignment that satisfies C_i. If we could find such matrices, we could take

B := (V_1; V_2; …; V_m; αI_n) ,   t := (t_1; t_2; …; t_m; (α/2)·1) ,    (1)

where the blocks are stacked on top of one another, 1 is the vector whose coordinates are all 1, and α > 0 is a large scalar. Then, ‖By − t‖_p will be small if and only if y corresponds to a satisfying assignment. (By taking α to be sufficiently large, we can guarantee that any closest vectors must be of the form By for y ∈ {0,1}^n.)
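The following schematic sketch (our own notation, not verbatim from the paper; the shift α/2 matches the discussion above) shows how Eq. (1) assembles the final CVP instance from per-clause gadgets:

    # Assemble the basis B and target t of Eq. (1) from per-clause gadgets
    # (V_i, t_i); the block alpha*I_n with target coordinates alpha/2 penalizes
    # any non-binary coefficient vector y.
    import numpy as np

    def assemble_cvp_instance(gadgets, n, alpha):
        """gadgets: list of (V_i, t_i), where each V_i has n columns."""
        B = np.vstack([V for V, _ in gadgets] + [alpha * np.eye(n)])
        t = np.concatenate([ti for _, ti in gadgets] + [np.full(n, alpha / 2)])
        return B, t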

Since {V_i y : y ∈ {0,1}^k} is (the vertex set of) a parallelepiped, and since the most important case (corresponding to k-SAT) is when exactly one point in this set is far from the target and all of the others are close to it, we call such objects isolating parallelepipeds, as we explain below. The difficult step in these reductions is therefore to find isolating parallelepipeds (V_i, t_i).

Finding isolating parallelepipeds.

We say that a parallelepiped given by V ∈ R^(d×k) and shift t ∈ R^d is a (p, k)-isolating parallelepiped if ‖Vy − t‖_p = 1 for all non-zero y ∈ {0,1}^k and ‖t‖_p > 1. (We think of the vertex 0 as "isolated" from the others. See Figure 1.) To find isolating parallelepipeds, we construct a family of parallelepipeds V(α) parameterized by non-negative weights α = (α_z) and a number t*. This family has the useful property that the quantities ‖V(α)y − t(α)‖_p^p are linear in the α_z for fixed y and t*. (In [BGS17], we used a less general family of parallelepipeds.)

Figure 1: (p, 2)-isolating parallelepipeds in two different ℓ_p norms (left and right). In each case, the vectors Vy for the three non-zero y ∈ {0,1}^2 are all at the same distance from t, while 0 is strictly farther away; the parallelepiped on the right is degenerate, being generated by two copies of the same vector. The (scaled) unit balls centered at t are shown in red, while the parallelepipeds are shown in black. (Figure taken from [BGS17].)

So, finding isolating parallelepipeds essentially reduces to showing that a certain system of linear equations has a solution. (We actually need a non-negative solution, but we ignore this technical issue in the introduction.) To that end, we study the matrix M_k(t*) corresponding to this system of linear equations and try to show that its determinant is non-zero for some computable choice of t*. To do this, we observe that M_k(t*) satisfies the recurrence

M_{k+1}(t*) = ( M_k(t*)  M_k(t*) ; M_k(t*)  M_k(t* − 1) ) ,

written here in 2 × 2 block form. (It is this recurrence that makes this family more useful than the less general family in [BGS17].) This makes showing that det(M_k(t*)) is non-zero susceptible to a proof by induction on k.

To that end, we use a formula for the determinant of block matrices of this form to show by induction that det(M_k(t*)) is equal to a product of functions of t*. These functions are in turn each non-zero real linear combinations of functions of the form |t* − j|^p for distinct integers j. (The determinant is actually a piecewise combination of such functions, but we ignore this here.) We prove that such functions are linearly independent over the reals if (and only if) either p is not an even integer or p is sufficiently large relative to the number of functions. Therefore, these combinations cannot be identically zero for such p, which in turn implies that det(M_k) is not identically zero as a function of t*, as needed. We finish the proof by noting that det(M_k) is (piecewise) analytic so that its zeros must be isolated, and it therefore has a computable non-zero point.

By combining this construction with our previous work, we completely characterize the values of p and k for which (p, k)-isolating parallelepipeds exist. Namely, the only case not handled by the construction above is the case where p is an integer. In this case, [BGS17] showed that such parallelepipeds exist for odd p but cannot exist for even p. (We provide a full proof of this latter claim in Lemma 6.1.) So, (p, k)-isolating parallelepipeds exist (for every k) if and only if p ∉ 2Z.

As a corollary, we show a reduction from (weighted Max-)k-SAT on n variables to a CVP_p instance with rank n for all p ∉ 2Z. In particular, we prove that CVP_p is SETH-hard for all p ∉ 2Z.

Hardness of CVPP_p.

We next show how to extend the hardness result above from CVP_p to the Closest Vector Problem with Preprocessing in the ℓ_p norm (CVPP_p). Namely, we show that there is no 2^((1−ε)n)-time algorithm for CVPP_p assuming (non-uniform) SETH for all p ∉ 2Z. To do this, we define an enhanced notion of an isolating parallelepiped, which we call an on-off-isolating parallelepiped (this is analogous to what [SV19] does for codes). An on-off-isolating parallelepiped is an isolating parallelepiped (V, t) together with a second "off" target t_off such that ‖Vy − t_off‖_p is constant over all y ∈ {0,1}^k.

To use these objects to reduce (Max-)k-SAT on n variables to a CVPP_p instance with rank n, we must reduce k-SAT to CVP_p with a fixed basis matrix B that does not depend on the input formula. We use the matrix

B := (V_1; V_2; …; V_N)

consisting of the on-off-isolating parallelepipeds V_i for each possible k-clause on n variables, stacked on top of each other, where N is the total number of such clauses. Given a k-SAT formula, we create the target

t := (t^(1); t^(2); …; t^(N))

such that t^(i) is the "on" target if the i-th possible clause appears in the formula, and t^(i) is the "off" target t_off otherwise. (We are oversimplifying a bit here. In our actual construction, we must shift the targets in a way depending on which literals in the clause are negated. See Section 4.) I.e., we use t_off to "turn off" the clauses that do not appear in our SAT instance.
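Schematically (a hedged sketch in our notation; t_on and t_off stand for the "on" and "off" targets described above, and the shifts for negated literals are omitted), the query target is assembled as:

    # Build the CVPP target: one block per *possible* clause, using the "on"
    # target for clauses that appear in the formula and the "off" target
    # (equidistant from all 2^k vertices) for the rest.
    import numpy as np

    def cvpp_target(all_clauses, formula_clauses, t_on, t_off):
        present = set(formula_clauses)
        return np.concatenate([t_on if C in present else t_off for C in all_clauses])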

Finally, we show that (p, k)-on-off-isolating parallelepipeds exist if and only if (p, k)-isolating parallelepipeds exist. To transform a (p, k)-isolating parallelepiped (V, t) into a (p, k)-on-off-isolating parallelepiped (V′, t′, t_off), we simply append a few fixed coordinates to V and t and choose the "off" target appropriately. A simple calculation shows that ‖V′y − t_off‖_p is constant over all y ∈ {0,1}^k and that ‖V′y − t′‖_p = 1 for all non-zero y, as needed.

Hardness of approximation.

To prove hardness of approximation, we must show how to reduce a Gap-k-SAT (i.e., approximate Max-k-SAT) instance with n variables to an approximate CVP_p instance with rank n. The 2^((1−ε)n)-hardness of approximate CVP_p described in Theorem 1.1 then follows from the recent Gap-SETH conjecture of Manurangsi [Man19].

The construction shown in Eq. (1) is insufficient to prove hardness of approximation because the presence of the "identity matrix gadget" αI_n forces the closest vector to be within distance roughly (α/2)·n^(1/p) of the target. As a result, all SAT instances—satisfiable or not—yield a CVP_p instance whose distance is very close to some fixed radius r, so the construction cannot create a constant-factor gap.

To reduce to approximate CVP_p, we therefore need to somehow remove this gadget, which we do by extending isolating parallelepipeds to "isolating lattices." Specifically, we show how to construct a basis B and target vector t such that By is a closest lattice vector to t if and only if y ∈ {0,1}^n and y corresponds to a satisfying assignment of the input k-CNF. I.e., while previously the satisfying assignments corresponded exactly to the closest vectors to t among the points {By : y ∈ {0,1}^n} of the parallelepiped, now the satisfying assignments must correspond exactly to the closest vectors to t in the entire lattice generated by B. This eliminates the need for the identity matrix gadget.

Again, we show how to convert any isolating parallelepiped into a full isolating lattice. The main idea is simply to "append an identity matrix gadget" to the isolating parallelepiped directly, rather than appending it to the full basis as in Eq. (1). Namely, we convert an isolating parallelepiped (V, t) into an isolating lattice by appending a scaled identity matrix αI to the bottom of V, and a constant vector to the bottom of t. By setting α to be large enough, we ensure that any non-binary combination of the vectors in V will be far from t. By "putting the identity matrix in the parallelepiped," rather than in the whole basis, we are able to obtain an approximation factor that depends only on k (and the Gap-k-SAT approximation factor) and not on n.
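A minimal sketch of this conversion (the constant α/2 in the appended target coordinates is our assumption, mirroring the identity gadget of Eq. (1); α must be chosen large enough):

    # Convert an isolating parallelepiped (V, t) into an "isolating lattice"
    # basis by appending a scaled identity to V and a constant block to t.
    import numpy as np

    def isolating_lattice(V, t, alpha):
        k = V.shape[1]
        B = np.vstack([V, alpha * np.eye(k)])
        t_new = np.concatenate([t, np.full(k, alpha / 2)])
        return B, t_new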

1.3 Impossibility of natural reductions for CVP_2

In [BGS17], we showed that the technique described above cannot work for even integers p. Specifically, we showed that isolating parallelepipeds do not exist in this case. However, this still left open the possibility of some other (potentially even simple) reduction from k-SAT to CVP_p for even integers p—perhaps even for p = 2. Here, we show that a very large class of reductions cannot work for p = 2. Behind these limitations is a new result concerning the structure of the closest lattice vectors to a target in the Euclidean norm.

Before we define natural reductions and show their limitations, we motivate the definition by showing a simple limitation that applies for all 1 < p < ∞. Specifically, we recall the well-known fact that for such p, the number of closest lattice vectors to a target is at most 2^n, where n is the rank of the lattice. (We show the simple proof of this fact below. Notice that 2^n closest vectors are actually achieved by the integer lattice Z^n and the all-halves target vector (1/2, …, 1/2).) Therefore, if a reduction maps each satisfying assignment of some k-SAT formula to a distinct closest lattice vector, the rank n of the resulting lattice must be at least the logarithm of the number of satisfying assignments. (Here, and below, we only consider the YES case, when there exists at least one satisfying assignment.) Since the number of satisfying assignments can be as large as 2^(n′), where n′ is the number of variables in the input instance, we must have n ≥ n′.
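The extremal example is easy to verify directly (illustrative code, not from the paper):

    # For the integer lattice Z^2 and the all-halves target, exactly 2^2 = 4
    # lattice vectors attain the minimal Euclidean distance.
    import itertools
    import numpy as np

    t = np.array([0.5, 0.5])
    pts = [np.array(z) for z in itertools.product(range(-2, 3), repeat=2)]
    d_min = min(np.linalg.norm(v - t) for v in pts)
    closest = [tuple(v) for v in pts if np.isclose(np.linalg.norm(v - t), d_min)]
    print(closest)  # [(0, 0), (0, 1), (1, 0), (1, 1)]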

Our specific reductions described above actually map each assignment x ∈ {0,1}^(n′) to a very simple lattice vector: Bx. I.e., x is a satisfying assignment if and only if Bx is a closest lattice vector to the target. This suggests the following generalization of this type of reduction.

We call a reduction natural if there exists a map f from assignments to integer coordinate vectors such that, whenever the input k-SAT formula is satisfiable, Bf(x) is a closest lattice vector to the target if and only if x is a satisfying assignment. (We do not require f, or even the reduction itself, to be efficiently computable.) Our reductions described above then correspond to the special case when f is the identity map.

Closest vectors mod two.

To rule out such reductions for p = 2, we study the algebraic and combinatorial properties of the set S ⊆ Z^n of coefficient vectors of the closest vectors in a lattice to some target vector t. To motivate our techniques, let us first recall the well-known simple proof of the fact (mentioned above) that the number of closest vectors is at most 2^n for 1 < p < ∞. Consider two distinct closest vectors Bz_1 and Bz_2 to some target t, and suppose that z_1 ≡ z_2 (mod 2). Then z := (z_1 + z_2)/2 is integral, so Bz is a lattice vector, and ‖Bz − t‖_p = ‖(Bz_1 − t) + (Bz_2 − t)‖_p/2 < ‖Bz_1 − t‖_p, where we have used the strict convexity of the ℓ_p norms for 1 < p < ∞. (I.e., the triangle inequality is tight for 1 < p < ∞ if and only if one of the two vectors is a non-negative scalar multiple of the other. Notice that this is false for p = 1 and p = ∞, and in each of these cases it is easy to show that there can be arbitrarily many closest lattice vectors to a target, even in two dimensions.) This contradicts the assumption that Bz_1 is a closest vector, so distinct closest vectors must lie in distinct cosets modulo 2, and there are only 2^n such cosets.

The above proof does not only show that the number of closest vectors is at most 2^n; it also shows that the set S of coefficient vectors of closest vectors in some basis has some algebraic structure. Specifically, there can be at most one element of S in each coset of Z^n modulo 2Z^n. Here, a coset is the set of all integer vectors with some fixed parity pattern. Notice that two cosets can be added together to obtain a new coset, and the above proof relied crucially on this structure. Of course, under addition, the cosets are isomorphic to the vector space F_2^n. It is then natural to ask about the structure of S mod 2, viewed as a subset of the hypercube F_2^n.

Indeed, in Section 6 we show the following curious property of S mod 2 for p = 2. Let A ⊆ F_2^n be an affine square mod two (i.e., a two-dimensional affine subspace), and suppose that A ⊆ S mod 2. Let x_1, x_2, x_3, x_4 ∈ S be the elements such that {x_1, x_2, x_3, x_4} ≡ A (mod 2). (As we discussed above, there must be exactly four such elements.) Then, we show that either (1) the points x_1, x_2, x_3, x_4 form a parallelogram over the reals (i.e., after reordering, they must have the form x, x + y_1, x + y_2, x + y_1 + y_2), or (2) there is some specific set of four other elements that must also lie in S.

Studying the image of f.

To see how this can be used to rule out natural reductions, consider the image of f and its reduction modulo 2. Suppose that the image mod 2 contains an affine square A, with corresponding set X ⊆ S of closest vectors. Suppose that X is not a parallelogram over the reals, and let X′ be the set of four other elements guaranteed by the above discussion. Then, let T and T′ be the sets of assignments corresponding to X and X′. We observe that there exist k-SAT instances that are satisfied by all elements of T but not by all elements of T′. (This can be accomplished with a single clause.) But, our reduction must map any such instance to a basis B and a target t for which the vectors corresponding to X are all closest vectors, so that the vectors corresponding to X′ must be closest vectors as well. This contradicts the assumption that only satisfying assignments map to closest vectors.

Therefore, whenever the image of f mod 2 contains an affine square, the corresponding set in S must be a parallelogram. It follows that any affine 3-cube in the image of f mod 2 must correspond to a 3-dimensional parallelepiped in S. Finally, we find a SAT instance satisfied by exactly seven of the eight relevant elements. It follows that the reduction must produce a parallelepiped with exactly seven out of eight points closest to some target. In [BGS17], we already showed that this is impossible. (We provide a simpler proof in Section 6 as well.)

From this, we conclude that the image of f mod 2 cannot contain any affine 3-cube.

Using additive combinatorics to finish the proof.

Above, we observed that the image of f modulo 2 cannot contain any affine 3-cube. But, we have already observed that the map must be injective modulo 2 (i.e., the closest vectors must be distinct modulo 2). So, the image is a set of 2^(n′) points in F_2^n that contains no affine 3-cube. By Szemerédi's cube lemma, we must have n ≥ (4/3 − o(1))·n′, which is what we wished to prove.

In fact, we only need a special case of Szemerédi’s cube lemma. We provide a simpler proof of this special case based on the pigeon-hole principle. Though the proof is quite simple, to the authors’ knowledge it is novel.
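To illustrate the pigeonhole idea in the simplest case (our sketch for affine squares, i.e., 2-cubes, rather than the 3-cubes needed above): any two distinct pairs of points of F_2^n with the same XOR are automatically disjoint and together form an affine square, so any set of size much larger than 2^(n/2) must contain one.

    # Find an affine square (2-dimensional affine subspace) in S ⊆ F_2^n,
    # with points encoded as ints, by pigeonholing on pairwise XORs.
    import itertools

    def find_affine_square(S):
        seen = {}  # XOR value -> first pair achieving it
        for a, b in itertools.combinations(S, 2):
            x = a ^ b
            if x in seen:  # distinct pairs with equal XOR are disjoint
                c, d = seen[x]
                return c, d, a, b  # {c, d, a, b} = {c, c^y1, c^y2, c^y1^y2}
            seen[x] = (a, b)
        return None

    print(find_affine_square([0b0000, 0b0011, 0b0101, 0b0110, 0b1001]))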

1.4 Related work

The most closely related work to this paper is of course [BGS17]. There are two additional papers showing fine-grained hardness of lattice problems: [AS18a], which showed such results for SVP; and [AC19], which did the same for SIVP. Both of these works relied on the results in [BGS17], and our improvements therefore immediately imply better hardness results for both SVP and SIVP.

An additional line of work has shown different kinds of hardness for SVP, CVP, and related problems. In particular, Bhattacharyya, Ghoshal, Karthik, and Manurangsi showed the parameterized hardness of SVP and CVP, as well as of the analogous coding problems [BGKM18]. [SV19] showed tight hardness results for coding problems, using many ideas from [BGS17]. We in turn use some ideas from [SV19], in particular the idea of on-off-isolating parallelepipeds.

Finally, we wish to draw attention to the beautiful Gap-SETH hypothesis of Manurangsi [Man19], presented here in Definition 2.7. The conjecture is quite natural, and we suspect that it will have many additional applications in the study of fine-grained hardness of approximation. E.g., it was already mentioned in [SV19] that something like this Gap-SETH hypothesis would imply strong hardness of approximation results for coding problems.

1.5 Open questions

The most obvious question that we leave open is, of course, to prove similar hardness results for p = 2, and more generally, for CVP_p for even integers p. In the p = 2 case, we show that any such proof (via SETH) would have to use an "unnatural reduction." So, a fundamentally different approach is needed.²

² We note that the main reduction in [BGS17] works as a (natural) reduction from weighted Max-2-SAT formulas on n variables with arbitrary (possibly exponential) weights to CVP_p instances of rank n for all p, including p = 2. So, a fast algorithm for CVP_2 would imply a comparably fast algorithm for weighted Max-2-SAT with arbitrary weights, for which no such algorithm is known (Ryan Williams' algorithm for Max-2-SAT [Wil05] runs in roughly 2^(ωn/3) time, up to factors polynomial in the largest weight of a clause, where ω is the matrix multiplication constant). So, there is (potentially weak) evidence that there is no 2^(cn)-time algorithm for CVP_2 with c < ω/3.

Another potentially easier problem would be to show hardness of CVP_p in terms of the ambient dimension d, rather than the rank n. Indeed, though there do exist 2^(O(n))-time constant-factor approximation algorithms for CVP_p, the parameter d is in some sense more natural. (E.g., the original algorithm of [BN09] runs in time 2^(O(d)), and the algorithm of [AM18] also has its running time stated in terms of d.) This problem is potentially easier than the above because for p = 2 we may assume without loss of generality that d = n.

Of course, another open question is to prove stronger quantitative lower bounds for SVP_p, and in particular for SVP_2. While [AS18a] did prove quite strong lower bounds for sufficiently large p, their bounds for small p, and in particular for p = 2, are quite weak.

We also note that CVP_p for p ≠ 2 has received relatively little attention from an algorithmic perspective. In particular, there has not been much work trying to optimize the hidden constants in the exponents of the running times of exact algorithms for CVP_p or of the best known algorithms for constant-factor approximate CVP_p. Our lower bounds provide new motivation for work on this subject. In particular, we ask whether our lower bounds are tight.

In fact, we do not expect our lower bound to be tight in the case when p = ∞. (Recall that our limitation in Theorem 1.2 does not apply to p = 1 or p = ∞.) Indeed, because the kissing number in the ℓ_∞ norm is 3^n − 1, one might guess that the fastest algorithms for SVP_∞ and CVP_∞ actually run in time roughly 3^n. (See [AM18], which more-or-less achieves this.) We therefore ask whether stronger lower bounds can be proven in this special case.

Finally, we note that our results only apply for exact CVP_p or CVP_p with a small constant approximation factor. For cryptographic applications, one is interested in much larger approximation factors, typically approximation factors polynomial in n. While there are strong complexity-theoretic barriers to proving hardness in that regime, one might still hope to prove fine-grained hardness results for larger approximation factors—such as large constants or even superconstant factors. Indeed, we know NP-hardness of approximation up to a factor of n^(c/log log n), but this result is not fine-grained [DKRS03].

2 Preliminaries

Throughout this paper, we work with lattice problems over R^d for convenience. As usual, to be formal we must pick a suitable representation of real numbers and consider both the size of the representation and the efficiency of arithmetic operations in the given representation. But, we omit such details throughout to ease readability.

2.1 Lattice problems

Let dist_p(t, L) := min_{v ∈ L} ‖v − t‖_p denote the distance of t to L. We next formally define the lattice problems that we consider.

Definition 2.1.

For any 1 ≤ p ≤ ∞ and γ = γ(n) ≥ 1, the γ-approximate Shortest Vector Problem with respect to the ℓ_p norm (γ-SVP_p) is the promise problem defined as follows. Given a lattice L (specified by a basis B) and a number r > 0, distinguish between a 'YES' instance where there exists a non-zero vector v ∈ L such that ‖v‖_p ≤ r, and a 'NO' instance where ‖v‖_p > γ·r for all non-zero v ∈ L.

Definition 2.2.

For any 1 ≤ p ≤ ∞ and γ = γ(n) ≥ 1, the γ-approximate Closest Vector Problem with respect to the ℓ_p norm (γ-CVP_p) is the promise problem defined as follows. Given a lattice L (specified by a basis B), a target vector t, and a number r > 0, distinguish between a 'YES' instance where dist_p(t, L) ≤ r, and a 'NO' instance where dist_p(t, L) > γ·r.

When γ = 1, we simply refer to the problems as SVP_p and CVP_p.

Definition 2.3.

The Closest Vector Problem with Preprocessing with respect to the ℓ_p norm (CVPP_p) is the problem of finding a preprocessing function P and an algorithm Q which work as follows. Given a lattice L (specified by a basis B), P outputs a new description of L. Given P(L), a target vector t, and a number r > 0, Q decides whether dist_p(t, L) ≤ r.

When we measure the runtime of a CVPP_p algorithm, we only count the runtime of Q, and not of the preprocessing function P. We will assume that the runtime of Q is at least the size of the preprocessing, |P(L)|.

2.2 Isolating parallelepipeds

We recall the definition of an isolating parallelepiped from [BGS17]. See Figure 1.

Definition 2.4.

For any 1 ≤ p ≤ ∞ and integer k ≥ 1, we say that V ∈ R^(d×k) and t ∈ R^d define a (p, k)-isolating parallelepiped if:

  1. ‖Vy − t‖_p = 1 for all y ∈ {0,1}^k \ {0},

  2. ‖t‖_p > 1.

We will more generally refer to the set of points Vy for y ∈ {0,1}^k (for V ∈ R^(d×k)) as a k-parallelepiped. We call a 2-parallelepiped a parallelogram.
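A direct numerical check of Definition 2.4 (illustrative only; the tolerance is a simplifying assumption):

    # Verify that (V, t) is a (p, k)-isolating parallelepiped: every vertex Vy
    # with nonzero y in {0,1}^k lies at l_p distance exactly 1 from t, while
    # ||t||_p > 1 (so the vertex 0 is strictly farther away).
    import itertools
    import numpy as np

    def is_isolating(V, t, p, tol=1e-9):
        k = V.shape[1]
        if not np.linalg.norm(t, ord=p) > 1 + tol:
            return False
        return all(
            abs(np.linalg.norm(V @ np.array(y) - t, ord=p) - 1) < tol
            for y in itertools.product((0, 1), repeat=k) if any(y)
        )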

2.3 k-SAT

A k-SAT formula Φ = C_1 ∧ ⋯ ∧ C_m on boolean variables x_1, …, x_n is the conjunction of m clauses C_i, each of which is a disjunction of k literals. Each literal is either a variable x_i or its negation ¬x_i for some i. The k-SAT problem is, given a k-SAT formula Φ, to decide whether there exists an assignment a ∈ {0,1}^n to the variables of Φ that satisfies Φ, i.e., such that Φ(a) = 1.

We next introduce some notation related to k-SAT. Let Φ be a k-SAT formula on n variables with clauses C_1, …, C_m. Let ind(ℓ) denote the index of the variable underlying a literal ℓ, i.e., ind(ℓ) = i if ℓ = x_i or ℓ = ¬x_i. Call a literal positive if ℓ = x_i and negative if ℓ = ¬x_i for some variable x_i. Given a clause C, let P(C) and N(C) denote the sets of indices of positive and negative literals in C, respectively. Given an assignment a to the variables of Φ, let S(C, a) denote the set of indices of literals in C satisfied by a, i.e., S(C, a) := {i ∈ P(C) : a_i = 1} ∪ {i ∈ N(C) : a_i = 0}. Finally, when a formula Φ is clear from context, let val(a) denote the number of clauses of Φ satisfied by the assignment a, i.e., the number of clauses C for which S(C, a) ≠ ∅.

The value of a k-SAT formula Φ, denoted val(Φ), is the maximum, over all assignments to Φ, of the fraction of clauses satisfied.
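For concreteness, the value of a small formula can be computed directly (illustrative encoding, not the paper's: a literal is +i or −i for variable x_i):

    # val(phi): maximum fraction of clauses satisfied by any assignment.
    import itertools

    def value(clauses, n):
        def sat(C, bits):
            return any((bits[abs(l) - 1] == 1) == (l > 0) for l in C)
        return max(
            sum(sat(C, bits) for C in clauses) / len(clauses)
            for bits in itertools.product((0, 1), repeat=n)
        )

    phi = [(1, 2), (-1, 2), (1, -2), (-1, -2)]  # every assignment satisfies 3 of 4
    print(value(phi, 2))  # 0.75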

Definition 2.5.

Given a k-SAT formula Φ and a constant δ ∈ (0, 1), the δ-Gap-k-SAT problem is the promise problem defined as follows. The goal is to distinguish between a 'YES' instance in which val(Φ) = 1, and a 'NO' instance in which val(Φ) < δ.

2.4 Hardness assumptions

Definition 2.6 (SETH; [IPZ01]).

For every ε > 0 there exists a k = k(ε) ∈ Z⁺ such that no algorithm solves k-SAT on n variables in 2^((1−ε)n) time.

In his Ph.D. thesis, Manurangsi [Man19] gave one possible definition of Gap-SETH.

Definition 2.7 (Gap-SETH; [Man19, Conjecture 12.1]).

For every ε > 0 there exist k ∈ Z⁺ and δ = δ(ε) > 0 such that there is no algorithm that can distinguish between a k-SAT formula with n variables that is satisfiable and one that has value less than 1 − δ in 2^((1−ε)n) time.

We will show that CVP_p cannot be approximated to within some factor γ = γ(p, ε) > 1 in 2^((1−ε)n) time assuming Gap-SETH. Unfortunately, γ decays as a function of ε. However, our reduction from Gap-k-SAT to CVP_p can be adapted to a reduction from any Gap-k-CSP to CVP_p with the same relevant parameters. (Namely, our reduction maps CSP instances on n variables to CVP(P) instances of rank n.)

We will also use non-uniform variants of ETH and SETH to prove hardness results about CVPP_p.

Definition 2.8 (Non-uniform ETH).

There is no family of circuits of size 2^(o(n)) that solves 3-SAT instances on n variables.

Definition 2.9 (Non-uniform SETH).

For every ε > 0 there exists a k ∈ Z⁺ such that no family of circuits of size 2^((1−ε)n) solves k-SAT instances on n variables.

Our results are also quite robust to how we define non-uniform (S)ETH. For example, one of our main results about the complexity of CVPP_p roughly says that, assuming non-uniform ETH (as stated above), there is no subexponential-sized family of circuits that decides CVPP_p. However, if we were to change non-uniform ETH to say that there is no 2^(o(n))-time algorithm using subexponential advice, then we would get a corresponding statement for CVPP_p: that there is no 2^(o(n))-time algorithm for CVPP_p using subexponential advice.

Interestingly, many of our results only depend on weaker versions of these hypotheses, where we replace an assumption about the hardness of k-SAT with an assumption about the hardness of Max-k-SAT or even weighted Max-k-SAT.

2.5 Linear algebra

We recall that an affine d-cube in F_2^n is a set of the form { x + Σ_{i ∈ T} y_i : T ⊆ [d] } for some x ∈ F_2^n and linearly independent y_1, …, y_d ∈ F_2^n.

We will use the following determinant identity for block matrices.

Fact 2.10.

Let M = ( A B ; C D ) for some A, B, C, D ∈ R^(m×m) such that C and D commute. Then

det(M) = det(AD − BC) .

We say that functions f_1, …, f_m are linearly independent over the reals if, given a_1, …, a_m ∈ R, the sum a_1 f_1 + ⋯ + a_m f_m is identically zero (i.e., equal to 0 for all inputs) only if a_1 = ⋯ = a_m = 0. We say that f ∈ C^k if the first k derivatives of f exist and are continuous, that f ∈ C^∞ if f has derivatives of all orders, and that f is analytic if f ∈ C^∞ and the Taylor series of f expanded around any point in its domain converges to f in some neighborhood of that point. We say that f ∈ C^k(a, b) if the first k derivatives of f exist and are continuous on the (open) interval (a, b) (we define f ∈ C^∞(a, b) and being analytic on (a, b) analogously).

Definition 2.11.

We define the Wronskian of f_1, …, f_m ∈ C^(m−1) to be W(f_1, …, f_m) := det(M), where M is the m × m matrix defined by

M_{i,j} := f_j^{(i−1)}

for i, j ∈ {1, …, m} (with f^{(0)} := f).

Because the derivative is a linear operator, we have the following.

Fact 2.12.

Functions f_1, …, f_m ∈ C^(m−1)(a, b) are linearly independent over the reals if their Wronskian exists and is not identically zero on the interval (a, b).
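As a quick sanity check of how Fact 2.12 gets used (a sketch with sympy; the exponent p = 5/2 and the evaluation point are our choice of example), the functions (t − j)^p for j = 0, 1, 2 have a Wronskian that is non-zero at t = 3, so they are linearly independent on t > 2:

    import sympy as sp

    t = sp.symbols('t', positive=True)
    p = sp.Rational(5, 2)
    fs = [(t - j) ** p for j in range(3)]
    # Wronskian: rows are derivative orders 0..2, columns are the functions.
    W = sp.Matrix([[sp.diff(f, t, i) for f in fs] for i in range(3)]).det()
    print(sp.simplify(W.subs(t, 3)) != 0)  # True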

3 Isolating parallelepipeds in ℓ_p norms for all non-integer p

Our first new result is a strengthening of a result in [BGS17]—which asserts that for every fixed k there exist (p, k)-isolating parallelepipeds for almost every p—to a result showing that this is true for every non-integer p. We also show that there exist (p, k)-isolating parallelepipeds for certain additional values of p. Moreover, we show that one of these conditions is also necessary, and we therefore obtain a complete characterization of the values of p and k for which isolating parallelepipeds exist (such isolating parallelepipeds are computable if p is computable).

Our construction generalizes the approach from [BGS17], and follows the same high-level structure. We start by showing that it suffices to define isolating parallelepipeds via a system of constraints on non-negative weights rather than via explicit vectors: if there exist non-negative weights α and a shift t* that satisfy the appropriate distance constraints for all y ∈ {0,1}^k, then there exists a (p, k)-isolating parallelepiped.

We then define a family of (p, k)-parallelepipeds V(α) parameterized by 2^k non-negative numbers α_z, for z ∈ {0,1}^k, and a number t* ∈ R. Specifically, the row of V(α) indexed by z is equal to α_z^(1/p)·z^T, and the corresponding coordinate of the target t(α) is α_z^(1/p)·t*. (Throughout this section, we will adopt the convention that the coordinates of vectors in R^(2^k) are indexed by elements of {0,1}^k in lexicographic order. We adopt an analogous convention for rows (resp. columns) of matrices with 2^k rows (resp. columns).) Figure 2 shows the form of such a (p, k)-parallelepiped for small k.

We observe that for such a family of (p, k)-parallelepipeds and y ∈ {0,1}^k,

‖V(α)y − t(α)‖_p^p = Σ_{z ∈ {0,1}^k} α_z·|⟨z, y⟩ − t*|^p .

I.e., for fixed y and t*, the quantity ‖V(α)y − t(α)‖_p^p is linear in the values α_z. This leads us to define the matrix M = M(p, t*) ∈ R^(2^k × 2^k) whose entry in row y and column z is equal to |⟨z, y⟩ − t*|^p. Then, for non-negative α, the coordinate of Mα indexed by y is equal to ‖V(α)y − t(α)‖_p^p.
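The following numerical sketch (hedged: the indexing conventions and the matrix M are as reconstructed above) instantiates M(p, t*) and searches for non-negative weights α with (Mα)_y = 1 for all non-zero y and (Mα)_0 > 1; when the search succeeds, the weights yield a (p, k)-isolating parallelepiped:

    # Build M(p, t*) with rows/columns indexed by {0,1}^k and solve for alpha.
    import itertools
    import numpy as np

    def try_isolating(p, k, t_star, delta=0.5):
        ys = list(itertools.product((0, 1), repeat=k))
        M = np.array([[abs(np.dot(z, y) - t_star) ** p for z in ys] for y in ys])
        rhs = np.ones(len(ys)) + delta * np.array([y == (0,) * k for y in ys])
        try:
            alpha = np.linalg.solve(M, rhs)
        except np.linalg.LinAlgError:
            return None
        return alpha if np.all(alpha >= -1e-12) else None

    # Scan shifts t* for a valid (non-negative) solution, e.g. for p = 2.5, k = 2.
    for ts in np.linspace(0.55, 0.95, 9):
        alpha = try_isolating(2.5, 2, ts)
        if alpha is not None:
            print(ts, alpha)
            break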

In order to show that there exist choices of α and t* such that V(α) and t(α) form a (p, k)-isolating parallelepiped, it therefore suffices to find non-negative α such that Mα = 1 + δ·e_0 for some δ > 0, where 1 is the all-ones vector and e_0 is the standard basis vector indexed by 0 ∈ {0,1}^k. We then use the following proof strategy for finding such α: (1) Show that for certain values of p and t*,