1 Introduction
A lattice $\mathcal{L}$ is the set of all integer combinations of linearly independent basis vectors $b_1, \ldots, b_n \in \mathbb{R}^d$,
$$\mathcal{L} = \Big\{ \sum_{i=1}^{n} z_i b_i \;:\; z_i \in \mathbb{Z} \Big\}.$$
We call $n$ the rank of the lattice and $d$ the dimension or the ambient dimension of the lattice $\mathcal{L}$.
For $1 \le i \le n$, the $i$th successive minimum, denoted by $\lambda_i(\mathcal{L})$, is the smallest $r > 0$ such that there are $i$ linearly independent lattice vectors that have length at most $r$.
The Shortest Independent Vector Problem ($\mathsf{SIVP}$) takes as input a basis for a lattice $\mathcal{L}$ and a length bound $r > 0$, and asks us to decide whether the largest successive minimum is at most $r$, i.e., whether $\lambda_n(\mathcal{L}) \le r$. Typically, we define length in terms of the $\ell_p$ norm for some $1 \le p \le \infty$, defined as
$$\|x\|_p := \Big( \sum_{i=1}^{d} |x_i|^p \Big)^{1/p}$$
for finite $p$, and
$$\|x\|_\infty := \max_{1 \le i \le d} |x_i|.$$
In particular, the $\ell_2$ norm is the familiar Euclidean norm, and it is the most interesting case from our perspective. We write $\mathsf{SIVP}_p$ for $\mathsf{SIVP}$ in the $\ell_p$ norm (and just $\mathsf{SIVP}$ when we do not wish to specify a norm).
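To make these definitions concrete, here is a minimal brute-force sketch (ours, not from any of the cited works) that computes the successive minima of a small integer lattice by enumerating lattice vectors with bounded coefficients and greedily picking linearly independent ones in order of increasing $\ell_p$ length. The helper names `lp_norm`, `is_independent`, and `successive_minima` are hypothetical, and the enumeration `radius` is an assumption that must be large enough to cover the actual minima of the given basis.

```python
from fractions import Fraction
from itertools import product

def lp_norm(v, p=2):
    """The l_p norm of a vector, for finite p."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def is_independent(vectors):
    """Check linear independence via Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rank = 0
    cols = len(rows[0]) if rows else 0
    for c in range(cols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][c] != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c] != 0:
                f = rows[r][c] / rows[rank][c]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(vectors)

def successive_minima(basis, p=2, radius=4):
    """Brute-force lambda_1, ..., lambda_n by enumerating small coefficient
    vectors; only correct when `radius` is large enough for the basis."""
    n, d = len(basis), len(basis[0])
    points = []
    for coeffs in product(range(-radius, radius + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # skip the zero vector
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(d)]
        points.append(v)
    points.sort(key=lambda v: lp_norm(v, p))
    chosen = []
    for v in points:
        if is_independent(chosen + [v]):
            chosen.append(v)
            if len(chosen) == n:
                break
    return [lp_norm(v, p) for v in chosen]

print(successive_minima([[2, 0], [0, 3]]))  # [2.0, 3.0]
```

For the diagonal basis $\{(2,0), (0,3)\}$, the shortest nonzero vector has length $2$ and a second independent vector has length $3$, matching the printed output.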
Starting with the breakthrough work of Lenstra, Lenstra, and Lovász in 1982 [LLL82], algorithms for solving lattice problems in both their exact and approximate forms have found innumerable applications, including factoring polynomials over the rationals [LLL82], integer programming [Len83, Kan87, DPV11], cryptanalysis [Sha84, Odl90, JS98, NS01], etc. More recently, many cryptographic primitives have been constructed whose security is based on the (worst-case) hardness of $\mathsf{SIVP}$ or closely related lattice problems [Ajt04, Reg09, GPV08, Pei10, Pei16]. In particular, the (worst-case) hardness of $\mathsf{SIVP}$ for polynomial approximation factors implies the existence of several fundamental cryptographic primitives like one-way functions, collision-resistant hash functions, etc. (see, for example, [GGH96, Ajt98]). Such lattice-based cryptographic constructions are likely to be used on massive scales (e.g., as part of the TLS protocol) in the not-too-distant future [ADPS16, BCD16, NIS].
Blömer and Seifert [BS99] showed that $\mathsf{SIVP}$ is NP-hard to approximate for any constant approximation factor. While their result is shown only for the Euclidean norm, their proofs can easily be extended to arbitrary norms. As is true for many other lattice problems, $\mathsf{SIVP}$ is believed to be hard to approximate up to factors polynomial in $n$, the rank of the lattice. In particular, the best known algorithms for $\mathsf{SIVP}$, even for polynomial approximation factors, run in time exponential in $n$ [ADRS15, ADS15].
However, NP-hardness itself does not exclude the possibility of subexponential time algorithms, since it merely shows that there does not exist a polynomial time algorithm unless P = NP. To rule out such algorithms, we typically rely on a fine-grained complexity-theoretic hypothesis, such as the Strong Exponential Time Hypothesis (SETH), the Exponential Time Hypothesis (ETH), or the Gap-Exponential Time Hypothesis (Gap-ETH). To that end, a few recent results showed quantitative hardness results for the Closest Vector Problem ($\mathsf{CVP}$) [BGS17] and the Shortest Vector Problem ($\mathsf{SVP}$) [AS18], which are closely related problems. In particular, assuming SETH, [BGS17] showed that there is no $2^{(1-\epsilon)n}$ time algorithm for $\mathsf{CVP}_p$ or $\mathsf{CVPP}_p$ for any $\epsilon > 0$ and “almost all” $p$ (not including $p = 2$). Under ETH, [BGS17] showed that there is no $2^{o(n)}$ time algorithm for $\mathsf{CVP}_p$ for any $p$. Under Gap-ETH, [BGS17] showed that there is no $2^{o(n)}$ time algorithm for approximating $\mathsf{CVP}_p$ up to some constant factor for any $p$. Similar, but slightly weaker, results were obtained for $\mathsf{SVP}$ in [AS18].
1.1 Our results and techniques.
Blömer and Seifert [BS99] showed that $\mathsf{SIVP}$ is NP-hard by giving a reduction from $\mathsf{CVP}$ to $\mathsf{SIVP}$. This reduction can easily be extended to all $\ell_p$ norms, and increases the rank of the lattice by $1$. Thus, combined with the SETH-hardness result from [BGS17], it implies the following.
Theorem 1.
Under the SETH, there is no $2^{(1-\epsilon)n}$ time algorithm for $\mathsf{SIVP}_p$ for any $\epsilon > 0$ and for all but finitely many values of $p \ge 1$ (not including $p = 2$). Furthermore, under randomized ETH, there is no $2^{o(n)}$ time algorithm for $\mathsf{SIVP}_p$ for any $p \ge 1$.
Note that the latter result is due to [BGS17].
A closer look at their reduction reveals that it cannot be extended to show NP-hardness of approximate $\mathsf{SIVP}$ directly (even though $\mathsf{CVP}$ is known to be NP-hard for almost-polynomial approximation factors): $\lambda_n(\mathcal{L})$ for the lattice $\mathcal{L}$, when given as a part of a $\mathsf{CVP}$ instance, might be much larger than the distance of the target from the lattice, in which case an oracle for approximating $\mathsf{SIVP}$ up to a constant factor tells us nothing about the distance of the target from the lattice.
To overcome this difficulty, [BS99] used the fact that the $\mathsf{CVP}$ instance obtained from a reduction from the minimum label cover problem has the guarantee that $\lambda_n(\mathcal{L})$ is “not much larger” than the distance of the target $t$ from $\mathcal{L}$.
We introduce a new computational problem called the Gap Closest Vector Problem with Bounded Minima ($\mathsf{CVP}'$), which captures the above-mentioned requirement on the $\mathsf{CVP}$ instance, namely that $\lambda_n(\mathcal{L})$ has an upper bound depending on the distance parameter $r$. We observe that the reduction from Gap-3-$\mathsf{SAT}$ to $\mathsf{CVP}$ in [BGS17] (which implies hardness of approximating $\mathsf{CVP}$) is actually a reduction from Gap-3-$\mathsf{SAT}$ to $\mathsf{CVP}'$ for an appropriate choice of parameters. We then show a reduction similar to [BS99] from $\mathsf{CVP}'$ to Gap-$\mathsf{SIVP}$, which implies the following result.
Theorem 2.
Under the (randomised) Gap Exponential Time Hypothesis, for any $p \ge 1$, there exists a constant $\gamma > 1$ such that $\gamma$-Gap-$\mathsf{SIVP}_p$ with rank $n$ is not solvable in $2^{o(n)}$ time.
2 Basic Definitions
Lattices
Let $\mathbb{R}^d$ be a real vector space, equipped with an $\ell_p$ norm on the vectors. A lattice $\mathcal{L}$ is defined as the set of all integer linear combinations of a finite set
$B = \{b_1, \ldots, b_n\}$
of linearly independent vectors in $\mathbb{R}^d$:
$$\mathcal{L}(B) = \Big\{ \sum_{i=1}^{n} z_i b_i \;:\; z_i \in \mathbb{Z} \Big\}.$$
We will then call such a set $B$ a basis of the lattice. Note that the rank of the lattice is the dimension $n$ of the subspace spanned by $B$, which may be a proper subspace of the ambient space containing the basis vectors. Thus the rank of the lattice may be less than the dimension of the lattice. Cases where the rank of the lattice is equal to the dimension are referred to as full-rank lattices.
Since we wish to have inputs of bounded size, we can assume that an $n$-dimensional lattice is generated by basis vectors with rational coordinates. Additionally, these can be scaled to integral values. Thus we may assume that lattices are generated by vectors from $\mathbb{Z}^d$.
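The scaling step can be sketched in a few lines; `integral_basis` is a hypothetical helper (not from the cited works) that clears denominators using their least common multiple. Scaling a lattice by a positive constant preserves its structure and scales all lengths uniformly, so hardness is unaffected.

```python
from fractions import Fraction
from math import lcm

def integral_basis(basis):
    """Scale a rational basis so that every entry becomes an integer.

    Returns the scaled integer basis and the scaling factor used.
    """
    denoms = [Fraction(x).denominator for row in basis for x in row]
    scale = lcm(*denoms)  # smallest positive integer clearing all denominators
    return [[int(Fraction(x) * scale) for x in row] for row in basis], scale

print(integral_basis([[Fraction(1, 2), Fraction(1, 3)]]))  # ([[3, 2]], 6)
```

Note that exact rationals (`Fraction`) are used throughout; converting from floating-point inputs would only be exact for dyadic values such as `0.5` or `0.25`.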
Successive Minima
Denoted by $\lambda_i(\mathcal{L})$, the $i$th successive minimum is the minimum length $r$ such that there are $i$ linearly independent lattice vectors of length at most $r$.
Minkowski’s second theorem states the following with regards to the successive minima:
Theorem 3.
For any full-rank lattice $\mathcal{L} \subset \mathbb{R}^n$ we have that:
$$\Big( \prod_{i=1}^{n} \lambda_i(\mathcal{L}) \Big)^{1/n} \le \sqrt{n} \cdot \det(\mathcal{L})^{1/n}.$$
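As a numerical sanity check (a toy illustration, not part of any proof), one can verify the $\ell_2$ form of the bound on a lattice whose minima and determinant are known. `minkowski_bound_holds` is a hypothetical helper name:

```python
from math import prod, sqrt

def minkowski_bound_holds(minima, det, tol=1e-9):
    """Check that the geometric mean of the successive minima is at most
    sqrt(n) * det(L)^(1/n), the l2 form of Minkowski's second theorem."""
    n = len(minima)
    return prod(minima) ** (1.0 / n) <= sqrt(n) * det ** (1.0 / n) + tol

# Diagonal lattice diag(2, 3): minima (2, 3), determinant 6.
print(minkowski_bound_holds([2.0, 3.0], 6.0))  # True
```

Here the geometric mean is $\sqrt{6} \approx 2.45$, comfortably below $\sqrt{2} \cdot 6^{1/2} \approx 3.46$.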
2.1 Computational problems
Gap-Closest Vector Problem ($\gamma$-Gap-$\mathsf{CVP}_p$): Given a lattice $\mathcal{L}$, a target vector $t$ (which may or may not be in the lattice), and a value $r > 0$, output YES if there exists a vector $v$ in the lattice such that $\|v - t\|_p \le r$ (i.e. the closest vector in the lattice to the target is at distance at most $r$), and output NO if all the vectors in the lattice are at distance greater than $\gamma r$ from the target.
Gap-Closest Vector Problem with Bounded Minima ($\gamma$-Gap-$\mathsf{CVP}'_p$): Given a lattice $\mathcal{L}$, a target vector $t$ (which may or may not be in the lattice), and a value $r > 0$, output YES and NO exactly as in $\gamma$-Gap-$\mathsf{CVP}_p$, with the added guarantee that $\lambda_n(\mathcal{L}) \le \alpha r$ for a parameter $\alpha$. Note that the bound on the minima holds for both the YES and the NO instances.
Gap-Shortest Independent Vector Problem ($\gamma$-Gap-$\mathsf{SIVP}_p$): Given a lattice $\mathcal{L}$ and a value $r > 0$, output YES if there exists a set of $n$ linearly independent vectors in $\mathcal{L}$ such that the longest vector in the set has length at most $r$, and output NO if all such sets contain a vector of length greater than $\gamma r$.
For the above gap problems, the non-gap variants are the exact cases where $\gamma = 1$, and thus the Gap prefix will be omitted.
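The gap decision problems above can be illustrated with a brute-force sketch (illustrative only; real instances require far more sophisticated algorithms). `dist_to_lattice` and `gap_cvp` are hypothetical helpers, and the enumeration `radius` is assumed large enough for the toy inputs:

```python
from itertools import product

def dist_to_lattice(basis, target, p=2, radius=4):
    """Brute-force the l_p distance from `target` to the lattice generated
    by `basis`, enumerating coefficients in [-radius, radius]."""
    n, d = len(basis), len(basis[0])
    best = float("inf")
    for coeffs in product(range(-radius, radius + 1), repeat=n):
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(d)]
        dist = sum(abs(v[j] - target[j]) ** p for j in range(d)) ** (1.0 / p)
        best = min(best, dist)
    return best

def gap_cvp(basis, target, r, gamma=1.0, p=2):
    """Decide a (promise) GapCVP instance: YES if dist <= r, NO if dist > gamma*r."""
    d = dist_to_lattice(basis, target, p)
    if d <= r:
        return "YES"
    if d > gamma * r:
        return "NO"
    return "PROMISE VIOLATED"  # the instance falls inside the gap

print(gap_cvp([[1, 0], [0, 1]], [0.2, 0.0], 0.5))  # YES
```

For $\gamma = 1$ the two conditions partition all inputs and the problem is exact $\mathsf{CVP}$, matching the convention above.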
$k$-$\mathsf{SAT}$: Given a boolean formula in conjunctive normal form over $n$ variables, i.e. as a conjunction of clauses where each clause is a disjunction of at most $k$ literals, decide if there is an assignment (of either true or false) to the variables such that the boolean formula evaluates to true.
$(\delta, \epsilon)$-Gap-$k$-$\mathsf{SAT}$: Given a boolean formula in conjunctive normal form and two constants $0 \le \delta < \epsilon \le 1$, output YES if there exists an assignment that satisfies at least an $\epsilon$ fraction of the clauses, and output NO if every assignment satisfies at most a $\delta$ fraction of the clauses. For convenience, at times the $(\delta, \epsilon)$ prefix may be omitted when unnecessary.
2.2 ETH, SETH and GapETHhardness
[IP01] introduced the conjectures that will be used as the main assumptions to derive our hardness results.
Definition 1 (Exponential Time Hypothesis).
The Exponential Time Hypothesis (ETH) states that for every $k \ge 3$ there exists a constant $\epsilon_k > 0$ such that no algorithm solves $k$-$\mathsf{SAT}$ formulas with $n$ variables in deterministic $2^{\epsilon_k n}$ time.
Definition 2 (Strong Exponential Time Hypothesis).
The Strong Exponential Time Hypothesis (SETH) states that for all $\epsilon > 0$, there exists a $k$ such that no algorithm solves $k$-$\mathsf{SAT}$ formulas with $n$ variables in deterministic $2^{(1-\epsilon)n}$ time.
Definition 3 (Gap Exponential Time Hypothesis).
There exist constants $\epsilon > 0$ and $\delta < 1$ such that no algorithm solves $(\delta, 1)$-Gap-3-$\mathsf{SAT}$ instances with $n$ variables in deterministic $2^{\epsilon n}$ time.
The above formulation is from [BGS17].
3 Related Results
The main result that has led to subsequent hardness proofs for other lattice problems was derived by [BGS17] through the construction of isolating parallelepipeds. These encode assignments of $k$-$\mathsf{SAT}$ instances as choices of lattice vectors so that each satisfied clause contributes the same distance regardless of how many of its literals are satisfied, as long as at least one literal is, whereas unsatisfied clauses contribute a much greater distance.
3.1 SETH-hardness of CVP under almost any norm
Theorem 4.
Assuming SETH, solving exact $\mathsf{CVP}_p$ in any $\ell_p$ norm where $p$ is not an even integer is not possible in $2^{(1-\epsilon)n}$ time for any $\epsilon > 0$, where $n$ is the rank of the lattice.
The same proof works for in general instead of .
3.2 GapETHhardness of approximating CVP within a constant factor
Theorem 5 ([BGS17]).
There exists a reduction from $(\delta, 1)$-Gap-3-$\mathsf{SAT}$ with $n$ variables and $m$ clauses to $\gamma$-Gap-$\mathsf{CVP}_p$ for any $\ell_p$ norm, so that the rank of the lattice in the resulting $\mathsf{CVP}$ instance is the same as the number of variables in the original instance. Furthermore, $\gamma > 1$ is a constant depending only on $\delta$ and $p$.
We reproduce their construction of the $\mathsf{CVP}$ instance here. Let $t$ be a target vector defined by the following:
where each entry depends on the number of negated literals in the corresponding clause, the distance bound is $r$, and the basis is a set of (column) vectors defined by the following:
We will make the following claim about the reduction proposed in their paper, as it will be useful to us in our reduction: in the resulting lattice, both $\lambda_n(\mathcal{L})$ and the length of the target vector are upper bounded by a quantity proportional to the number of clauses in the Gap-3-$\mathsf{SAT}$ instance. Thus we can say that the resulting instance is also an instance of $\mathsf{CVP}'$.
Proof.
Consider the construction provided in [BGS17]: the basis vectors provided have entries from a small fixed set of values, so in the worst case we obtain a set of $n$ linearly independent vectors with the longest vector having all of its entries from that set. ∎
3.3 GapETHhardness of 
Theorem 6 ([GJS76]).
For some constant $\delta < 1$, there exists a polynomial time reduction from 3-$\mathsf{SAT}$ with $n$ variables and $m$ clauses to an instance of $(\delta, 1)$-Gap-3-$\mathsf{SAT}$ with $O(n)$ variables and $O(m)$ clauses.
Additionally, Bennett et al. used Dinur’s result [Din16] to derive the following:
Theorem 7 ([BGS17]).
There exist constants such that there is a polynomial time randomised reduction from a Gap-3-$\mathsf{SAT}$ instance with $n$ variables and $m$ clauses to instances of Gap-3-$\mathsf{SAT}$ with $n$ variables and $O(n)$ clauses.
This implies that it is almost always possible to reduce the number of clauses in Gap-3-$\mathsf{SAT}$ instances, so that reductions whose cost is linear in $m$ may also be considered linear in $n$, and Gap-ETH still applies. However, since the reduction is randomised, the existence of subexponential time algorithms for the resulting instances only implies the existence of randomised subexponential time algorithms for Gap-3-$\mathsf{SAT}$ in the general case (i.e. when $m$ is not linear in $n$).
3.4 SETHhardness of exact CVP under almost any pnorm
Theorem 8 ([BGS17]).
There exists a polynomial time reduction from $k$-$\mathsf{SAT}$ to $\mathsf{CVP}_p$ such that the rank of the resulting lattice is the same as the number of variables in the original instance, for all $p \ge 1$ that is not an even integer.
Corollary 1.
Assuming SETH, solving exact $\mathsf{CVP}_p$ in any $\ell_p$ norm where $p$ is not an even integer is not possible in $2^{(1-\epsilon)n}$ time for any $\epsilon > 0$, where $n$ is the rank of the lattice.
[BS99] had also previously constructed a reduction that was tight in the resulting instance size, since it only increased the rank by $1$: intuitively, the target vector is treated as an extra basis vector of the $\mathsf{SIVP}$ instance. To do this, an extra value that was large enough was padded to the bottom of the target vector to ensure it would be long enough to be counted among the successive minima.

4 Main Contribution
We now present our main contribution: showing hardness of approximating Gap-$\mathsf{SIVP}$ within a constant factor.
Theorem 9.
For any $p \ge 1$, there exists an efficient reduction from $\mathsf{CVP}'_p$ to Gap-$\mathsf{SIVP}_p$. Moreover, the rank of the lattice in the Gap-$\mathsf{SIVP}$ instance equals $n + 1$, where $n$ is the rank of the $\mathsf{CVP}'$ instance.
Proof.
We will let $\mathcal{L}$ denote the lattice of the $\mathsf{CVP}'$ instance and $\mathcal{L}'$ denote the lattice of the $\mathsf{SIVP}$ instance. Likewise, we will let $\lambda_n$ denote the largest successive minimum of the $\mathsf{CVP}'$ lattice, whereas $\lambda'_{n+1}$ denotes the largest successive minimum of the $\mathsf{SIVP}$ lattice.
Given a basis $B$ for the $\mathsf{CVP}'$ instance and the target vector $t$, we construct the basis $B'$ for the $\mathsf{SIVP}$ instance:
$$B' = \begin{pmatrix} B & t \\ 0 & \alpha \end{pmatrix},$$
where $\alpha$ is some value that we are able to tweak; we will choose $\alpha$ appropriately below. We will first analyse how the YES and NO instances of $\mathsf{CVP}'$ translate into the corresponding YES and NO instances of $\mathsf{SIVP}$, and will then show that there exist possible values for $\alpha$ such that the reduction holds.
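As a sketch, the basis construction amounts to a few lines of code; `sivp_basis` is a hypothetical helper implementing our reading of the standard embedding (each CVP basis vector padded with a final 0 coordinate, plus the target padded with the tweakable value), which increases the rank by exactly 1:

```python
def sivp_basis(cvp_basis, target, alpha):
    """Embed a CVP instance (basis, target) into an SIVP basis:
    pad every basis vector with a 0 coordinate and append the target
    vector with `alpha` as its extra coordinate."""
    padded = [list(b) + [0] for b in cvp_basis]  # original vectors, padded
    padded.append(list(target) + [alpha])        # the target becomes a basis vector
    return padded

print(sivp_basis([[1, 0], [0, 1]], [3, 5], 7))
# [[1, 0, 0], [0, 1, 0], [3, 5, 7]]
```

Any lattice vector using the appended generator a nonzero number of times carries at least $|\alpha|$ (scaled by that coefficient) in its last coordinate, which is what the case analysis below exploits.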
Recall that in $\mathsf{CVP}'$, the YES instances are those where the shortest possible distance from the target vector to the given lattice is at most $r$, and in the NO instances the shortest possible distance from the target vector is at least $\gamma r$. Then in the resulting $\mathsf{SIVP}$ instance, we obtain the following inequalities:
YES :  
NO : 
Let $v \in \mathcal{L}$ be the lattice vector closest to the target $t$. Let $S$ be a set of $n$ linearly independent vectors in $\mathcal{L}$ such that the length of the longest vector in $S$
is minimized.
Notice that the vectors of $S$, padded with a final $0$ coordinate and taken together with the vector corresponding to $v - t$, form a set of $n + 1$ linearly independent vectors in $\mathcal{L}'$. Thus, if the instance is a YES instance, $\lambda'_{n+1}$ is upper bounded by the maximum of the lengths of these vectors.
Also, any set of $n + 1$ linearly independent vectors in $\mathcal{L}'$ must have at least one vector with a nonzero coefficient for the last basis vector. So, if the instance is a NO instance, then if that coefficient is $1$ or $-1$, the length of the vector is at least the distance of the target from the lattice, and if the coefficient has absolute value at least $2$, then the length is at least $2\alpha$.
From this we obtain:
YES :  
NO : 
For all cases, we will pick $\alpha$ to be the same value; it will always be the case that
.
 CASE 1: .

Since , we get that .
 CASE 2: .

Then in the YES case, we have that .
Ergo, by our choice of again, we get .
 CASE 3: .

In this case we have that . So . Then we have that is upper bounded by:
This reduction clearly runs in polynomial time. ∎
From this, we can conclude that if we were to set to , we would get that .
Theorem 10.
Under the randomised Gap Exponential Time Hypothesis, there exists a constant $\gamma > 1$ such that $\gamma$-Gap-$\mathsf{SIVP}$ with rank $n$ is not solvable in $2^{o(n)}$ time.
Proof.
This can be achieved by tracking the sizes of the instances throughout the chain of reductions from Gap-3-$\mathsf{SAT}$ to its sparsified version, to $\mathsf{CVP}'$, and finally to Gap-$\mathsf{SIVP}$.
From the original Gap-3-$\mathsf{SAT}$ instance with $n$ variables and $m$ clauses, we obtain a Gap-$\mathsf{SIVP}$ instance with rank $n + 1$
with high probability. Thus, under the randomised Gap-ETH, there is no subexponential time algorithm for
$\gamma$-Gap-$\mathsf{SIVP}$. ∎
References
 [ADPS16] Erdem Alkim, Léo Ducas, Thomas Pöppelmann, and Peter Schwabe. Post-quantum key exchange — A new hope. In USENIX Security Symposium, 2016.
 [ADRS15] Divesh Aggarwal, Daniel Dadush, Oded Regev, and Noah Stephens-Davidowitz. Solving the Shortest Vector Problem in $2^n$ time via discrete Gaussian sampling. In STOC, 2015.
 [ADS15] Divesh Aggarwal, Daniel Dadush, and Noah Stephens-Davidowitz. Solving the Closest Vector Problem in $2^n$ time — the discrete Gaussian strikes again! In FOCS, 2015.
 [Ajt98] Miklós Ajtai. Worst-case complexity, average-case complexity and lattice problems. 1998.
 [Ajt04] Miklós Ajtai. Generating hard instances of lattice problems. In Complexity of computations and proofs, volume 13 of Quad. Mat., pages 1–32. Dept. Math., Seconda Univ. Napoli, Caserta, 2004. Preliminary version in STOC’96.

 [AS18] Divesh Aggarwal and Noah Stephens-Davidowitz. (Gap/S)ETH hardness of SVP. In STOC, 2018.
 [BCD16] Joppe W. Bos, Craig Costello, Léo Ducas, Ilya Mironov, Michael Naehrig, Valeria Nikolaenko, Ananth Raghunathan, and Douglas Stebila. Frodo: Take off the ring! Practical, quantum-secure key exchange from LWE. In CCS, 2016.
 [BGS17] Huck Bennett, Alexander Golovnev, and Noah Stephens-Davidowitz. On the quantitative hardness of CVP. In FOCS, 2017.
 [BS99] Johannes Blömer and Jean-Pierre Seifert. On the complexity of computing short linearly independent vectors and short bases in a lattice. In Proceedings of the Thirty-first Annual ACM Symposium on Theory of Computing, STOC ’99, pages 711–720, New York, NY, USA, 1999. ACM.
 [Din16] Irit Dinur. Mildly exponential reduction from gap 3-SAT to polynomial-gap label cover. Electronic Colloquium on Computational Complexity (ECCC), 23:128, 2016.
 [DPV11] Daniel Dadush, Chris Peikert, and Santosh Vempala. Enumerative lattice algorithms in any norm via M-ellipsoid coverings. In FOCS, 2011.
 [GGH96] Oded Goldreich, Shafi Goldwasser, and Shai Halevi. Collision-free hashing from lattice problems. IACR Cryptology ePrint Archive, 1996:9, 1996.
 [GJS76] M. R. Garey, D. S. Johnson, and L. Stockmeyer. Some simplified NP-complete graph problems. Theoretical Computer Science, 1(3):237–267, 1976.
 [GPV08] Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In STOC, 2008.
 [IP01] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. Journal of Computer and System Sciences, 62(2):367–375, 2001.
 [JS98] Antoine Joux and Jacques Stern. Lattice reduction: A toolbox for the cryptanalyst. Journal of Cryptology, 11(3):161–185, 1998.
 [Kan87] Ravi Kannan. Minkowski’s convex body theorem and integer programming. Math. Oper. Res., 12(3):415–440, 1987.
 [Len83] H. W. Lenstra, Jr. Integer programming with a fixed number of variables. Math. Oper. Res., 8(4):538–548, 1983.
 [LLL82] A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász. Factoring polynomials with rational coefficients. Math. Ann., 261(4):515–534, 1982.
 [MR17] Pasin Manurangsi and Prasad Raghavendra. A birthday repetition theorem and complexity of approximating dense CSPs. In ICALP, volume 80 of LIPIcs, pages 78:1–78:15, 2017.
 [NIS] NIST postquantum standardization call for proposals.
 [NS01] Phong Q Nguyen and Jacques Stern. The two faces of lattices in cryptology. In Cryptography and lattices, pages 146–180. Springer, 2001.
 [Odl90] Andrew M Odlyzko. The rise and fall of knapsack cryptosystems. Cryptology and computational number theory, 42:75–88, 1990.
 [Pei10] Chris Peikert. An efficient and parallel Gaussian sampler for lattices. In CRYPTO. 2010.
 [Pei16] Chris Peikert. A decade of lattice cryptography. Foundations and Trends in Theoretical Computer Science, 10(4):283–424, 2016.
 [Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. Journal of the ACM, 56(6):Art. 34, 40, 2009.
 [Sha84] Adi Shamir. A polynomial-time algorithm for breaking the basic Merkle-Hellman cryptosystem. IEEE Trans. Inform. Theory, 30(5):699–704, 1984.