We give an efficient quantum algorithm for a special case of the closest lattice vector problem in a new range of approximation factors, namely the subexponential range. In this type of problem a basis $B \in \mathbb{Z}^{n \times n}$ and a target vector $t$ are given, and the goal is to compute the lattice vector $Bx$, with integer coefficients $x \in \mathbb{Z}^n$, closest to $t$. This paper is about the important special case called Bounded Distance Decoding (BDD) with parameter $\alpha$. It has the extra promise that the distance is bounded in the sense that there exists $x \in \mathbb{Z}^n$ such that $\|Bx - t\| \le \alpha \cdot \lambda_1(\mathcal{L}(B))$, where $\lambda_1(\mathcal{L}(B))$ is the shortest nonzero vector length in $\mathcal{L}(B)$, and $\alpha < 1/2$, making the answer unique. The term $\alpha$ is the approximation factor and it is typically a function of $n$, the lattice dimension. A lattice can be specified by an infinite number of bases, making the problem difficult.
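The last point can be checked directly: two integer bases generate the same lattice exactly when they differ by a unimodular integer matrix (integer entries, determinant $\pm 1$). A minimal numpy sketch, with all matrix names illustrative:

```python
import numpy as np

# Two bases generate the same lattice iff B2 = B1 @ U for a unimodular
# integer matrix U (det = +-1).  The matrices below are illustrative.
B1 = np.array([[2, 0],
               [1, 3]])
U = np.array([[2, 1],
              [1, 1]])          # det = 1, so U is unimodular
B2 = B1 @ U                      # a second basis for the same lattice

# Check: each basis expresses the other's columns with integer coefficients.
C12 = np.linalg.solve(B1.astype(float), B2.astype(float))
C21 = np.linalg.solve(B2.astype(float), B1.astype(float))
same_lattice = (np.allclose(C12, np.round(C12)) and
                np.allclose(C21, np.round(C21)))
print(same_lattice)  # True
```

Iterating over unimodular matrices produces arbitrarily many bases of the same lattice, including very badly conditioned ones, which is the source of the hardness.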
There are three broad ranges of approximation factors where lattice problems are unlikely to be NP-hard.
Starting at the right end, the exponential range typically has the form $2^{\Theta(n \log\log n / \log n)}$ and was solved efficiently in the 80's. Lenstra, Lenstra, and Lovász [LLL82] gave an algorithm to compute an approximate shortest lattice vector, Babai [Bab86] gave algorithms for computing an approximate closest lattice vector to a target point, Kannan [Kan87] gave an exponential-time enumeration algorithm, and Schnorr [Sch87, Sch94] extended these by trading off running time in exchange for better approximation factors.
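Babai's nearest-plane procedure referenced above is short enough to sketch in full. The following is an illustrative implementation (our own function names, not the paper's); its approximation quality depends on how reduced the input basis is, so on a well-reduced basis a target within the BDD promise decodes correctly:

```python
import numpy as np

def gram_schmidt(B):
    """Column-wise Gram-Schmidt orthogonalization (no normalization)."""
    n = B.shape[1]
    Bstar = B.astype(float).copy()
    for i in range(n):
        for j in range(i):
            mu = Bstar[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu * Bstar[:, j]
    return Bstar

def nearest_plane(B, t):
    """Babai's nearest-plane algorithm: returns a lattice vector close to t.
    A sketch for illustration; quality depends on how reduced B is."""
    Bstar = gram_schmidt(B)
    n = B.shape[1]
    v = np.zeros(B.shape[0])
    r = t.astype(float).copy()
    for i in reversed(range(n)):
        # project the residual onto the i-th Gram-Schmidt direction and round
        c = round(r @ Bstar[:, i] / (Bstar[:, i] @ Bstar[:, i]))
        r -= c * B[:, i]
        v += c * B[:, i]
    return v

B = np.array([[5, 1], [1, 4]])
x = np.array([3, -2])             # hidden coefficients
t = B @ x + np.array([0.3, -0.2]) # small error, within the BDD promise
print(nearest_plane(B, t))        # recovers B @ x = [13, -5]
```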
Adjacent to the exponential range is the subexponential range, which is the focus of this paper. Despite several decades of big advances in lattices, this region appears to either be very difficult, or to have been neglected. In special-case lattices that allow more efficient cryptography because of additional algebraic structure, a sequence of works led to an efficient quantum algorithm for approximating the shortest vector [EHKS14, BS16, CGS14, CDPR16]. Lattices with small determinant have also been examined [CL15]. The subexponential region has played a crucial role in recent advances in fully homomorphic encryption (FHE) [BV11a, BV11b, GSW13], where it is assumed that certain parameter ranges cannot be solved efficiently.
At the hardest end of the range for algorithms and cryptography are polynomial approximation factors, where there are very important questions about how well existing algorithms work and how they can be optimized, for example for concrete security in the NIST standardization process.
Cryptography built on the LWE problem, which is as hard as worst-case lattice problems and therefore has good theoretical security, also allows many new primitives such as FHE, testing whether or not machines are quantum [BCM18, Mah18b], and quantum FHE [Mah18a].
Therefore, the most important and pressing question is whether or not efficient algorithms exist for the polynomial approximation factor range. The realistic approach is to start with problems in the range adjacent to the ones that are already solvable, which is the subexponential range. Even if an efficient algorithm exists for the polynomial range it may be too difficult to find in one step, but the hope is that the techniques here will be applicable to a broader range of cases.
We propose a partition of lattices into finer blocks than by dimension alone. The partition consists of sets of lattices indexed by lattice dimension $n$, periodicity $q$, and finite group rank $k$. The periodicity of a lattice $\mathcal{L} \subseteq \mathbb{Z}^n$ is the minimum integer $q$ such that the lattice contains the subgroup $q\mathbb{Z}^n$. Therefore $\mathcal{L}/q\mathbb{Z}^n$ is a finite abelian subgroup of $\mathbb{Z}_q^n$ and can be decomposed as a direct product of cyclic groups $\mathbb{Z}_{q_1} \times \cdots \times \mathbb{Z}_{q_k}$, where $k$ is the finite group rank.
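The periodicity of a concrete basis can be computed exactly: $q$ is the smallest positive integer with $q\mathbb{Z}^n \subseteq \mathcal{L}(B)$, i.e. the smallest $q$ making $q \cdot B^{-1}$ integral, which is the lcm of the denominators of $B^{-1}$. A minimal exact-arithmetic sketch (function names are ours, not the paper's):

```python
from fractions import Fraction
from math import lcm

def exact_inverse(B):
    """Gauss-Jordan inverse over the rationals (B: list of integer rows)."""
    n = len(B)
    M = [[Fraction(B[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def periodicity(B):
    """Minimum q with q*Z^n contained in L(B): the lcm of the
    denominators of the entries of B^{-1}."""
    inv = exact_inverse(B)
    return lcm(*[entry.denominator for row in inv for entry in row])

B = [[2, 1],
     [0, 4]]
print(periodicity(B))   # 8
```

Here $B^{-1} = \frac{1}{8}\begin{psmallmatrix}4 & -1\\ 0 & 2\end{psmallmatrix}$ has denominators $\{2, 8, 1, 4\}$, so the lattice is 8-periodic.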
Through this lens we give a quantum algorithm on a subset of lattices achieving a subexponential approximation factor and running in subexponential time (Theorem 21). To give a simplified comparison to existing algorithms analyzed in terms of dimension alone: for one range of the parameters we get an improvement over Babai's algorithm, and BDD on lattices with small finite group rank and moderate periodicity can be solved for a subexponential approximation factor. The periodicity cannot be too small or BDD becomes trivial. More generally, the new parameter range can be seen in the figures, which plot the rank parameter on the left axis against the log of the approximation factor on the bottom axis; changing the periodicity changes the trivial region.
Moving beyond polynomial time, Schnorr's hierarchy can also be applied to the output of the quantum algorithm as a black box, trading running time for a better approximation factor. Applying Schnorr directly to the original lattice gives a strictly worse trade-off, with an extra factor in the time exponent.
Idea of proof. In the context of the partition, we reduce BDD on worst-case lattices in a given class to a problem we will call random-BDD, where a random matrix $A$ is chosen, the lattice has as generators the columns of $A$ (together with $q\mathbb{Z}^n$), and a target vector is given within the promised distance of the lattice. From lattice theory we need that these lattices have a long shortest vector with high probability [Mica]. The problem can be solved for the required approximation factor when the dimension is small enough by running LLL and Babai's algorithm. Unlike the worst-case BDD problem, the random-BDD problem can also be solved in higher dimensions, because deleting rows until the solvable dimension is reached still leaves a random instance.
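The "long shortest vector" fact can be sanity-checked numerically on toy parameters by brute force over the coefficient space of a random $q$-ary lattice (function names are ours; see [Mica] for the precise statement):

```python
import itertools, math, random

def q_ary_lambda1(A, q, n, k):
    """Brute-force shortest nonzero vector length of the q-ary lattice
    {v in Z^n : v = A x (mod q) for some x}.  Exponential in k; for
    tiny demo parameters only."""
    best = float(q)          # q*e_1 is always in the lattice
    for x in itertools.product(range(q), repeat=k):
        v = [sum(A[i][j] * x[j] for j in range(k)) % q for i in range(n)]
        # shortest representative of the coset v + q*Z^n
        w = [vi - q if vi > q // 2 else vi for vi in v]
        if any(w):
            best = min(best, math.sqrt(sum(wi * wi for wi in w)))
    return best

random.seed(1)
q, n, k = 31, 6, 2
A = [[random.randrange(q) for _ in range(k)] for _ in range(n)]
lam = q_ary_lambda1(A, q, n, k)
print(lam)   # typically on the order of q**(1 - k/n)
```

Repeating with fresh random matrices shows the heuristic concentration of $\lambda_1$ that the reduction relies on.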
To randomly reduce the BDD instance to a random-BDD instance in a lower dimension, we revisit an old quantum state with a "phase problem" and find a way to use it. Given a lattice basis and a radius, a goal for the last 20+ years has been to compute the quantum state $\sum_{v \in \mathcal{L}} \sum_{c \in S} |v + c\rangle$, where $S$ is some shape with the prescribed radius such as a cube, Gaussian, or sphere (see also [AR05]). We use a cube for simplicity.
The main approach for computing this state is to prepare a superposition over coefficients in the first register and over the cube in the second register to get $\sum_x |x\rangle \sum_{c \in S} |c\rangle$, then to entangle the registers by adding the corresponding lattice vector into the second register to get $\sum_x |x\rangle \sum_{c \in S} |Bx + c\rangle$. The last step would be to "uncompute" the first register, and this is where the state becomes difficult to use. If an algorithm could solve BDD at this point then it could use $Bx + c$ to compute $x$ and uncompute the first register, but this is circular.
Regev [Reg09] found a way to use this state constructively by using an LWE oracle to erase the coefficient, and as a result of his construction, reduced worst-case lattice problems to LWE. The more typical approach for algorithms might be to compute the quantum Fourier transform of the first register, because it is possible, and measure it. The resulting state is $\sum_x \omega^{\langle w, x \rangle} \sum_{c \in S} |Bx + c\rangle$ for a random vector $w$. This changes the "uncomputing" problem into a "phase" problem because this state is similar to $\sum_x \sum_{c \in S} |Bx + c\rangle$, the desired state, but has the phase mixed in in a problematic way. The difficulty is that it is no longer clear how to use it. In [BKSW18] states related to this, but with Gaussians, were used to show an equivalence between LWE and an extension of a certain nonabelian hidden subgroup problem. In [CLZ21] the Arora-Ge algorithm [AG10] for LWE was used to uncompute projections.
Overcoming the difficulty. In this paper we revisit this state, which we call a Phased Cube State (PCS), because there is a cube around each lattice point, and each cube has a single phase across it. The modulus $q$ is chosen as the periodicity of the lattice, and $\mathbb{Z}_q^n$ is the subgroup where computations are done. This state can be efficiently created for any side length, and with a uniformly random and known phase label $w$.
The main idea is to see that the phased cube state is almost an eigenvector of shifts by vectors close to the lattice, as in BDD instances, and that quantum phase estimation can compute an approximation of that information. More specifically, for $s \in \mathbb{Z}_q^n$, let $U_s$ be the shift operator $|y\rangle \mapsto |y + s\rangle$ inside $\mathbb{Z}_q^n$. Then for a lattice element $v$ with coefficient vector $x_v$, $U_v |\psi_w\rangle = \omega^{-\langle w, x_v \rangle} |\psi_w\rangle$. If desired, the quantum phase estimation algorithm can be used to compute the inner product $\langle w, x_v \rangle$, and repeating the process results in these inner products for different random $w$, from which the coefficients can be computed.
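A one-dimensional toy model makes the eigenvector property concrete: build a phased state over $\mathbb{Z}_N$ with lattice $g\mathbb{Z}_N$, shift it by an exact lattice vector (global phase, exact eigenvector), then by a lattice vector plus a small error (large but imperfect overlap). All parameter names below are illustrative:

```python
import numpy as np

# Toy phased cube state: the "lattice" is g*Z_N inside Z_N (k = N // g
# points), each with a cube of side L around it, and a phase depending
# on the coefficient x.
g, k, r = 8, 8, 2          # lattice spacing, number of points, cube radius
N, L = g * k, 2 * r + 1    # ambient group Z_N, cube side length
w = 3                      # random phase label in Z_k
omega = np.exp(2j * np.pi / k)

psi = np.zeros(N, dtype=complex)
for x in range(k):                       # lattice point g*x, coefficient x
    for c in range(-r, r + 1):
        psi[(g * x + c) % N] = omega ** (w * x)
psi /= np.linalg.norm(psi)

def shift(v, s):
    """The shift operator U_s |y> = |y + s> on C^{Z_N}."""
    return np.roll(v, s)

# Shifting by an exact lattice vector multiplies psi by a global phase:
exact = np.allclose(shift(psi, g), omega ** (-w) * psi)

# Shifting by a lattice vector plus a small error e leaves a large
# overlap with the phased state (the cubes still mostly coincide):
e = 1
overlap = abs(np.vdot(omega ** (-w) * psi, shift(psi, g + e)))
print(exact, round(overlap, 3))   # True 0.8
```

The overlap $1 - e/L = 0.8$ matches the intuition that a shifted cube of side $L$ still covers an $(L - e)/L$ fraction of the original.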
For a BDD instance with target $t = v + e$, where $\|e\|$ is controlled by the BDD promise, shifting by $t$ results in an imperfect overlap, and we show that quantum phase estimation still returns an approximation of $\langle w, x_v \rangle$. Quantum phase estimation exponentiates the operator to the power of the precision requested, and because $U_t^p = U_{pt}$, the quality of the approximate eigenvector degrades for higher powers, which limits how much information can be extracted. To accommodate the set of possible BDD target vectors we define the notion of having a set of operators together with a single approximate eigenvector. With the worst-case lattice problem BDD as input, this quantum subroutine is used to sample noisy inner products and construct a random-BDD instance in a lower, solvable dimension.
To summarize, we give a new quantum algorithm solving BDD on a range of subexponential approximation factors.
1.1 Comparison to classical algorithms and open problems
A talk given on this work in September 2021 at the Simons Institute for the Theory of Computing sparked much productive discussion. In particular, the paper [DvW21] was posted with a classical algorithm solving a part of what we solve that had not appeared in the literature before. After going through existing literature, having discussions with the community, and waiting for responses, we leave it as a challenge to provide a classical algorithm matching the subexponential-time quantum algorithm.
For the polynomial-time range, we show:
There is a polynomial-time quantum algorithm solving $\alpha$-BDD on lattices of dimension $n$, periodicity $q$, and finite group rank $k$, for the stated range of $\alpha$.
This inspired the posting of a classical algorithm for this problem. In that paper [DvW21], one case matches Theorem 20, but details are left out about how to match the other case. In Section 7 we complete the analysis and also generalize it to rectangle-periodic lattices, rather than just cubes. Our quantum algorithm already handles this type of lattice because it works for any finite group, so classical and quantum have the same performance here, even though the ideas are completely different. The paper does not address our next theorem, which is exponentially faster than the best available classical algorithm:
Let $\mathcal{L}$ be an $n$-dimensional $q$-periodic lattice with finite group rank $k$. Given an instance of $\alpha$-BDD on $\mathcal{L}$ with $\alpha$ in the stated range, Algorithm 22 runs in subexponential quantum time and returns the closest vector to the target with at least constant probability.
One other suggestion for a classical approach to this problem was to use [GMPW20, Theorem 5.3]. It is not clear how the details would work, in particular how to handle the non-primitive case.
It is still open whether the Schnorr trade-off we give in Section 6 can be matched classically. These questions are out of scope for this paper and we leave them as open problems. The parameter boundary we encounter has also appeared as a boundary for LWE, where algorithms use the Gaussian error more carefully [MR09, LP11, BLP13, BCM18].
There are many open problems and possible extensions. The most interesting is to try variations of the quantum algorithm; it is relatively clean and easy to experiment with. For example, solving approximate arithmetic progressions is a possibility. Another is analyzing different worst-case lattice problems such as uSVP, using reductions between instances of LWE (a type of random BDD where the errors for each coordinate are i.i.d.) to map between different dimensions and moduli, and using different groups [BLP13, GINX16]. These new BDD algorithms can also be used to sample short lattice vectors, via quantum ([Reg09, Theorem 1.3]) or classical sampling ([GPV08]).
2.1 Lattices, finite abelian groups, and distances
Every full-rank integer lattice $\mathcal{L} \subseteq \mathbb{Z}^n$ has a minimum $q$ such that $q\mathbb{Z}^n \subseteq \mathcal{L}$, and is called $q$-periodic, or $q$-ary. Because $q\mathbb{Z}^n$ is a subgroup of $\mathcal{L}$, the quotient $G = \mathcal{L}/q\mathbb{Z}^n$ has all information about the lattice in the sense that distances are preserved mod $q$ and $\mathcal{L} = G + q\mathbb{Z}^n$. Computing the closest lattice vector over $\mathbb{Z}^n$ can be reduced to this case by reducing the lattice and target vector mod $q$, solving the problem in $\mathbb{Z}_q^n$, and then mapping back to the integer solution of the original problem. Starting from the finite group $G$, the associated lattice is constructed by reintroducing $q\mathbb{Z}^n$, so the columns of the generator matrix of $G$ together with $qI_n$ generate $\mathcal{L}$.
A finite abelian group can be decomposed as a direct product of cyclic groups $\mathbb{Z}_{q_1} \times \cdots \times \mathbb{Z}_{q_k}$. The representation used is the finite group decomposition $(B, (q_1, \ldots, q_k))$, where the columns of $B$ span the group, the vector $(q_1, \ldots, q_k)$ gives the orders of the columns of $B$ in the decomposition, and the notation makes the rank $k$ visible. The specific decomposition can be chosen, but here the unique one will be used where $q_{i+1}$ divides $q_i$. The set $\mathbb{Z}_{q_1} \times \cdots \times \mathbb{Z}_{q_k}$ will be viewed as the set of coefficients of the group elements, where each group element $g$ has a unique coefficient vector $x$ such that $g = Bx$.
The setup is similar to the matrix $A$ used in lattice cryptography, but not exactly the same. In the worst-case to average-case reduction the input lattice has arbitrary dimension, subject to sampling in the dual. The matrix $A$ is typically chosen uniformly at random and as a result has finite group decomposition $\mathbb{Z}_q^k$ with high probability. Here we also use $q$-ary lattices, but we start with a worst-case lattice $\mathcal{L}$, use the specific periodicity $q$ of $\mathcal{L}$, and decompose it mod $q$.
An arbitrary full-dimensional integer lattice $\mathcal{L}(B)$ is a $d$-periodic lattice for $d = |\det(B)|$. This can be seen from the fact that $d \cdot B^{-1}$ is an integer matrix, because by Cramer's rule each entry of $B^{-1}$ has $\det(B)$ in the denominator and an integer in the numerator. Then using the integer coefficient vectors from the columns of $d \cdot B^{-1}$ takes $B$ to $B \cdot dB^{-1} = dI_n$, which shows $d\mathbb{Z}^n \subseteq \mathcal{L}(B)$. The parameters set this way may not always work in the quantum algorithms, for example, when $d$ is too large relative to $n$.
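The determinant argument is easy to verify numerically on a small example (matrix chosen for illustration):

```python
import numpy as np

# For any full-rank integer basis B, d = |det B| satisfies d*Z^n in L(B),
# because d * B^{-1} is an integer matrix (Cramer's rule): the columns
# x_i = d * B^{-1} e_i are integer coefficient vectors with B x_i = d e_i.
B = np.array([[3, 1],
              [1, 2]])
d = round(abs(np.linalg.det(B)))          # d = 5
X = d * np.linalg.inv(B)                  # integer coefficient matrix
is_integer = np.allclose(X, np.round(X))
print(d, is_integer)                      # 5 True
```

Note that $d$ is only an upper bound on the periodicity: the true periodicity divides $d$ and can be much smaller.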
Given a lattice basis, the finite group decomposition can be computed efficiently, e.g., via the Smith normal form.
The quantum Fourier transform over the cyclic group $\mathbb{Z}_N$ maps $|x\rangle$ to $\frac{1}{\sqrt{N}} \sum_{y \in \mathbb{Z}_N} \omega_N^{xy} |y\rangle$. In general, for a finite abelian group $G$ the Fourier transform maps vectors over the group to vectors over the character group $\hat{G}$.
There will be a reindexing step for the eigenvector/eigenvalue calculation when a register holding a superposition of coefficients in $\mathbb{Z}_{q_1} \times \cdots \times \mathbb{Z}_{q_k}$ is transformed by the Fourier transform over $\mathbb{Z}_q^k$. This uses the subgroup embedding of $\mathbb{Z}_{q_i}$ into $\mathbb{Z}_q$. Concretely this means that $x_i \in \mathbb{Z}_{q_i}$ maps to the element $(q/q_i) x_i \in \mathbb{Z}_q$, so that the character $\omega_{q_i}^{x_i y_i}$ becomes $\omega_q^{(q/q_i) x_i y_i}$.
A distance on $\mathbb{Z}_q^n$ will be needed to define and solve BDD on subgroups of $\mathbb{Z}_q^n$, and also for the phase estimation statement. Following Cassels [Cas97] specialized to finite groups, the modular distance on the quotient $\mathbb{Z}_q^n = \mathbb{Z}^n / q\mathbb{Z}^n$ is defined from the Euclidean norm on $\mathbb{Z}^n$ by $\|x\|_q = \min_{z \in q\mathbb{Z}^n} \|x - z\|$. For any $x, y \in \mathbb{Z}_q^n$ it satisfies (1) $\|mx\|_q \le |m| \cdot \|x\|_q$ for integers $m$, (2) $\|x + y\|_q \le \|x\|_q + \|y\|_q$, and (3) there exists a representative $z \in x + q\mathbb{Z}^n$ such that $\|x\|_q = \|z\|$. In one dimension we may write $|x|_q$. The distance between points in $\mathbb{Z}_q^n$ matches the Euclidean distance as long as it is at most $q/2$. This definition also allows any choice of coset representatives for $\mathbb{Z}_q^n$. Over the zero-centered set of representatives, where each class is represented by an integer $z$ with $-q/2 < z \le q/2$, it holds in one dimension that $|x|_q = |z|$.
For phase estimation we will also use a distance mod $1$. In this case take the Euclidean distance on $\mathbb{R}$ and define the distance on $\mathbb{R}/\mathbb{Z}$ by $|x|_{\bmod 1} = \min_{z \in \mathbb{Z}} |x - z|$. This has the same properties listed above, but with respect to $\mathbb{Z}$ instead of $q\mathbb{Z}^n$.
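Both distances are one-liners, and the listed properties can be checked exhaustively on a small modulus (function names are illustrative):

```python
def mod_norm(x, q):
    """|x|_q: distance from x to the nearest multiple of q (one dimension)."""
    r = x % q
    return min(r, q - r)

def mod1_norm(x):
    """|x| mod 1: distance from the real number x to the nearest integer."""
    r = x % 1.0
    return min(r, 1.0 - r)

q = 16
# matches the Euclidean distance up to q/2, then wraps around
assert mod_norm(3, q) == 3 and mod_norm(13, q) == 3
# independent of the choice of representative
assert mod_norm(3 + 5 * q, q) == mod_norm(3, q)
# triangle inequality and integer scaling, checked exhaustively
for a in range(q):
    for b in range(q):
        assert mod_norm(a + b, q) <= mod_norm(a, q) + mod_norm(b, q)
        assert mod_norm(3 * a, q) <= 3 * mod_norm(a, q)
assert abs(mod1_norm(0.75) - 0.25) < 1e-12
print("ok")
```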
Given a subgroup $G$ of $\mathbb{Z}_q^n$, define a shortest (nonzero) element length to be $\lambda_1(G) = \min_{g \in G \setminus \{0\}} \|g\|_q$. Note that all group elements have length at most $\sqrt{n} \cdot q/2$, so in particular $\lambda_1(G) \le \sqrt{n} \cdot q/2$ for any nontrivial $G$.
The main tool with $q$-ary lattices in dimension $n$ is that for a randomly chosen one, the shortest vector length is known within a constant factor with high probability. The following can be found in [Mica]: there exists a constant $c > 0$ such that if $A$ is a uniformly chosen matrix from $\mathbb{Z}_q^{n \times k}$ and $\Lambda_q(A)$ denotes the corresponding $q$-ary lattice, then with high probability $\lambda_1(\Lambda_q(A)) \ge c \cdot \min\{q, \sqrt{n}\, q^{1 - k/n}\}$.
To distinguish the underlying operations, ranges for integers will be denoted by $[N] = \{0, \ldots, N-1\}$. In this case, for example, addition and multiplication of numbers from $[N]$ are over $\mathbb{Z}$. If an element is in $\mathbb{Z}_N$, then addition is mod $N$. An integer times a group element represents the number of group operations to perform: for example, for $g \in G$ and $m \in \mathbb{Z}$, $mg = g + \cdots + g$ ($m$ times).
The following basis reduction algorithm will be used:
Lemma 2 ([MG02, Lemma 7.1]).
There is a polynomial-time algorithm that on input a lattice basis $B$ and linearly independent lattice vectors $S = [s_1, \ldots, s_n]$ with $\mathcal{L}(S) \subseteq \mathcal{L}(B)$, outputs a basis $R$ equivalent to $B$ such that the Gram–Schmidt vectors satisfy $\|\tilde{r}_i\| \le \|\tilde{s}_i\|$ for all $i$. Moreover, the new basis vectors are not much longer than the input vectors.
Definition 3 ($\alpha$-BDD).
Given a lattice $\mathcal{L}$ and a vector $t$ such that $\mathrm{dist}(t, \mathcal{L}) \le \alpha \cdot \lambda_1(\mathcal{L})$, with $\alpha < 1/2$, output the closest lattice vector to $t$.
The nearest plane algorithm due to Babai is an algorithm that, given a basis $B$ and a target $t$, returns a lattice vector $v$ such that $\|v - t\| \le 2^{O(n)} \cdot \mathrm{dist}(t, \mathcal{L}(B))$ when run on an LLL-reduced basis. BDD can be solved with this algorithm when $\alpha$ is below the corresponding threshold because the answer is unique. The BDD problem can be solved with an approximation factor/time trade-off with an approximate CVP algorithm based on the following two theorems.
Theorem 4 ([Sch94, Theorem 8]).
Let $B$ be a block-reduced basis in the sense of [Sch94] and let $t$ be a target vector in the span of $B$. If $v$ is a lattice point such that $\|v - t\|$ is minimal over the enumerated candidates, then $\|v - t\|$ is within the stated approximation factor of $\mathrm{dist}(t, \mathcal{L}(B))$.
Theorem 5 (Approximate CVP with a time/quality trade-off).
There is an algorithm that on input a CVP instance for an $n$-dimensional lattice $\mathcal{L}(B)$ and a vector $t$ in the span of $B$, returns a lattice vector $v$ such that $\|v - t\|$ is within the trade-off approximation factor of $\mathrm{dist}(t, \mathcal{L}(B))$.
The running time is polynomial in the input size and the trade-off term, where the input size includes $\log M$ for $M$ the maximal length of the given basis vectors.
To compute the approximate closest vector, following page 516 of [Sch94], use [Sch87] to compute an "approximate" reduced basis as in Theorem 4, then use Kannan's algorithm to compute the closest vector on the reduced basis using enumeration, and then use Theorem 4 together with the estimate on page 511 for the bound. ∎
2.2 Quantum computation
For a positive integer $N$, let $F_N$ denote the Fourier transform over $\mathbb{Z}_N$. On a basis state $|x\rangle$ with $x \in \mathbb{Z}_N$, this operation maps $|x\rangle \mapsto \frac{1}{\sqrt{N}} \sum_{y \in \mathbb{Z}_N} \omega_N^{xy} |y\rangle$ and can be computed in time polynomial in $\log N$. The Fourier transform over a direct product $\mathbb{Z}_{N_1} \times \cdots \times \mathbb{Z}_{N_k}$ is $F_{N_1} \otimes \cdots \otimes F_{N_k}$, and can be computed one register at a time.
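Both facts (unitarity, and the tensor-product structure over a direct product) can be checked on small matrices:

```python
import numpy as np

def fourier(N):
    """The N x N Fourier transform matrix over Z_N:
    F[y, x] = omega_N^(x*y) / sqrt(N)."""
    x = np.arange(N)
    return np.exp(2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)

F6 = fourier(6)
# unitarity: F F^dagger = I
assert np.allclose(F6 @ F6.conj().T, np.eye(6))

# the Fourier transform over Z_2 x Z_3 is the tensor product of the
# component transforms (acting register by register)
F23 = np.kron(fourier(2), fourier(3))
assert np.allclose(F23 @ F23.conj().T, np.eye(6))
print("ok")
```

Note that $F_2 \otimes F_3$ and $F_6$ act on differently indexed bases ($\mathbb{Z}_2 \times \mathbb{Z}_3$ versus $\mathbb{Z}_6$), so they agree only after the reindexing isomorphism, consistent with the embedding discussion above.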
Lemma 7 ([BV97, Lemma 3.2.6]).
For quantum states $|\psi\rangle$ and $|\phi\rangle$, if $\| |\psi\rangle - |\phi\rangle \| \le \epsilon$, then the total variation distance between the probability distributions resulting from measurements of the two states is at most $4\epsilon$.
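A quick numerical check of the lemma's bound for a computational-basis measurement of two nearby pure states (names illustrative):

```python
import numpy as np

def tv_from_measurement(psi, phi):
    """Total variation distance between the outcome distributions of a
    computational-basis measurement of two pure states."""
    return 0.5 * np.abs(np.abs(psi) ** 2 - np.abs(phi) ** 2).sum()

rng = np.random.default_rng(0)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
phi = psi + 0.01 * (rng.normal(size=8) + 1j * rng.normal(size=8))
phi /= np.linalg.norm(phi)

eps = np.linalg.norm(psi - phi)
tv = tv_from_measurement(psi, phi)
print(tv <= 4 * eps)   # True
```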
For superpositions, the representatives $\{0, \ldots, q-1\}$ will be used. This is convenient because of the typical quantum Fourier transform definition. Note that the norm defined earlier is independent of the choice of representatives.
Two main subroutines are computing the quantum Fourier transform and computing the phase of an eigenvalue of a unitary. Given a unitary $U$, an eigenvector $|\psi\rangle$ with eigenvalue $e^{2\pi i \theta}$, and a power $P = 2^t$, the phase estimation algorithm approximates the phase $\theta$ of the eigenvalue. The first step of the algorithm computes the Hadamard transform on $t$ qubits and then computes controlled-$U^p$ in superposition, resulting in the phase state $\frac{1}{\sqrt{P}} \sum_{p=0}^{P-1} e^{2\pi i \theta p} |p\rangle |\psi\rangle$. The (inverse) Fourier transform over $\mathbb{Z}_P$ is computed in the first register and it is measured, resulting in a value $v$ such that $v/P$ approximates $\theta$.
Theorem 8 (Phase estimation).
Let $|\psi\rangle$ denote a quantum state and $U$ a unitary for which $|\psi\rangle$ is an eigenstate with eigenvalue $e^{2\pi i \theta}$. Let $\tilde{\theta}$ be an integer multiple of $1/2^t$ closest to $\theta$. The phase estimation algorithm returns $\tilde{\theta}$ with probability at least $4/\pi^2$. More generally, a value $v$ is returned such that $|v/2^t - \theta|_{\bmod 1} \le 1/2^t$ with constant probability. The running time of the algorithm is $2^t$ times the time to compute controlled-$U$.
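The whole procedure can be simulated directly for a small diagonal unitary; when $\theta$ is an exact multiple of $1/2^t$ the outcome distribution is a point mass on it (function names illustrative):

```python
import numpy as np

def phase_estimate(U, psi, t):
    """Textbook phase estimation, simulated directly: build the phase
    state sum_p e^{2 pi i theta p} |p>, apply the inverse Fourier
    transform over Z_P, and return the most likely outcome over P."""
    P = 2 ** t
    # phase-state amplitudes <psi| U^p |psi>, exact when psi is an eigenvector
    amps = np.empty(P, dtype=complex)
    v = psi.copy()
    for p in range(P):
        amps[p] = np.vdot(psi, v)
        v = U @ v
    amps /= np.sqrt(P)
    # inverse Fourier transform over Z_P, then measure (take the mode)
    y = np.arange(P)
    Finv = np.exp(-2j * np.pi * np.outer(y, y) / P) / np.sqrt(P)
    probs = np.abs(Finv @ amps) ** 2
    return np.argmax(probs) / P

# diagonal unitary with eigenphase theta = 5/16 on the first basis state
theta = 5 / 16
U = np.diag(np.exp(2j * np.pi * np.array([theta, 0.7, 0.1])))
psi = np.array([1.0, 0.0, 0.0], dtype=complex)
print(phase_estimate(U, psi, 4))   # 0.3125
```

For $\theta$ not of this form, the same simulation shows the outcome concentrating on the two neighboring multiples of $1/2^t$, matching the constant success probability in the theorem.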
3 Approximate eigenvector of many operators
In this section we define the notion of an approximate eigenvector and show how well the phase estimation algorithm works compared to the exact eigenvector case.
For a unitary $U$, an $\epsilon$-approximate eigenvector is a unit vector $|\psi\rangle$ with associated eigenvalue $\lambda$, $|\lambda| = 1$, that satisfies $\| U|\psi\rangle - \lambda|\psi\rangle \| \le \epsilon$.
This notion will be used where one vector serves as an approximate eigenvector of a set of unitaries. From that point of view, it may be helpful to say that $\lambda$ approximates the unitary $U$ on the subspace spanned by $|\psi\rangle$ because $\|(U - \lambda I)|\psi\rangle\| \le \epsilon$, which is the same condition as in the definition.
Lemma 10. Let $U$ be a unitary and $|\psi\rangle$ be an $\epsilon$-approximate eigenvector with eigenvalue $\lambda$. Then $\| U^p |\psi\rangle - \lambda^p |\psi\rangle \| \le p\epsilon$.
Proof by induction on $p$. The base case is $p = 1$, where $\| U|\psi\rangle - \lambda|\psi\rangle \| \le \epsilon$ by assumption. Assume the claim is true for $p - 1$, and write $U^{p-1}|\psi\rangle = \lambda^{p-1}|\psi\rangle + |\delta\rangle$, where $\| |\delta\rangle \| \le (p-1)\epsilon$. Then
$\| U^p|\psi\rangle - \lambda^p|\psi\rangle \| = \| \lambda^{p-1} U|\psi\rangle + U|\delta\rangle - \lambda^p|\psi\rangle \| \le |\lambda|^{p-1} \| U|\psi\rangle - \lambda|\psi\rangle \| + \| |\delta\rangle \| \le \epsilon + (p-1)\epsilon = p\epsilon.$
The facts that $U$ preserves the norm and that $\lambda$ has norm one were used, and the last inequality is by the definition of an $\epsilon$-approximate eigenvector. ∎
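The bound holds for any unit vector, unit-modulus $\lambda$, and unitary $U$, so it can be checked numerically on a randomly perturbed eigenvector (all setup choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# a random unitary and a slightly perturbed eigenvector of it
H = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U = np.linalg.qr(H)[0]                    # Q factor is unitary
vals, vecs = np.linalg.eig(U)
psi = vecs[:, 0] + 0.01 * rng.normal(size=6)
psi /= np.linalg.norm(psi)
lam = np.vdot(psi, U @ psi)
lam /= abs(lam)                           # unit-modulus eigenvalue estimate

eps = np.linalg.norm(U @ psi - lam * psi)
# Lemma: ||U^p psi - lam^p psi|| <= p * eps for every power p
ok = all(
    np.linalg.norm(np.linalg.matrix_power(U, p) @ psi - lam ** p * psi)
    <= p * eps + 1e-12
    for p in range(1, 20)
)
print(ok)   # True
```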
Lemma 11. Consider the state before measurement in the phase estimation routine run up to power $P$ on an approximate eigenvector instance versus an exact instance. Then the distance between these two states is at most $P\epsilon$.
If $|\psi\rangle$ is an exact eigenvector with eigenvalue $\lambda$ for the operator $U$, the first step of the eigenvalue estimation algorithm when given power $P$ would be to create the phase state $\frac{1}{\sqrt{P}} \sum_{p=0}^{P-1} \lambda^p |p\rangle |\psi\rangle$. Instead, the approximate eigenvector instance is given, and the state computed is $\frac{1}{\sqrt{P}} \sum_{p=0}^{P-1} |p\rangle U^p |\psi\rangle$. By Lemma 10, for each $p$ let the difference vector be $|\delta_p\rangle = U^p|\psi\rangle - \lambda^p|\psi\rangle$ with $\| |\delta_p\rangle \| \le p\epsilon$. Comparing the two states before measurement gives a distance of at most $\frac{1}{\sqrt{P}} \big( \sum_p \| |\delta_p\rangle \|^2 \big)^{1/2} \le P\epsilon$.
Lemma 12 (Phase estimation on an approximate eigenvector).
There exists a quantum algorithm that on input $(U, |\psi\rangle, P)$, where $T_U$ is the time it takes to compute controlled-$U$ and $|\psi\rangle$ is an $\epsilon$-approximate eigenvector of $U$ with eigenvalue $e^{2\pi i \theta}$ for some $\theta$, returns $v$ such that $|v/P - \theta|_{\bmod 1}$ is small with constant probability.
The running time of the algorithm is $O(P \cdot T_U)$.
Proof. First consider running phase estimation on the exact instance: eigenvector $|\psi\rangle$ and power $P$ return $v$ such that the accuracy bound holds with constant probability by Theorem 8, using the choice of $P$ to scale to the desired precision. For the error bound, consider using the $\epsilon$-approximate eigenvector $|\psi\rangle$ instead. By Lemma 11 and Lemma 7 the measurement error increases by at most $4P\epsilon$. The union bound on the phase estimation error and the approximate eigenvector error gives the total failure probability. ∎
4 An approximate eigenvector of shift operators close to group elements
In this section a quantum state with a random phase is defined that is an approximate eigenvector of shift operators whose shifts are close to points in the lattice. As described in Section 2.1, formally the setting will be the finite abelian group $G = \mathcal{L}/q\mathbb{Z}^n$, which is a subgroup of $\mathbb{Z}_q^n$, together with a distance on $\mathbb{Z}_q^n$. This setup makes it possible to take a $q$-periodic lattice and target vector, reduce them mod $q$, define and solve BDD over $\mathbb{Z}_q^n$, and map the solution back to the integers. For clarity this section will be restricted to finite groups.
4.1 Phased Cube States and BDD on Subgroups of $\mathbb{Z}_q^n$
The approximate eigenvector is a superposition of lattice points with a phased cube around each point. The cube’s side length controls how much cubes around two nearby points overlap.
For an integer $r \ge 0$ define the zero-centered set of "radius" $r$ as $R_r = \{-r, \ldots, r\}$. The set has $2r + 1$ elements.
Let $L = 2r + 1$ be a side length.
Define the cube state around a point $y \in \mathbb{Z}_q^n$ by $|C_y\rangle = \frac{1}{\sqrt{L^n}} \sum_{c \in R_r^n} |y + c\rangle$.
Let $G$ be a subgroup of $\mathbb{Z}_q^n$ with decomposition $\mathbb{Z}_{q_1} \times \cdots \times \mathbb{Z}_{q_k}$, generator matrix $B$, and coefficient space $X = \mathbb{Z}_{q_1} \times \cdots \times \mathbb{Z}_{q_k}$. Define the phased cube state with label $w \in X$ to be $|\psi_w\rangle = \frac{1}{\sqrt{|G|}} \sum_{x \in X} \omega^{\langle w, x \rangle} |C_{Bx}\rangle$, where the phase $\omega^{\langle w, x \rangle}$ is taken with respect to the character corresponding to $w$.
Lemma 14 (Cube state properties).
Given the decomposition $(B, (q_1, \ldots, q_k))$ and a label $w$, the state $|\psi_w\rangle$ is computable in time polynomial in $n$ and $\log q$.
Define the shift operator $U_s$ for $s \in \mathbb{Z}_q^n$ by $U_s |y\rangle = |y + s\rangle$.
$U_s$ is unitary, and the transformation $|y\rangle \mapsto |y + s\rangle$ is computable in time polynomial in $n$ and $\log q$.
Let $|C_y\rangle$ be a cube state of side length $L$ and let $e \in \mathbb{Z}^n$.
If $\|e\|_\infty \le L$ then $|\langle C_y | C_{y+e} \rangle| \ge 1 - \|e\|_1 / L$.