I Introduction
Over the past few years, there have been many attractive developments in lattice-based cryptographic protocols, whose security is based on worst-case hardness assumptions, and which are conjectured to be secure against quantum attacks. Thus, lattice-based primitives are a promising candidate to replace constructions based on number-theoretic assumptions like RSA [1] or Diffie-Hellman [2] that are currently in use.
One of the most versatile primitives for the design of provably secure cryptographic protocols is the learning with errors (LWE) problem introduced by Regev [3]. For instance, it can serve for IND-CPA (indistinguishability under chosen-plaintext attack) [3] and IND-CCA (indistinguishability under chosen-ciphertext attack) public-key encryption [4]. A structured variant of LWE, the decision ring learning with errors (ring-LWE), was proposed in [5] by Lyubashevsky et al. to allow more compact representations. Cryptographic applications of ring-LWE include fast encryption [5] and fast homomorphic encryption [6]. Solving ring-LWE is at least as hard as solving approximate SIVP on ideal lattices.
In [7], Peikert introduced an efficient lattice-based key encapsulation mechanism (KEM) that allows two parties to share an ephemeral key for secret communications, featuring a low-bandwidth reconciliation technique that aims to reach exact agreement on the shared key. A practical implementation of Peikert's protocol called NewHope was proposed in [8] as a candidate for the NIST post-quantum cryptography standardization. In [7] and [8], although key generation is performed using dimensional lattices, the reconciliation step uses 1-dimensional and 4-dimensional lattices respectively. (In fact, the latest implementation of the NewHope algorithm does not use reconciliation [9].)
In this paper, we consider a more general framework for KEM based on ring-LWE, that does not require a dither, and where reconciliation is done directly on the dimensional lattice using Wyner-Ziv coding.
More precisely, we consider Barnes-Wall lattices [10] and use Micciancio and Nicolosi's BDD decoder with polynomial complexity [11] for the reconciliation step. In particular, we prove that this decoder is linear. This result is required for our security proof and may also be of independent interest. In the asymptotic regime for large , we show that this technique can generate bits of key per dimension. This improves upon [7] and [8], where the key rates are bit and bits per dimension respectively. Moreover, our scheme achieves exponentially small error probability , in particular when . Although current recommendations are to keep the error probability smaller than , this may be too conservative when transforming an IND-CPA secure encryption scheme into an IND-CCA secure one using the Fujisaki-Okamoto transform [12]. A smaller error probability is desirable to prevent leakage of information from decryption failure attacks [13].
Organization
This paper is organized as follows. In Section II we provide basic definitions about cyclotomic fields, lattices, etc. In Section III we present the BarnesWall lattice with some of its properties. In Section IV, we introduce our key generation algorithm. In Section V and VI, we provide a proof that the error probability is small, and that our scheme is INDCPA secure respectively.
II Preliminaries
In this section, we introduce the mathematical tools we use to describe and analyze our proposed scheme.
We write f ≲ g if f = O(g), and f ≂ g if f = O(g) and g = O(f). Finally, Õ is a variant of the O notation that "ignores" logarithmic factors: f = Õ(g) is equivalent to f = O(g log^c g) for some integer c.
II-A Lattices and Algebraic number theory
Lattice definitions
First of all, we define the space H as follows: when and , with Euler's totient function, let
Note that is a proper subspace of and is isomorphic to as an inner product space.
For our purposes, a lattice is a real full-rank discrete additive subgroup . Any lattice is generated as the set of all integer linear combinations of linearly independent basis vectors in as . A fundamental cell of is a bounded set which, when shifted by the lattice points, generates a partition of . For a fundamental cell , any point can be uniquely expressed as a sum . We write and .
We will use implicitly in our proofs the fact that , and , as well as .
Given a lattice with basis and a vector such that dist, the bounded distance decoding problem is to find the lattice vector closest to .
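As a toy illustration (not part of our construction), BDD is trivial for the integer lattice Z^n: whenever the target lies within distance 1/2 of a lattice point in every coordinate, coordinatewise rounding recovers that point. A minimal sketch:

```python
def bdd_decode_Zn(y):
    """Bounded-distance decoding in the integer lattice Z^n:
    coordinatewise rounding returns the unique closest lattice point
    whenever every coordinate of y is within 1/2 of an integer."""
    return [round(t) for t in y]

# A target within distance 1/2 (per coordinate) of the lattice point (2, -1, 0):
assert bdd_decode_Zn([2.1, -0.9, 0.3]) == [2, -1, 0]
```

For general lattices such as Barnes-Wall, the decoding regions are more complicated, which is why a dedicated BDD algorithm (Section III) is needed.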
Lemma 1.
Let and ; then defined as is a permutation of .
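The symbols of Lemma 1 were lost in extraction; reading it as the standard fact that translation by a fixed element permutes a finite quotient group (the form in which it is invoked in the proof of Theorem 5), a toy sanity check over Z_q with a hypothetical modulus and shift is:

```python
q = 17   # hypothetical toy modulus
c = 5    # a fixed shift

# The map x -> x + c (mod q) hits every residue exactly once,
# i.e. it is a permutation of Z_q.
image = sorted((x + c) % q for x in range(q))
assert image == list(range(q))
```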
Cyclotomic fields and the canonical embedding
For an integer , the cyclotomic number field is the extension with degree , where is any primitive root of unity. We denote the ring of integers of by , its codifferent by , and define for any integer . In the same manner we can define . Note that is isomorphic to by an isomorphism [5, Lemma 2.15].
Now we describe the embedding of a cyclotomic number field, which induces a "canonical" geometry on it. has exactly injective ring homomorphisms , and we can define the canonical embedding as
This is a ring homomorphism from to , where multiplication and addition in the latter are both componentwise. We define norms and other geometric quantities on simply by identifying field elements with their canonical embeddings , e.g., the norm is .
When dealing with cyclotomic number fields, note that if is a power of with , then , and for some orthogonal matrix [14].

II-B Error Distribution
Subgaussian vectors
A random vector in is subgaussian with parameter if, for any unit vector and any ,
As a consequence of Theorem 1 in [15], the following tail inequality holds.
Theorem 1.
Let be a subgaussian vector in with parameter . Then :
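The parameters in the definition above and in Theorem 1 were elided in extraction; for reference, the standard convention (e.g., the one used in [16]) defines subgaussianity via a moment-generating-function bound, which yields a Gaussian-type tail by the usual Markov/Chernoff argument. The standard forms are:

```latex
% Standard subgaussian convention (the paper's exact constants were elided):
% x in R^n is subgaussian with parameter s if, for every unit vector u
% and every t in R,
\mathbb{E}\left[e^{2\pi t \langle x, u\rangle}\right] \le e^{\pi s^{2} t^{2}},
% which implies, for all r > 0, the tail bound
\Pr\left[\,|\langle x, u\rangle| \ge r\,\right] \le 2\, e^{-\pi r^{2}/s^{2}}.
```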
The following two propositions describe the sum and pointwise product behavior of subgaussians:
Proposition 1 ([16], Corollary 2.3).
Let be independent subgaussian vectors over with parameters . Then is subgaussian with parameter .
Proposition 2 ([16], Claim 2.4).
Let be a subgaussian vector in of parameter , and another random vector. Then the pointwise multiplication vector is subgaussian of parameter .
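Proposition 1's parameter arithmetic can be sanity-checked numerically for true Gaussians (which are subgaussian with parameter proportional to their standard deviation): the sum of independent samples with parameters s1, s2 behaves like a single sample with parameter sqrt(s1² + s2²). The values below are illustrative:

```python
import math
import random

random.seed(0)
s1, s2 = 1.0, 2.0
N = 50_000

# Empirical standard deviation of the sum of two independent Gaussians.
samples = [random.gauss(0, s1) + random.gauss(0, s2) for _ in range(N)]
std = math.sqrt(sum(x * x for x in samples) / N)

# Matches sqrt(s1^2 + s2^2) up to sampling error.
assert abs(std - math.sqrt(s1**2 + s2**2)) < 0.05
```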
Gaussian-like distribution
When dealing with ring-LWE defined below, we work with a Gaussian-like error distribution over the number field . We first define the dimensional i.i.d. Gaussian distribution with zero mean and covariance . Then we define the Gaussian distribution over to output an element for which has Gaussian distribution with parameter . In our application, we discretize to using coordinatewise randomized rounding [16] and denote the resulting distribution by .

Proposition 3 ([16], Lemma 8.2).
If is a continuous Gaussian with parameter , and we use coordinatewise randomized rounding, then is subgaussian with parameter , where is the product of all distinct primes dividing .
II-C Ring-LWE
A function f is negligible if f(n) = o(n^(-c)) for any constant c > 0.
Two ensembles and are computationally indistinguishable if for all efficient distinguisher algorithms , is negligible in .
We define the notion of key encapsulation mechanism (KEM).
Following [7], a KEM with ciphertext space and (finite) key space is given by efficient algorithms Setup, Gen,
Encaps and Decaps, having the following structure:

Setup() outputs a public parameter .

Gen() outputs a public encapsulation key and secret decapsulation key .

Encaps(; ) outputs a ciphertext and a key .

Decaps(; ) outputs some .
A KEM satisfies IND-CPA security if the outputs of the following "real" and "ideal" games are computationally indistinguishable:
Real Game  Ideal Game 

Ring-LWE
We state the ring-LWE problem in its discretized form. First, let us define the ring-LWE distribution:
Definition 1.
For a distribution on and , a sample from the ring-LWE distribution over is generated by choosing uniformly at random, choosing , and outputting .
Definition 2 (Ring-LWE, Decision).
The decision version of the ring-LWE problem, denoted , is to distinguish with non-negligible advantage between independent samples from , where is chosen once and for all, and the same number of uniformly random and independent samples from .
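To make Definitions 1 and 2 concrete, the following sketch samples toy ring-LWE pairs (a, b = a·s + e) in Z_q[x]/(x^n + 1), a power-of-two cyclotomic as used later. The parameter values and the bounded error (standing in for the discretized Gaussian) are illustrative simplifications, not the paper's parameters:

```python
import random

def poly_mul_mod(a, b, q, n):
    """Multiply a, b in Z_q[x]/(x^n + 1) (negacyclic convolution):
    x^n wraps around to -1."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

def rlwe_sample(s, q, n, rng, err_bound=2):
    """One toy ring-LWE sample (a, b = a*s + e) with small bounded error."""
    a = [rng.randrange(q) for _ in range(n)]
    e = [rng.randrange(-err_bound, err_bound + 1) for _ in range(n)]
    b = [(x + y) % q for x, y in zip(poly_mul_mod(a, s, q, n), e)]
    return a, b

# (1 + x)^2 = 1 + 2x + x^2 = 2x (mod x^2 + 1)
assert poly_mul_mod([1, 1], [1, 1], 17, 2) == [0, 2]
```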
Theorem 2 ([16], Theorem 2.22).
Let be the th cyclotomic ring, having dimension . Let , and let be a poly-bounded prime such that . There is a polynomial-time quantum reduction from approximate SIVP (or SVP) on ideal lattices in to solving DLWE given only samples, where is the Gaussian distribution for .
The following result extends the hardness guarantees to the case of discrete error. We make use of what is called a valid discretization from Section 2.4.2 in [16]:
Theorem 3 ([16], Lemma 2.24).
Let be a coordinatewise randomized rounding to . If DLWE is hard given samples, then so is the variant of DLWE in which the secret is sampled from given samples.
Remark 1.
To apply Theorem 3 with two samples, we let . Hence is a continuous Gaussian with parameter .
III Barnes-Wall lattices and the Micciancio-Nicolosi BDD decoder
The Barnes-Wall lattice is an dimensional lattice over the Gaussian integers [11, 10]. Note that can be seen as a real lattice contained in . Moreover, and Vol.
Micciancio and Nicolosi [11] give a polynomial-time algorithm to solve the bounded distance decoding (BDD) problem for Barnes-Wall lattices: given a vector within distance from some lattice point in , find . This algorithm, called ParBW, has complexity , and we can prove that it is linear in the following sense:
Theorem 4.
Let . For a fixed , we have
This means that the operation ParBW induces a partition of into fundamental cells. The proof of Theorem 4 can be found in the Appendix.
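The linearity property of Theorem 4 (decoding commutes with translation by lattice points, so the decoder's decision regions tile space) can be illustrated on the much simpler lattice Z^n, where coordinatewise rounding plays the role of ParBW:

```python
def decode_Zn(y):
    """Coordinatewise rounding: a BDD decoder for Z^n."""
    return [round(t) for t in y]

x = [0.3, -1.2, 2.4]
v = [5, -3, 1]   # a lattice point of Z^3

# Linearity: decode(x + v) == decode(x) + v for every lattice point v,
# which is exactly what makes the decision regions fundamental cells.
lhs = decode_Zn([a + b for a, b in zip(x, v)])
rhs = [a + b for a, b in zip(decode_Zn(x), v)]
assert lhs == rhs
```

For ParBW this commutation property is what Theorem 4 establishes; it is not automatic, since the algorithm works recursively rather than coordinatewise.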
We can also scale the Barnes-Wall lattice by a matrix to obtain . For an invertible matrix and , we define the operation as and . It is not hard to prove that for and , , and .
Proposition 4.
For any , we have that , where .
IV Key generation algorithm
We present below the key generation algorithm between Alice and Bob.
Table I: Key generation protocol between Alice (server) and Bob (client). Parameters are ; and error distribution on .
We start by considering the lattice , a scaled rotation of , where with a power of . After that, , and hence is identified with . Note that induces an isomorphism between the additive quotient groups and . With slight abuse of notation, in the rest of the paper we identify the two quotient groups. For the remaining two lattices, we choose (quantization lattice) and (coding lattice) with partitions into fundamental sets such that the operation can be done in polynomial time for , and such that performs BDD, i.e. given within distance from , . We will use the notation when there is no ambiguity about the chosen partition. Moreover, we impose that and .
The reconciliation rate of the protocol is , and the key rate is .
We suppose that the error terms , and the secret terms , are taken independently from the distribution on , which is subgaussian with parameter (see Proposition 3). We define the modulus-to-noise ratio as the ratio between the modulus and the parameter of the error distribution . A smaller modulus-to-noise ratio provides stronger concrete security against known attacks. Moreover, since all the exchanged messages are modulo , the size of affects the overhead of the protocol.
Referring to Table I, the KEM algorithm consists of the following steps:

Setup() : Alice chooses a random element from and outputs .

Gen() : She then chooses in , computes , and outputs a public key and a secret key .

Encaps() : Bob chooses independent . He then computes and . He outputs with
(1) and in such that
(2) 
Decaps() : Alice computes and outputs .
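The four steps above can be mimicked by a toy one-dimensional LWE-style analogue: deliberately insecure parameters, a single key bit, and rounding to the nearest multiple of q/2 in place of the lattice quantizer and BDD decoder. All names and values are illustrative sketches, not our Barnes-Wall construction:

```python
import random

# Toy, deliberately insecure one-dimensional LWE-style KEM sketch.
rng = random.Random(42)
q = 2**15

def small():
    """Toy bounded error, standing in for the subgaussian distribution."""
    return rng.randrange(-2, 3)

# Setup / Gen (Alice): public a, secret s, public key b = a*s + e.
a = rng.randrange(q)
s, e = small(), small()
b = (a * s + e) % q

# Encaps (Bob): ciphertext (u, c) encapsulating one key bit m.
s1, e1, e2 = small(), small(), small()
u = (a * s1 + e1) % q
m = rng.randrange(2)
c = (b * s1 + e2 + m * (q // 2)) % q

# Decaps (Alice): subtract u*s, then decode to the nearest multiple of q/2.
# The residual noise e*s1 + e2 - e1*s is far smaller than q/4, so the
# decoded bit always matches.
d = (c - u * s) % q
m_dec = int(round(2 * d / q)) % 2
assert m_dec == m
```

In our scheme the role of the rounding step is played by quantization with respect to and BDD decoding in , performed on the full dimensional lattice rather than coordinate by coordinate.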
Remark 2.
This algorithm can essentially be seen as a generalization of the KEM in [7] and [8], where the reconciliation step is also lattice-based. For instance, in [8] the functions HelpRec and Rec can be written in the form (1) and (2) by taking , and the product lattices , . Note that unlike [7, 8], a dither is not required in our algorithm.
Construction using Barnes-Wall lattices
For an explicit construction we choose and , where and is a power of . With this choice, all the operations with in Table I can be deduced from Section III. The operation corresponds to a quantization operation induced by a partition of the Barnes-Wall lattice: (see Theorem 4). Since , we obtain that . For the inclusion , we must have , or . By Proposition 4, this is true when
(3) 
Note that the key rate of the protocol is .
V Error probability
Here we give a general estimate of the error probability , and then specialize to the case when is a Barnes-Wall lattice. We start by observing that and , therefore with . Define the quantization error as . Hence, . In the expressions of the shared keys we obtain . Note that if and .
To simplify the analysis, we suppose from now on that , so that . (More generally, to deal with the quantization error one could impose the condition that vanishes exponentially fast.)
Due to the BDD assumption for , we have that if .
Now we want to estimate
(4) 
For any constant , by the law of total probability the term (4) can be bounded by (5)
Assuming that , and is subgaussian with parameter , then by Proposition 2 we can say that is subgaussian with parameter ; and so . Following the same argument, given that , we get that . Since is subgaussian with parameter , then under the condition that and , we obtain using Proposition 1:
Therefore, by Theorem 1 if we set , and , then
(6) 
Choose . The above conditions become
(7) 
Using Proposition 3 with a power of , one can say that . So . Note that in ring-LWE we need that , and (see Theorem 2).
When dealing with our explicit construction in Paragraph IV-A, the condition on becomes, for large :
(8) 
In order to satisfy condition (8) and to minimize the error probability, we choose according to (3) which is equal to if . Hence, equation (8) becomes
(9) 
For example, we can choose
It is not hard to see that these are the only values of and , up to logarithmic factors, that satisfy the ring-LWE conditions and equation (9). With this choice, it follows from the bound (6) that the error probability can be as small as for . Note that the modulus-to-noise ratio of our scheme is of order , i.e. the same as in [7].
VI Security
We will prove that the algorithm is IND-CPA secure, assuming the hardness of given two samples. This proof is generic and holds in the setting of the key generation protocol in Section IV, independently of the choice of the lattices and , as long as can be done efficiently. We follow the same argument as Section 4.2 in [7]. We consider the adjacent games below:
Game 1  Game 1 ’ 

Game 2  Game 3 

Notice that Game 1 is the "real" game defined in Section II, and Game 1' is the "ideal" one. Our aim is to prove that Game 1 and Game 1' are computationally indistinguishable. We will do so sequentially.
Clearly Game 1 and Game 2 are computationally indistinguishable under the assumption of hardness of .
To prove that Game 2 and Game 3 are computationally indistinguishable, we use the following theorem, which is essentially a consequence of the Crypto Lemma [17, Lemma 4.1.1]. It guarantees uniformity of the key without a dither.
Theorem 5.
If is uniformly random, then is uniformly random, given .
Proof:
For fixed , we define ,
Notice that , ; then . Hence, is a permutation of by Lemma 1. The proof of Theorem 5 follows from these lemmas:
Lemma 2.
and we have .
Proof:
∎
Lemma 3.
Suppose that , then we have
Proof:
∎
Corollary 1.
and , there exist such that and
We conclude the proof of Theorem 5 by showing that is uniform and independent of when is uniform:
∎
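The uniformity statement of Theorem 5 can be sanity-checked in miniature: replace the lattice quotient by the stand-in Z_q / 2Z_q ≅ Z_2 (toy modulus chosen for illustration) and verify by exhaustive enumeration that a uniform input induces a uniform key:

```python
from collections import Counter

q = 16   # toy modulus (hypothetical), divisible by 2

# Stand-in for the quotient map: Z_q -> Z_2, x -> x mod 2.
# If x is uniform on Z_q, the induced key is exactly uniform on Z_2.
counts = Counter(x % 2 for x in range(q))
assert counts[0] == counts[1] == q // 2
```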
Returning to Game 2 and Game 3, we construct an efficient reduction as follows: it takes as input two pairs , and outputs
After that, we will take two indistinguishable inputs, and hence, by efficiency of , get two indistinguishable outputs.
First suppose that the inputs are drawn from ; i.e. and for independent ; then are uniformly random and independent of and respectively (because is an isomorphism). Hence, the output of will be exactly as in Game 2. Now suppose that the inputs given to are uniformly random in and independent; then the outputs
of
are exactly as in Game 3. In fact, are uniform, and hence by Theorem 5, is uniformly random conditioned on .
To show that Game 3 and Game 1' are indistinguishable, we modify Game 1 and Game 2 by choosing and outputting it instead of . In this case Game 1 becomes Game 1'. Let Game 2' be the modified version of Game 2. By the same reasoning as above, we can prove that Game 1' is computationally indistinguishable from Game 2' and Game 3.
Remark 3.
Following the steps in [7, Section 5], we can construct a passively secure encryption scheme based on our passively secure KEM, which yields an actively secure encryption scheme and an actively secure key transport protocol.
Acknowledgments
The work of C. Saliba and L. Luzzi is supported by the INEX ParisSeine AAP 2017. The authors would like to thank J.P. Tillich for helpful comments.
Appendix: Proof of Theorem 4
In the following we refer to the functions ParBW, SeqBW, RMdec in Algorithms 1, 2 and 3 of Micciancio and Nicolosi's paper [11].
A Modification
We modify Algorithm 3 in [11] in the case as follows: if , then return . This means that we choose the output vector based on the first bit of . Note that the decoder still solves BDD with this modification.
B Linearity of ParBW
In this subsection we will prove the following proposition:
Proposition 5.
Let and a target, where