Assessing and countering reaction attacks against post-quantum public-key cryptosystems based on QC-LDPC codes

08/06/2018 ∙ by Paolo Santini, et al. ∙ UnivPM

Code-based public-key cryptosystems built on QC-LDPC and QC-MDPC codes are promising post-quantum candidates to replace quantum-vulnerable classical alternatives. However, a new class of attacks based on Bob's reactions has recently been introduced, and it appears to significantly reduce the lifetime of any keypair used in these systems. In this paper we estimate the complexity of all known reaction attacks against QC-LDPC and QC-MDPC code-based variants of the McEliece cryptosystem. We also show how the structure of the secret key and, in particular, the secret code rate affect the complexity of these attacks. It follows from our results that QC-LDPC code-based systems can indeed withstand reaction attacks, on condition that some specific decoding algorithms are used and the secret code has a sufficiently high rate.


1 Introduction

Footnote: The work of Paolo Santini was partially supported by Namirial S.p.A.

Research in the area of post-quantum cryptography, that is, the design of cryptographic primitives able to withstand attacks based on quantum computers, has experienced a dramatic acceleration in recent years, also due to the ongoing NIST standardization initiative for post-quantum cryptosystems [19]. In this scenario, one of the most promising candidates is represented by code-based cryptosystems, which were initiated by McEliece in 1978 [17]. The security of the McEliece cryptosystem relies on the hardness of decoding a random linear code: a common instance of this problem is known as the syndrome decoding problem (SDP), and no polynomial-time algorithm exists for its solution [8, 16]. In particular, the best SDP solvers are known as information set decoding (ISD) algorithms [20, 22, 7], and are characterized by an exponential complexity, even when the attacker is equipped with a quantum computer [9].

Despite these security properties, a large-scale adoption of the McEliece cryptosystem has not occurred in the past, mostly because of the large size of its public keys: in the original proposal, the public key is the generator matrix of a Goppa code with length 1024 and dimension 524, requiring more than 67 kB of memory to be stored. Replacing Goppa codes with other families of more structured codes may lead to a reduction in the public key size, but at the same time might endanger the system security because of such an additional structure. An overview of these variants can be found in [3].

Among these variants, a prominent role is played by those exploiting quasi-cyclic low-density parity-check (QC-LDPC) [2, 6] and quasi-cyclic moderate-density parity-check (QC-MDPC) codes [18] as private codes, because of their very compact public keys. Some of these variants are also at the basis of post-quantum primitives that are currently under review for possible standardization by NIST [1, 5]. QC-LDPC and QC-MDPC codes are decoded through iterative algorithms that are characterized by a non-zero decryption failure rate (DFR), unlike the classical bounded-distance decoders used for Goppa codes. The values of DFR achieved by these decoders are usually very small, but are bounded away from zero.

In the event of a decoding failure, Bob must inform Alice in order to let her encrypt the plaintext again. It has recently been shown that the occurrence of these events might be exploited by an opponent to recover information about the secret key [14, 11, 12]. Attacks of this type are known as reaction attacks, and they exploit the information leakage associated to the dependence of the DFR on the error vector used during encryption and on the structure of the private key. These attacks have been shown to be successful against some cryptosystems based on QC-LDPC and QC-MDPC codes, but their complexity has not been assessed yet, to the best of our knowledge.

In this paper, we consider all known reaction attacks against QC-LDPC and QC-MDPC code-based systems, and provide closed-form expressions for their complexity. Based on this analysis, we devise some instances of QC-LDPC code-based systems that are able to withstand these attacks. The paper is organized as follows. In Section 2 we describe the QC-LDPC and QC-MDPC code-based McEliece cryptosystems. In Section 3 we describe the known reaction attacks. In particular, we generalize the existing procedures, applying them to codes with any parameters, and take the code structure into account, with the aim of providing complexity estimates for the attacks. In Section 4 we compare all the analyzed attacks, and consider the impact of the decoder on the feasibility of some of them. We show that QC-LDPC code-based McEliece cryptosystems have an intrinsic resistance to reaction attacks. This is due to the presence of a secret transformation matrix that forces Bob to decode an error pattern different from the one used during encryption. When the system parameters are properly chosen, recovering the secret key can hence become computationally unfeasible for an opponent.

2 System description

Public-key cryptosystems and key encapsulation mechanisms based on QC-LDPC codes [2, 5] are built upon a secret QC-LDPC code with length $n = n_0 p$ and dimension $k = (n_0 - 1) p$, with $n_0$ being a small integer and $p$ being a prime. The latter choice is recommended to avoid reductions in the security level due to the applicability of folding attacks of the type in [21]. The code is described through a parity-check matrix in the form:

$H = [H_0 \mid H_1 \mid \cdots \mid H_{n_0-1}], \qquad (1)$

where each block $H_i$ is a $p \times p$ circulant matrix, with row and column weight equal to $d_v$.
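As a concrete illustration of this structure, the following minimal Python sketch builds a parity-check matrix of the form (1) from the first rows of its circulant blocks; the parameters and block supports are illustrative placeholders, not a proposed parameter set.

import numpy as np

def circulant(first_row):
    """Binary circulant matrix whose i-th row is first_row cyclically shifted i positions to the right."""
    p = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(p)], dtype=np.uint8)

def build_parity_check(block_first_rows):
    """Horizontally stack the n_0 circulant blocks H_0, ..., H_{n_0-1} as in (1)."""
    return np.hstack([circulant(r) for r in block_first_rows])

# toy example: p = 11, n_0 = 2, each block with row/column weight d_v = 3
p = 11
h0 = np.zeros(p, dtype=np.uint8); h0[[0, 2, 7]] = 1
h1 = np.zeros(p, dtype=np.uint8); h1[[1, 4, 5]] = 1
H = build_parity_check([h0, h1])
assert H.shape == (p, 2 * p)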

2.1 Key generation

The private key is formed by $H$ and by a transformation matrix $Q$, which is an $n \times n$ matrix in quasi-cyclic (QC) form (i.e., it is formed by $n_0 \times n_0$ circulant blocks of size $p$). The row and column weights of $Q$ are constant and equal to $m$. The matrix $Q$ is generated according to the following rules:

  • the weights of the circulant blocks forming $Q$ can be written in an $n_0 \times n_0$ circulant matrix $W$ whose first row is $\bar{m} = [m_0, m_1, \ldots, m_{n_0-1}]$, such that $\sum_{i=0}^{n_0-1} m_i = m$; the weight of the $(i,j)$-th block of $Q$ corresponds to the $(i,j)$-th element of $W$;

  • the permanent of $W$ must be odd for the non-singularity of $Q$; if it is also smaller than $p$, then $Q$ is surely non-singular (a sketch of this check is given right after this list).
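As a minimal sketch of this non-singularity check (under the assumptions stated above, with purely illustrative values), one can compute the permanent of the small $n_0 \times n_0$ matrix of block weights directly and verify that it is odd and smaller than $p$:

from itertools import permutations

def permanent(M):
    """Permanent of a small square matrix, by direct expansion over all column permutations (fine for n_0 = 2, 3, 4)."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total

# example: n_0 = 4, first row of block weights [m_0, m_1, m_2, m_3] = [4, 3, 2, 2]; p is an illustrative prime
m_bar = [4, 3, 2, 2]
n0, p = len(m_bar), 27779
W = [m_bar[-i:] + m_bar[:-i] for i in range(n0)]  # circulant arrangement of the block weights
perm = permanent(W)
assert perm % 2 == 1 and perm < p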

In order to obtain the public key from the private key, we first compute the matrix $L$ as:

$L = H \cdot Q = [L_0 \mid L_1 \mid \cdots \mid L_{n_0-1}], \qquad (2)$

from which the public key $G$ is obtained as:

$G = \left[ I_k \;\middle|\; \begin{matrix} \left(L_{n_0-1}^{-1} L_0\right)^T \\ \left(L_{n_0-1}^{-1} L_1\right)^T \\ \vdots \\ \left(L_{n_0-1}^{-1} L_{n_0-2}\right)^T \end{matrix} \right], \qquad (3)$

where $I_k$ is the identity matrix of size $k = (n_0 - 1)p$. The matrix $G$ is the generator matrix of the public code and can be in systematic form, since we suppose that a suitable conversion is adopted to achieve indistinguishability under adaptive chosen ciphertext attacks (CCA2) [15].

2.2 Encryption

Let $u$ be a $k$-bit information message to be encrypted, and let $e$ be an $n$-bit intentional error vector with weight $t$. The ciphertext $x$ is then obtained as:

$x = u G + e. \qquad (4)$

When a CCA2 conversion is used, the error vector is obtained as a deterministic transformation of a string resulting from certain public operations, including one-way functions (like hash functions), that involve the plaintext and some randomness generated during encryption. Since the same relationships are used by Bob to check the integrity of the received message, in the case with CCA2 conversion an arbitrary modification of the error vector in (4) is not possible. Analogously, choosing an error vector and computing a consistent plaintext is not possible, because it would require inverting a hash function. As we will see next, this affects reaction attacks, since it implies that the error vector cannot be freely chosen by an opponent. Basically, this translates into the following simple criterion: in the case with CCA2 conversion, the error vector used for each encryption has to be considered as a randomly extracted $n$-tuple of weight $t$.
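A minimal Python sketch of this encryption step, assuming a generic binary generator matrix and a freshly drawn weight-$t$ error vector (all values below are toy placeholders, and the CCA2 transformation itself is omitted), is the following:

import numpy as np

rng = np.random.default_rng()

def random_error_vector(n, t):
    """Random n-bit vector of Hamming weight exactly t (the CCA2 setting treats e as such a random t-weight n-tuple)."""
    e = np.zeros(n, dtype=np.uint8)
    e[rng.choice(n, size=t, replace=False)] = 1
    return e

def encrypt(u, G, t):
    """Ciphertext x = u*G + e (mod 2), as in (4)."""
    n = G.shape[1]
    e = random_error_vector(n, t)
    x = (u @ G + e) % 2
    return x.astype(np.uint8), e

# toy usage with a random systematic "public key"
k, n, t = 6, 12, 2
G = np.hstack([np.eye(k, dtype=np.uint8), rng.integers(0, 2, (k, n - k), dtype=np.uint8)])
u = rng.integers(0, 2, k, dtype=np.uint8)
x, e = encrypt(u, G, t)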

2.3 Decryption

Decryption starts with the computation of the syndrome $s$ as:

$s = L \cdot x^T = H Q x^T = H Q e^T = H \left(e Q^T\right)^T, \qquad (5)$

which corresponds to the syndrome of an expanded error vector $\tilde{e} = e Q^T$, computed through $H$. Then, a syndrome decoding algorithm is applied to $s$, in order to recover $e$. A common choice to decode is the bit flipping (BF) decoder, firstly introduced in [13], or one of its variants. In the special setting used in QC-LDPC code-based systems, decoding can also be performed through a special algorithm named Q-decoder [5], which is a modified version of the classical BF decoder and exploits the fact that $\tilde{e}$ is obtained as the sum of the rows of $Q^T$ indexed by the support of $e$. The choice of the decoder might strongly influence the probability of success of reaction attacks, as it will be discussed afterwards.

QC-MDPC code-based systems introduced in [18] can be seen as a particular case of the QC-LDPC code-based scheme, corresponding to $Q$ equal to the identity matrix (i.e., $m = 1$). Encryption and decryption work in the same way, and syndrome decoding is performed through BF. We point out that the classical BF decoder can be considered as a particular case of the Q-decoder, corresponding to $Q = I$.
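The following is a minimal, simplified sketch of a classical bit-flipping syndrome decoder in the spirit of the BF algorithm mentioned above; the threshold rule and iteration limit are assumptions of this sketch, not the exact decoder used in any specific proposal. A returned failure corresponds to the decryption failure events exploited by reaction attacks.

import numpy as np

def bit_flipping_decode(H, s, max_iters=30):
    """Try to find e such that H e^T = s (mod 2). Returns (e, success)."""
    n = H.shape[1]
    e = np.zeros(n, dtype=np.uint8)
    syndrome = s.copy() % 2
    for _ in range(max_iters):
        if not syndrome.any():
            return e, True                              # all parity checks satisfied
        upc = syndrome.astype(int) @ H                  # unsatisfied parity checks per bit
        flip = upc >= upc.max()                         # simple max-threshold flipping rule
        e[flip] ^= 1
        syndrome = (s + (e.astype(int) @ H.T)) % 2      # recompute the syndrome
    return e, False                                     # decoding failure (a "reaction" event)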

2.4 Q-decoder

The novelty of the Q-decoder, with respect to the classical BF decoder, lies in the fact that it exploits the knowledge of the matrix $Q$ to improve the decoding performance. A detailed description of the Q-decoder can be found in [5]. In the Q-decoder, decisions about the error positions are taken on the basis of some correlation values that are computed as:

$R = \Sigma * Q, \qquad (6)$

where $*$ denotes the integer inner product and $\Sigma = s^T * H$. In a classical BF decoder, the metric used for the reliability of the bits is only based on $\Sigma$, which is a vector collecting the number of unsatisfied parity-check equations per position. In QC-LDPC code-based systems, the syndrome $s$ corresponds to the syndrome of an expanded error vector $\tilde{e} = e Q^T$: this fact means that the error positions in $\tilde{e}$ are not uniformly distributed, because they depend on $Q$. The Q-decoder takes this fact into account through the integer multiplication by $Q$ [5, section 2.3], and the vector $R$ is used to estimate the error positions in $e$ (instead of $\tilde{e}$).

In the case of QC-MDPC codes, a classical BF decoder is used, and it can be seen as a special instance of the Q-decoder, corresponding to $Q = I$. As explained in [5, section 2.5], from the performance standpoint the Q-decoder approximates a BF decoder working on $L = HQ$. However, by exploiting $H$ and $Q$ separately, the Q-decoder achieves a lower complexity than a BF decoder working on $HQ$. The aforementioned performance equivalence is motivated by the following relation:

$R = \left(s^T * H\right) * Q \approx s^T * (HQ) = s^T * L, \qquad (7)$

where the approximation comes from the sparsity of both $H$ and $Q$. Thus, equation (7) shows how the decision metric considered in the Q-decoder approximates that used in a BF decoder working on $L$.
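The following small numeric sketch illustrates the kind of approximation expressed by (7) under the sparsity assumption (the matrices below are random sparse placeholders): the two-stage integer product coincides with the exact integer product, and differs from the metric computed on the binary matrix $HQ \bmod 2$ only where mod-2 cancellations occur.

import numpy as np

rng = np.random.default_rng(0)

def sparse_binary(rows, cols, row_weight):
    M = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        M[i, rng.choice(cols, size=row_weight, replace=False)] = 1
    return M

r, n = 50, 100
H = sparse_binary(r, n, row_weight=5)        # sparse parity-check matrix
Q = sparse_binary(n, n, row_weight=3)        # sparse transformation matrix
s = rng.integers(0, 2, r)                    # a syndrome-like binary vector

sigma = s @ H                                # unsatisfied-check counts (BF-style metric)
R = sigma @ Q                                # two-stage integer correlation (Q-decoder style)
L_bin = (H.astype(int) @ Q) % 2              # binary product L = HQ mod 2
R_on_L = s @ L_bin                           # metric of a BF decoder working on L

assert np.array_equal(R, s @ (H.astype(int) @ Q))  # R equals the exact integer product
mismatch = np.count_nonzero(R != R_on_L)           # differences only where cancellations occurred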

3 Reaction attacks

In order to describe recent reaction attacks proposed in [11, 12, 14], let us introduce the following notation.

Given two ones at positions $i$ and $j$ in the same row of a $p \times p$ circulant block, the distance between them is defined cyclically as $d = \min\{|i-j|,\, p - |i-j|\}$. Given a vector $a$, we define its distance spectrum $DS(a)$ as the set of all distances between any couple of ones in $a$. The multiplicity of a distance is equal to the number of distinct couples of ones producing that distance; if a distance does not appear in the distance spectrum of $a$, we say that it has zero multiplicity with respect to that spectrum. Since the distance spectrum is invariant to cyclic shifts, all the rows of a circulant matrix share the same distance spectrum; thus, we can define the distance spectrum of a circulant matrix as the distance spectrum of any of its rows (the first one, for the sake of convenience).
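A minimal Python sketch of this definition, computing the distance spectrum of a $p$-bit vector (given by its support) together with the multiplicities:

from collections import Counter
from itertools import combinations

def distance_spectrum(support, p):
    """support: positions of the ones in a p-bit vector. Returns a Counter mapping each distance to its multiplicity."""
    spectrum = Counter()
    for i, j in combinations(sorted(support), 2):
        d = min(j - i, p - (j - i))
        spectrum[d] += 1
    return spectrum

# example: p = 13, ones at positions {1, 4, 6}
# pairwise cyclic distances: 3, 2 and 5 -> spectrum {3: 1, 2: 1, 5: 1}
print(distance_spectrum({1, 4, 6}, p=13))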

The main intuition behind reaction attacks is the fact that the DFR depends on the correlation between the distances in the error vector used during encryption and those in $H$ and $Q$. In fact, common distances produce cancellations of ones in the syndrome, and this affects the decoding procedure [10], by slightly reducing the DFR. In general terms, a reaction attack is based on the following stages:

  1. The opponent sends queries to a decryption oracle. For the $i$-th query, the opponent records the error vector $e_i$ used for encryption and the corresponding oracle's answer $r_i$. The latter is $r_i = 1$ in the case of a decoding failure, $r_i = 0$ otherwise.

  2. The analysis of the collected couples $(e_i, r_i)$ provides the opponent with some information about the distance spectrum of the secret key.

  3. The opponent exploits this information to reconstruct the secret key (or an equivalent representation of it).

We point out that these attacks can affect code-based systems achieving security against both chosen plaintext attacks (CPA) and CCA2. However, in this paper we only focus on systems with CCA2 security, which represent the most interesting case. Therefore, we assume that each decryption query uses an error vector randomly picked among all the $n$-tuples with weight $t$ (see Section 2).

3.1 Matrix reconstruction from the distance spectrum

In [11, section 3.4], the problem of recovering the support of a vector from its distance spectrum has been defined as the Distance Spectrum Reconstruction (DSR) problem, which can be formulated as follows:

Distance Spectrum Reconstruction (DSR)
Given $DS(a)$, with $a$ being a $p$-bit vector with weight $w$, find a set of integers $\Phi = \{\phi_0, \phi_1, \ldots, \phi_{w-1}\}$ such that $\Phi$ is the support of a $p$-bit vector $\hat{a}$ and $DS(\hat{a}) = DS(a)$.

This problem is characterized by the following properties:

  • each vector obtained as a cyclic shift of $a$ is a valid solution to the problem; the search for a solution can then be made easier by setting $\phi_0 = 0$ and $\phi_1$ equal to some distance in $DS(a)$;

  • the elements of $\Phi$ must satisfy the following property:

    $\min\{|\phi_i - \phi_j|,\, p - |\phi_i - \phi_j|\} \in DS(a), \quad \forall i \neq j, \qquad (8)$

    since it must be $DS(\hat{a}) = DS(a)$;

  • for every solution $\Phi$, there always exists another solution $\Phi'$ such that:

    $\Phi' = \{(p - \phi) \bmod p \,:\, \phi \in \Phi\}; \qquad (9)$
  • the DSR problem can be represented through a graph $\mathcal{G}$, containing $p$ nodes with values $0, 1, \ldots, p-1$: there is an edge between any two nodes $i$ and $j$ if and only if $\min\{|i-j|,\, p-|i-j|\} \in DS(a)$. In the graph $\mathcal{G}$, a solution $\Phi$ (and $\Phi'$) is represented by a size-$w$ clique (a small example of this clique search is sketched right after this list).
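The following minimal sketch (using networkx purely for illustration, with toy parameters) builds the graph just described and extracts the size-$w$ cliques whose distance spectrum matches the target one; it is a sketch of the clique formulation, not an optimized solver.

import networkx as nx
from itertools import combinations

def cyclic_spectrum(support, p):
    return {min(j - i, p - (j - i)) for i, j in combinations(sorted(support), 2)}

def dsr_graph(spectrum, p):
    G = nx.Graph()
    G.add_nodes_from(range(p))
    for i, j in combinations(range(p), 2):
        if min(j - i, p - (j - i)) in spectrum:
            G.add_edge(i, j)
    return G

def dsr_solutions(spectrum, p, w):
    """Supports of weight-w candidates whose spectrum equals the target one."""
    G = dsr_graph(spectrum, p)
    cliques = (set(c) for c in nx.enumerate_all_cliques(G) if len(c) == w)
    return [c for c in cliques if cyclic_spectrum(c, p) == set(spectrum)]

# toy usage: candidates for a weight-3 vector with distance spectrum {2, 3, 5} and p = 13
candidates = dsr_solutions({2, 3, 5}, p=13, w=3)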

Reaction attacks against QC-MDPC code-based systems are based on the DSR problem. Instead, in the case of QC-LDPC code-based systems, an attacker aiming at recovering the secret QC-LDPC code has to solve the following problem:

Distance Spectrum Distinguishing and Reconstruction (DSDR)
Given $\bigcup_{i} DS(a_i)$, where each $a_i$ is a $p$-bit vector with weight $w_i$, find sets $\Phi_0, \Phi_1, \ldots$ such that each $\Phi_i$ is the support of a $p$-bit vector $\hat{a}_i$ and $\bigcup_i DS(\hat{a}_i) = \bigcup_i DS(a_i)$.

Also in this case, the problem can be represented with a graph, where solutions of the DSDR problem are defined by cliques of proper size and are coupled as described by (9). On average, solving these problems is easy: the associated graphs are sparse (the number of edges is relatively small), so the probability of having spurious cliques (i.e., cliques that are not associated to the actual distance spectrum) is in general extremely low. In addition, the complexity of finding the solutions is significantly smaller than that of the previous steps of the attack, so it can be neglected [11, 14]. From now on, we conservatively assume that these problems always have the smallest number of solutions, that is, two coupled solutions (as in (9)) for the DSR case and two coupled solutions per vector for the DSDR case.

3.2 GJS attack

The first reaction attack exploiting decoding failures has been proposed in [14], and is tailored to QC-MDPC code-based systems. Therefore, we describe it considering $Q = I$ (i.e., $m = 1$, so that $L = H$), and we refer to it as the GJS attack. In this attack, the distance spectrum recovery is performed through Algorithm 1. The vectors $a$ and $b$ estimated through Algorithm 1 are then used by the opponent to guess the multiplicity of each distance in the spectrum of $H_0$. Indeed, the ratios $b[d]/a[d]$ follow different and distinguishable distributions, with mean values depending on the multiplicity of $d$. This way, the analysis of the values $b[d]/a[d]$ allows the opponent to recover $DS(H_0)$.

$a$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
$b$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
for $i = 1, \ldots, M$ do
      $x_i$ ← ciphertext encrypted with the error vector $e_i$
      Divide $e_i$ as $[e_i^{(0)}, e_i^{(1)}, \ldots, e_i^{(n_0-1)}]$, where each $e_i^{(j)}$ has length $p$
      $D$ ← distance spectrum of $e_i^{(0)}$
      for $d \in D$ do
          $a[d] \leftarrow a[d] + 1$
          $b[d] \leftarrow b[d] + r_i$
      end for
end for
Algorithm 1: GJS distance spectrum recovery
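A minimal Python sketch of this collection phase follows; encrypt_with_random_error and oracle_reports_failure are hypothetical stand-ins for the attacker's interaction with Bob (they are not part of any real API), and the statistics are returned as the ratios discussed above.

from collections import Counter
from itertools import combinations

def cyclic_spectrum(support, p):
    return {min(j - i, p - (j - i)) for i, j in combinations(sorted(support), 2)}

def gjs_collect(num_queries, p, encrypt_with_random_error, oracle_reports_failure):
    occurrences = Counter()   # a[d]: queries whose first error block contains distance d
    failures = Counter()      # b[d]: decoding failures observed for distance d
    for _ in range(num_queries):
        # the callback returns the ciphertext and the support of the random error vector
        ciphertext, error_support = encrypt_with_random_error()
        first_block = [pos for pos in error_support if pos < p]
        failed = oracle_reports_failure(ciphertext)     # True on a decoding failure
        for d in cyclic_spectrum(first_block, p):
            occurrences[d] += 1
            failures[d] += int(failed)
    # the ratios b[d]/a[d] cluster according to the multiplicity of d in the secret spectrum
    return {d: failures[d] / occurrences[d] for d in occurrences}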

Solving the DSR problem associated to $DS(H_0)$ allows the opponent to obtain a matrix $\hat{H}_0 = X^s H_0$, with $X^s$ being an unknown circulant permutation matrix (i.e., a cyclic shift by an unknown amount $s$). Decoding of intercepted ciphertexts can be done just with $\hat{H}_0$. Indeed, according to (3), the public key can be written as $G = [I_k \mid M]$, with:

$M_i = \left(H_{n_0-1}^{-1} H_i\right)^T, \qquad i = 0, \ldots, n_0 - 2. \qquad (10)$

The opponent can then compute the products:

$\hat{H}_i = \hat{H}_0 \left(M_0^{-1} M_i\right)^T, \qquad i = 1, \ldots, n_0 - 1, \qquad (11)$

where $M_{n_0-1} = I$, in order to obtain a matrix $\hat{H} = [\hat{H}_0 \mid \hat{H}_1 \mid \cdots \mid \hat{H}_{n_0-1}] = X^s H$. This matrix can be used to efficiently decode the intercepted ciphertexts, since:

$\hat{H} x^T = X^s H x^T = X^s H e^T. \qquad (12)$

Applying a decoding algorithm to this syndrome, with the parity-check matrix $\hat{H}$, will return $e$ as output. The corresponding plaintext can then be easily recovered by considering the first $k$ positions of $x + e$.

As mentioned in Section 3.1, the complexity of solving the DSR problem can be neglected, which means that the complexity of the GJS attack can be approximated with that of Algorithm 1. First of all, we denote as $C_q$ the number of operations that the opponent must perform, for each decryption query, in order to compute the distance spectrum of $e^{(0)}$ and update the estimates $a$ and $b$. The $p$-bit block $e^{(0)}$ can have weight between $0$ and $t$; let us suppose that its weight is $w$, which occurs with probability

$\Pr\{w\} = \frac{\binom{p}{w} \binom{n-p}{t-w}}{\binom{n}{t}}. \qquad (13)$

We can assume that in $e^{(0)}$ there are no distances with multiplicity larger than one (this is reasonable when $e^{(0)}$ is sparse). The average number of distances in $e^{(0)}$ can thus be estimated as $\sum_{w=0}^{t} \Pr\{w\} \binom{w}{2}$, which also gives the number of operations needed to obtain the spectrum of $e^{(0)}$. Each of these distances is associated to two additional operations: the update of $a$, which is performed for each decryption query, and the update of $b$, which is performed only in the case of a decryption failure. Thus, if we denote as $\epsilon$ the DFR of the system and as $C_{enc}$ and $C_{dec}$ the complexities of one encryption and one decryption, respectively, the average complexity of each decryption query can be estimated as:

$C_q = C_{enc} + C_{dec} + (2 + \epsilon) \sum_{w=0}^{t} \Pr\{w\} \binom{w}{2}. \qquad (14)$

Thus, the complexity of the attack, in terms of work factor, can be estimated as:

$WF_{GJS} = M \cdot C_q, \qquad (15)$

where $M$ is the number of decryption queries performed by the opponent.
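Under the assumptions just stated, the following short Python sketch evaluates the per-query quantities appearing in (13) and (14); the parameter values are illustrative placeholders only, not a proposed parameter set.

from math import comb

def weight_probability(w, p, n, t):
    """Probability that a random weight-t, n-bit error vector places exactly w of its ones in a fixed p-bit block (hypergeometric distribution)."""
    return comb(p, w) * comb(n - p, t - w) / comb(n, t)

def average_distances(p, n, t):
    """Expected number of distances in the first block, assuming no distance has multiplicity larger than one."""
    return sum(weight_probability(w, p, n, t) * comb(w, 2) for w in range(t + 1))

# illustrative parameters
p, n0, t = 4801, 2, 84
n = n0 * p
avg = average_distances(p, n, t)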

3.3 FHS attack

More recently, a reaction attack specifically tailored to QC-LDPC code-based systems has been proposed in [11]; it takes into account the effect of the matrix $Q$. We refer to this attack as the FHS attack. The collection phase in the FHS attack is performed through Algorithm 2. We point out that we consider a slightly different (and improved) version of the attack with respect to the one in [11].

$a$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
$b$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
$a'$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
$b'$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
for $i = 1, \ldots, M$ do
      $x_i$ ← ciphertext encrypted with the error vector $e_i$
      Divide $e_i$ as $[e_i^{(0)}, e_i^{(1)}, \ldots, e_i^{(n_0-1)}]$, where each $e_i^{(j)}$ has length $p$
      for $j = 0, \ldots, n_0 - 1$ do
          $D_j$ ← distance spectrum of $e_i^{(j)}$
      end for
      $D \leftarrow D_0 \cup D_1 \cup \cdots \cup D_{n_0-1}$
      for $d \in D$ do
          $a[d] \leftarrow a[d] + 1$
          $b[d] \leftarrow b[d] + r_i$
      end for
      for $d \in D_{n_0-1}$ do
          $a'[d] \leftarrow a'[d] + 1$
          $b'[d] \leftarrow b'[d] + r_i$
      end for
end for
Algorithm 2: FHS distance spectrum recovery

As in the GJS attack, the estimates are then used by the opponent to guess the distances appearing in the blocks of $H$ and $Q$. In particular, every block of the error vector gets multiplied by all the blocks of $H$ in the computation of the syndrome, so the analysis based on the ratios $b[d]/a[d]$ reveals the union of the distance spectra of the blocks of $H$. In the same way, the estimates $a'$ and $b'$ are used to guess the distances appearing in the blocks belonging to the last block row of $Q$. Indeed, the block $e^{(n_0-1)}$ gets multiplied by all the blocks in that row. Since a circulant matrix and its transpose share the same distance spectrum, the opponent is equivalently guessing distances in the corresponding block column of $Q^T$. In other words, the analysis based on the ratios $b'[d]/a'[d]$ reveals the union of the distance spectra of these blocks of $Q$.

The opponent must then solve two instances of the DSDR problem in order to obtain candidates for the blocks $H_i$ of $H$ and for the blocks of $Q$ identified above, for $i = 0, \ldots, n_0-1$. As described in Section 3.1, we can conservatively suppose that the solution of the DSDR problem for $H$ is represented by two sets of cliques, with each couple satisfying (9). Each solution (as well as its coupled counterpart) represents a candidate for one of the blocks in $H$, up to a cyclic shift. In addition, we must also consider that the opponent has no information about the correspondence between cliques in the graph and blocks in $H$: in other words, even if the opponent correctly guesses all the circulant blocks of $H$, he does not know their order and hence must consider all their possible permutations. Considering the well-known isomorphism between binary circulant matrices and polynomials in $\mathbb{F}_2[x]/(x^p + 1)$, the matrix $H$ can be expressed in polynomial form as:

(16)

with $\pi$ being a permutation of $\{0, 1, \ldots, n_0-1\}$ (so that $\pi(i)$ denotes the position of the $i$-th element in the permutation), and each candidate polynomial being associated to the support of one clique or of its coupled counterpart. In the same way, solving the DSDR problem for the blocks of $Q$ gives the same number of candidates for these blocks, which are denoted as $q_0, q_1, \ldots, q_{n_0-1}$ in polynomial notation. This means that for these blocks we have an expression similar to (16), with its own coefficients and a permutation $\sigma$. The opponent must then combine these candidates, in order to obtain candidates for the last block of the matrix $L$, which is denoted as $l_{n_0-1}$ in polynomial form. Indeed, once $l_{n_0-1}$ is known, the opponent can proceed as in the GJS attack for recovering the remaining blocks of $L$. Taking into account that $L = HQ$, the polynomial $l_{n_0-1}$ can be expressed as:

$l_{n_0-1} = \sum_{i=0}^{n_0-1} h_i \, q_i, \qquad (17)$

where $h_i$ and $q_i$ denote the polynomial representations of $H_i$ and of the corresponding block of $Q$, respectively.

Because of the commutative property of the addition, the opponent can look only for the relative permutation between the candidate polynomials $h_i$ and $q_i$. Then, (17) can be replaced by:

(18)

which can be rearranged as:

(19)

with the unknown cyclic shifts of the two sets of candidates merged into a single shift per term. Since whichever row-permuted version of $L$ can be used to decode the intercepted ciphertexts, we can write:

(20)

with one of these shifts set to zero and the remaining ones expressed relatively to it.

We must now consider the fact that, in the case of blocks of $Q$ having weight 1 or 2 (we suppose that the weights of the blocks of $H$ are all larger than 2), the number of candidates is reduced. Indeed, let us suppose that there are $n_1$ and $n_2$ blocks with weights 1 and 2, respectively, among the blocks of $Q$ involved in (20). Let us also suppose that there is no null block among them. These assumptions are often verified for the parameter choices we consider. For blocks with weight 1 there is no distance to guess, which means that the associated polynomial is just a monomial, absorbed into the unknown shift. In the case of a block with weight 2, the two possible candidates are in the form $1 + x^d$ and $1 + x^{p-d}$. However, since $x^{p-d}(1 + x^d) = 1 + x^{p-d} \bmod (x^p + 1)$, the two solutions defined by (9) coincide up to a cyclic shift, so the opponent can consider only one of them.

Hence, the number of possible choices for the polynomials $h_i$ and $q_i$ in (20) is equal to $2^{n_0} \cdot 2^{n_0 - n_1 - n_2}$. In addition, the presence of blocks with weight 1 reduces the number of independent configurations of $\sigma$: if we look at (18), it is clear that any two permutations $\sigma$ and $\sigma'$ that differ only in the positions of the weight-1 polynomials lead to two identical sets of candidates. Based on these considerations, we can compute the number of different candidates in (20) as:

(21)

The complexity of computing each of these candidates is low: indeed, the computations in (20) involve sparse polynomials, and so they require a small number of operations. For this reason, we neglect the complexity of this step in the computation of the attack work factor. After computing each candidate, the opponent has to compute the remaining polynomials forming $L$ through multiplications by the polynomials appearing in the non-systematic part of the public key (see (3)). In fact, it is enough to multiply any candidate for $l_{n_0-1}$ by the polynomials included in the non-systematic part of $G$ (see (3)). When the right candidate for $l_{n_0-1}$ is tested, the polynomials resulting from such a multiplication will be sparse, with weight not larger than $d_v m$. The check on the weight can be initiated right after performing the first multiplication: if the weight of the first polynomial obtained is larger than this bound, then the candidate is discarded; otherwise, the other polynomials are computed and tested. Thus, we can conservatively assume that for each candidate the opponent performs only one multiplication. Considering fast polynomial multiplication algorithms, this complexity can be estimated as $O(p \log p)$. Neglecting the final check on the weights of the vectors obtained, the complexity of computing and checking each one of the candidates of $l_{n_0-1}$ can be expressed as:

(22)
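A minimal Python sketch of this candidate-checking step, with polynomials represented by their supports and multiplied modulo $x^p + 1$ (the weight bound and inputs are assumptions of the sketch, not the authors' reference implementation):

from collections import Counter

def sparse_poly_mult(support_a, support_b, p):
    """Multiply two binary polynomials given by their supports, mod x^p + 1 and mod 2; returns the support of the product."""
    counts = Counter((i + j) % p for i in support_a for j in support_b)
    return {exp for exp, c in counts.items() if c % 2 == 1}

def candidate_survives(candidate_support, public_poly_support, p, weight_bound):
    """True if multiplying the candidate by a public-key polynomial yields a polynomial sparse enough to be a plausible secret block."""
    product = sparse_poly_mult(candidate_support, public_poly_support, p)
    return len(product) <= weight_bound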

The execution of Algorithm 2 has a complexity which can be estimated in a similar way as done for the GJS attack (see eq. (15)). However, unless the DFR of the system is significantly low (such that the cost of collecting the required number of ciphertexts becomes comparable with the work factor expressed by (22)), collecting the required number of ciphertexts for the attack is negligible from the complexity standpoint [11], so (22) provides a (tight) lower bound on the complexity of the attack.

3.4 FHZ attack

The FHZ attack has been proposed in [12], and is another attack procedure specifically tailored to QC-LDPC code-based systems. The attack starts from the assumption that the number of decryption queries to the oracle is properly bounded, such that the opponent cannot recover the distance spectrum of $H$ (this is the design criterion followed by the authors of LEDApkc [4]). However, it may happen that such a bounded amount of ciphertexts is enough for recovering the spectrum of $Q$: in such a case, the opponent might succeed in reconstructing a shifted version of $Q$ and, from it, the secret key with the help of ISD. The distance spectrum recovery procedure for this attack is described in Algorithm 3.

for $j = 0, \ldots, n_0 - 1$ do
      $a_j$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
      $b_j$ ← zero-initialized vector of length $\lfloor p/2 \rfloor$
end for
for $i = 1, \ldots, M$ do
      $x_i$ ← ciphertext encrypted with the error vector $e_i$
      Divide $e_i$ as $[e_i^{(0)}, e_i^{(1)}, \ldots, e_i^{(n_0-1)}]$, where each $e_i^{(j)}$ has length $p$
      for $j = 0, \ldots, n_0 - 1$ do
          $D_j$ ← distance spectrum of $e_i^{(j)}$
          for $d \in D_j$ do
              $a_j[d] \leftarrow a_j[d] + 1$
              $b_j[d] \leftarrow b_j[d] + r_i$
          end for
      end for
end for
Algorithm 3: FHZ distance spectrum recovery

The estimates $a_j$ and $b_j$ are then used to guess distances in the blocks of $Q$; solving the related DSDR problems gives the opponent proper candidates for the blocks of $Q$. These candidates can then be used to build sets of candidates for $Q$, which will be in the form:

(23)

where each polynomial is obtained through the solution of the DSDR problem (in order to ease the notation, the polynomial entries of $\hat{Q}$ in (23) have been put in sequential order, such that we can use only one subscript to denote each of them). Let us denote as $\bar{m} = [m_0, m_1, \ldots, m_{n_0-1}]$ the sequence of weights defining the first row of $Q$, as explained in Section 2. The solutions of the DSDR problem for the first row of $Q$ will then give two polynomials for each weight. The number of candidates for $\hat{Q}$ depends on the distribution of the weights in $\bar{m}$: let us consider, for the sake of simplicity, the case of $m_0 = m_1$, while all the other weights in $\bar{m}$ are distinct. In this situation, the graph associated to the DSDR problem will contain (at least) two couples of cliques with size $m_0$ (see Section 3.1). For the sake of simplicity, let us look at the first row of $\hat{Q}$: in such a case, the solution is represented by two couples of cliques, each couple being described by (9). In order to construct a candidate for the first row, as in (23), the opponent must guess which couple is associated to the block with weight $m_0$ (and which to the block with weight $m_1$); then, he must pick one clique from each couple. The number of candidates for the first row of $\hat{Q}$ is hence $2 \cdot 2^{n_0}$. Since there are $n_0$ rows, the number of possible choices for the polynomials in (23) is then equal to $\left(2 \cdot 2^{n_0}\right)^{n_0}$. If all the weights in $\bar{m}$ were distinct, this number would instead be equal to $2^{n_0^2}$. In order to generalize this reasoning, we can suppose that $\bar{m}$ contains $v$ distinct integers $\mu_0, \mu_1, \ldots, \mu_{v-1}$, with multiplicities $\lambda_0, \lambda_1, \ldots, \lambda_{v-1}$, that is:

$\sum_{i=0}^{v-1} \lambda_i = n_0. \qquad (24)$

Thus, also taking into account the fact that for polynomials with weight 1 or 2 we have only one candidate (instead of 2), the number of different choices for the entries of $\hat{Q}$ in (23) can be computed as:

(25)

with $n_1$ and $n_2$ being the number of entries of $\bar{m}$ that are equal to 1 and 2, respectively.

with and being the number of entries of that are equal to and , respectively. Considering (23), the -th row of can be expressed as , with:

(26)

and being a diagonal matrix:

(27)

Let $G$ denote the public key; then, the matrix $G \cdot Q^T$ is a generator matrix of the secret code. In particular, we have:

(28)

where each entry denotes the polynomial representation of the corresponding circulant block. The multiplication of every row of this matrix by whichever polynomial returns a matrix which generates the same code. In particular, we can multiply the first row by one such polynomial, the second row by another, and so on. The resulting matrix can then be expressed as:

(29)

Taking into account (27), we can define:

(30)

which holds for $i = 0, \ldots, n_0 - 1$. We can now express the resulting matrix as: