Moderate-density parity-check (MDPC) codes have recently been proposed as the underlying coding scheme for McEliece-like cryptosystems [2, 3]. The family of MDPC codes admits a parity-check matrix of moderate density (the existence of an MDPC parity-check matrix for a binary linear block code does not rule out the possibility that the same code admits a (much) sparser parity-check matrix; as in most of the literature, we neglect the probability that a code defined by a randomly-drawn moderate-density parity-check matrix admits a sparser one), yielding codes with large minimum distance. In [2], a McEliece cryptosystem based on quasi-cyclic (QC) MDPC codes is presented; thanks to the moderate density of the parity-check matrix, it defeats information set decoding attacks on the dual code. For a given security level, the QC-MDPC cryptosystem allows for very small key sizes compared to other McEliece variants.
In this paper, we introduce a variation on the scheme of [2]. In particular, we investigate the adoption of MDPC ensembles based on protographs [4] to improve the error correction capability of the code underlying the cryptosystem. We focus on protographs containing state variable nodes (VNs). The use of state VNs allows designing codes that can be decoded over an extended Tanner graph that is sparser than the reduced Tanner graph associated with the code's (moderate-density) parity-check matrix.
For the introduced ensembles, we relate the density of the extended Tanner graph to the density of the reduced Tanner graph. The lower density of the extended Tanner graph allows remarkable gains in error correction capability with respect to the ensembles of [2]. The improvement is analyzed both in the asymptotic setting via density evolution (DE) [5] and at finite block length via Monte Carlo simulations, under two decoding algorithms: the sum-product algorithm (SPA) and the less complex Algorithm E from [5]. For one of the proposed ensembles, a gain in the weight of error patterns (decodable with a target error probability) is observed with respect to the ensembles of [2], rendering the scheme more resilient to information set decoding (ISD) attacks [6].
II-A Circulant Matrices
A binary circulant matrix $\mathbf{A}$ of size $p$ is a $p \times p$ matrix with coefficients in $\mathbb{F}_2$ obtained by cyclically shifting its first row to the right, yielding
The set of $p \times p$ binary circulant matrices together with matrix multiplication and addition forms a commutative ring that is isomorphic to the polynomial ring $\mathbb{F}_2[x]/(x^p - 1)$. In particular, we can associate to the circulant $\mathbf{A}$ the polynomial $a(x) = a_0 + a_1 x + \ldots + a_{p-1} x^{p-1}$, where $(a_0, a_1, \ldots, a_{p-1})$ is the first row of $\mathbf{A}$. Consider two circulants $\mathbf{A}$ and $\mathbf{B}$ and their associated polynomials $a(x)$ and $b(x)$. Denote by $\mathbf{C}$ the product $\mathbf{A}\mathbf{B}$ and by $\mathbf{D}$ the sum $\mathbf{A} + \mathbf{B}$. Then, the circulants $\mathbf{C}$ and $\mathbf{D}$ are associated to the polynomials $c(x) = a(x)b(x)$ and $d(x) = a(x) + b(x)$, respectively. We indicate the vector of coefficients of a polynomial $a(x)$ as $\mathbf{a}$. The weight of a polynomial is the number of its non-zero coefficients. We indicate both weights with the operator $\mathrm{wt}(\cdot)$, i.e., $\mathrm{wt}(a(x)) = \mathrm{wt}(\mathbf{a})$.
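The ring isomorphism above can be checked numerically. The following is a minimal sketch in pure Python; the size $p = 7$ and the example polynomials are arbitrary illustrative choices.

```python
# Sketch: circulant matrices over F_2 and their polynomial representation.
# Illustration only; p and the example rows are arbitrary choices.

def circulant(first_row):
    """Build a p x p binary circulant by cyclically right-shifting first_row."""
    p = len(first_row)
    return [[first_row[(j - i) % p] for j in range(p)] for i in range(p)]

def mat_mul(A, B):
    """Binary matrix product (arithmetic in F_2)."""
    p = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(p)) % 2 for j in range(p)]
            for i in range(p)]

def poly_mul(a, b):
    """Product of coefficient vectors modulo (x^p - 1) over F_2."""
    p = len(a)
    c = [0] * p
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[(i + j) % p] ^= bj
    return c

p = 7
a = [1, 0, 1, 0, 0, 1, 0]   # a(x) = 1 + x^2 + x^5
b = [0, 1, 1, 0, 0, 0, 0]   # b(x) = x + x^2
A, B = circulant(a), circulant(b)
# The product of circulants is the circulant of the polynomial product:
assert mat_mul(A, B) == circulant(poly_mul(a, b))
```

The check mirrors the isomorphism: multiplying circulants is a cyclic convolution of their first rows, i.e., polynomial multiplication modulo $x^p - 1$.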
II-B QC-MDPC-based Cryptosystems
A binary MDPC code of length $n$, dimension $k$ and row weight $w$ is defined by a binary parity-check matrix $\mathbf{H}$ whose rows have (moderate) Hamming weight $w$. For length $n = n_0 p$, dimension $k = (n_0 - 1)p$, and redundancy $r = p$ (we assume here parity-check matrices without redundant rows) for some integer $p$, the parity-check matrix of a QC-MDPC code in polynomial form is a $1 \times n_0$ matrix. Without loss of generality, we consider in the following codes with $n_0 = 2$. This family of codes covers a wide range of code rates and is of particular interest for cryptographic applications, since the parity-check and generator matrices can be described in a very compact way. The parity-check matrix of QC-MDPC codes with $n_0 = 2$ has the form
Let $\mathcal{D}$ be an efficient decoder for the code defined by the parity-check matrix $\mathbf{H}$. The cryptosystem operates in the following manner.
II-B1 Key Generation
The private key is generated as a parity-check matrix of the form (2) with $\mathrm{wt}(h_i(x)) = w_i$ for $i = 0, 1$. The matrix $\mathbf{H}$ with row weight $w = w_0 + w_1$ is the private key. The public key is the corresponding binary generator matrix $\mathbf{G}$ in systematic form (in [2] the use of a CCA2-secure conversion is proposed, which allows using $\mathbf{G}$ in systematic form without leaking information); note that the generator matrix can be described by $p$ bits, yielding a small public key size.
To encrypt a plaintext $\mathbf{u}$, a user computes the ciphertext $\mathbf{x}$ using the public key as $\mathbf{x} = \mathbf{u}\mathbf{G} + \mathbf{e}$, where $\mathbf{e}$ is an error vector chosen uniformly at random from all vectors in $\mathbb{F}_2^n$ of Hamming weight $t$.
To decrypt a ciphertext $\mathbf{x}$, the authorized recipient uses the private key to obtain $\hat{\mathbf{c}} = \mathcal{D}(\mathbf{x})$. Since $\mathbf{G}$ is in systematic form, the plaintext corresponds (in case of correct decoding) to the first $k$ bits of $\hat{\mathbf{c}}$.
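The encrypt/decrypt round trip above can be sketched end to end. The code below is a toy stand-in: a tiny systematic code with brute-force syndrome decoding plays the role of the QC-MDPC code and its iterative decoder $\mathcal{D}$; all parameters are illustrative, not cryptographic.

```python
# Toy McEliece-style encrypt/decrypt round trip. The code and decoder here
# are placeholders (a tiny systematic code with brute-force syndrome
# decoding of single errors), standing in for a QC-MDPC code with an
# iterative decoder; parameters are illustrative, not cryptographic.
import itertools

# Systematic generator G = [I_k | P] and parity-check H = [P^T | I_r] over F_2.
P = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
k, r = 4, 3
n = k + r
G = [[int(i == j) for j in range(k)] + P[i] for i in range(k)]
H = [[P[j][i] for j in range(k)] + [int(i == j) for j in range(r)]
     for i in range(r)]

def vec_mat(v, M):
    return [sum(v[i] & M[i][j] for i in range(len(v))) % 2
            for j in range(len(M[0]))]

def syndrome(x):
    return [sum(H[i][j] & x[j] for j in range(n)) % 2 for i in range(r)]

def decode(x, t=1):
    """Brute-force syndrome decoder for error weight <= t (placeholder for
    the MDPC iterative decoder)."""
    for wt in range(t + 1):
        for pos in itertools.combinations(range(n), wt):
            cand = [x[j] ^ int(j in pos) for j in range(n)]
            if syndrome(cand) == [0] * r:
                return cand
    return None

u = [1, 0, 1, 1]                      # plaintext
c = vec_mat(u, G)                     # codeword u*G
e = [0, 0, 0, 0, 1, 0, 0]             # weight-1 error vector
x = [c[j] ^ e[j] for j in range(n)]   # ciphertext x = u*G + e
assert decode(x)[:k] == u             # first k bits recover the plaintext
```

Because $\mathbf{G}$ is systematic, recovering the plaintext after successful decoding is just truncation to the first $k$ bits, exactly as in the description above.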
II-C Protograph-based Codes
A protograph [4] is a small bipartite graph comprising a set of $N$ VNs (also referred to as VN types) and a set of $M$ check nodes (i.e., CN types). A VN type $V_j$ is connected to a CN type $C_i$ by $b_{ij}$ edges. A protograph can be equivalently represented in matrix form by an $M \times N$ base matrix $\mathbf{B}$: the $j$th column of $\mathbf{B}$ is associated to VN type $V_j$, the $i$th row of $\mathbf{B}$ is associated to CN type $C_i$, and the $(i,j)$ element of $\mathbf{B}$ is $b_{ij}$. A larger graph (derived graph) can be obtained from a protograph by applying a copy-and-permute procedure: the protograph is copied $\ell$ times, and the edges of the different copies are permuted preserving the original protograph connectivity. If a type-$V_j$ VN is connected to a type-$C_i$ CN with $b_{ij}$ edges in the protograph, in the derived graph each type-$V_j$ VN is connected to $b_{ij}$ distinct type-$C_i$ CNs (observe that multiple connections between a VN and a CN are not allowed in the derived graph). The derived graph is a Tanner graph with $\ell N$ VNs and $\ell M$ CNs that can be used to represent a binary linear block code. A protograph defines a code ensemble: for a given protograph and a given number of VNs, consider all its possible derived graphs; the ensemble is the collection of codes associated to the derived graphs in this set.
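The copy-and-permute procedure can be sketched in a few lines. The sketch below uses distinct cyclic-shift permutation matrices for each base matrix entry, which is one valid (quasi-cyclic) instance of the general procedure and automatically avoids multi-edges; the base matrix and lifting factor are illustrative.

```python
# Sketch of the copy-and-permute (lifting) procedure: each base matrix
# entry b_ij is replaced by a sum of b_ij distinct cyclic-shift permutation
# matrices of size L (a quasi-cyclic lifting, one valid instance of the
# general procedure). Base matrix and L are illustrative choices.
import random

def shift_matrix(L, s):
    """L x L permutation matrix for a cyclic shift by s."""
    return [[int((i + s) % L == j) for j in range(L)] for i in range(L)]

def lift(B, L, rng=random):
    """Expand base matrix B by factor L; distinct shifts per entry avoid
    multiple connections between a VN and a CN in the derived graph."""
    M, N = len(B), len(B[0])
    H = [[0] * (N * L) for _ in range(M * L)]
    for i in range(M):
        for j in range(N):
            for s in rng.sample(range(L), B[i][j]):
                S = shift_matrix(L, s)
                for a in range(L):
                    for b in range(L):
                        H[i * L + a][j * L + b] ^= S[a][b]
    return H

B = [[2, 1, 1],
     [1, 2, 1]]
L = 5
H = lift(B, L)
# Each derived CN of type i keeps the protograph degree sum_j B[i][j]:
assert all(sum(H[i * L + a]) == sum(B[i])
           for i in range(len(B)) for a in range(L))
```

The assertion checks the defining property of the derived graph: every copy of a CN type retains exactly the node degree prescribed by the protograph.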
Protographs allow specifying graphs which contain VNs associated to codeword symbols, as well as VNs which are not associated to codeword symbols. The latter class of VNs are often referred to as state or punctured VNs. The term “punctured” is used since the code associated with the derived graph can be seen as a punctured version of a longer code associated with the same graph for which all the VNs are associated to codeword bits. The introduction of state VNs in a code graph allows designing codes with a remarkable performance [7]. The QC-MDPC codes introduced in [2] admit a protograph representation. In particular, for the rate-$1/2$ case, the base matrix has the form
Here, the two base matrix entries are the weights $w_0$ and $w_1$ of the two circulant blocks. The expansion of the protograph can be performed in a structured manner, yielding a QC-MDPC code with parity-check matrix in polynomial form. In the remainder of the paper, we will focus on rate-$1/2$ codes, and we will denote the ensemble defined by (3) as the reference ensemble. In particular, we will consider as a study case the shortest public key size considered in [2] for $80$-bit security, where $p = 4801$ is the public key size (in bits) and $n = 9602$ is the MDPC code block length. For this specific case, the reference ensemble is obtained by setting $w_0 = w_1 = 45$, yielding a total row weight of $w = 90$. This choice of parameters was introduced in [2] to obtain a security level of $80$ bits.
II-D Decoding Algorithms
Denote by $V_1, \ldots, V_n$ the VNs in the code Tanner graph, and by $C_1, \ldots, C_m$ the CNs. The neighborhood of a VN $V_j$ is $\mathcal{N}(V_j)$, and similarly $\mathcal{N}(C_i)$ denotes the neighborhood of the CN $C_i$. The message from $V_j$ to $C_i$ is denoted by $m_{V_j \to C_i}$, and the message from $C_i$ to $V_j$ is denoted by $m_{C_i \to V_j}$. The channel message is $m_{\mathrm{ch}}$. If a codeword bit is associated to $V_j$ (i.e., $V_j$ is not a state VN), then we denote it with a slight abuse of notation by $v_j \in \{+1, -1\}$, where $+1$ stands for a $0$ and $-1$ stands for a $1$. The bit in the ciphertext associated to $V_j$ is $x_j$, with $x_j = -v_j$ if an error is introduced, $x_j = v_j$ otherwise. For state VNs, no ciphertext bits are produced at the encryption stage. We consider next two decoding algorithms. The first algorithm is the classical SPA, for which we introduce a generalization allowing an attenuation of the extrinsic information produced at the CNs. As we shall see, the attenuation can be used as a heuristic method to improve the performance at low error rates. The second algorithm is the Algorithm E introduced in [5], which reduces the decoding complexity by limiting the message alphabet to the ternary set $\{-1, 0, +1\}$. As observed in [5], Algorithm E also benefits from the introduction of a heuristic scaling parameter. Here, nevertheless, the parameter is used to amplify the channel message, and it plays a role not only at low error rates, but also in the so-called waterfall region of the code performance. While in [5] it was suggested to vary the scaling parameter with the iteration number, here we will keep the scaling parameter fixed throughout the iterations. In both cases, the scaling parameter is indicated by $\alpha$.
II-D1 Scaled Sum-Product Algorithm
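The scaled CN update can be sketched as follows: the standard extrinsic log-likelihood ratio computed by the tanh rule is multiplied by the attenuation factor $\alpha$. This is a minimal sketch with illustrative message values, not the paper's exact update equations.

```python
# Sketch of the scaled SPA check node update: the standard extrinsic
# log-likelihood ratio (tanh rule) is attenuated by a factor alpha, the
# heuristic scaling discussed above. Input LLRs are illustrative.
import math

def cn_update(incoming_llrs, alpha=0.8):
    """Extrinsic LLRs from a CN: one output per edge, each computed from
    all other incoming messages, then scaled by alpha."""
    out = []
    for i in range(len(incoming_llrs)):
        prod = 1.0
        for j, m in enumerate(incoming_llrs):
            if j != i:
                prod *= math.tanh(m / 2.0)
        prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)  # numerical guard
        out.append(alpha * 2.0 * math.atanh(prod))
    return out

msgs = [1.2, -0.7, 2.5, 0.4]
ext = cn_update(msgs, alpha=0.8)
# With alpha = 1 the update reduces to the standard SPA CN rule:
unscaled = cn_update(msgs, alpha=1.0)
assert all(abs(a - 0.8 * b) < 1e-9 for a, b in zip(ext, unscaled))
```

Setting $\alpha = 1$ recovers the classical SPA; $\alpha < 1$ damps overconfident extrinsic messages, which is the heuristic exploited later to mitigate error floors.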
II-D2 Algorithm E
while at the CNs
Observe that the scaling parameter is de-activated (i.e., it is set to $1$) in the final decision.
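A compact sketch of Algorithm E message passing follows. It runs on a toy parity-check matrix (not an MDPC code), with an illustrative channel scaling weight; as noted above, the scaling is de-activated in the final decision.

```python
# Sketch of Algorithm E message passing with a channel scaling weight
# alpha (messages in {-1, 0, +1}; 0 is an erasure). The toy code, channel
# values, and parameters are illustrative; the scaling is set to 1 in the
# final decision, as noted above.

def sign(x):
    return (x > 0) - (x < 0)

def algorithm_e(H, chan, alpha=2, iters=10):
    """H: list of parity-check rows (0/1); chan: channel messages in
    {-1, +1} (0 for a missing observation). Returns decisions in {-1, 0, +1}."""
    m, n = len(H), len(H[0])
    edges = [(i, j) for i in range(m) for j in range(n) if H[i][j]]
    v2c = {e: chan[e[1]] for e in edges}           # VN-to-CN messages
    for _ in range(iters):
        c2v = {}
        for (i, j) in edges:                        # CN update: sign product,
            p = 1                                   # erased if any input is 0
            for (i2, j2) in edges:
                if i2 == i and j2 != j:
                    p *= v2c[(i2, j2)]
            c2v[(i, j)] = p
        for (i, j) in edges:                        # VN update: weighted sum
            s = alpha * chan[j] + sum(c2v[(i2, j2)] for (i2, j2) in edges
                                      if j2 == j and i2 != i)
            v2c[(i, j)] = sign(s)
    return [sign(chan[j] + sum(c2v[(i2, j2)] for (i2, j2) in edges if j2 == j))
            for j in range(n)]

# Toy [7,4] code, all-zero codeword (all +1), one bit received in error:
H = [[1, 1, 0, 1, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
decisions = algorithm_e(H, chan=[1, 1, 1, -1, 1, 1, 1], alpha=2, iters=2)
assert decisions == [1] * 7  # the flipped bit is corrected
```

Note how the ternary alphabet removes all floating-point operations: CN updates are sign products (with $0$ propagating as an erasure) and VN updates are small integer sums, which is the source of the complexity reduction with respect to the SPA.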
III Protograph-based MDPC Ensembles
where the column on the left of the vertical delimiter is associated to state VNs. The expansion of the protograph can be performed in a structured manner, yielding a QC-MDPC code. In particular, we first obtain a binary matrix in polynomial form
with . The parity-check matrix is
with and . In this case, and , yielding a Tanner graph (associated with the matrix in (11)) with VNs. By comparing with the block length , one finds that the code defined by (12) can be described as a punctured version of a longer code which has the matrix in (11) as its parity-check matrix, where the first bits in each codeword are punctured. The following proposition establishes a relationship between the weights of the rows of (11) and the weights of the rows of (12).
The Hamming weight of each row of is upper bounded by .
The proof follows from the triangle inequality. ∎
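The bound behind the proposition can be checked numerically: over $\mathbb{F}_2[x]/(x^p - 1)$ one has $\mathrm{wt}(a(x)b(x)) \leq \mathrm{wt}(a)\,\mathrm{wt}(b)$ and $\mathrm{wt}(a(x) + b(x)) \leq \mathrm{wt}(a) + \mathrm{wt}(b)$, since cancellations can only reduce the weight. The sketch below verifies this on random polynomials; $p$ and the trial count are illustrative.

```python
# Quick numerical check of the bound behind Proposition 1: over
# F_2[x]/(x^p - 1), wt(a*b) <= wt(a)*wt(b) and wt(a+b) <= wt(a)+wt(b),
# so each row weight of the reduced parity-check matrix is at most the
# corresponding sum of products of base polynomial weights. Parameters
# are illustrative.
import random

def poly_mul(a, b):
    """Product of coefficient vectors modulo (x^p - 1) over F_2."""
    p = len(a)
    c = [0] * p
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[(i + j) % p] ^= bj
    return c

def wt(a):
    return sum(a)

p = 31
rng = random.Random(0)
for _ in range(200):
    a = [rng.randint(0, 1) for _ in range(p)]
    b = [rng.randint(0, 1) for _ in range(p)]
    assert wt(poly_mul(a, b)) <= wt(a) * wt(b)
    assert wt([x ^ y for x, y in zip(a, b)]) <= wt(a) + wt(b)
```

Equality in the product bound holds exactly when no two of the $\mathrm{wt}(a)\,\mathrm{wt}(b)$ partial terms collide modulo $x^p - 1$, which is the condition exploited when matching the bound to the reference row weight.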
In the remainder of the paper, the coefficients of the base matrix are chosen such that the upper bound given by Proposition 1 matches the weight of the rows of the parity-check matrix of the reference ensemble.
The size of the key space for different ensembles is given in Table I. We provide next a DE analysis of the protograph-based ensembles. (The codes that will be adopted in practice are QC, and hence represent a sub-ensemble of the (larger) protograph-based MDPC ensemble. Nevertheless, we consider the results of the analysis to be accurate in predicting the error correction capability of the corresponding protograph-based QC-MDPC ensembles.)
III-1 Sum-Product Algorithm
When the SPA is used, we resort to quantized DE; we refer to [5] for the details. The extension to protograph ensembles is straightforward and follows in the footsteps of [9, 10]. Simplified approaches based on the Gaussian approximation [11] are discarded due to the large CN degrees used by the MDPC code ensembles.
III-2 Algorithm E
In the following, we provide an extension to protograph ensembles of the DE analysis for Algorithm E, which was originally introduced in [5] for unstructured ensembles. Rather than stating the complete DE, we sketch the analysis by showing how the message probabilities are updated at VNs and CNs. Let us consider transmission over a binary symmetric channel (BSC) with error probability $\epsilon$. We make use of the conventional assumption of all-zero codeword transmission; it follows that all the message probabilities derived next are conditioned on the transmission of a zero value. Due to the mapping at the decoder input, the messages exchanged by VNs and CNs take values in $\{-1, 0, +1\}$, whereas the messages associated with the channel observations take values in $\{-\alpha, 0, +\alpha\}$. In either case, $0$ is the value associated with an erasure (which replaces the channel observation in the case of a state VN). We consider first a degree-$d$ CN of a given type. We assume the CN to be connected to $d$ VNs, each of a different type (having some VNs of the same type being a particular case). For ease of notation, we assume the VNs to be $V_1, \ldots, V_d$. At a given iteration, we denote the probability that the message from $V_j$ to the CN takes a given value, and similarly the probability that the message from the CN to $V_j$ takes a given value after the CN elaboration. We have that
It follows that (14), (15) and (16) fully describe the evolution of the message probabilities at the CNs. Let us now consider a degree-$d$ VN of a given type. We assume the VN to be connected to $d$ CNs, each of a different type (having some CNs of the same type being a particular case). Again, for ease of notation, we assume the CNs to be $C_1, \ldots, C_d$. We shall introduce next the probability vectors
Moreover, we introduce the channel message probability vector: if the VN is a state VN, the entry associated with the erasure value is $1$ and all other entries are $0$; otherwise, the entries associated with $+\alpha$ and $-\alpha$ are $1 - \epsilon$ and $\epsilon$, respectively, and all other entries are $0$. We introduce the intermediate probability vector obtained as the convolution of the probability vectors associated to the messages at the input of the VN, with the exception of the one received from the CN for which the output message is being computed. We have that
Observe that (18) fully describes the evolution of the message probabilities at the VNs. By iterating it with (14), (15) and (16) for all VN/CN types specified by the protograph, one can track the evolution of the message probabilities. Denote by the final a posteriori probability (APP) estimate probability vector at a VN (obtained after a given number of iterations). We have that
III-3 Asymptotic Error Correction Capability Estimates
To estimate the error correction capability of the ensembles under consideration, we computed the iterative decoding threshold over a BSC with error probability . The thresholds are computed under SPA and Algorithm E decoding. Table I provides the decoding thresholds for some ensembles with base matrix in the form (10), together with the reference ensemble from [2]. In particular, we provide the products and as rough estimates of the error correction capability of codes drawn from ensembles with finite block length , for the study case of . These values represent a first estimate of the maximum error pattern weights for which decoding succeeds with high probability. The ensemble that exhibits the largest iterative decoding threshold under both SPA and Algorithm E decoding is ensemble . The gain in the weight of error patterns (decodable with high probability) with respect to the reference ensemble is here in the order of under the SPA, while under Algorithm E the gain reduces to .
III-A Error Correction Capability at Finite Length
Monte Carlo simulations have been performed to measure the actual error correction capability of the codes with base matrices in the form (10). In particular, for a given ensemble and a given block length , QC-MDPC codes have been obtained by expanding the base matrix with circulant matrices of suitable weight, with the circulants picked uniformly at random. The results, in terms of block error rate vs. weight of the error pattern, for the reference ensemble and for the proposed ensemble are depicted in Figure 1. The performance measured with Algorithm E is in good accordance with the gain predicted by the iterative decoding threshold analysis. At a block error rate of , the code from the proposed ensemble allows operating with around more errors than the code drawn from the reference ensemble.
Under SPA decoding, the codes drawn from both ensembles showed performance curves with signs of a slope change at moderate-to-high error floors, preventing the achievement of low block error rates with reasonably high error pattern weights. We conjecture that the reason for this behavior might be found in the numerous trapping sets affecting the dense graphs (recall that the code construction does not leverage any girth optimization technique, due to the need to generate the private key uniformly at random). To mitigate this effect, we made use of the attenuation parameter in (5) to reduce the magnitude of the extrinsic estimates provided at the output of the CNs (soft-information scaling was used in a similar manner in [12] to improve the performance of block turbo codes in the error floor region). This heuristic approach turns out to be effective in improving the error correction capability at low block error rates. The choice of the scaling factor has been carried out by searching via simulations for the largest value of $\alpha$ for which no sign of an error floor appears at a block error rate greater than . Surprisingly, the use of a scaling parameter with codes from the reference ensemble allows attaining a remarkable gain at low error rates without sacrificing the waterfall region performance (the latter result in accordance with the DE analysis). For the code designed from the proposed ensemble, the introduction of the scaling coefficient entails a visible loss in the waterfall region (again, in accordance with the DE analysis). We expect that a further optimization of the algorithm (e.g., by allowing variable scaling coefficients across iterations/edge types) may reduce the loss in the waterfall region. Even accounting for the loss introduced by the scaling coefficient, codes drawn from the proposed ensemble show gains in error correction capability compared to the reference ensemble.
In this section we estimate the security level of the proposed protograph-based MDPC McEliece cryptosystem. For the analysis, we use the pessimistic assumption that the system is broken as soon as the QC-MDPC parity-check matrix in (12) is reconstructed; that is, we assume that the effort of obtaining the extended matrix in (11) from the matrix in (12) is below the security level of the cryptosystem.
We denote by the cost of decoding an error pattern of Hamming weight with a linear code (ISD is assumed here). The cost of distinguishing a key, i.e., of recovering one weight- row of the (sparse) parity-check matrix, is denoted by . For the QC case, the cost of recovering the whole secret key equals . We compute the work factors of the key distinguishing attack, the key recovery attack, and the decoding attack for the QC-MDPC McEliece cryptosystem according to [2, Tab. 1]. The work factor estimates include the possible gains obtained by using the decoding-one-out-of-many approach [13]. We use the non-asymptotic results from [14, Sec. 3.3] to estimate the work factor for the May-Meurer-Thomae variant of ISD [15]. Consider the QC-MDPC code ensemble for bit security from [2]. For decoding, we consider Algorithm E. The work factor for the key distinguishing attack is . Figure 1 shows that for we have a block error rate of , whereas for we have . It follows that the work factor of the decoding attack is, according to [2, Tab. 1], for the reference ensemble, and for the scheme based on the proposed ensemble.
Protograph-based moderate-density parity-check (MDPC) code ensembles are introduced and analyzed in the context of a McEliece-like cryptosystem. The proposed ensembles significantly improve the error correction capability of the MDPC code ensembles that are currently considered for post-quantum cryptosystems, without increasing the public key size. The enhanced error correction capability remarkably improves the robustness with respect to decoding attacks.
- [1] S. Ouzan and Y. Be’ery, “Moderate-density parity-check codes,” arXiv preprint arXiv:0911.3262, 2009.
- [2] R. Misoczki, J. P. Tillich, N. Sendrier, and P. S. L. M. Barreto, “MDPC-McEliece: New McEliece variants from moderate density parity-check codes,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Istanbul, Turkey, Jul. 2013, pp. 2069–2073.
- [3] M. Baldi, QC-LDPC Code-Based Cryptography. Springer Science & Business Media, 2014.
- [4] J. Thorpe, “Low-density parity-check (LDPC) codes constructed from protographs,” NASA JPL, Pasadena, CA, USA, IPN Progress Report 42-154, Aug. 2003.
- [5] T. Richardson and R. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.
- [6] J. Stern, “A method for finding codewords of small weight,” in Proc. International Colloquium on Coding Theory and Applications, Toulon, France, Nov. 1988, pp. 106–113.
- [7] D. Divsalar, S. Dolinar, C. Jones, and K. Andrews, “Capacity-approaching protograph codes,” IEEE J. Sel. Areas Commun., vol. 27, no. 6, pp. 876–888, Aug. 2009.
- [8] S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, “On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit,” IEEE Commun. Lett., vol. 5, no. 2, pp. 58–60, Feb. 2001.
- [9] G. Liva and M. Chiani, “Protograph LDPC code design based on EXIT analysis,” in Proc. IEEE Global Telecommun. Conf. (GLOBECOM), Washington, DC, USA, Dec. 2007, pp. 3250–3254.
- [10] P. Pulini, G. Liva, and M. Chiani, “Unequal diversity LDPC codes for relay channels,” IEEE Trans. Wireless Commun., vol. 12, no. 11, pp. 5646–5655, Nov. 2013.
- [11] S.-Y. Chung, T. J. Richardson, and R. L. Urbanke, “Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 657–670, Feb. 2001.
- [12] R. M. Pyndiah, “Near-optimum decoding of product codes: Block turbo codes,” IEEE Trans. Commun., vol. 46, no. 8, pp. 1003–1010, Aug. 1998.
- [13] N. Sendrier, “Decoding one out of many,” in Post-Quantum Cryptography, 2011, pp. 51–67.
- [14] Y. Hamdaoui and N. Sendrier, “A non asymptotic analysis of information set decoding,” IACR Cryptology ePrint Archive, Report 2013/162, 2013.
- [15] A. May, A. Meurer, and E. Thomae, “Decoding random linear codes in $\tilde{\mathcal{O}}(2^{0.054n})$,” in Proc. International Conference on the Theory and Application of Cryptology and Information Security (ASIACRYPT), Seoul, South Korea, Dec. 2011, pp. 107–124.