In 1978, Rivest, Shamir, and Adleman (RSA) proposed a public-key cryptosystem whose security is based on the hardness of factoring large integers. Since then, the RSA cryptosystem has been used in most state-of-the-art communication systems and is included in many communication standards. In 1997, Shor presented a factorization algorithm for quantum computers that is able to factor large integers in polynomial time. Thus, assuming that a quantum computer of sufficient scale can be built one day, the RSA cryptosystem can be broken in polynomial time, rendering most of today's communication systems insecure. This result motivates the development of cryptosystems that are post-quantum secure.
In the same year as RSA, McEliece proposed a cryptosystem based on error-correcting codes. The security of the scheme relies on the hardness of decoding an unknown linear code and is thus resilient against efficient factorization attacks by quantum algorithms such as Shor's algorithm. Drawbacks of the scheme are the large key size and the rate loss compared to the RSA cryptosystem. Variants of the McEliece cryptosystem based on different code families have been considered in the past (e.g., rank-metric codes, random codes). In particular, McEliece cryptosystems based on low-density parity-check (LDPC) codes allow for very small keys but suffer from feasible attacks on the low-weight dual code due to the sparse parity-check matrix. Variants based on quasi-cyclic (QC) LDPC codes that use row and column scrambling matrices to increase the density of the public code's parity-check matrix allow for structural attacks. The family of moderate-density parity-check (MDPC) codes admits a parity-check matrix of moderate density, yielding codes with large minimum distance. (The existence of a moderate-density parity-check matrix for a binary linear block code does not rule out the possibility that the same code admits a (much) sparser parity-check matrix. As in most of the literature, we neglect the probability that a code defined by a randomly drawn moderate-density parity-check matrix admits a sparser one; guarantees in this sense can be derived from random code ensemble arguments.) In , a McEliece cryptosystem based on QC-MDPC codes is presented that defeats information set decoding attacks on the dual code due to the moderate density of the parity-check matrix. For a given security level, the QC-MDPC cryptosystem allows for very small key sizes compared to other McEliece variants.
Recently, Guo, Johansson, and Stankovski (GJS) presented a reaction-based key-recovery attack on the QC-MDPC system. The attack reveals the parity-check matrix by observing the decoding failure probability for chosen ciphertexts that are constructed with error patterns of a specific structure. A modified version of the attack can break a system that uses CCA-2 secure conversions.
In this paper we analyze different decoding algorithms for (QC-)MDPC codes with respect to their error-correction capability and their resilience against the GJS attack. In particular, we present novel hard-decision message-passing (MP) algorithms that are resilient against the GJS key-recovery attack and have an improved error-correction capability compared to existing hard-decision decoding schemes. We derive the density evolution (DE) analysis for the novel decoding schemes, which makes it possible to predict decoding thresholds as well as to optimize the parameters of the algorithms.
The paper is structured as follows. Section 2 gives basic definitions, describes classical decoding schemes for LDPC/MDPC codes, and analyzes their resilience against the GJS attack by simulations. In Section 3 we propose new MP decoding schemes that are able to defeat the GJS attack. To estimate the decoding threshold we perform a density evolution analysis of the novel schemes. Finally, Section 4 concludes the paper.
Denote the binary field by $\mathbb{F}_2$ and let the set of $m \times n$ matrices over $\mathbb{F}_2$ be denoted by $\mathbb{F}_2^{m \times n}$. The set of all vectors of length $n$ over $\mathbb{F}_2$ is denoted by $\mathbb{F}_2^{n}$. Vectors and matrices are denoted by bold lower-case and upper-case letters such as $\mathbf{a}$ and $\mathbf{A}$, respectively. A binary circulant matrix $\mathbf{A}$ of size $p \times p$ is a matrix with coefficients in $\mathbb{F}_2$ obtained by cyclically shifting its first row to the right, yielding
$$\mathbf{A} = \begin{pmatrix} a_0 & a_1 & \cdots & a_{p-1} \\ a_{p-1} & a_0 & \cdots & a_{p-2} \\ \vdots & & \ddots & \vdots \\ a_1 & a_2 & \cdots & a_0 \end{pmatrix}.$$
The set of $p \times p$ circulant matrices together with matrix multiplication and addition forms a commutative ring that is isomorphic to the polynomial ring $\mathbb{F}_2[x]/(x^p-1)$. In particular, there is a bijective mapping between a circulant matrix $\mathbf{A}$ and a polynomial $a(x) = a_0 + a_1 x + \dots + a_{p-1}x^{p-1}$. We indicate the vector of coefficients of a polynomial $a(x)$ as $\mathbf{a} = (a_0, a_1, \dots, a_{p-1})$. The weight of a polynomial is the number of its non-zero coefficients, i.e., it is the Hamming weight of its coefficient vector $\mathbf{a}$. We indicate both weights with the operator $\operatorname{wt}(\cdot)$, i.e., $\operatorname{wt}(a(x)) = \operatorname{wt}(\mathbf{a})$. In the remainder of this paper we use the polynomial representation of circulant matrices to provide an efficient description of the structure of the codes.
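The isomorphism between circulant matrices and polynomials modulo $x^p - 1$ can be illustrated with a small sketch (plain Python lists as coefficient vectors; all names are ours, not the paper's): the matrix product of two circulants equals the circulant of the product polynomial.

```python
# Sketch of the circulant <-> polynomial isomorphism over F_2[x]/(x^p - 1).

def circulant(first_row):
    """Build a binary circulant matrix from its first row (right cyclic shifts)."""
    p = len(first_row)
    return [first_row[-i:] + first_row[:-i] for i in range(p)]

def poly_mul_mod(a, b):
    """Multiply two coefficient vectors modulo x^p - 1 over F_2."""
    p = len(a)
    c = [0] * p
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    c[(i + j) % p] ^= 1  # exponents wrap around mod p
    return c

def mat_mul_gf2(A, B):
    """Matrix product over F_2."""
    p = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(p)) % 2 for j in range(p)]
            for i in range(p)]

a = [1, 0, 1, 0, 0]            # a(x) = 1 + x^2, p = 5
b = [0, 1, 0, 0, 1]            # b(x) = x + x^4
A, B = circulant(a), circulant(b)

# The ring isomorphism: circ(a) * circ(b) = circ(a(x) b(x) mod x^p - 1).
assert mat_mul_gf2(A, B) == circulant(poly_mul_mod(a, b))
```

This is why storing only the first row of each circulant block (equivalently, the coefficient vector of the polynomial) suffices to describe the whole matrix.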
2.1 QC MDPC-based Cryptosystems
A new variant of the McEliece public-key cryptosystem that is based on QC-MDPC codes was proposed in . The QC-MDPC McEliece cryptosystem allows for a very simple description without the need for row and column scrambling matrices. Due to the moderate density of the parity-check matrix, known decoding attacks on the dual code are defeated. The parity-check matrix consists of blocks of circulant matrices, which allows for very small key sizes due to the compact description of the circulant blocks.
A binary MDPC code of length $n$, dimension $k$, and row weight $w$ is defined by a binary parity-check matrix $\mathbf{H}$ that contains a moderate number of ones per row. For length $n = n_0 p$, dimension $k = k_0 p$, and redundancy $r = (n_0 - k_0)p$, for some integers $n_0 > k_0$, the parity-check matrix of a QC-MDPC code in polynomial form is an $(n_0 - k_0) \times n_0$ matrix. (As in most of the recent literature on codes constructed from arrays of circulants, we loosely define a code to be QC if there exists a permutation of its coordinates such that the resulting (equivalent) code has the following property: if $\mathbf{c}$ is a codeword, then any cyclic shift of $\mathbf{c}$ by $n_0$ positions is a codeword. For example, a code admitting a parity-check matrix given as an array of circulants does not fulfill the property above directly. However, the code is QC in the loose sense, since it is possible to permute its coordinates to obtain a code for which every cyclic shift of a codeword by $n_0$ positions yields another codeword.)
Without loss of generality, we consider in the following codes with $r = p$ (i.e., $k_0 = n_0 - 1$). This family of codes covers a wide range of code rates and is of particular interest for cryptographic applications, since the parity-check matrices can be characterized in a very compact way. The parity-check matrix of QC-MDPC codes with $r = p$ has the form
$$\mathbf{H} = \begin{pmatrix} h_0(x) & h_1(x) & \cdots & h_{n_0-1}(x) \end{pmatrix}. \quad (2)$$
Let $\mathcal{D}$ be an efficient decoder for the code defined by the parity-check matrix $\mathbf{H}$.
Randomly generate a parity-check matrix of the form (2) with $\operatorname{wt}(h_i(x)) = w_i$ for $i = 0, \dots, n_0 - 1$. The matrix $\mathbf{H}$ with row weight $w = \sum_i w_i$ is the private key.
The public key is the corresponding binary generator matrix $\mathbf{G}$ in systematic form, i.e., $\mathbf{G} = \left(\, \mathbf{I} \;\; \mathbf{Q} \,\right)$, where the block $\mathbf{Q}$ is computed from the circulant blocks of $\mathbf{H}$ (for $n_0 = 2$, $\mathbf{Q}$ is the circulant matrix corresponding to $\left(h_1(x)^{-1} h_0(x)\right)^{\mathsf{T}}$, assuming $h_1(x)$ is invertible). Due to the circulant structure, the generator matrix can be described by $(n_0 - 1)p$ bits (public key size).
To encrypt a plaintext $\mathbf{u} \in \mathbb{F}_2^{k}$, a user computes the ciphertext $\mathbf{x} \in \mathbb{F}_2^{n}$ using the public key as
$$\mathbf{x} = \mathbf{u}\mathbf{G} + \mathbf{e},$$
where $\mathbf{e}$ is an error vector uniformly chosen from all vectors from $\mathbb{F}_2^{n}$ of Hamming weight $t$.
To decrypt a ciphertext, the authorized recipient uses the private key $\mathbf{H}$ to obtain
$$\hat{\mathbf{c}} = \mathcal{D}(\mathbf{x}).$$
Since $\mathbf{G}$ is in systematic form, the plaintext $\mathbf{u}$ corresponds to the first $k$ bits of $\hat{\mathbf{c}}$.
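As a toy illustration of the encryption step and the systematic-form property (names and the tiny generator matrix are ours, chosen only for illustration; a real system would use a QC-MDPC generator of cryptographic size):

```python
# Sketch of x = uG + e with a systematic toy generator G = [I | P] over F_2.
import random

def encode(u, G):
    """Compute uG over F_2, with G given as a list of rows."""
    n = len(G[0])
    x = [0] * n
    for ui, row in zip(u, G):
        if ui:
            x = [xj ^ rj for xj, rj in zip(x, row)]
    return x

def encrypt(u, G, t, rng=random):
    """Ciphertext x = uG + e, with e a uniformly random weight-t error."""
    x = encode(u, G)
    for i in rng.sample(range(len(x)), t):  # t distinct positions -> weight t
        x[i] ^= 1
    return x

# Systematic toy generator: with a capable decoder, decryption recovers uG
# and reads the plaintext off the first k (systematic) positions.
k, t = 3, 1
P = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
G = [[int(i == j) for j in range(k)] + P[i] for i in range(k)]
u = [1, 0, 1]
x = encrypt(u, G, t)
assert encode(u, G)[:k] == u                              # systematic form
assert sum(a ^ b for a, b in zip(x, encode(u, G))) == t   # exactly t flips
```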
2.2 A Reaction-Based Attack on the QC-MDPC McEliece Cryptosystem
Besides the conventional key-recovery and decoding attacks based on information set decoding, GJS proposed a reaction-based key-recovery attack on the QC-MDPC McEliece cryptosystem, which is currently the most critical attack against the scheme. Efficient iterative decoding of LDPC/MDPC codes comes at the cost of decoding failures. For example, the MDPC codes proposed in  are operated with a low target decoding failure probability. (The MDPC code parameters chosen in  were empirically shown to attain the target. An interesting question is whether a randomly generated parity-check matrix would yield the target decoding failure probability for the given set of code parameters. A possible direction to address this question is to analyze the concentration properties of the MDPC code ensemble in the finite block length regime under the given decoding algorithm.)
The GJS attack exploits the observation that the decoding failure probability for some particularly chosen error patterns is correlated with the structure of the secret key, i.e., of the parity-check matrix $\mathbf{H}$. We now briefly describe how the attack proceeds.
The Lee distance between two entries at positions $i$ and $j$ of a binary vector of length $p$ is defined as
$$d_L(i,j) = \min\left(|i-j|,\; p - |i-j|\right).$$
The Lee distance profile of a binary vector $\mathbf{c}$ of length $p$ is defined as
$$\mathcal{D}(\mathbf{c}) = \left\{ d : d_L(i,j) = d,\; c_i = c_j = 1 \right\},$$
where the maximum distance in $\mathcal{D}(\mathbf{c})$ is $\lfloor p/2 \rfloor$. (We use the term "Lee distance profile" instead of "distance spectrum" as in  to avoid confusion with the distance spectrum, i.e., the weight enumerator, of linear block codes in the Hamming metric.) The multiplicity $\mu(d)$ is defined as the number of occurrences of the distance $d$ in the vector $\mathbf{c}$. A binary vector is fully specified by its distance profile and thus can be reconstructed with high probability using the methods from  (up to cyclic shifts).
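The distance profile and its multiplicities can be computed with a few lines (a minimal sketch with our naming, not the paper's pseudocode): for every pair of ones, record the cyclic (Lee) distance and count how often each distance occurs.

```python
# Lee distance profile of a binary vector: multiset of cyclic distances
# between all pairs of ones.
from itertools import combinations
from collections import Counter

def lee_distance(i, j, p):
    """Cyclic (Lee) distance between positions i and j modulo p."""
    return min(abs(i - j), p - abs(i - j))

def distance_profile(c):
    """Map each occurring Lee distance d to its multiplicity mu(d)."""
    p = len(c)
    ones = [i for i, ci in enumerate(c) if ci]
    return Counter(lee_distance(i, j, p) for i, j in combinations(ones, 2))

c = [1, 0, 1, 0, 0, 0, 1]          # ones at positions 0, 2, 6; p = 7
profile = distance_profile(c)
# d(0,2) = 2, d(0,6) = 1 (wrap-around), d(2,6) = 3
assert profile == {1: 1, 2: 1, 3: 1}
```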
Let $\mathcal{E}_d$ be a set containing binary vectors of length $n$ with exactly $t$ ones that are placed as pairs with Lee distance $d$ in the first $p$ positions of the vector. By limiting the errors to the first $p$ positions, only the first circulant block $h_0(x)$ of the matrix $\mathbf{H}$ will determine the result of the decoding procedure. The GJS attack proceeds as follows:
For $d = 1, \dots, \lfloor p/2 \rfloor$, generate error sets $\mathcal{E}_d$ of size $M$ each (with $M$ being a parameter defining, together with $\lfloor p/2 \rfloor$, the number of attempts used by the attacker).
Since the decoding failure probability is lower for error patterns from $\mathcal{E}_d$ with $d \in \mathcal{D}(h_0)$, i.e., if $\mu(d) > 0$, for sufficiently large $M$ the measured FER can be used to determine the distance profile $\mathcal{D}(h_0)$. The vector $h_0$ can then be reconstructed from the distance profile using the methods from .
The remaining blocks of $\mathbf{H}$ in (2) can then be reconstructed via the generator matrix using linear-algebraic relations. The success of the attack depends on how the system deals with decoding failures, since the FER can only be measured if retransmissions are requested. Another important factor is the decoding scheme that is used. In [9, 14] it is shown that the GJS attack succeeds if bit-flipping (BF) or belief propagation (BP) decoding algorithms are used.
In key exchange protocols, the attack can be defeated by using ephemeral keys (i.e., a new key pair for every key exchange). However, this protocol-based fix can only be applied in very specific scenarios.
2.3 Classical Decoding Algorithms
In the following, we describe classical decoding algorithms for LDPC codes and analyze their error-correction capability for MDPC codes as well as their resilience against the GJS attack. For decoding, we map each ciphertext bit $x_i$ to $+1$ if $x_i = 0$ and to $-1$ if $x_i = 1$, yielding (with some abuse of notation) a ciphertext $\mathbf{x} \in \{+1, -1\}^n$. We consider next iterative MP decoding on the Tanner graph of the code. A Tanner graph is a bipartite graph consisting of variable nodes (VNs) and check nodes (CNs). A VN is connected to a CN if the corresponding entry in the parity-check matrix is equal to $1$. We consider next only regular Tanner graphs, i.e., graphs for which the number of edges emanating from each VN equals $d_v$ and the number of edges emanating from each CN equals $d_c$. We refer to $d_v$ and $d_c$ as the variable and check node degree, respectively. The neighborhood of a variable node $v$ is denoted by $\mathcal{N}(v)$, and similarly $\mathcal{N}(c)$ denotes the neighborhood of the check node $c$. We denote the messages from VN $v$ to CN $c$ by $m_{vc}$ and the messages from $c$ to $v$ by $m_{cv}$. In the following, we omit the indices of VNs and CNs whenever they are clear from the context.
For decryption in the QC-MDPC cryptosystem, an efficient BF algorithm for LDPC codes (see, e.g., [17, Alg. 5.4]) is considered. This algorithm is often referred to as "Gallager's bit-flipping" algorithm, although it differs from the algorithm proposed by Gallager in .
Given a ciphertext $\mathbf{x}$, a threshold $b$, and a maximum number of iterations $\ell_{\max}$, the BF algorithm proceeds as follows. Each VN $v$ is initialized with the corresponding ciphertext bit $x_v$ and sends the message $m_{vc} = x_v$ to all neighboring CNs $c \in \mathcal{N}(v)$. The CNs send the messages
$$m_{cv} = \prod_{v' \in \mathcal{N}(c)} m_{v'c} \quad (8)$$
to all neighboring VNs $v \in \mathcal{N}(c)$. Note that (8) is equivalent to the modulo-two sum of all incoming messages considered over $\mathbb{F}_2$. Each variable node counts the number of unsatisfied check equations (i.e., the number of messages $m_{cv} = -1$) and sends to its neighbors the "flipped" ciphertext bit if at least $b$ parity-check equations are unsatisfied, i.e.,
$$m_{vc} = \begin{cases} -x_v & \text{if } \left|\{ c \in \mathcal{N}(v) : m_{cv} = -1 \}\right| \geq b, \\ x_v & \text{otherwise}. \end{cases} \quad (9)$$
The algorithm terminates if either all checks are satisfied or the maximum number of iterations is reached.
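A hedged sketch of this BF decoder (our naming and a toy code, not the paper's exact pseudocode): ciphertext bits are mapped to $\{+1, -1\}$, the parity-check matrix is a 0/1 list of rows, and a bit is flipped whenever at least $b$ of its checks are unsatisfied.

```python
# Minimal bit-flipping decoder sketch over the {+1, -1} representation.
def check_products(x, H):
    """Per-check products; a check is satisfied iff its product is +1."""
    out = []
    for row in H:
        prod = 1
        for v, h in enumerate(row):
            if h:
                prod *= x[v]
        out.append(prod)
    return out

def bf_decode(x, H, b, max_iter=20):
    x = list(x)
    for _ in range(max_iter):
        synd = check_products(x, H)
        if all(s == 1 for s in synd):
            return x, True
        # flip every bit that participates in at least b unsatisfied checks
        for v in range(len(x)):
            unsat = sum(1 for c, row in enumerate(H)
                        if row[v] and synd[c] == -1)
            if unsat >= b:
                x[v] = -x[v]
    return x, all(s == 1 for s in check_products(x, H))

# Toy example: (7,4) Hamming code, all-zero codeword (+1 everywhere),
# one error at the degree-3 position, threshold b = 3.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
x = [1] * 7
x[6] = -1
decoded, ok = bf_decode(x, H, b=3)
assert ok and decoded == [1] * 7
```

In a real MDPC setting the matrix is large and moderately dense, and the threshold choice (discussed next) is critical for the error-correction capability.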
The error-correction capability of the BF algorithm depends on the choice of the threshold $b$. In , the threshold is selected as the maximum number of unsatisfied parity-check equations over all VNs at each iteration, which is denoted by $b_{\max}$. Note that with $b = b_{\max}$, the BF algorithm is no longer purely an MP algorithm on the Tanner graph of the code, since $b_{\max}$ has to be obtained by a global entity.
In  it is suggested to compute $b$ according to [18, p. 46, Eq. 4.16], which leads to suboptimal results since the BF decoder differs from the decoder analyzed in [18, Sec. 4]. To reduce the average number of iterations, the threshold in  is chosen as $b = b_{\max} - \delta$, where $\delta$ is a small integer that is determined empirically (see [8, Sec. 4]).
2.3.2 Gallager B
An efficient binary MP decoder for LDPC codes, often referred to as Gallager B, was presented and analyzed in . Each VN is initialized with the corresponding ciphertext bit $x_v$. The VNs send the messages
$$m_{vc} = \begin{cases} -x_v & \text{if } \left|\{ c' \in \mathcal{N}(v) \setminus c : m_{c'v} = -x_v \}\right| \geq b, \\ x_v & \text{otherwise}, \end{cases} \quad (10)$$
while the CNs send the messages
$$m_{cv} = \prod_{v' \in \mathcal{N}(c) \setminus v} m_{v'c}. \quad (11)$$
After at most $\ell_{\max}$ iterations, the final decision at each VN is made by a majority vote among the channel value and all incoming CN messages (12).
Comparing the CN operations (8) and (11), and the VN operations (9) and (10), one can see the aforementioned difference between the BF algorithm and Gallager B. For fixed $b$, the average error-correction capability over the binary symmetric channel (BSC) for the ensemble of regular LDPC codes can be analyzed, in the limit of large block lengths, using DE analysis [18, 12]. Following this approach, the optimal value (in the large block length limit) of the parameter $b$ can be determined by [18, Eq. 4.16].
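The Gallager B DE recursion over the BSC is standard and can be sketched compactly (our notation, not taken verbatim from the paper): track the probability that a VN-to-CN message is in error at each iteration for a $(d_v, d_c)$-regular ensemble.

```python
# Density evolution sketch for Gallager B over a BSC with crossover delta.
from math import comb

def gallager_b_de(delta, dv, dc, b, iters=200):
    """Return the message error probability after `iters` DE iterations."""
    p = delta
    for _ in range(iters):
        # probability that an extrinsic CN-to-VN message is in error
        s = (1 - (1 - 2 * p) ** (dc - 1)) / 2
        # flip toward the correct value: >= b of dv-1 extrinsic messages correct
        flip_good = sum(comb(dv - 1, j) * (1 - s) ** j * s ** (dv - 1 - j)
                        for j in range(b, dv))
        # flip toward the wrong value: >= b of dv-1 extrinsic messages wrong
        flip_bad = sum(comb(dv - 1, j) * s ** j * (1 - s) ** (dv - 1 - j)
                       for j in range(b, dv))
        p = delta * (1 - flip_good) + (1 - delta) * flip_bad
    return p

# (3,6)-regular ensemble with b = 2 (here equivalent to Gallager A):
# channel error rates below the threshold (~0.039) drive p to zero.
assert gallager_b_de(0.02, 3, 6, 2) < 1e-9
assert gallager_b_de(0.06, 3, 6, 2) > 1e-3
```

For MDPC ensembles the node degrees are much larger, but the recursion has the same shape, which is what makes optimizing $b$ via DE practical.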
2.3.3 Miladinovic-Fossorier (MF) Algorithm
Two probabilistic variants of Gallager's algorithm B that improve upon the original version were proposed by Miladinovic and Fossorier in [20, Sec. III.A]. We refer to the two algorithms as Miladinovic-Fossorier (MF) algorithms. At each iteration $\ell$, the VN-to-CN messages (10) of Gallager B are modified with a certain probability $p_\ell$. By defining an initial value $p_0$ and a decrement $\Delta p$, one can compute $p_\ell$ as
$$p_\ell = \max\left(p_0 - \ell \, \Delta p,\; 0\right). \quad (13)$$
The VNs are initialized with the corresponding ciphertext bit $x_v$.
Variant 2 (MF-2): With respect to MF-1, we now introduce the iteration counter $\ell$ for the messages output by VNs and CNs. At iteration $\ell$, if the number of messages at the input of a VN sent by its neighboring CNs that disagree with the ciphertext bit exceeds the threshold $b$, the VN sends the flipped message with probability $1 - p_\ell$ and the ciphertext bit otherwise.
The check node operation as well as the final decision remain the same as in Gallager B (see (11) and (12)). In general, the second variant improves upon the first variant in terms of the number of correctable errors. By definition, the probability $p_\ell$ has two degrees of freedom, namely $p_0$ and $\Delta p$, which are subject to optimization. In general, there is no closed-form optimization of these two parameters, except for using the DE analysis from  as a guideline.
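Our reading of the MF idea can be sketched as follows (function names and the exact suppression rule are our assumptions, not the paper's pseudocode): a flip that Gallager B would perform is carried out only with probability $1 - p_\ell$, and $p_\ell$ shrinks with the iteration number.

```python
# Sketch of an MF-style probabilistic VN rule on top of Gallager B.
import random

def mf_probability(p0, dec, l):
    """Iteration-dependent modification probability: p_l = max(p0 - l*dec, 0)."""
    return max(p0 - l * dec, 0.0)

def mf_vn_message(channel_bit, incoming, b, p_l, rng=random):
    """Gallager-B flip, suppressed with probability p_l."""
    disagree = sum(1 for m in incoming if m == -channel_bit)
    if disagree >= b and rng.random() >= p_l:  # flip happens w.p. 1 - p_l
        return -channel_bit
    return channel_bit

assert mf_probability(0.3, 0.05, 0) == 0.3
assert mf_probability(0.3, 0.05, 10) == 0.0
assert mf_vn_message(1, [-1, -1], 2, 0.0) == -1   # p_l = 0: plain Gallager B
assert mf_vn_message(1, [-1, -1], 2, 1.0) == 1    # p_l = 1: flip suppressed
```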
2.3.4 Algorithm E
A generalization of the Gallager B algorithm that exploits erasures, which we further refer to as Algorithm E, was introduced and analyzed in [12, 21]. To incorporate erasures, the decoder requires a ternary message alphabet $\{-1, 0, +1\}$, where $0$ indicates an erasure. The VNs are initialized with the corresponding ciphertext bit $x_v$ and send the messages
$$m_{vc} = \operatorname{sgn}\left( w \, x_v + \sum_{c' \in \mathcal{N}(v) \setminus c} m_{c'v} \right), \quad (16)$$
where $\operatorname{sgn}(0) = 0$.
The factor $w$ is a heuristic weighting factor that was proposed in  to improve the performance of Algorithm E. In , $w$ was allowed to change over the iterations (to account for the increasing reliability of the CN messages as the iteration number grows). We consider next the simple case where $w$ is kept constant throughout all iterations. The check nodes operate in the same way as in Gallager B, i.e., the CNs send the messages according to (11). After iterating (11) and (16) at most $\ell_{\max}$ times, the final decision is made as
$$\hat{x}_v = \operatorname{sgn}\left( w \, x_v + \sum_{c \in \mathcal{N}(v)} m_{cv} \right). \quad (17)$$
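The ternary message computations of Algorithm E, as we understand them, can be sketched in a few lines (names are ours): the VN takes a weighted vote between the channel value and the extrinsic CN messages, and a check message is erased as soon as any of its inputs is an erasure.

```python
# Algorithm-E-style node operations over the ternary alphabet {-1, 0, +1}.
def sign3(v):
    """Ternary sign: -1, 0 (erasure), or +1."""
    return (v > 0) - (v < 0)

def algE_vn_message(channel_bit, incoming_extrinsic, w):
    """Weighted vote between the channel value and extrinsic CN messages."""
    return sign3(w * channel_bit + sum(incoming_extrinsic))

def algE_cn_message(incoming_extrinsic):
    """Product of extrinsic VN messages; any erasure (0) erases the output."""
    prod = 1
    for m in incoming_extrinsic:
        prod *= m
    return prod

assert algE_vn_message(+1, [-1, -1, -1], w=2) == -1   # 2 - 3 = -1
assert algE_vn_message(+1, [-1, -1], w=2) == 0        # tie -> erasure
assert algE_cn_message([+1, -1, +1]) == -1
assert algE_cn_message([+1, 0, -1]) == 0
```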
2.3.5 Belief Propagation (BP) Decoding
where $x_v$ is the ciphertext bit corresponding to VN $v$. The VNs send the messages
2.4 Simulation Results
We now present simulation results of the GJS attack on variants of the QC-MDPC cryptosystem using the decoding schemes described above. We consider next a QC-MDPC code ensemble $\mathcal{C}$ with given circulant size $p$ and row weight $w$, and parity-check matrices of the form
where $h_0(x)$ and $h_1(x)$ are two polynomials of degree less than $p$. The ensemble was proposed in  for a target bit-security level. To analyze the resilience against the GJS attack, we performed Monte Carlo simulations for codes randomly picked from $\mathcal{C}$, collecting a prescribed number of decoding failures (frame errors) per simulation point with a fixed maximum number of iterations. For each multiplicity in the distance profile, different error sets (simulation points) were simulated. As in , the weight $t$ of the error patterns was chosen such that the FER is high enough to be easily observable in the simulations.
Figure 1 shows the simulation results for one code from $\mathcal{C}$. The results show that, except for the MF decoding algorithm, all considered schemes are vulnerable to the GJS attack. For the MF decoding scheme, the probability $p_\ell$ was chosen such that the FERs for all multiplicities appearing in the distance profile are similar. Hence, the distance profile cannot be reconstructed if the MF decoding scheme with an appropriate choice of $p_\ell$ is used. Since simulations of different codes from $\mathcal{C}$ show very similar results, we conjecture that the choice of $p_\ell$ depends on the ensemble rather than on the individual code.
3 Secret Key Concealment via Modified Iterative Decoding
In this section, we propose new methods to modify MP decoding algorithms that admit erasures. The methods modify MP decoding algorithms in a probabilistic manner to make them resilient against the GJS attack for an appropriate choice of the decoding parameters. The main idea is that, similarly to the MF decoding scheme (see Sec. 2.3.3), we modify the VN-to-CN messages at each iteration with a given probability. In particular, we modify the MP decoder such that messages are erased (i.e., set to $0$) under certain conditions with a given probability $p_e$. Remarkably, we will see that this also results in an improved error-correction capability. In the following, we refer to this approach as random erasure message-passing (REMP) decoding and apply it to modify Algorithm E.
3.1 First Modification of Algorithm E (REMP-1)
We modify Algorithm E such that any nonzero message in iteration $\ell$ is erased with probability $p_e$. At the VNs, we first compute a temporary output message
$$\tilde{m}_{vc} = \operatorname{sgn}\left( w \, x_v + \sum_{c' \in \mathcal{N}(v) \setminus c} m_{c'v} \right).$$
If the message is not an erasure, i.e., if $\tilde{m}_{vc} \neq 0$, the VN sends
$$m_{vc} = \begin{cases} 0 & \text{with probability } p_e, \\ \tilde{m}_{vc} & \text{with probability } 1 - p_e, \end{cases} \quad (24)$$
and $m_{vc} = 0$ otherwise. At the CNs, we perform the same operation as in Algorithm E (see (11)). The final decision, after iterating (11) and (24) at most $\ell_{\max}$ times, is made as in Algorithm E. As for the MF algorithm, the probability $p_e$ may be decreased as $\ell$ grows, following (13).
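The REMP-1 message modification itself is a one-liner; a sketch (our naming, with the random draw made explicit):

```python
# REMP-1 sketch: any nonzero temporary VN message is erased (set to 0)
# with probability p_e before being sent.
import random

def remp1_send(m_tmp, p_e, rng=random):
    if m_tmp != 0 and rng.random() < p_e:
        return 0               # randomly injected erasure
    return m_tmp

# With p_e = 1 every nonzero message is erased; with p_e = 0 REMP-1
# reduces to plain Algorithm E.
assert remp1_send(+1, 1.0) == 0
assert remp1_send(-1, 0.0) == -1
assert remp1_send(0, 1.0) == 0
```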
3.1.1 Density Evolution Analysis
Based on the analysis of Algorithm E in , we derive the DE analysis of our modified algorithm from Sec. 3.1. Let $q_{-1}^{(\ell)}$, $q_{0}^{(\ell)}$, and $q_{+1}^{(\ell)}$ denote the probabilities that a VN-to-CN message sent at iteration $\ell$ is equal to $-1$, $0$, and $+1$, respectively. Similarly, let $s_{-1}^{(\ell)}$, $s_{0}^{(\ell)}$, and $s_{+1}^{(\ell)}$ denote the corresponding probabilities for a CN-to-VN message sent at iteration $\ell$. The encryption step (4) can be considered as the transmission of a codeword over a binary symmetric channel with crossover probability $\epsilon = t/n$. For the analysis, we assume w.l.o.g. that all ciphertext bits are equal to $+1$ (all-zero codeword). Hence, we initialize the probabilities as $q_{-1}^{(0)} = \epsilon$, $q_{0}^{(0)} = 0$, and $q_{+1}^{(0)} = 1 - \epsilon$.
The probability can be expressed as
The probability is given by
Finally, the probability is given by
Note that, since in our scenario there are no erasures in the ciphertext, the initial erasure probability is zero, which simplifies the expressions above.
3.2 Second Modification of Algorithm E (Remp-2)
In the second modification of Algorithm E from Sec. 2.3.4, the messages at iteration $\ell$ are erased (i.e., set to $0$) with probability $p_e$ if they contradict the corresponding ciphertext bit $x_v$. At the VNs, we first compute a temporary output message $\tilde{m}_{vc}$ as in Algorithm E. If the message contradicts the ciphertext bit, i.e., if $\tilde{m}_{vc} = -x_v$, the VN sends
$$m_{vc} = \begin{cases} 0 & \text{with probability } p_e, \\ \tilde{m}_{vc} & \text{with probability } 1 - p_e, \end{cases}$$
and $m_{vc} = \tilde{m}_{vc}$ otherwise.
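The difference to REMP-1 is the erasure condition; a sketch (names ours): only messages that contradict the variable node's ciphertext bit are candidates for erasure.

```python
# REMP-2 sketch: a temporary ternary message is erased with probability p_e
# only when it contradicts the ciphertext bit x_v of its variable node.
import random

def remp2_send(m_tmp, x_v, p_e, rng=random):
    if m_tmp == -x_v and rng.random() < p_e:
        return 0               # erase only contradicting messages
    return m_tmp

assert remp2_send(-1, +1, 1.0) == 0    # contradiction -> erased
assert remp2_send(+1, +1, 1.0) == +1   # agreement is never erased
assert remp2_send(0, +1, 1.0) == 0     # erasures pass through unchanged
```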
3.2.1 Density Evolution
The probability is given by
Finally, the probability can then be expressed as
As before, note that, since in our scenario there are no erasures in the ciphertext, the initial erasure probability is zero, which simplifies the expressions above.
3.3 Masked Belief Propagation (MBP) Decoding
where $x_v$ is the ciphertext bit corresponding to VN $v$. The VNs first compute the temporary messages
If the sign of $\tilde{m}_{vc}$ is not equal to the sign of $x_v$, i.e., if $\operatorname{sgn}(\tilde{m}_{vc}) \neq \operatorname{sgn}(x_v)$, then the VN sends the message
and $m_{vc} = \tilde{m}_{vc}$ otherwise. In other words, if the sign of a message that is supposed to be sent by a VN differs from the sign of the corresponding initial value $x_v$, then with a given probability the initial value is sent instead. The CN operation remains the same as in (20). After iterating (33) and (20) at most $\ell_{\max}$ times, the final decision at each VN is made according to (21). For masked belief propagation (MBP) decoding, we do not provide an explicit description of how DE has to be modified, since the analysis can be carried out by applying minor changes to quantized DE.
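The masking step just described can be sketched with LLR-valued messages (the symbol name for the masking probability is ours): when a temporary message disagrees in sign with the channel value, the channel value itself is sent with a given probability.

```python
# MBP sketch: mask sign-contradicting BP messages with the channel value.
import random

def mbp_send(m_tmp, x_v, p_m, rng=random):
    if m_tmp * x_v < 0 and rng.random() < p_m:
        return x_v             # send the initial (channel) value instead
    return m_tmp

assert mbp_send(-2.5, +1.0, 1.0) == +1.0   # contradiction -> masked
assert mbp_send(+0.7, +1.0, 1.0) == +0.7   # agreement -> unchanged
```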
We shall see next that, due to the modified operation at the VNs, the MBP algorithm makes it possible to conceal the structure of the secret key by tuning the masking probability. We empirically verified that the idea of introducing random erasures as in Sec. 3.1 and Sec. 3.2 does not conceal the structure of the secret key under BP decoding. Moreover, we will see that, differently from the REMP modifications of Algorithm E, the modification of BP decoding comes at the cost of reduced error-correction performance. Thus, the decoding algorithms from Sec. 3.1 and Sec. 3.2 are preferable, since they show similar performance at lower decoding complexity.
3.4 Performance Analysis & Simulation Results
3.4.1 Density Evolution Analysis
We first analyze the error-correction capability of the two modifications of Algorithm E from Sec. 3.1 and Sec. 3.2. As a first estimate of the code performance, we employ DE analysis to determine the iterative decoding threshold of an unstructured regular LDPC code ensemble over a BSC with error probability $\epsilon$. The decoding threshold, denoted by $\epsilon^\star$, represents the largest channel error probability for which, in the limit of large block length and a large number of iterations, the bit error probability of a code picked randomly from the ensemble becomes vanishingly small. We then get a rough estimate of the error-correction capability as $t^\star \approx n \, \epsilon^\star$. (Note that at the decoding threshold a vanishingly small bit error probability may not imply a vanishingly small block error probability. However, for the regular MDPC ensembles under consideration, the thresholds for the bit error probability and for the block error probability coincide over binary-input output-symmetric memoryless channels under BP decoding. In our estimate, we implicitly assume that this result extends to Algorithm E and its variants.)
Note that, for a moderate block length $n$, this provides only a coarse estimate of the number of errors at which we expect the FER to decrease rapidly (the so-called waterfall region), with the accuracy of the prediction improving as $n$ grows large. With a slight abuse of wording, we refer to this estimate as the decoding threshold as well. We further denote the decoding thresholds under Algorithm E, REMP-1, and REMP-2 as $\epsilon^\star_{\mathrm{E}}$, $\epsilon^\star_{\mathrm{R1}}$, and $\epsilon^\star_{\mathrm{R2}}$, respectively. The decoding thresholds depend not only on the selected algorithm, but also on the algorithm parameters. The results for the MDPC ensemble under consideration are summarized in Table 1. For Algorithm E, the value of $w$ has been chosen to maximize the decoding threshold. Remarkably, the variants REMP-1 and REMP-2 do not incur a threshold degradation, and in some cases they even provide slight gains for suitable choices of the parameters.
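As an illustration of how such a threshold can be located numerically, the following sketch runs a bisection over a stand-in one-dimensional DE recursion (Gallager A for a (3,6)-regular ensemble; all names, parameters, and the example block length are our assumptions, not the values from Table 1):

```python
# Bisection sketch for a decoding threshold, plus the rough t* ~ n * eps*.
def converges(delta, dv=3, dc=6, iters=500, tol=1e-10):
    """Run the DE recursion and report whether the error probability dies out."""
    p = delta
    for _ in range(iters):
        s = (1 - (1 - 2 * p) ** (dc - 1)) / 2
        # Gallager A: flip iff all dv-1 extrinsic messages disagree
        p = delta * (1 - (1 - s) ** (dv - 1)) + (1 - delta) * s ** (dv - 1)
    return p < tol

def threshold(lo=0.0, hi=0.5, steps=40):
    """Bisection on the channel error rate for the largest converging value."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

delta_star = threshold()
t_star = int(9602 * delta_star)   # rough error-correction estimate for n = 9602
assert 0.035 < delta_star < 0.045  # known Gallager A (3,6) threshold ~ 0.039
```

The same search applies unchanged to the REMP recursions once their DE update rule is plugged into `converges`.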
3.4.2 Simulation Results
To validate the performance estimates obtained through DE, we simulated the error-correction capability of the decoding schemes from Section 2.3 and Section 3. The results, in terms of FER as a function of the error pattern weight $t$, are depicted in Figure 2. They confirm the trend predicted by the DE analysis. In particular, the error-correction capability of the proposed schemes improves upon that of existing decoding algorithms. Even for erasure probability values chosen to conceal the structure of the secret key (yielding a suboptimal choice with respect to error-correction performance), REMP-2 outperforms Algorithm E and the BF/MF algorithms.
3.5 Resilience against the GJS Attack
We now analyze the resilience of the proposed decoding schemes against the GJS attack. For the REMP variants of Algorithm E as well as for the MBP decoder, we performed Monte Carlo simulations for codes randomly picked from $\mathcal{C}$, collecting a prescribed number of decoding failures (frame errors) per simulation point with a fixed maximum number of iterations. For each multiplicity in the distance profile, different error sets (simulation points) were simulated. The simulation results in Figure 3 show that, for an appropriate choice of the parameters, the REMP-1 and REMP-2 decoding schemes yield a similar FER for all multiplicities appearing in the distance profile. Hence, the reconstruction of the distance profile from the observed FER is not possible.
Figure 4 shows that, for an appropriate choice of parameters, the MBP algorithm is also able to conceal the structure of the secret key. For the parameter choices that conceal the secret key, the FER of MBP decoding and REMP-2 decoding at the considered error weight is similar. Hence, due to the higher complexity of MBP, the REMP schemes are preferable.
To conceal the structure of the secret key, the choice of the erasure probability for a particular error weight $t$ is crucial. If the erasure probability is chosen too large, the picture is inverted, i.e., higher multiplicities yield a higher FER than lower multiplicities. Thus, the error weight should be computed after decoding, and ciphertexts generated with an error weight different from $t$ should be rejected to prevent attacks that exploit this effect.
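The suggested countermeasure amounts to a simple post-decoding check (a sketch with our naming; 0/1 vectors for brevity):

```python
# Reject any ciphertext whose recovered error pattern does not have weight t.
def accept_ciphertext(x, decoded, t):
    """x, decoded: 0/1 lists; accept only exact weight-t error patterns."""
    e = [xi ^ di for xi, di in zip(x, decoded)]
    return sum(e) == t

assert accept_ciphertext([1, 0, 1, 1], [1, 0, 0, 1], t=1)       # weight 1
assert not accept_ciphertext([1, 0, 1, 1], [0, 1, 0, 1], t=1)   # weight 3
```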
We analyzed classical iterative decoding schemes for MDPC codes with respect to their error-correction capability as well as their resilience against the recent key-recovery attack by GJS. The simulation results show that the decoding scheme by MF is able to defeat the attack for an appropriate choice of the decoding parameters.
We proposed a new decoding method, called REMP, that improves existing MP decoding algorithms with respect to both their error-correction capability and their resilience against the GJS attack. Two REMP variants of an existing MP decoder were presented and analyzed, showing improved error-correction performance for MDPC codes compared to existing schemes. The simulation results show that the proposed REMP schemes defeat the GJS attack for an appropriate choice of the decoding parameters.
A new variant of the belief propagation decoding algorithm that is able to resist the GJS attack was presented.
-  P. W. Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” SIAM J. Comput., vol. 26, no. 5, pp. 1484–1509, 1997.
-  R. J. McEliece, “A Public-Key Cryptosystem Based on Algebraic Codes,” Deep Space Network Progress Report, vol. 44, pp. 114–116, 1978.
-  E. M. Gabidulin, A. Paramonov, and O. Tretjakov, “Ideals Over a Non-Commutative Ring and Their Application in Cryptology,” in 10th Annual International Conference on Theory and Application of Cryptographic Techniques (EUROCRYPT), Brighton, UK, Apr. 1991, pp. 482–489.
-  C. Monico, J. Rosenthal, and A. Shokrollahi, “Using Low Density Parity Check Codes in the McEliece Cryptosystem,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Sorrento, Italy, Jun. 2000, p. 215.
-  M. Baldi, M. Bianchi, F. Chiaraluce, J. Rosenthal, and D. Schipani, “Enhanced Public Key Security for the McEliece Cryptosystem,” Journal of Cryptology, vol. 29, no. 1, pp. 1–27, Jan. 2016.
-  A. Couvreur, A. Otmani, J.-P. Tillich, and V. Gauthier-Umana, “A Polynomial-Time Attack on the BBCRS Scheme,” in Proc. 18th IACR International Conference on Practice and Theory in Public-Key Cryptography, Gaithersburg, MD, USA, Mar. 2015, pp. 175–193.
-  S. Ouzan and Y. Be’ery, “Moderate-Density Parity-Check Codes,” arXiv preprint arXiv:0911.3262, 2009.
-  R. Misoczki, J. P. Tillich, N. Sendrier, and P. S. L. M. Barreto, “MDPC-McEliece: New McEliece variants from Moderate Density Parity-Check codes,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Istanbul, Turkey, Jul. 2013, pp. 2069–2073.
-  Q. Guo, T. Johansson, and P. Stankovski, “A Key Recovery Attack on MDPC with CCA Security Using Decoding Errors,” in 22nd Annual International Conference on the Theory and Applications of Cryptology and Information Security (ASIACRYPT), Hanoi, Vietnam, Dec. 2016, pp. 789–815.
-  K. Kobara and H. Imai, “Semantically secure McEliece public-key cryptosystems-conversions for McEliece PKC,” in 4th International Workshop on Practice and Theory in Public Key Cryptography (PKC), Cheju Island, South Korea, Feb. 2001, pp. 19–35.
-  N. Sendrier, “Code-Based Cryptography: State of the Art and Perspectives,” IEEE Security & Privacy, vol. 15, no. 4, pp. 44–50, Aug. 2017.
-  T. Richardson and R. Urbanke, “The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599 – 618, Feb. 2001.
-  C. Lee, “Some Properties of Nonbinary Error-Correcting Codes,” IRE Transactions on Information Theory, vol. 4, no. 2, pp. 77–82, Jun. 1958.
-  T. Fabšič, V. Hromada, P. Stankovski, P. Zajac, Q. Guo, and T. Johansson, “A Reaction Attack on the QC-LDPC McEliece Cryptosystem,” in International Workshop on Post-Quantum Cryptography, 2017, pp. 51–68.
-  P. S. L. M. Barreto, S. Gueron, T. Güneysu, R. Misoczki, E. Persichetti, N. Sendrier, and J.-P. Tillich, "CAKE: Code-Based Algorithm for Key Encapsulation," Cryptology ePrint Archive, Report 2017/757, 2017, http://eprint.iacr.org/2017/757.
-  R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inf. Theory, vol. 27, no. 5, pp. 533–547, Sep. 1981.
-  W. Ryan and S. Lin, Channel codes – Classical and modern. New York, NY, USA: Cambridge University Press, 2009.
-  R. Gallager, Low-density parity-check codes. Cambridge, MA, USA: MIT Press, 1963.
-  W. C. Huffman and V. Pless, Fundamentals of Error-Correcting Codes. Cambridge University Press, 2010.
-  N. Miladinovic and M. P. Fossorier, “Improved Bit-Flipping Decoding of Low-Density Parity-Check Codes,” IEEE Trans. Inf. Theory, vol. 51, no. 4, pp. 1594–1606, Apr. 2005.
-  M. Mitzenmacher, “A Note on Low Density Parity Check Codes for Erasures and Errors,” SRC Technical Note, vol. 1998, no. 17, 1998.
-  S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Lett., vol. 5, no. 2, pp. 58–60, Feb. 2001.
-  M. Lentmaier, D. V. Truhachev, K. S. Zigangirov, and D. J. Costello, “An analysis of the block error probability performance of iterative decoding,” IEEE Trans. Inf. Theory, vol. 51, no. 11, pp. 3834–3855, Nov 2005.