1 Introduction
In 1978, Rivest, Shamir and Adleman (RSA) proposed a public-key cryptosystem whose security is based on the hard problem of factoring large integers. Since then, the RSA cryptosystem has been used in most state-of-the-art communication systems and is included in many communication standards. In 1997, Shor presented a factorization algorithm for quantum computers that is able to factor large integers in polynomial time [1]. Thus, assuming that a quantum computer of sufficient scale can be built one day, the RSA cryptosystem can be broken in polynomial time, rendering most of today's communication systems insecure. This result motivates the development of cryptosystems that are post-quantum secure.
In the same year as RSA, McEliece proposed a cryptosystem based on error-correcting codes [2]. The security of the scheme relies on the hardness of decoding an unknown linear code and is thus resilient against efficient quantum factorization attacks such as Shor's algorithm. Drawbacks of the scheme are the large key size and the rate loss compared to the RSA cryptosystem. Variants of the McEliece cryptosystem based on different code families were considered in the past (e.g. rank-metric codes [3], random codes [4]). In particular, McEliece cryptosystems based on low-density parity-check (LDPC) codes allow for very small keys but suffer from feasible attacks on the low-weight dual code due to the sparse parity-check matrix [4]. Variants based on quasi-cyclic (QC) LDPC codes that use row and column scrambling matrices to increase the density of the public code parity-check matrix [5] allow for structural attacks [6]. The family of moderate-density parity-check (MDPC) codes admits a parity-check matrix of moderate density, yielding codes with large minimum distance [7]. (The existence of a moderate-density parity-check matrix for a binary linear block code does not rule out the possibility that the same code admits a (much) sparser parity-check matrix. As in most of the literature, we neglect the probability that a code defined by a randomly drawn moderate-density parity-check matrix admits a sparser parity-check matrix; guarantees in this sense shall be derived based on random code ensemble arguments.) In [8] a McEliece cryptosystem based on QC-MDPC codes is presented that defeats information set decoding attacks on the dual code due to the moderate-density parity-check matrix. For a given security level, the QC-MDPC cryptosystem allows for very small key sizes compared to other McEliece variants.
Recently, Guo, Johansson and Stankovski (GJS) presented a reaction-based key-recovery attack on the QC-MDPC system [9]. The attack reveals the parity-check matrix by observing the decoding failure probability for chosen ciphertexts that are constructed from error patterns with a specific structure. A modified version of the attack can break a system that uses CCA2-secure conversions [10].
In this paper we analyze different decoding algorithms for (QC-)MDPC codes with respect to their error-correction capability and their resilience against the GJS attack [9]. In particular, we present novel hard-decision message-passing (MP) algorithms that are resilient against the GJS key-recovery attack from [9] and have an improved error-correction capability compared to existing hard-decision decoding schemes. We derive the density evolution (DE) analysis for the novel decoding schemes, which allows us to predict decoding thresholds as well as to optimize the parameters of the algorithms.
The paper is structured as follows. Section 2 gives basic definitions, describes classical decoding schemes for LDPC/MDPC codes and analyzes their resilience against the GJS attack by simulations. In Section 3 we propose new MP decoding schemes that are able to defeat the GJS attack. To estimate the decoding threshold we perform a density evolution analysis of the novel schemes. Finally, Section 4 concludes the paper.
2 Preliminaries
Denote the binary field by $\mathbb{F}_2$ and let the set of $m \times n$ matrices over $\mathbb{F}_2$ be denoted by $\mathbb{F}_2^{m \times n}$. The set of all vectors of length $n$ over $\mathbb{F}_2$ is denoted by $\mathbb{F}_2^n$. Vectors and matrices are denoted by bold lowercase and uppercase letters such as $\mathbf{a}$ and $\mathbf{A}$, respectively. A binary circulant matrix $\mathbf{A}$ of size $p \times p$ is a matrix with coefficients in $\mathbb{F}_2$ obtained by cyclically shifting its first row $\mathbf{a} = (a_0, a_1, \ldots, a_{p-1})$ to the right, yielding
$$\mathbf{A} = \begin{pmatrix} a_0 & a_1 & \cdots & a_{p-1} \\ a_{p-1} & a_0 & \cdots & a_{p-2} \\ \vdots & & & \vdots \\ a_1 & a_2 & \cdots & a_0 \end{pmatrix}. \quad (1)$$
The set of $p \times p$ binary circulant matrices together with matrix multiplication and addition forms a commutative ring that is isomorphic to the polynomial ring $\mathbb{F}_2[x]/(x^p - 1)$. In particular, there is a bijective mapping between a circulant matrix $\mathbf{A}$ and the polynomial $a(x) = a_0 + a_1 x + \ldots + a_{p-1} x^{p-1}$. We indicate the vector of coefficients of a polynomial $a(x)$ as $\mathbf{a}$. The weight of a polynomial is the number of its nonzero coefficients, i.e., it is the Hamming weight of its coefficient vector $\mathbf{a}$. We indicate both weights with the operator $\mathrm{wt}(\cdot)$, i.e., $\mathrm{wt}(a(x)) = \mathrm{wt}(\mathbf{a})$. In the remainder of this paper we use the polynomial representation of circulant matrices to provide an efficient description of the structure of the codes.
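The ring isomorphism above can be illustrated by a short sketch (not part of the paper's material): multiplying two circulant matrices corresponds to multiplying their associated polynomials modulo $x^p - 1$ over $\mathbb{F}_2$. The helper names below are our own, and coefficient vectors are plain Python lists with index $i$ holding the coefficient of $x^i$.

```python
# Sketch: the isomorphism between p x p binary circulants and F_2[x]/(x^p - 1).

def circulant(row):
    """Build the p x p circulant matrix whose first row is `row` (right shifts)."""
    p = len(row)
    return [[row[(j - i) % p] for j in range(p)] for i in range(p)]

def mat_mul(A, B):
    """Binary matrix product over F_2."""
    p = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(p)) % 2 for j in range(p)]
            for i in range(p)]

def poly_mul(a, b):
    """Product of coefficient vectors in F_2[x]/(x^p - 1)."""
    p = len(a)
    c = [0] * p
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[(i + j) % p] ^= bj
    return c

a = [1, 1, 0, 0, 1]  # a(x) = 1 + x + x^4, p = 5
b = [0, 1, 0, 1, 0]  # b(x) = x + x^3
# The first row of the circulant product equals the coefficient vector of a(x)b(x).
assert mat_mul(circulant(a), circulant(b))[0] == poly_mul(a, b)
```

This is why storing one first row (or polynomial) per circulant block suffices to describe the whole matrix, which is the source of the compact key sizes discussed below.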
2.1 QC-MDPC-Based Cryptosystems
A variant of the McEliece public-key cryptosystem based on QC-MDPC codes was proposed in [8]. The QC-MDPC McEliece cryptosystem allows for a very simple description without the need for row and column scrambling matrices. Due to the moderate density of the parity-check matrix, known decoding attacks on the dual code [4] are defeated. The parity-check matrix consists of blocks of circulant matrices, which allows for very small key sizes due to the compact description of the circulant blocks.
A binary MDPC code of length $n$, dimension $k$ and row weight $w$ is defined by a binary parity-check matrix $\mathbf{H} \in \mathbb{F}_2^{(n-k) \times n}$ that contains a moderate number $w$ of ones per row. For length $n = n_0 p$, dimension $k = (n_0 - 1)p$ and redundancy $r = n - k = p$, for some integer $n_0$, the parity-check matrix of a QC-MDPC code in polynomial form is a $1 \times n_0$ matrix. (As in most of the recent literature on codes constructed from arrays of circulants, we loosely define a code to be QC if there exists a permutation of its coordinates such that the resulting (equivalent) code has the following property: if $\mathbf{c}$ is a codeword, then any cyclic shift of $\mathbf{c}$ by $n_0$ positions is a codeword. For example, a code admitting a parity-check matrix as an array of circulants does not fulfill the property above; however, the code is QC in the loose sense, since it is possible to permute its coordinates to obtain a code for which every cyclic shift of a codeword by $n_0$ positions yields another codeword.)
Without loss of generality we consider in the following codes with $n_0 = 2$ (i.e. $r = k = p$). While the general construction covers a wide range of code rates, this case is of particular interest for cryptographic applications since the parity-check matrices can be characterized in a very compact way. The parity-check matrix of QC-MDPC codes with $n_0 = 2$ has the form
$$\mathbf{H}(x) = \big( h_0(x) \;\; h_1(x) \big). \quad (2)$$
Let $\mathsf{DEC}(\cdot)$ be an efficient decoder for the code defined by the parity-check matrix $\mathbf{H}(x)$.
Key generation:

Randomly generate a parity-check matrix $\mathbf{H}(x) = (h_0(x) \;\; h_1(x))$ of the form (2) with $\mathrm{wt}(h_i(x)) \approx w/2$ for $i \in \{0, 1\}$. The matrix $\mathbf{H}(x)$ with row weight $w$ is the private key.

The public key is the corresponding binary generator matrix $\mathbf{G}$ in systematic form, i.e.,
$$\mathbf{G} = \big( \mathbf{I}_k \;|\; \mathbf{A} \big), \quad (3)$$
where the circulant block $\mathbf{A}$ is computed from the private parity-check matrix (for $n_0 = 2$, from the polynomial $h_1(x)^{-1} h_0(x)$). The generator matrix can be described by $p$ bits (public key size).
Encryption:

To encrypt a plaintext $\mathbf{m} \in \mathbb{F}_2^k$, a user computes the ciphertext $\mathbf{c} \in \mathbb{F}_2^n$ using the public key $\mathbf{G}$ as
$$\mathbf{c} = \mathbf{m}\mathbf{G} + \mathbf{e}, \quad (4)$$
where $\mathbf{e}$ is an error vector uniformly chosen from all vectors from $\mathbb{F}_2^n$ of Hamming weight $t$.
Decryption:

To decrypt a ciphertext $\mathbf{c}$, the authorized recipient uses the private key $\mathbf{H}(x)$ to obtain
$$\hat{\mathbf{x}} = \mathsf{DEC}(\mathbf{c}). \quad (5)$$
Since $\mathbf{G}$ is in systematic form, the plaintext $\mathbf{m}$ corresponds to the first $k$ bits of $\hat{\mathbf{x}}$.
2.2 A Reaction-Based Attack on the QC-MDPC McEliece Cryptosystem
Besides the conventional key-recovery and decoding attacks based on information set decoding, GJS proposed a reaction-based key-recovery attack on the QC-MDPC McEliece cryptosystem [8], which is currently the most critical attack against the scheme [11]. Efficient iterative decoding of LDPC/MDPC codes comes at the cost of decoding failures. For example, the MDPC codes proposed in [8] are operated with a low target decoding failure probability. (The MDPC code parameters chosen in [8] were shown to empirically attain the target. An interesting question is whether a randomly generated parity-check matrix would yield the target decoding failure probability for the given set of code parameters. A possible direction to address the question is to analyze the MDPC code ensemble concentration properties [12] in the finite block length regime under the given decoding algorithm.)
The GJS attack exploits the observation that the decoding failure probability for some particularly chosen error patterns is correlated with the structure of the secret key, i.e., the parity-check matrix $\mathbf{H}(x)$. We now briefly describe how the attack proceeds.
The Lee distance between two entries at positions $i$ and $j$ of a binary vector of length $p$ is defined as [13]
$$d(i, j) = \min\big( |i - j|,\; p - |i - j| \big). \quad (6)$$
The Lee distance profile of a binary vector $\mathbf{h}$ of length $p$ is defined as
$$D(\mathbf{h}) = \big\{\, d : \exists\, i \ne j \text{ with } h_i = h_j = 1 \text{ and } d(i, j) = d \,\big\}, \quad (7)$$
where the maximum distance in $D(\mathbf{h})$ is $\lfloor p/2 \rfloor$. (We use the term "Lee distance profile" instead of "distance spectrum" as in [8] to avoid confusion with the distance spectrum (i.e., weight enumerator) in Hamming metric of linear block codes.) The multiplicity $\mu(d)$ is defined as the number of occurrences of distance $d$ in the vector $\mathbf{h}$. A binary vector $\mathbf{h}$ is fully specified by its distance profile and thus can be reconstructed with high probability from $D(\mathbf{h})$ [9] (up to cyclic shifts).
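As a small illustration of the definitions above (the function names are ours, not from [9]), the distance profile with multiplicities can be computed by enumerating all pairs of ones:

```python
# Sketch: Lee distance d(i, j) and the distance profile of a binary vector,
# returned as a map from each distance d to its multiplicity mu(d).

def lee_distance(i, j, p):
    """Lee distance between positions i and j of a length-p vector."""
    return min(abs(i - j), p - abs(i - j))

def distance_profile(h):
    """Multiplicity mu(d) for every distance d occurring between pairs of ones."""
    p = len(h)
    ones = [i for i, bit in enumerate(h) if bit]
    profile = {}
    for a in range(len(ones)):
        for b in range(a + 1, len(ones)):
            d = lee_distance(ones[a], ones[b], p)
            profile[d] = profile.get(d, 0) + 1
    return profile

h = [1, 0, 1, 0, 0, 0, 1]   # ones at positions 0, 2, 6; p = 7
# Pairs: (0,2) -> d = 2, (0,6) -> d = 1 (wrap-around), (2,6) -> d = 3.
assert distance_profile(h) == {2: 1, 1: 1, 3: 1}
```

Note the wrap-around in the pair $(0, 6)$: the cyclic structure of the circulant blocks is exactly what makes the Lee metric the right notion of distance here.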
Let $\Psi_d$ be a set containing binary vectors of length $n$ with exactly $t$ ones that are placed as pairs with Lee distance $d$ in the first $p$ positions of the vector. By limiting the errors to the first $p$ positions, only the first circulant block $h_0(x)$ of the matrix $\mathbf{H}(x)$ will determine the result of the decoding procedure. The GJS attack proceeds as follows:

For $d = 1, \ldots, \lfloor p/2 \rfloor$, generate error sets $\Psi_d$ of size $M$ each (with $M$ being a parameter defining, together with $d$, the number of attempts used by the attacker).

Send ciphertexts (4) with $\mathbf{e} \in \Psi_d$ for all $d$ and measure the frame error rate (FER).
Since the decoding failure probability is lower for $\mathbf{e} \in \Psi_d$ with $d \in D(\mathbf{h}_0)$, i.e. if $\mu(d) > 0$, for sufficiently large $M$ the measured FER can be used to determine the distance profile $D(\mathbf{h}_0)$. The vector $\mathbf{h}_0$ can then be reconstructed from the distance profile using the methods from [9].
The remaining blocks of $\mathbf{H}(x)$ in (2) can then be reconstructed via the generator matrix $\mathbf{G}$ using linear-algebraic relations. The success of the attack depends on how the system deals with decoding failures, since the FER can only be measured if retransmissions are requested. Another important factor is which decoding scheme is used. In [9, 14] it is shown that the GJS attack succeeds if bit-flipping (BF) or belief propagation (BP) decoding algorithms are used.
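The construction of the error sets $\Psi_d$ can be sketched as follows. This is an assumption-laden illustration (names and rejection-sampling strategy are ours, with no claim of matching the reference implementation of [9]): $t/2$ pairs of ones at Lee distance $d$ are placed in the first $p$ positions of a length-$n$ vector.

```python
# Sketch of drawing an error vector from Psi_d: t ones placed as t//2 pairs
# with Lee distance d, all within the first p positions.
import random

def sample_error(n, p, t, d):
    """Return a weight-t error vector whose ones form t//2 pairs at Lee distance d."""
    assert t % 2 == 0 and 0 < d <= p // 2
    e = [0] * n
    placed = 0
    while placed < t // 2:
        i = random.randrange(p)
        j = (i + d) % p                 # cyclic partner at Lee distance d
        if e[i] == 0 and e[j] == 0:     # avoid collisions so the weight is exactly t
            e[i] = e[j] = 1
            placed += 1
    return e

e = sample_error(n=20, p=10, t=4, d=3)
assert sum(e) == 4 and all(bit == 0 for bit in e[10:])
```

Measuring the FER separately for each $d$ then exposes which distances belong to $D(\mathbf{h}_0)$.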
In key exchange protocols the attack can be defeated by using ephemeral keys (i.e. a new key pair for every key exchange) [15]. However, this protocol-based fix can only be applied in very specific scenarios.
2.3 Classical Decoding Algorithms
In the following we describe classical decoding algorithms for LDPC codes and analyze their error-correction capability for MDPC codes as well as their resilience against the GJS attack. For decoding we map each ciphertext bit $c_i$ to $+1$ if $c_i = 0$ and to $-1$ if $c_i = 1$, yielding (with some abuse of notation) a ciphertext $\mathbf{c} \in \{+1, -1\}^n$. We consider next iterative MP decoding on the Tanner graph [16] of the code. A Tanner graph is a bipartite graph consisting of variable nodes (VNs) and check nodes (CNs). A VN is connected to a CN if the corresponding entry in the parity-check matrix is equal to $1$. We consider next only regular Tanner graphs, i.e., graphs for which the number of edges emanating from each VN equals $d_v$ and the number of edges emanating from each CN equals $d_c$. We refer to $d_v$ and $d_c$ as variable and check node degree, respectively. The neighborhood of a variable node $v$ is $\mathcal{N}(v)$, and similarly $\mathcal{N}(c)$ denotes the neighborhood of the check node $c$. We denote the messages from VN $v$ to CN $c$ by $m_{vc}$ and the messages from $c$ to $v$ by $m_{cv}$. In the following we omit the indices of VNs and CNs whenever they are clear from the context.
2.3.1 Bit-Flipping
For decryption in the QC-MDPC cryptosystem [8] an efficient BF algorithm for LDPC codes (see e.g. [17, Alg. 5.4]) is considered. This algorithm is often referred to as "Gallager's bit-flipping" algorithm although it is different from the algorithm proposed by Gallager in [18].
Given a ciphertext $\mathbf{c}$, a threshold $b$ and a maximum number of iterations $\ell_{\max}$, the BF algorithm proceeds as follows. Each VN $v$ is initialized with the corresponding ciphertext bit $c_v$ and sends the message $m_{vc} = c_v$ to all neighboring CNs $c \in \mathcal{N}(v)$. The CNs send the messages
$$m_{cv} = \prod_{v' \in \mathcal{N}(c)} m_{v'c} \quad (8)$$
to all neighboring VNs $v \in \mathcal{N}(c)$. Note that (8) is equivalent to the modulo-two sum of all incoming messages considered over $\mathbb{F}_2$. Each variable node counts the number of unsatisfied check equations (i.e. the number of messages $m_{cv} = -1$) and sends to its neighbors the "flipped" ciphertext bit if at least $b$ parity-check equations are unsatisfied, i.e.
$$m_{vc} = \begin{cases} -c_v & \text{if } |\{ c \in \mathcal{N}(v) : m_{cv} = -1 \}| \ge b, \\ c_v & \text{otherwise.} \end{cases} \quad (9)$$
The algorithm terminates if either all checks are satisfied or the maximum number of iterations is reached.
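The structure of the BF iteration can be sketched as follows. This is an illustration in the $\{+1, -1\}$ mapping introduced above, not the implementation of [8]; `H` is a dense 0/1 parity-check matrix and all names are ours.

```python
# Minimal parallel bit-flipping sketch: flip every bit with at least b
# unsatisfied neighboring checks, stop when all checks are satisfied.

def bit_flipping(H, c, b, max_iter):
    """Decode ciphertext c (entries +1/-1); return the hard decisions."""
    m, n = len(H), len(H[0])
    x = list(c)
    for _ in range(max_iter):
        # Check values: product of all participating variable values (eq. (8)).
        checks = [1] * m
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    checks[i] *= x[j]
        if all(s == 1 for s in checks):
            break  # all parity checks satisfied
        # Flip rule (eq. (9)), applied to all bits in parallel.
        for j in range(n):
            unsatisfied = sum(1 for i in range(m) if H[i][j] and checks[i] == -1)
            if unsatisfied >= b:
                x[j] = -x[j]
    return x

# Toy example: two checks x0+x1 and x1+x2 (mod 2); single error in the middle.
H = [[1, 1, 0], [0, 1, 1]]
noisy = [1, -1, 1]
assert bit_flipping(H, noisy, b=2, max_iter=10) == [1, 1, 1]
```

In the toy example the middle bit participates in both (unsatisfied) checks, reaches the threshold $b = 2$, and is flipped in the first iteration.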
The error-correction capability of the BF algorithm depends on the choice of the threshold $b$. In [19] the threshold is selected as the maximum number of unsatisfied parity-check equations at each iteration, which is denoted by $b_{\max}$. Note that with $b = b_{\max}$ the BF algorithm is no longer purely an MP algorithm on the Tanner graph of the code since $b_{\max}$ has to be obtained by a global entity.
In [8] it is suggested to compute $b$ according to [18, p. 46, Eq. 4.16], which leads to suboptimal results since the BF decoder is different from the decoder analyzed in [18, Sec. 4]. To reduce the average number of iterations, the threshold in [8] is chosen as $b = b_{\max} - \delta$, where $\delta$ is a small integer that is determined empirically (see [8, Sec. 4]).
2.3.2 Gallager B
An efficient binary MP decoder for LDPC codes, often referred to as Gallager B, was presented and analyzed in [18]. Each VN $v$ is initialized with the corresponding ciphertext bit $c_v$. The VNs send the messages
$$m_{vc} = \begin{cases} -c_v & \text{if } |\{ c' \in \mathcal{N}(v) \setminus \{c\} : m_{c'v} = -c_v \}| \ge b, \\ c_v & \text{otherwise.} \end{cases} \quad (10)$$
This means that in the first iteration VN $v$ sends the message $m_{vc} = c_v$ to all neighboring CNs $c \in \mathcal{N}(v)$. The CNs send the messages
$$m_{cv} = \prod_{v' \in \mathcal{N}(c) \setminus \{v\}} m_{v'c} \quad (11)$$
to the neighboring VNs. After iterating (10), (11) at most $\ell_{\max}$ times, the final decision is given by the majority vote
$$\hat{x}_v = \mathrm{maj}\big( c_v, \{ m_{cv} : c \in \mathcal{N}(v) \} \big). \quad (12)$$
Comparing the CN operations (8) and (11), and the VN operations (9) and (10), one can see the aforementioned difference between the BF algorithm and Gallager B. For fixed $b$, the average error-correction capability over the binary symmetric channel (BSC) for the ensemble of LDPC codes can be analyzed, in the limit of large block lengths, using the DE analysis [18, 12]. Following this approach, the optimal value (in the large block length limit) for the parameter $b$ can be determined by [18, Eq. 4.16].
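The DE recursion for Gallager B over a BSC can be sketched as follows. This is our own rendering of the standard recursion from [18] for a regular $(d_v, d_c)$ ensemble (the function name and the illustrative $(3, 6)$ parameters are assumptions, not values from the paper): $q$ is the probability that an extrinsic CN-to-VN message is wrong, and $p$ the VN-to-CN message error probability.

```python
# Density evolution sketch for Gallager B over a BSC with error rate p0.
from math import comb

def gallager_b_de(p0, dv, dc, b, iters=200):
    """Track the VN-to-CN message error probability over `iters` iterations."""
    p = p0
    for _ in range(iters):
        # A CN->VN message is wrong iff an odd number of its dc-1 extrinsic
        # incoming messages is wrong.
        q = (1 - (1 - 2 * p) ** (dc - 1)) / 2
        # Channel bit wrong: the message flips back to correct if at least b of
        # the dv-1 extrinsic CN messages disagree with it, i.e. are correct.
        flip_back = sum(comb(dv - 1, t) * (1 - q) ** t * q ** (dv - 1 - t)
                        for t in range(b, dv))
        # Channel bit correct: the message becomes wrong if at least b of the
        # dv-1 extrinsic CN messages disagree with it, i.e. are wrong.
        flip_away = sum(comb(dv - 1, t) * q ** t * (1 - q) ** (dv - 1 - t)
                        for t in range(b, dv))
        p = p0 * (1 - flip_back) + (1 - p0) * flip_away
    return p

# For a (3,6) ensemble with b = 2, a channel error rate below the threshold
# drives the error probability to zero, while one above it does not.
assert gallager_b_de(0.02, 3, 6, 2) < 1e-6
assert gallager_b_de(0.10, 3, 6, 2) > 1e-3
```

Scanning `p0` for the largest value at which the recursion still converges to zero yields the decoding threshold used throughout this paper.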
2.3.3 Miladinovic-Fossorier (MF) Algorithm
Two probabilistic variants of Gallager's algorithm B that improve upon the original version were proposed by Miladinovic and Fossorier in [20, Sec. III.A]. We refer to the two algorithms as Miladinovic-Fossorier (MF) algorithms. At each iteration, the VN-to-CN messages (10) of Gallager B are modified with a certain probability $p_f$. By defining an initial value $p_0$ and a decrement $\delta_p$, one can compute $p_f$ at iteration $\ell$ by
$$p_f^{(\ell)} = \max\big( p_0 - \ell\, \delta_p,\; 0 \big). \quad (13)$$
The VNs are initialized with the corresponding ciphertext bit $c_v$.
Variant 1 (MF1): If the number of incoming extrinsic CN messages that do not agree with $c_v$ exceeds the threshold $b$, i.e. if $|\{ c' \in \mathcal{N}(v) \setminus \{c\} : m_{c'v} = -c_v \}| \ge b$, the VN sends with probability $1 - p_f$ the message
$$m_{vc} = -c_v \quad (14)$$
and $m_{vc} = c_v$ otherwise.
Variant 2 (MF2): With respect to MF1, we now introduce the iteration counter $\ell$ for the messages output by VNs and CNs. At iteration $\ell$, if the number of messages at the input of a VN sent by all its neighboring CNs that do not agree with $c_v$ exceeds the threshold $b$, i.e. if $|\{ c' \in \mathcal{N}(v) : m_{c'v}^{(\ell)} = -c_v \}| \ge b$, the VN sends with probability $1 - p_f^{(\ell)}$ the message
$$m_{vc}^{(\ell+1)} = -c_v, \quad (15)$$
while $m_{vc}^{(\ell+1)} = c_v$ otherwise.
The check node operation as well as the final decision remain the same as in Gallager B (see (11) and (12)). In general, the second variant improves upon the first in terms of the number of correctable errors [20]. By definition, the probability $p_f$ has two degrees of freedom, namely $p_0$ and $\delta_p$, which are subject to optimization. In general, there is no closed-form optimization of these two parameters except for using the DE analysis from [20] as a guideline.
2.3.4 Algorithm E
A generalization of the Gallager B algorithm that exploits erasures, which we refer to as Algorithm E, was introduced and analyzed in [12, 21]. To incorporate erasures, the decoder requires a ternary message alphabet $\{-1, 0, +1\}$, where $0$ indicates an erasure. The VNs are initialized with the corresponding ciphertext bit $c_v$ and send the messages
$$m_{vc} = \mathrm{sgn}\Big( w \cdot c_v + \sum_{c' \in \mathcal{N}(v) \setminus \{c\}} m_{c'v} \Big). \quad (16)$$
Here, $w$ is a heuristic weighting factor that was proposed in [12] to improve the performance of Algorithm E. In [12], $w$ was allowed to change over iterations (to account for the increase of reliability of the CN messages as the iteration number grows). We consider next the simple case where $w$ is kept constant through all iterations. The check nodes operate the same way as in Gallager B, i.e. the CNs send the messages according to (11). After iterating (11) and (16) at most $\ell_{\max}$ times, the final decision is made as
$$\hat{x}_v = \mathrm{sgn}\Big( w \cdot c_v + \sum_{c \in \mathcal{N}(v)} m_{cv} \Big). \quad (17)$$
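The node operations of Algorithm E can be sketched as follows (a minimal illustration of the ternary rules, with our own helper names; the weighting factor $w$ and messages are as described above):

```python
# Sketch of Algorithm E node operations with ternary messages {-1, 0, +1},
# where 0 encodes an erasure and w weights the channel value at the VNs.

def sign(x):
    """Ternary sign: -1, 0 or +1 (0 naturally encodes an erasure)."""
    return (x > 0) - (x < 0)

def vn_message(c_v, incoming, w):
    """VN-to-CN message: weighted vote of channel value and extrinsic messages."""
    return sign(w * c_v + sum(incoming))

def cn_message(incoming):
    """CN-to-VN message: product of extrinsic messages (0 if any is erased)."""
    m = 1
    for msg in incoming:
        m *= msg
    return m

# A single erased extrinsic message erases the check's output ...
assert cn_message([+1, 0, -1]) == 0
# ... while at a VN the weighted vote may resolve or produce an erasure:
assert vn_message(+1, [-1, -1, 0], w=2) == 0    # 2 - 2 + 0 -> erasure
assert vn_message(+1, [-1, 0, 0], w=2) == +1    # channel outweighs one dissent
```

The examples illustrate the role of $w$: a larger weight lets the channel observation survive more dissenting check messages before the output is erased or flipped.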
2.3.5 Belief Propagation (BP) Decoding
BP decoding is a soft-decision decoding algorithm that is optimal in the maximum a posteriori (MAP) sense over a cycle-free Tanner graph. Each VN is initialized with the log-likelihood ratio
$$L_v = \log \frac{P(x_v = 0 \mid c_v)}{P(x_v = 1 \mid c_v)}, \quad (18)$$
where $c_v$ is the ciphertext bit corresponding to VN $v$. The VNs send the messages
$$m_{vc} = L_v + \sum_{c' \in \mathcal{N}(v) \setminus \{c\}} m_{c'v} \quad (19)$$
to the CNs. In turn, the CNs send the messages
$$m_{cv} = 2 \tanh^{-1}\Big( \prod_{v' \in \mathcal{N}(c) \setminus \{v\}} \tanh\big( m_{v'c}/2 \big) \Big). \quad (20)$$
After iterating (19), (20) at most $\ell_{\max}$ times, the final decision at each VN is made as
$$\hat{x}_v = \begin{cases} 0 & \text{if } L_v + \sum_{c \in \mathcal{N}(v)} m_{cv} \ge 0, \\ 1 & \text{otherwise.} \end{cases} \quad (21)$$
2.4 Simulation Results
We now present simulation results of the GJS attack on variants of the QC-MDPC cryptosystem using the above-described schemes. We consider a QC-MDPC code ensemble $\mathcal{C}$ with $n_0 = 2$ and parity-check matrices of the form
$$\mathbf{H}(x) = \big( h_0(x) \;\; h_1(x) \big), \quad (22)$$
where $h_0(x)$ and $h_1(x)$ are two polynomials of degree less than $p$ with $\mathrm{wt}(h_0(x)) + \mathrm{wt}(h_1(x)) = w$. The ensemble was proposed in [8] for 80-bit security. To analyze the resilience against the GJS attack, we performed Monte Carlo simulations for codes randomly picked from $\mathcal{C}$, collecting a sufficient number of decoding failures (frame errors) for a fixed maximum number of iterations. For each multiplicity in the distance profile, several error sets (simulation points) were simulated. As in [14], the weight of the error patterns was chosen such that the FER is high enough to be easily observable in the simulations.
Figure 1 shows the simulation results for one code from $\mathcal{C}$. The results show that, except for the MF decoding algorithm, all considered schemes are vulnerable to the GJS attack. For the MF decoding scheme, the probability $p_f$ was chosen such that the FERs for all multiplicities appearing in $D(\mathbf{h}_0)$ are similar. Hence, the distance profile cannot be reconstructed if the MF decoding scheme with the appropriate choice of $p_f$ is used. Since simulations of different codes from $\mathcal{C}$ show very similar results, we conjecture that the choice of $p_f$ depends on the ensemble rather than on the individual code.
3 Secret Key Concealment via Modified Iterative Decoding
In this section we propose new methods to modify MP decoding algorithms that admit erasures. The methods modify MP decoding algorithms in a probabilistic manner to make them resilient against the GJS attack for an appropriate choice of the decoding parameters. The main idea is that, similarly to the MF decoding scheme (see Sec. 2.3.3), we modify the VN-to-CN messages at each iteration with a given probability. In particular, we modify the MP decoder such that the messages are erased (i.e., set to $0$) under certain conditions with a given probability $p_e$. Remarkably, we will see that this also results in an improved error-correction capability. In the following we refer to this approach as random erasure message-passing (REMP) decoding and we apply it to modify Algorithm E.
3.1 First Modification of Algorithm E (REMP1)
We modify Algorithm E such that any nonzero message in iteration $\ell$ is erased with probability $p_e$. At the VNs we first compute a temporary output message
$$\tilde{m}_{vc} = \mathrm{sgn}\Big( w \cdot c_v + \sum_{c' \in \mathcal{N}(v) \setminus \{c\}} m_{c'v} \Big). \quad (23)$$
If the message is not an erasure, i.e. if $\tilde{m}_{vc} \ne 0$, the VN sends
$$m_{vc} = 0 \;\text{ with probability } p_e \quad (24)$$
and $m_{vc} = \tilde{m}_{vc}$ otherwise. At the CNs we perform the same operation as in Algorithm E (see (11)). The final decision, after iterating (11) and (24) at most $\ell_{\max}$ times, is given by (30). As for the MF algorithm, the probability $p_e$ may be decreased as $\ell$ grows following (13).
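The REMP1 variable node update can be sketched as follows (an illustration of the two-step rule above, not reference code; function names are ours):

```python
# Sketch of the REMP1 VN update: compute the Algorithm E message, then erase
# any nonzero output with probability p_e.
import random

def sign(x):
    return (x > 0) - (x < 0)

def remp1_vn_message(c_v, incoming, w, p_e):
    """Algorithm E VN message with random erasure of nonzero outputs."""
    m = sign(w * c_v + sum(incoming))   # temporary Algorithm E message
    if m != 0 and random.random() < p_e:
        return 0                        # randomly erased
    return m

# With p_e = 0 the update degenerates to Algorithm E; with p_e = 1 every
# nonzero message is erased.
assert remp1_vn_message(+1, [+1, +1], w=2, p_e=0.0) == +1
assert remp1_vn_message(+1, [+1, +1], w=2, p_e=1.0) == 0
```

The randomization is memoryless and identical for all messages, which is what makes it tractable in the density evolution analysis below.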
3.1.1 Density Evolution Analysis
Based on the analysis of Algorithm E in [12], we derive the DE analysis of our modified algorithm from Sec. 3.1. Let $q_a^{(\ell)}$ denote the probability that a VN-to-CN message sent at iteration $\ell$ is equal to $a \in \{-1, 0, +1\}$. Similarly, let $p_a^{(\ell)}$ denote the probability that a CN-to-VN message sent at iteration $\ell$ is equal to $a$. The encryption step (4) can be considered as the transmission of a codeword over a binary symmetric channel with crossover probability $\delta = t/n$. For the analysis we assume w.l.o.g. that all ciphertext bits are equal to $+1$ (all-zero codeword). Hence, we initialize the probabilities $q_{+1}^{(0)} = 1 - \delta$, $q_{-1}^{(0)} = \delta$ and $q_0^{(0)} = 0$.
The CN operation of REMP1 remains the same as in Algorithm E (see [12]) and thus we have
$$p_0^{(\ell)} = 1 - \big( 1 - q_0^{(\ell)} \big)^{d_c - 1}, \quad (25)$$
$$p_{+1}^{(\ell)} = \tfrac{1}{2} \Big[ \big( 1 - q_0^{(\ell)} \big)^{d_c - 1} + \big( q_{+1}^{(\ell)} - q_{-1}^{(\ell)} \big)^{d_c - 1} \Big], \quad (26)$$
$$p_{-1}^{(\ell)} = \tfrac{1}{2} \Big[ \big( 1 - q_0^{(\ell)} \big)^{d_c - 1} - \big( q_{+1}^{(\ell)} - q_{-1}^{(\ell)} \big)^{d_c - 1} \Big]. \quad (27)$$
The probabilities $q_{+1}^{(\ell+1)}$, $q_0^{(\ell+1)}$ and $q_{-1}^{(\ell+1)}$ follow by averaging the VN update (23)-(24) over the erasure randomization: the multinomial expressions for the probabilities of the temporary message $\tilde{m}_{vc}$ are obtained as for Algorithm E in [12], and every nonzero outcome is then mapped to an erasure with probability $p_e$.
Note that since in our scenario we do not have erasures in the ciphertext, we have $q_0^{(0)} = 0$, which allows us to simplify the expressions above.
3.2 Second Modification of Algorithm E (REMP2)
In the second modification of Algorithm E from Sec. 2.3.4, the messages at iteration $\ell$ are erased (i.e. set to $0$) with probability $p_e$ if they contradict the corresponding ciphertext bit $c_v$. At the VNs we first compute a temporary output message
$$\tilde{m}_{vc} = \mathrm{sgn}\Big( w \cdot c_v + \sum_{c' \in \mathcal{N}(v) \setminus \{c\}} m_{c'v} \Big). \quad (28)$$
If the message contradicts the ciphertext bit $c_v$, i.e. if $\tilde{m}_{vc} = -c_v$, the VN sends
$$m_{vc} = 0 \;\text{ with probability } p_e \quad (29)$$
and $m_{vc} = \tilde{m}_{vc}$ otherwise. At the check nodes we perform the same operation as in Algorithm E (see (11)). The final decision, after iterating (11) and (29) at most $\ell_{\max}$ times, is given by
$$\hat{x}_v = \mathrm{sgn}\Big( w \cdot c_v + \sum_{c \in \mathcal{N}(v)} m_{cv} \Big). \quad (30)$$
Again, as for the MF algorithm, the probability $p_e$ may be decreased as $\ell$ grows following (13).
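The only difference to REMP1 is the erasure condition, as the following sketch makes explicit (again an illustration, not reference code):

```python
# Sketch of the REMP2 VN update: a message is erased with probability p_e only
# if it contradicts the ciphertext bit c_v.
import random

def sign(x):
    return (x > 0) - (x < 0)

def remp2_vn_message(c_v, incoming, w, p_e):
    """Algorithm E VN message, erasing contradicting outputs with prob. p_e."""
    m = sign(w * c_v + sum(incoming))   # temporary Algorithm E message
    if m == -c_v and random.random() < p_e:
        return 0                        # erase only messages that contradict c_v
    return m

# A message agreeing with c_v is never erased; a contradicting one is erased
# with probability p_e.
assert remp2_vn_message(+1, [+1, +1], w=2, p_e=1.0) == +1
assert remp2_vn_message(+1, [-1, -1, -1, -1], w=2, p_e=1.0) == 0
```

Restricting the randomization to contradicting messages keeps the reliable (channel-consistent) messages untouched, which is consistent with the improved thresholds reported for REMP2 below.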
3.2.1 Density Evolution
Based on the analysis of Algorithm E in [12], we derive the DE analysis of our modified algorithm from Sec. 3.2. Since the CN operation is the same as in Algorithm E, we can compute $p_0^{(\ell)}$, $p_{+1}^{(\ell)}$ and $p_{-1}^{(\ell)}$ using (25), (26) and (27), respectively. The probabilities $q_{+1}^{(\ell+1)}$, $q_0^{(\ell+1)}$ and $q_{-1}^{(\ell+1)}$ follow by averaging the VN update (28)-(29) over the erasure randomization: the multinomial expressions for the probabilities of the temporary message $\tilde{m}_{vc}$ are obtained as for Algorithm E in [12], and every outcome contradicting the ciphertext bit is then mapped to an erasure with probability $p_e$. As before, note that since in our scenario we do not have erasures in the ciphertext, we have $q_0^{(0)} = 0$, which allows us to simplify the expressions above.
3.3 Masked Belief Propagation (MBP) Decoding
Using the ideas from the MF algorithm, we now modify the classical BP decoding algorithm (see Sec. 2.3.5) in order to counteract the GJS attack. We set
$$L_v = c_v \cdot \log \frac{1 - \delta}{\delta}, \quad (31)$$
where $c_v$ is the ciphertext bit corresponding to VN $v$ and $\delta = t/n$. The VNs first compute the temporary messages
$$\tilde{m}_{vc} = L_v + \sum_{c' \in \mathcal{N}(v) \setminus \{c\}} m_{c'v}. \quad (32)$$
If the sign of $\tilde{m}_{vc}$ is not equal to the sign of $L_v$, i.e. if $\tilde{m}_{vc} \cdot L_v < 0$, then the VN sends with probability $p_m$ the message
$$m_{vc} = L_v \quad (33)$$
and $m_{vc} = \tilde{m}_{vc}$ otherwise. In other words, if the sign of a message that is supposed to be sent by VN $v$ is different from the sign of the corresponding initial value $L_v$, then with probability $p_m$ the initial value is sent. The CN operation remains the same as in (20). After iterating (33), (20) at most $\ell_{\max}$ times, the final decision at each VN is made according to (21). For masked belief propagation (MBP) decoding we do not provide an explicit description of how DE has to be modified, since the analysis can be carried out by applying minor changes to quantized DE [22].
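The MBP masking rule can be sketched as follows (an illustration of the soft-valued analogue of the REMP idea; helper names are ours, not from the paper):

```python
# Sketch of the MBP VN rule: the sum-product VN message is replaced by the
# channel LLR L_v with probability p_m whenever its sign contradicts L_v.
import math, random

def mbp_vn_message(L_v, incoming, p_m):
    """BP VN message, masked back to the channel LLR on sign disagreement."""
    m = L_v + sum(incoming)             # temporary BP message
    if m * L_v < 0 and random.random() < p_m:
        return L_v                      # mask the contradicting message
    return m

def cn_message(incoming):
    """Standard BP check node rule (tanh product)."""
    prod = 1.0
    for m in incoming:
        prod *= math.tanh(m / 2.0)
    return 2.0 * math.atanh(prod)

# With p_m = 1, a contradicting message is always replaced by the channel LLR,
# while an agreeing message passes through unchanged.
assert mbp_vn_message(+1.0, [-3.0], p_m=1.0) == +1.0
assert mbp_vn_message(+1.0, [+2.0], p_m=1.0) == +3.0
```

In contrast to REMP, the masked message retains its full channel magnitude instead of being erased, which changes the message statistics seen by the check nodes.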
We shall see next that, due to the modified operation at the VNs, the MBP algorithm allows us to conceal the structure of $\mathbf{h}_0$ by tuning the probability $p_m$. We empirically verified that the idea of introducing random erasures as in Sec. 3.1 and Sec. 3.2 does not conceal the structure of $\mathbf{h}_0$ for BP decoding. Moreover, we will see that, differently from the REMP modifications of Algorithm E, the modification of BP decoding comes at the cost of a reduced error-correction performance. Thus, the decoding algorithms from Sec. 3.1 and Sec. 3.2 are preferable since they show a similar performance at a lower decoding complexity.
3.4 Performance Analysis & Simulation Results
3.4.1 Density Evolution Analysis
We first analyze the error-correction capability of the two modifications of Algorithm E from Sec. 3.1 and Sec. 3.2. As a first estimate of the code performance, we employ the DE analysis [12] to determine the iterative decoding threshold of an unstructured LDPC code ensemble over a BSC with error probability $\delta$. The decoding threshold is denoted by $\delta^\star$ and represents the largest channel error probability for which, in the limit of large $n$ and large $\ell_{\max}$, the bit error probability of a code picked randomly from the ensemble becomes vanishingly small [12]. We then get a rough estimate of the error-correction capability as $\hat{t} \approx n \delta^\star$. (Note that at the decoding threshold a vanishingly small bit error probability may not imply a vanishingly small block error probability. However, for the regular MDPC ensembles under consideration, the threshold on the bit error probability and the one on the block error probability coincide over binary-input output-symmetric memoryless channels under BP decoding [23]. In our estimate, we implicitly assume that the result extends to Algorithm E and its variants.)
Note that, for a moderate block length $n$, $\hat{t}$ provides only a coarse estimate of the number of errors at which we expect the FER to rapidly decrease (the so-called waterfall region), with the accuracy of the prediction improving as $n$ grows large. With a slight abuse of wording, we refer to $\hat{t}$ as decoding threshold as well. We further denote the decoding thresholds under Algorithm E, REMP1 and REMP2 as $\hat{t}_{\mathrm{E}}$, $\hat{t}_{\mathrm{R1}}$ and $\hat{t}_{\mathrm{R2}}$, respectively. The decoding thresholds do not only depend on the selected algorithm, but also on the algorithm parameters. The results for the MDPC ensembles under consideration are summarized in Table 1. For Algorithm E, the value of $w$ has been chosen to maximize the decoding threshold. Remarkably, the variants REMP1 and REMP2 do not yield a threshold degradation, and in some cases they even provide slight gains for suitable choices of the parameters.
Table 1: Decoding thresholds of REMP1, REMP2 and Algorithm E.

| Security Level | n     | w   | d_v | REMP1 ($p_0$, $\delta_p$, $\hat{t}$) | REMP2 ($p_0$, $\delta_p$, $\hat{t}$) | Alg. E ($\hat{t}$) |
|----------------|-------|-----|-----|--------------------------------------|--------------------------------------|--------------------|
| 80             | 9602  | 90  | 45  | 0.001, 0, 107(13)                    | 0.1, 0, 108(13)                      | 106(14)            |
| 128            | 19714 | 142 | 71  | 0.1, 0.001, 153(18)                  | 0.76, 0, 157(14)                     | 153(18)            |
| 256            | 65542 | 274 | 137 | 0.002, 0.0002, 296(27)               | 0.65, 0, 301(23)                     | 294(26)            |
3.4.2 Simulation Results
To validate the performance estimates obtained through DE, we simulated the error-correction capability of the decoding schemes from Section 2.3 and Section 3. The results in terms of FER as a function of the error pattern weight $t$ are depicted in Figure 2. The results confirm the trend predicted by the DE analysis. In particular, the error-correction capability improves upon existing decoding algorithms. Even for erasure probability values chosen to conceal the structure of $\mathbf{h}_0$ (yielding a suboptimal choice with respect to the error-correction performance), REMP2 outperforms Algorithm E and the BF/MF algorithms.
3.5 Resilience against the GJS Attack
We now analyze the resilience of the proposed decoding schemes against the GJS attack. For the REMP variants of Algorithm E as well as for the MBP decoder, we performed Monte Carlo simulations for codes randomly picked from $\mathcal{C}$, collecting a sufficient number of decoding failures (frame errors) for a fixed maximum number of iterations. For each multiplicity in the distance profile, several error sets (simulation points) were simulated. The simulation results in Figure 3 show that, for an appropriate choice of parameters, the REMP1 and REMP2 decoding schemes have a similar FER for all multiplicities appearing in $D(\mathbf{h}_0)$. Hence, the reconstruction of the distance profile from the observed FER is not possible.
Figure 4 shows that, for an appropriate choice of parameters, the MBP algorithm is also able to conceal the structure of the secret key. For the choice of parameters that conceals the secret key, the FER of MBP decoding and REMP2 decoding at a given error weight $t$ is similar. Hence, due to the higher complexity of MBP, the REMP scheme is preferable.
To conceal the structure of $\mathbf{h}_0$, the choice of the erasure probability for a particular error weight $t$ is crucial. If the probability is chosen too large, the picture is inverted, i.e. higher multiplicities have a higher FER than lower multiplicities. Thus, the error weight should be computed after decoding, and ciphertexts generated with an error weight different from $t$ should be rejected to prevent attacks that exploit this effect.
4 Conclusions
We analyzed classical iterative decoding schemes for MDPC codes with respect to their error-correction capability as well as their resilience against a recent key-recovery attack by GJS. The simulation results show that a decoding scheme by MF is able to defeat the attack for an appropriate choice of decoding parameters.
A new decoding method called REMP was proposed that improves existing MP decoding algorithms with respect to their error-correction capability as well as their resilience against the GJS attack. Two REMP variants of an existing MP decoder with improved error-correction performance for MDPC codes compared to existing schemes were presented and analyzed. The simulation results show that the proposed REMP schemes are able to defeat the GJS attack for an appropriate choice of decoding parameters.
A new variant of the belief propagation decoding algorithm that is able to resist the GJS attack was presented.
References
 [1] P. W. Shor, “PolynomialTime Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” SIAM J. Comput., vol. 26, no. 5, pp. 1484–1509, 1997.
 [2] R. J. McEliece, “A PublicKey Cryptosystem Based on Algebraic Codes,” Deep Space Network Progress Report, vol. 44, pp. 114–116, 1978.
 [3] E. M. Gabidulin, A. Paramonov, and O. Tretjakov, “Ideals Over a NonCommutative Ring and Their Application in Cryptology,” in 10th Annual International Conference on Theory and Application of Cryptographic Techniques (EUROCRYPT), Brighton, UK, Apr. 1991, pp. 482–489.
 [4] C. Monico, J. Rosenthal, and A. Shokrollahi, “Using Low Density Parity Check Codes in the McEliece Cryptosystem,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Sorrento, Italy, Jun. 2000, p. 215.
 [5] M. Baldi, M. Bianchi, F. Chiaraluce, J. Rosenthal, and D. Schipani, “Enhanced Public Key Security for the McEliece Cryptosystem,” Journal of Cryptology, vol. 29, no. 1, pp. 1–27, Jan. 2016.
 [6] A. Couvreur, A. Otmani, J.P. Tillich, and V. GauthierUmana, “A PolynomialTime Attack on the BBCRS Scheme,” in Proc. 18th IACR International Conference on Practice and Theory in PublicKey Cryptography, Gaithersburg, MD, USA, Mar. 2015, pp. 175–193.
 [7] S. Ouzan and Y. Be’ery, “ModerateDensity ParityCheck Codes,” arXiv preprint arXiv:0911.3262, 2009.
 [8] R. Misoczki, J. P. Tillich, N. Sendrier, and P. S. L. M. Barreto, “MDPCMcEliece: New McEliece variants from Moderate Density ParityCheck codes,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Istanbul, Turkey, Jul. 2013, pp. 2069–2073.
 [9] Q. Guo, T. Johansson, and P. Stankovski, “A Key Recovery Attack on MDPC with CCA Security Using Decoding Errors,” in 22nd Annual International Conference on the Theory and Applications of Cryptology and Information Security (ASIACRYPT), Hanoi, Vietnam, Dec. 2016, pp. 789–815.
 [10] K. Kobara and H. Imai, “Semantically secure McEliece publickey cryptosystemsconversions for McEliece PKC,” in 4th International Workshop on Practice and Theory in Public Key Cryptography (PKC), Cheju Island, South Korea, Feb. 2001, pp. 19–35.
 [11] N. Sendrier, “CodeBased Cryptography: State of the Art and Perspectives,” IEEE Security & Privacy, vol. 15, no. 4, pp. 44–50, Aug. 2017.
 [12] T. Richardson and R. Urbanke, “The Capacity of LowDensity ParityCheck Codes Under MessagePassing Decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599 – 618, Feb. 2001.
 [13] C. Lee, “Some Properties of Nonbinary ErrorCorrecting Codes,” IRE Transactions on Information Theory, vol. 4, no. 2, pp. 77–82, Jun. 1958.
 [14] T. Fabšič, V. Hromada, P. Stankovski, P. Zajac, Q. Guo, and T. Johansson, “A Reaction Attack on the QCLDPC McEliece Cryptosystem,” in International Workshop on PostQuantum Cryptography, 2017, pp. 51–68.
 [15] P. S. L. M. Barreto, S. Gueron, T. Güneysu, R. Misoczki, E. Persichetti, N. Sendrier, and J.-P. Tillich, "CAKE: Code-Based Algorithm for Key Encapsulation," Cryptology ePrint Archive, Report 2017/757, 2017, http://eprint.iacr.org/2017/757.
 [16] M. Tanner, “A recursive approach to low complexity codes,” IEEE Trans. Inf. Theory, vol. 27, no. 5, pp. 533–547, Sep. 1981.
 [17] W. Ryan and S. Lin, Channel codes – Classical and modern. New York, NY, USA: Cambridge University Press, 2009.
 [18] R. Gallager, Lowdensity paritycheck codes. Cambridge, MA, USA: MIT Press, 1963.
 [19] W. C. Huffman and V. Pless, Fundamentals of ErrorCorrecting Codes. Cambridge University Press, 2010.
 [20] N. Miladinovic and M. P. Fossorier, “Improved BitFlipping Decoding of LowDensity ParityCheck Codes,” IEEE Trans. Inf. Theory, vol. 51, no. 4, pp. 1594–1606, Apr. 2005.
 [21] M. Mitzenmacher, “A Note on Low Density Parity Check Codes for Erasures and Errors,” SRC Technical Note, vol. 1998, no. 17, 1998.
 [22] S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Lett., vol. 5, no. 2, pp. 58–60, Feb. 2001.
 [23] M. Lentmaier, D. V. Truhachev, K. S. Zigangirov, and D. J. Costello, “An analysis of the block error probability performance of iterative decoding,” IEEE Trans. Inf. Theory, vol. 51, no. 11, pp. 3834–3855, Nov 2005.