Low-density parity-check (LDPC) codes, first discovered by Gallager, were not used in practice for several decades due to the lack of efficient decoding algorithms. They were rediscovered by Luby et al. and MacKay [7, 8]. Thanks to their capacity-approaching performance and low iterative decoding complexity, LDPC codes have been applied in many wireless communication systems, e.g., in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S.2) standard. With efforts from both academia and industry, LDPC codes have also been adopted as the eMBB traffic-channel coding scheme of the fifth-generation wireless communication standard.
To trace the performance of iterative decoding, Richardson and Urbanke proposed an algorithm called density evolution (DE) to calculate the probability density function of the messages at variable nodes and check nodes in each iteration. The DE algorithm shows that LDPC codes with a certain degree distribution attain an arbitrarily small bit-error rate (BER) as the code length tends to infinity, provided the level of channel noise is below a threshold; otherwise, the BER is bounded away from zero. Density evolution is useful for finding theoretically good degree distributions, which is fundamental for the construction of practical LDPC codes. With the DE algorithm, Chung et al. found a rate-1/2 code with a good degree distribution that performs within 0.0045 dB of the Shannon limit for the additive white Gaussian noise (AWGN) channel.
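The threshold behavior described above can be illustrated with a deliberately simplified form of density evolution. For hard-decision Gallager-A decoding of a (dv, dc)-regular code on the BSC, the message density collapses to a single error probability, and the scalar recursion below (a sketch under that simplification, not the full-PDF DE discussed in this paper) exhibits the same below/above-threshold dichotomy:

```python
# Illustrative sketch (not full-PDF density evolution): hard-decision
# Gallager-A DE for a (dv, dc)-regular LDPC code on the BSC, where the
# message density reduces to a single error probability.

def gallager_a_de(p0, dv=3, dc=6, iters=200):
    """Track the message error probability over decoding iterations."""
    p = p0
    for _ in range(iters):
        # Probability that a check-to-variable message is in error.
        q = (1.0 - (1.0 - 2.0 * p) ** (dc - 1)) / 2.0
        # Variable-node update: the channel value is flipped only when
        # all dv-1 incoming check messages disagree with it.
        p = p0 * (1.0 - (1.0 - q) ** (dv - 1)) + (1.0 - p0) * q ** (dv - 1)
    return p

# Below the Gallager-A threshold (about 0.039 for the (3,6) code) the
# error probability is driven to zero; above it, the recursion stalls
# at a nonzero fixed point.
print(gallager_a_de(0.03))  # -> essentially 0
print(gallager_a_de(0.06))  # -> bounded away from 0
```

The same two-regime picture carries over to the full-PDF DE used in the rest of the paper; only the tracked object (a scalar here, a density there) changes.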
One of the most important assumptions of the density evolution algorithm is the local-tree assumption, namely, that the computation subgraph spanned by the first $l$ iterations remains a tree. However, realistic codes usually have cycles in their Tanner graph representation, which render the assumption invalid after a sufficient number of iterations. Intuitively, cycles, especially short ones, obstruct the flow of extrinsic information among the nodes. There are several successful algorithms for constructing large-girth LDPC codes. One such algorithm is the Tanner-graph-based progressive-edge-growth (PEG) algorithm proposed by Hu et al., which aims to maximize the local girth. However, girth is not the only factor affecting the performance of LDPC codes. Tian et al. pointed out that the connectivity among nodes is also important. The extrinsic message degree (EMD) measures variable-node connectivity in the bipartite graph of LDPC codes. The approximate cycle EMD (ACE) is defined as an upper bound on the EMD of all variable nodes in a given cycle. Combining the PEG and ACE criteria, Xiao and Banihashemi proposed an improved PEG algorithm.
Although LDPC codes have capacity-approaching performance, it has been proved that they cannot reach the Shannon limit with bounded variable degrees, even as the code length tends to infinity.
Polar codes, proposed by Arikan, have been mathematically proved to achieve the Shannon limit on the binary symmetric channel (BSC) as the code length tends to infinity. Compared to LDPC codes, polar codes have better error-correcting performance at short code lengths or low rates.
However, compared with LDPC codes, polar codes have higher decoding complexity, due to the fact that their parity-check matrix is not sparse.
It is an interesting question whether we can combine LDPC codes with polar codes. Inspired by concatenated coding, it has been proposed to concatenate a polar code as the outer code with an LDPC code as the inner code. For practical encoding at finite length, the ideally polarized channels of a polar code are only semi-polarized. In these cases, LDPC codes can be used to further protect the bits transmitted on such channels. However, the concatenation of polar and LDPC codes does not address the high decoding complexity of polar codes; a practical low-complexity soft decoding algorithm for polar codes remains to be found. Instead of improving polar codes with LDPC codes, it may be a good idea to improve LDPC codes with polar codes.
In this paper, we propose a new method of constructing LDPC codes inspired by polar codes. Through judicious placement of the edges connecting the variable and parity-check nodes, we achieve polarization of the variable and parity-check nodes. At a slight increase in decoding complexity, the new code enjoys a lower BER and faster convergence on the BSC. Moreover, we have not observed an error floor in our simulated cases, a known problem for conventional LDPC codes.
The organization of the paper is as follows. In the next section, we review basic concepts such as the Tanner graph and degree polynomials of LDPC codes, propose a new method of constructing LDPC codes, called polarized LDPC codes, inspired by polar codes, and discuss the corresponding improvements to density evolution and the decoding algorithm. In Section III, using the improved PEG algorithm, we construct realistic polarized LDPC codes and present simulation results comparing the performance of polarized and standard LDPC codes on the binary symmetric channel. Finally, conclusions are drawn in Section IV.
II Polarized LDPC
The Tanner graph, proposed by Tanner in 1981 with the original purpose of constructing long error-correcting codes from sub-codes, is one of the most important tools for describing LDPC codes. A Tanner graph is a bipartite graph. One type of node, called a variable node, represents a bit of the codeword. The other type, called a check node, represents a parity-check equation. The edges between variable nodes and check nodes indicate which coded bits each check equation involves.
The number of edges incident to a node is defined as the degree of the node. A Tanner graph corresponds to a parity-check matrix $H$: a variable node corresponds to a column of the matrix, and a check node corresponds to a row. If there is an edge between the $i$-th check node and the $j$-th variable node, the $(i,j)$-th element of $H$ is set to 1. A random irregular LDPC code can be defined by two degree polynomials:

$$\lambda(x) = \sum_{i \ge 2} \lambda_i x^{i-1}, \qquad \rho(x) = \sum_{j \ge 2} \rho_j x^{j-1},$$
where $\lambda(x)$ is the variable degree polynomial and $\rho(x)$ is the check degree polynomial; the coefficients $\lambda_i$ and $\rho_j$ represent the fraction of edges connected to degree-$i$ variable nodes and to degree-$j$ check nodes, respectively. Viewed from another perspective, $\lambda_i$ can be interpreted as the probability that any check node shares an edge with a degree-$i$ variable node, and $\rho_j$ as the probability that any variable node shares an edge with a degree-$j$ check node.
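As a small worked example of the degree-polynomial bookkeeping, the design rate implied by a pair $(\lambda, \rho)$ is $R = 1 - (\sum_j \rho_j/j)/(\sum_i \lambda_i/i)$; the sketch below (the function name is ours) evaluates this formula:

```python
# Sketch: design rate implied by an edge-perspective degree-distribution
# pair, R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i).

def design_rate(lam, rho):
    """lam, rho: dicts mapping node degree -> fraction of edges."""
    inv_v = sum(frac / d for d, frac in lam.items())   # ~ n_v / E
    inv_c = sum(frac / d for d, frac in rho.items())   # ~ n_c / E
    return 1.0 - inv_c / inv_v

# A (3,6)-regular code: every edge meets a degree-3 variable node and a
# degree-6 check node, giving the familiar rate 1/2.
print(design_rate({3: 1.0}, {6: 1.0}))           # -> 0.5
# An irregular example: half the edges on degree-2 variables.
print(design_rate({2: 0.5, 3: 0.5}, {6: 1.0}))   # -> 0.6
```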
Given a received symbol $y$ and a transmitted bit $x$, we define the log-likelihood ratio (LLR) in the form of

$$L(y) = \log \frac{P(y \mid x=0)}{P(y \mid x=1)}.$$
Let $m_{vc}$ be the LLR message from variable node $v$ to check node $c$, and $m_{cv}$ the LLR message from check node $c$ to variable node $v$. Let $m_0$ be the LLR message from the channel. Let $\mathcal{C}(v)$ denote the set of check nodes connected to variable node $v$; similarly, define $\mathcal{V}(c)$ as the set of variable nodes connected to check node $c$. According to the sum-product decoding algorithm, $m_{vc}$ is updated by:

$$m_{vc} = m_0 + \sum_{c' \in \mathcal{C}(v)\setminus\{c\}} m_{c'v}, \tag{4}$$
and $m_{cv}$ is updated by the "tanh" rule:

$$\tanh\frac{m_{cv}}{2} = \prod_{v' \in \mathcal{V}(c)\setminus\{v\}} \tanh\frac{m_{v'c}}{2}. \tag{5}$$
Define the $\boxplus$-calculation as

$$a \boxplus b = 2\tanh^{-1}\!\left(\tanh\frac{a}{2}\,\tanh\frac{b}{2}\right).$$

We can then rewrite the "tanh" rule as

$$m_{cv} = \mathop{\boxplus}_{v' \in \mathcal{V}(c)\setminus\{v\}} m_{v'c}.$$
If we view the LLR messages as random variables due to the stochastic nature of the channel, then based on the sum-product decoding rules (4) and (5), the corresponding transformation rules for the probability density functions (PDFs) of the LLRs can be derived. The update rule for the PDF of the LLR on the variable-node side is essentially a convolution, and the update rule on the check-node side can be expressed in a form similar to a convolution; for details, see the literature on density evolution.
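The two sum-product updates, (4) and (5), can be sketched directly on lists of LLR messages; the helper names and the message values below are hypothetical:

```python
import math

# Sketch of the sum-product updates (4) and (5) on LLR messages.

def variable_update(m0, incoming):
    """Rule (4): channel LLR plus all incoming check LLRs, excluding
    the destination edge's own input (extrinsic principle)."""
    total = m0 + sum(incoming)
    return [total - m for m in incoming]

def check_update(incoming):
    """Rule (5), the 'tanh' rule, applied per outgoing edge."""
    out = []
    for k in range(len(incoming)):
        prod = 1.0
        for i, m in enumerate(incoming):
            if i != k:  # again, exclude the destination edge
                prod *= math.tanh(m / 2.0)
        out.append(2.0 * math.atanh(prod))
    return out

# Hypothetical messages at a degree-3 variable node and a check node.
msgs_vc = variable_update(0.8, [1.2, -0.4, 0.5])
msgs_cv = check_update([1.0, 2.0, 0.5])
```

Note that for a degree-2 check node, rule (5) simply forwards each input to the other edge, e.g., `check_update([1.0, 2.0])` returns approximately `[2.0, 1.0]`.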
II-A LDPC polarization: observation
Assuming that the all-0 word is sent and the crossover probability of the BSC is $\varepsilon$, the initial density function of $m_0$ for any $v$ is

$$f_0(x) = (1-\varepsilon)\,\delta\!\left(x - \log\frac{1-\varepsilon}{\varepsilon}\right) + \varepsilon\,\delta\!\left(x + \log\frac{1-\varepsilon}{\varepsilon}\right).$$
We have the following important observation on the update rules (4) and (5). Under the assumption of the all-0 codeword, adding an extra edge to the neighborhood set $\mathcal{C}(v)\setminus\{c\}$ will tend to increase the LLR $m_{vc}$. On the other hand, because $|\tanh(x/2)| < 1$, removing an edge from the neighborhood set $\mathcal{V}(c)\setminus\{v\}$ will tend to increase the LLR $m_{cv}$. Loosely speaking, variable nodes with higher degrees are more reliable, while check nodes with lower degrees are more reliable. Thus, if we connect higher-degree variable nodes with lower-degree check nodes, and lower-degree variable nodes with higher-degree check nodes, we will create a polarization effect: higher-degree variable nodes become more reliable and lower-degree ones less reliable.
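A quick numeric check of this observation, using the boxplus form of the check-node rule (illustrative LLR values): combining more check-node inputs shrinks the outgoing LLR magnitude, so lower-degree check nodes emit more reliable messages, while rule (4) simply adds LLRs at a variable node, so higher-degree variable nodes accumulate evidence faster.

```python
import math

def boxplus(a, b):
    # the pairwise form of the check-node 'tanh' rule (5)
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

# Each extra input at a check node strictly shrinks the output magnitude:
print(boxplus(2.0, 2.0))                 # < 2.0
print(boxplus(boxplus(2.0, 2.0), 2.0))   # smaller still
# By contrast, each extra input at a variable node adds to the LLR sum.
```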
In standard LDPC codes, under the constraint of the degree polynomials, the edges between variable nodes and check nodes are established randomly. In contrast to this random connectivity, polar codes have connections that are structured and polarized. Take the order-4 polar transform matrix as an example:

$$G_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}.$$
Unlike the $H$ matrix of an LDPC code, the columns of $G_4$ can be thought of as check nodes (the polarized channels), while the rows correspond to variable nodes (the original channels). A polarized channel with higher capacity has a lower "check" degree, while an original channel with a higher "variable" degree has a higher probability of correcting errors. Using successive cancellation, the decoding procedure first decodes the most reliable bits, cancels their interference, and then detects the weaker bits with the prior information.
A polarized LDPC code may be preferable for decoding performance and complexity. Intuitively, polarized bits that are reliable can be stabilized and decoded quickly, which helps cancel their interference at the check nodes connected to them. This in turn helps decode other variable bits with lower reliability. Such benefits of polarization are well established for polar codes and motivate our polarized LDPC design below.
II-B LDPC polarization: code construction
A polarized LDPC code can be constructed by connecting low-degree variable nodes to high-degree check nodes, and low-degree check nodes to high-degree variable nodes. Fig. 1 gives an example of a polarized LDPC code. We divide the nodes into two layers. The high layer contains the variable nodes of higher degree (degree 3) and the check nodes of lower degree (degree 4). The low layer contains the degree-2 variable nodes and the degree-6 check nodes. Dashed lines indicate the connections between the high-degree variable nodes and the high-degree check nodes (the inter-layer connections).
In general, we divide both variable nodes and check nodes into layers, with the degrees of variable nodes in descending order from the top layer to the bottom layer, and the degrees of check nodes in ascending order. We connect the variable nodes in a layer to check nodes in the same layer and in all layers below it.
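The layering rule can be sketched as a toy construction. The greedy "least-loaded check" choice below is only a crude stand-in for the PEG criterion used later in the paper, and the layer sizes and degrees are illustrative:

```python
# Toy sketch of the layered (polarized) edge-placement rule: variable
# nodes of a layer may only connect to check nodes in the same layer or
# in layers below it.

def build_polarized_edges(layers):
    """layers: list of (num_vars, var_degree, num_checks, check_degree),
    ordered from the top layer (high var degree, low check degree) down.
    Returns a list of (variable_id, check_id) edges."""
    edges, check_ids, check_load = [], [], {}
    offset = 0
    for (_, _, nc, _) in layers:          # assign global check ids per layer
        check_ids.append(list(range(offset, offset + nc)))
        offset += nc
    var_offset = 0
    for l, (nv, dv, _, _) in enumerate(layers):
        # checks allowed for layer l: its own layer and every layer below
        allowed = [c for ids in check_ids[l:] for c in ids]
        for v in range(var_offset, var_offset + nv):
            used = set()
            for _ in range(dv):
                # greedy: the least-loaded allowed check not already used
                c = min((c for c in allowed if c not in used),
                        key=lambda c: check_load.get(c, 0))
                edges.append((v, c))
                used.add(c)
                check_load[c] = check_load.get(c, 0) + 1
        var_offset += nv
    return edges

# Two layers as in Fig. 1: degree-3 variables / degree-4 checks on top,
# degree-2 variables / degree-6 checks below (sizes are illustrative).
edges = build_polarized_edges([(8, 3, 6, 4), (9, 2, 3, 6)])
```

The greedy load balancing only approximates the target check degrees; a real construction would enforce them (and the girth criterion) as PEG does.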
In standard LDPC codes, variable nodes and check nodes are each treated as a single set, because the connectivity between variable nodes and check nodes is independent of their degrees, so the nodes in the same set receive the same information; for example, the output LLR of the variable nodes is the incoming LLR for check nodes of any degree. In polarized LDPC codes, however, variable nodes of different degrees must be treated as different sets, because they have different probabilities of sharing an edge with a degree-$j$ check node for any given $j$; the same holds for the check nodes. Nodes of different degrees then have different degree polynomials. Let $\rho^{(i)}(x)$ denote the check degree polynomial of a variable node of degree $i$, and $\lambda^{(j)}(x)$ the variable degree polynomial of a check node of degree $j$. Different from standard LDPC codes, our polarized LDPC design creates layers with different rates according to the degree of the variable nodes, in a way similar to polar codes. Different layers of variable nodes exchange information through their common check nodes, via the inter-layer connections, as indicated by the dashed lines in the example in Fig. 1.
After the higher-degree variable nodes are decoded, the LLR sent to the common check nodes is set to infinity (assuming the all-0 codeword is sent). The next-layer variable nodes can then cancel this interference in the decoding procedure and gain more capability to correct errors. Because traditional methods of analyzing LDPC codes are based on random connections between nodes, we propose an improved density evolution analysis method for polarized LDPC codes; see Algorithm 1.
In the standard DE algorithm, all output LLRs of variable/check nodes are assumed to be independent and identically distributed (i.i.d.) variables, which is not true in polarized LDPC codes. The first step of our algorithm is to calculate the check degree polynomial of a degree-$i$ variable node and regard its coefficient $\rho^{(i)}_j$ as the probability that a degree-$i$ variable node shares an edge with a degree-$j$ check node. In the same way, we calculate a degree-$j$ check node's variable degree polynomial and regard its coefficient as $\lambda^{(j)}_i$. In the iterations, each node of a different degree has a unique LLR mixture as input and a different output, which are calculated and stored separately.
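The per-degree-class bookkeeping of Algorithm 1 is easiest to illustrate on the binary erasure channel, where each class's density reduces to a single erasure probability (the paper itself tracks full LLR densities on the BSC). The mixing matrices below are hypothetical example values, with the polarized pairing favoring degree-3 variables with degree-4 checks:

```python
# Illustration of per-degree-class density evolution on the BEC: two
# variable classes (degrees 3 and 2) and two check classes (degrees 4
# and 6). The mixing matrices are hypothetical example values.

def per_class_de(eps, mix_v, mix_c, dv=(3, 2), dc=(4, 6), iters=30):
    """mix_v[i][j]: prob. an edge of variable class i meets check class j;
    mix_c[j][i]: prob. an edge of check class j meets variable class i."""
    x = [eps] * len(dv)  # erasure prob. of v->c messages, per class
    for _ in range(iters):
        # check class j: output erased unless all other inputs are known
        y = [1.0 - (1.0 - sum(mix_c[j][i] * x[i] for i in range(len(dv))))
             ** (dc[j] - 1) for j in range(len(dc))]
        # variable class i: erased iff channel and all other inputs erased
        x = [eps * sum(mix_v[i][j] * y[j] for j in range(len(dc)))
             ** (dv[i] - 1) for i in range(len(dv))]
    return x

# Polarized mixing: degree-3 vars mostly see degree-4 checks, and vice versa.
polarized = per_class_de(0.45, [[0.9, 0.1], [0.1, 0.9]],
                         [[0.9, 0.1], [0.1, 0.9]])
# Uniform mixing: the standard random-connection baseline.
uniform = per_class_de(0.45, [[0.5, 0.5], [0.5, 0.5]],
                       [[0.5, 0.5], [0.5, 0.5]])
# The high layer (degree-3 variables) polarizes toward low erasure,
# while the low layer becomes somewhat worse than under random mixing.
```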
II-C LDPC polarization: decoding
The decoding procedure of polarized LDPC codes proceeds layer by layer. The higher-layer bits have their errors corrected within a few iterations, while the lower-layer bits need more iterations. When the probability of error of the high layer tends to zero, the information passed from the lower-degree variable nodes, which may still contain uncorrected bits, would interfere with the already corrected bits. Therefore, once the density of the degree-$i$ variable nodes tends to the "point mass at infinity", meaning their probability of error tends to zero, we modify the message passing so that these nodes only send information to the uncorrected nodes and no longer receive information from them. However, this modification of the standard decoding procedure is not critical.
To see the effect of polarization, we use an LDPC code with fixed variable and check degree polynomials and construct the bipartite graph in three layers according to our polarized code construction rules. Assuming that the all-0 word is sent, Fig. 2 shows the correct-decoding probability of check nodes of different degrees under polarized and standard density evolution over 10 iterations on the BSC. In this simulation, the crossover probability is higher than the correct-decoding threshold. The correct probability of the standard LDPC code remains constant over the iterations, which is consistent with the threshold phenomenon of LDPC codes.
The simulation result indicates that the check nodes in standard LDPC codes behave similarly due to the random connections. For polarized LDPC codes, the degree-12 check-node curve has a larger growth rate than the other two curves thanks to the structured nature of the polarized connections. As a result, polarized LDPC codes can be decoded layer by layer. The lower-degree check nodes receive more reliable LLRs from the higher-degree variable nodes, so the high layer obtains the greatest gain, which is the aim of the polarized LDPC design. Thus, with the same degree polynomials, polarized LDPC codes can decode part of the information bits instead of discarding the whole codeword as standard LDPC codes do. This property of partial decoding can be useful for certain applications.
III Simulation Results
In this section, we construct realistic codes to check the performance on the BSC. We choose the improved progressive-edge-growth (IPEG) algorithm to construct polarized LDPC codes. Different from the original IPEG algorithm, to generate polarized LDPC codes the variable-node degree sequence should be in non-increasing order while the check-node degree sequence should be in non-decreasing order, so that the edges between the high-degree variable nodes and the low-degree check nodes are generated first. The IPEG algorithm then makes as many connections as possible from variable nodes in a higher layer to check nodes in neighboring layers.
Using the IPEG algorithm, we construct four irregular polarized LDPC codes with the same code length, rate, and variable and check degree values. All four codes have length 16384 and rate 5/6. The only difference among the four codes is the coefficients of their variable and check degree polynomials, which determine the correlation between layers. Table I gives the coefficients of the degree polynomials. We simulate on the BSC with iterative belief-propagation decoding. The maximal number of iterations is 50, and a fixed number of frames is simulated for each crossover probability.
Table I. Coefficients of the degree polynomials of codes A-D, for variable/check degree values 2/12, 4/24, and 6/36.
Fig. 3 gives the FER performance of the four polarized LDPC codes and the DVB-S.2 short LDPC code. In the high crossover probability region, the polarized LDPC codes and the DVB-S.2 code have close performance. In the low crossover probability region, the DVB-S.2 LDPC code shows a clear error-floor phenomenon, whereas codes C and D have a much lower error floor, and codes A and B continue decreasing toward zero. We also note that the coefficients of the degree polynomials have a strong influence on the code performance. It appears that a smaller fraction of low-degree variable nodes leads to a lower error floor, while a larger fraction of low-degree nodes may have some advantage in the high crossover probability region. The exact effect of the degree-polynomial coefficients on the code performance deserves further investigation.
Fig. 4 gives the FER performance of the different layers of the polarized LDPC codes. As we can see, the degree-4 variable nodes have poorer FER performance than the degree-6 variable nodes, and the degree-4 variable nodes contribute almost all the frame errors, especially in the low crossover probability region, which is consistent with the simulation result of the DE algorithm. Based on the FER performance, information bits should be placed into different layers according to their importance and QoS requirements to reduce the probability of re-transmission, making good use of the error-correction capability offered by polarization. The coefficients of the degree polynomials still have a strong influence on the performance, especially for the degree-4 variable nodes. Although the degree-4 variable nodes have worse FER, code D has more degree-6 variable nodes, so the tradeoff between the performance of the lower layer and the length of the higher layer should be taken into consideration.
All four codes considered thus far have two layers (not counting the degree-2 variable nodes). A natural question is whether the number of layers influences the performance. Fig. 5 shows the FER curves of LDPC codes with different numbers of layers. The 2-layer LDPC code has degree polynomials
The 3-layer LDPC code has degree polynomials
The 4-layer LDPC code has degree polynomials
In general, LDPC codes with more layers seem to offer better performance, especially in the high crossover probability regime. However, polarized LDPC codes with more layers may suffer from error propagation: if a higher layer of a codeword contains an erroneous bit, the error interferes with the decoding of the lower-layer bits.
IV Conclusion
We proposed a polarized LDPC code design that introduces polarization in the reliability of the variable and check nodes through judicious connectivity between the bipartite nodes. Polarized LDPC codes offer a great advantage in FER on the binary symmetric channel at a slightly increased cost of decoding complexity per iteration. They can reach a much lower error floor, which is useful in scenarios where re-transmission of erroneous frames is costly or impossible, such as satellite communications. Moreover, the different error-correction capabilities offered by polarization give more flexibility in satisfying different QoS requirements.
This work was supported by NSF of China Grant No. 61571412 and NSF of USA Grant No. 1711922.
- (Jun. 2009) Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inform. Theory 55 (7), pp. 3051–3073.
- (Feb. 2001) On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit. IEEE Commun. Lett. 5 (2), pp. 58–60.
- (Mar. 2013) On finite-length performance of polar codes: stopping sets, error floor, and concatenated design. IEEE Trans. Commun. 61 (3), pp. 919–929.
- (1965) Concatenated codes. Ph.D. Thesis, MIT.
- (Jan. 1962) Low-density parity-check codes. IRE Trans. Inform. Theory 8 (1), pp. 21–28.
- (Jan. 2005) Regular and irregular progressive edge-growth Tanner graphs. IEEE Trans. Inform. Theory 51 (1), pp. 386–398.
- (1998) Analysis of low density codes and improved designs using irregular graphs. In Proc. 30th Annu. ACM Symp. Theory Comput. (STOC), pp. 249–258.
- (Mar. 1999) Good error-correcting codes based on very sparse matrices. IEEE Trans. Inform. Theory 45 (2), pp. 399–431.
- (Feb. 2001) The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. Inform. Theory 47 (2), pp. 599–618.
- (Sept. 1981) A recursive approach to low complexity codes. IEEE Trans. Inform. Theory 27 (5), pp. 533–547.
- (Aug. 2004) Selective avoidance of cycles in irregular LDPC code construction. IEEE Trans. Commun. 52 (8), pp. 1242–1247.
- (Dec. 2004) Improved progressive-edge-growth (PEG) construction of irregular LDPC codes. IEEE Commun. Lett. 8 (12), pp. 715–717.
- (Jun. 2018) An improved belief propagation decoding of concatenated polar codes with bit mapping. IEEE Commun. Lett. 22 (6), pp. 1160–1163.