Progressive Bit-Flipping Decoding of Polar Codes Over Layered Critical Sets

12/09/2017 · by Zhaoyang Zhang, et al.

In successive cancellation (SC) polar decoding, an incorrect estimate of any prior unfrozen bit may cause severe error propagation in the subsequent decoding, so it is desirable to detect and correct errors as early as possible. In this paper, we first construct a critical set S of unfrozen bits, which with high probability (typically >99%) includes the bit where the first error happens. Then we develop a progressive multi-level bit-flipping decoding algorithm to correct multiple errors over multi-layer critical sets, each of which is constructed from the remaining undecoded subtree associated with the previous layer. The level in fact indicates the number of independent errors that can be corrected. We show that as the level increases, the block error rate (BLER) performance of the proposed progressive bit-flipping decoder competes with that of the corresponding cyclic redundancy check (CRC) aided successive cancellation list (CA-SCL) decoder; e.g., a level-4 progressive bit-flipping decoder is comparable to the CA-SCL decoder with a list size of L=32. Furthermore, the average complexity of the proposed algorithm is much lower than that of an SCL decoder (and is similar to that of SC decoding) at medium to high signal-to-noise ratio (SNR).


I Introduction

Polar codes, as the first provably capacity-achieving codes for any symmetric binary-input discrete memoryless channel (B-DMC) under efficient successive cancellation (SC) decoding [1], have recently been adopted as the channel coding scheme for control information in the 5G enhanced Mobile BroadBand (eMBB) scenario [2]. Different from data packets, the block-length for control messages is typically short or moderate due to coding granularity. However, the performance of such finite block-length polar codes is still far from satisfactory.

To improve the performance of polar codes in the finite block-length case, Tal and Vardy presented a successive cancellation list (SCL) decoder in [3], which helps polar codes compete successfully with low-density parity-check (LDPC) codes. Subsequently, adaptive SCL decoding and cyclic redundancy check (CRC) aided SCL (CA-SCL) decoding were proposed in [4, 5]. Moreover, the performance of SCL decoding was theoretically analyzed in [6]. Although the SCL decoder significantly improves the block error rate (BLER) of finite block-length polar codes, it suffers from large storage overhead and high computational complexity, both of which grow linearly with the list size. To address this issue, the authors in [7] put forward an SC flip decoder that tries to correct the first erroneous estimate of an unfrozen bit, and indicated that the decoding performance could be dramatically improved if the first incorrect hard decision were flipped. This decoder was further modified in [8] to recover two incorrect hard decisions, which yielded significant gains in decoding performance, competing with a CA-SCL decoder of small list size. Furthermore, [8] defined a new metric to determine the flipping positions, which reduced complexity compared to the log-likelihood ratio (LLR) metric exploited in [7]. Nonetheless, with such a metric, the search scope for the first erroneous hard decision is still the entire unfrozen set.

In this paper, by investigating the distribution of the first erroneous hard decision in SC decoding, we find it possible to narrow the search scope down to an unfrozen bit subset S that is much smaller than the unfrozen set. For ease of exposition, this subset is referred to as the critical set throughout the rest of this paper. It can be shown that if SC decoding fails, the first incorrect hard decision is almost surely included in this critical set. As such, the decoder only needs to consider S when choosing the flipping position, further reducing the computational complexity. In addition, since there may exist several other errors besides the first erroneous hard decision, it is desirable to flip multiple incorrect bits rather than only the first one. For this purpose, we propose to iteratively modify the critical set and correct the errors progressively, aiming to achieve superior decoding performance. Numerical results show that the proposed progressive decoder competes with the CA-SCL decoder in terms of BLER performance; e.g., a level-4 progressive bit-flipping decoder is comparable to the CA-SCL decoder with a list size of L=32, while having an average decoding complexity similar to that of standard SC decoding at medium to high SNR.

To summarize, our main contributions are as follows:

  • The critical set, which with high probability includes the first incorrect hard decision, is proposed. Because of the smaller search scope, the computational complexity is reduced significantly.

  • A progressive multi-level bit-flipping decoding algorithm based on iteratively modified critical set is proposed. It has the ability to correct multiple errors and achieve a BLER performance much better than the conventional SC decoding and comparable to the CA-SCL decoding.

The rest of this paper is organized as follows. In Section II, a short background on polar codes is presented and our analytical framework is briefly described. Section III provides some important results about the critical set. Section IV presents the proposed progressive algorithm that corrects multiple errors. Simulation results are provided in Section V, and Section VI concludes this paper.

II Preliminaries

II-A Polar codes

We use $u_1^N$ to denote the vector $(u_1, u_2, \ldots, u_N)$. For polar codes with block-length $N = 2^n$ and kernel $F = \left[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\right]$, we denote $u_1^N$ as the information sequence, and a polar codeword is obtained by $x_1^N = u_1^N B_N F^{\otimes n}$, where '$\otimes$' denotes the Kronecker product and $B_N$ is a (bit-reversal) permutation matrix. A coding rate $R = K/N$ means that a set $\mathcal{A}$ of cardinality $K$ is selected as the information set (see [1]), and thus $u_1^N$ consists of $K$ unfrozen bits and $N-K$ frozen bits (all frozen bits are assumed to be zero if not specified). The split channel experienced by $u_i$ is defined as $W_N^{(i)}$, and the Bhattacharyya parameter $Z(W_N^{(i)})$ is computed to select the $K$ most reliable split channels to transmit the unfrozen bits. Interested readers are referred to [1] for more details.
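For concreteness, the following minimal Python sketch implements this matrix-form encoder; the function names and the N = 8, rate-1/2 example with unfrozen positions {3, 5, 6, 7} (0-based) are our own illustration, not taken from the paper.

import numpy as np

def bit_reversal_permutation(n):
    # indices 0..2^n-1 with their n-bit binary representation reversed
    return np.array([int(format(i, '0{}b'.format(n))[::-1], 2)
                     for i in range(1 << n)])

def polar_encode(u):
    # x = u * B_N * F^{kron n} over GF(2), with kernel F = [[1, 0], [1, 1]]
    n = int(np.log2(len(u)))
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)            # builds F^{kron n}
    return (u[bit_reversal_permutation(n)] @ G) % 2

u = np.zeros(8, dtype=np.uint8)      # N = 8 toy code, rate 1/2
u[[3, 5, 6, 7]] = [1, 0, 1, 1]       # unfrozen bits (0-based positions)
print(polar_encode(u))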

II-B Analytical framework

The framework in [9] is adopted for the ensuing analysis. To facilitate understanding, let us consider a toy example of a polar code with block-length $N = 4$ and information sequence $u_1^4$. $u_3$ and $u_4$ are chosen as the unfrozen bits, thus inducing a coding rate $R = 1/2$.

Fig. 1: Full binary tree for $N = 4$.

To proceed, a full binary tree with $N = 4$ leaf nodes is constructed in Fig. 1 (left), with root node $v_1$, intermediate nodes $v_2$ and $v_3$, and leaf nodes $v_4, \ldots, v_7$. The leaf nodes $v_4, \ldots, v_7$ correspond to the information bits $u_1, \ldots, u_4$, respectively. Since $u_3$ and $u_4$ are unfrozen bits, nodes $v_6$ and $v_7$ are denoted by black circles for the sake of clarity; see Fig. 1 (middle) for illustration. Furthermore, for each non-leaf node in the tree, if its two descendants are of the same color, then it is marked with that color as well; otherwise, it is indicated by a gray circle. This process starts from the lowest non-leaf nodes and continues until the root node is reached, as shown in Fig. 1 (right).

To implement polar encoding, a constituent code $V_i$ is assigned to each node $v_i$ in Fig. 1 (right). The leaves carry the information bits, i.e., $V_4[1] = u_1$, $V_5[1] = u_2$, $V_6[1] = u_3$, $V_7[1] = u_4$, where $V_i[j]$ denotes the $j$-th component at node $v_i$. On this basis, the constituent code at node $v_2$ is obtained by $V_2 = (V_4[1] \oplus V_5[1],\, V_5[1])$, and similarly $V_3 = (V_6[1] \oplus V_7[1],\, V_7[1])$. Next, invoking the same combining rule at the root, $V_1 = (V_2 \oplus V_3,\, V_3)$, the polar codeword is obtained as $x_1^4 = V_1$. One can also check that $x_1^4$ can equivalently be obtained from the matrix form in Section II-A. As for SC decoding, it starts from the root node $v_1$, which possesses the LLRs received from the underlying channel, and uses [1, Eq. 75] and [1, Eq. 76] to calculate LLRs recursively. In the meantime, polarization can also be interpreted on this tree. One can check that node $v_1$ has four independent copies of the underlying channel $W$, while node $v_2$ has two independent copies of the synthetic channel $W^-$ and node $v_3$ has two independent copies of the synthetic channel $W^+$. Finally, a leaf node has a unique copy of the corresponding split channel, e.g., node $v_6$ has $W_4^{(3)}$. We refer the reader to [9] for more details. It is worth noting that this framework can be extended to polar codes with arbitrary block-length.
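The node-combining rule admits a compact recursive form. The sketch below is our own illustration, using an assumed toy assignment $u_1^4 = (0, 0, 1, 1)$; under the natural leaf ordering it reproduces $u_1^N F^{\otimes n}$, i.e., the matrix form of Section II-A up to the bit-reversal permutation.

import numpy as np

def tree_encode(u):
    # A node's constituent code is (alpha XOR beta, beta), where alpha and
    # beta are the constituent codes of its left and right children.
    if len(u) == 1:
        return u                     # a leaf is its own constituent code
    half = len(u) // 2
    alpha = tree_encode(u[:half])
    beta = tree_encode(u[half:])
    return np.concatenate([alpha ^ beta, beta])

u = np.array([0, 0, 1, 1], dtype=np.uint8)   # u1, u2 frozen; u3, u4 unfrozen
print(tree_encode(u))                        # root constituent code x_1^4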

III The critical set

III-A SC decoding from a subblock-by-subblock perspective

Let us focus on a more complicated example, as shown in Fig. 2, where the block-length and information set are chosen for illustration (the information set may not correspond to an optimized construction).

Fig. 2: Full binary tree for the example with a larger block-length.

In our framework, SC decoding is viewed not as a bit-by-bit process but from a subblock-by-subblock perspective. Once the full binary tree corresponding to the given polar code is constructed, the entire polar code is divided into multiple sub-polar codes (also called subblocks), all of which have coding rate $R = 1$. In Fig. 2, there exist four such subblocks, denoted by their corresponding root nodes (they are also polar codes, but with shorter block-lengths). Each subblock consists of only unfrozen bits. In particular, one of the subblocks contains a single unfrozen bit; it can be viewed as a special subblock that has itself as both the codeword (root node) and the information sequence (leaf node).

Now, consider a general subblock whose root node is denoted by $v$ and which contains $N_v$ unfrozen bits. We use $u_1^{N_v}$ and $x_1^{N_v}$ to denote its information sequence and codeword, respectively. Then the following proposition is derived, which sheds light on our main results.

Proposition 1

For a binary erasure channel (BEC), the entire subblock is correctly decoded if and only if its first information bit $u_1$ is correctly decoded.

Proof:

The proof is straightforward. Recall that the subblock has rate $R = 1$, and one can check that we always have $u_1 = x_1 \oplus x_2 \oplus \cdots \oplus x_{N_v}$. If $u_1$ is correctly decoded, there must be no erasure symbols involved in $x_1^{N_v}$, and thus the remaining bits $u_2, \ldots, u_{N_v}$ are recovered without error. On the other hand, if the entire subblock is correctly decoded, i.e., every received $x_i$ takes a value of either $0$ or $1$, it is obvious that $u_1$ can be correctly estimated as well.

Now, we extend the above arguments to other channels. According to our framework, node $v$ has $N_v$ independent copies of some synthetic channel, which is denoted by $W_v$, and we further denote the split channel experienced by $u_1$ (within this subblock) as $W_v^{(1)}$. Provided that all the prior subblocks are correctly decoded, we denote the error probabilities of $W_v$ and $W_v^{(1)}$ by $p$ and $P_1$, respectively. Under this condition, we obtain the following proposition.

Proposition 2

Denote the error probability of the entire subblock as $P_B$. Then, for any $p \in [0, 1]$, we have $P_1 \le P_B$.

Proof:

According to the number of errors occurring in the codeword, $P_B$ can be computed as $P_B = 1 - (1-p)^{N_v}$. Although the channel here is not a BEC, no frozen bits are involved in the subblock, and thus no parity check needs to be satisfied. Then, the estimate of $u_1$, denoted $\hat{u}_1$, can still be computed by $\hat{u}_1 = \bigoplus_{i=1}^{N_v} h(L_i)$, where $h(\cdot)$ denotes the hard decision on the LLR $L_i$ of the $i$-th codeword bit. Thus, $u_1$ is incorrectly decoded if and only if the number of errors in $\hat{x}_1^{N_v}$ is odd, which gives

$$P_1 = \sum_{k\,\mathrm{odd}} \binom{N_v}{k} p^k (1-p)^{N_v-k}.$$

As such, it is obtained that

$$P_B - P_1 = \sum_{k \ge 2,\ k\,\mathrm{even}} \binom{N_v}{k} p^k (1-p)^{N_v-k} \ge 0,$$

which completes the proof.

Remarks: The difference $P_B - P_1$ is the probability that a nonzero even number of errors occurs in the codeword, which is at most the probability that two or more errors occur. As $p \to 0$, this value approaches $0$. This implies that if $W_v$ is reliable enough, $P_1$ is quite close to the error probability of the entire subblock.
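A quick numerical check of Proposition 2 is easy to carry out; in the sketch below, the choice $N_v = 8$ and the values of $p$ are our own, for illustration only.

from math import comb

def P_B(p, Nv):
    # probability that at least one of the Nv codeword bits is in error
    return 1 - (1 - p) ** Nv

def P_1(p, Nv):
    # probability that an odd number of hard-decision errors occurs
    return sum(comb(Nv, k) * p ** k * (1 - p) ** (Nv - k)
               for k in range(1, Nv + 1, 2))

Nv = 8
for p in (1e-1, 1e-2, 1e-3):
    gap = P_B(p, Nv) - P_1(p, Nv)    # probability of a nonzero even error count
    print(f"p={p:g}: P_1={P_1(p, Nv):.3e}  P_B={P_B(p, Nv):.3e}  gap={gap:.3e}")

As predicted by the remark, the gap vanishes much faster than $P_1$ itself as $p$ decreases.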

III-B Constructing the critical set

Capitalizing on the results above, if SC decoding of a subblock fails, with high probability the first incorrectly estimated unfrozen bit is exactly the first unfrozen bit within that subblock. Inspired by this, we provide a method to construct a set $\mathcal{S}$ that almost surely includes the first incorrect hard decision in SC decoding. The corresponding procedure is summarized as Algorithm 1.

  1. Establish the full binary tree corresponding to the given polar code;

  2. Divide the polar code into multiple subblocks with coding rate $R = 1$ and put the first unfrozen bit of each subblock into the set $\mathcal{S}$.

Algorithm 1 Construction of the critical set $\mathcal{S}$

Taking Fig. 2 for instance, $\mathcal{S}$ consists of the first unfrozen bit of each of the four subblocks. Note that the number of elements in $\mathcal{S}$ is exactly the number of subblocks, which is rather small compared with the cardinality $K$ of the information set. Furthermore, $\mathcal{S}$ is uniquely determined once the construction of the polar code is fixed.
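Algorithm 1 admits a simple recursive implementation over the frozen-bit mask; the sketch below (0-based indices, and an assumed N = 8 toy mask) is our own illustration, not the authors' code.

def critical_set(frozen, offset=0):
    # frozen: boolean mask over this subtree's leaves (True = frozen bit)
    if not any(frozen):
        return [offset]              # a rate-one subblock: keep its first bit
    if all(frozen):
        return []                    # a rate-zero subblock contributes nothing
    half = len(frozen) // 2          # mixed colors: recurse into both children
    return (critical_set(frozen[:half], offset) +
            critical_set(frozen[half:], offset + half))

frozen = [True, True, True, False, True, False, False, False]
print(critical_set(frozen))          # -> [3, 5, 6]

Each returned index is the first unfrozen bit of a maximal rate-one subtree; a lone unfrozen leaf whose sibling subtree contains frozen bits forms a singleton subblock, matching the special case noted above.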

III-C Validation of the set $\mathcal{S}$ under Gaussian approximation

To validate the above method, we first evaluate the difference $P_B - P_1$. To the best of our knowledge, the exact values of $P_1$ and $P_B$ are rather difficult to compute. Thereby, we exploit the Gaussian approximation method to provide some insightful results. Gaussian approximation was introduced in [10] and adopted for the analysis of polar codes in [11]. In the following analysis, we restrict our attention to binary phase shift keying (BPSK) modulation; i.e., for a given AWGN channel, the received symbol is expressed as $y = s + n$, where $s = 1 - 2x \in \{+1, -1\}$ and $n$ represents a Gaussian random variable with mean zero and variance $\sigma^2$. Without loss of generality, we assume that the all-zero codeword is transmitted. In this sense, one can check that the received LLR can be written as $L = 2y/\sigma^2$, which can be viewed as a Gaussian random variable with mean $2/\sigma^2$ and variance $4/\sigma^2$. By rewriting formulae [1, Eq. 75] and [1, Eq. 76] in LLR form, one can find that the operations involved in SC decoding are exactly the same as those in belief propagation decoding. Thus, as suggested in [10], by assuming that the symmetry condition is always satisfied, all the LLRs involved in SC decoding can be viewed as Gaussian random variables of the form $\mathcal{N}(m, 2m)$, so we only need to track the mean $m$.

For a given subblock with root $v$, suppose that the LLR corresponding to the synthetic channel $W_v$ follows $\mathcal{N}(m_v, 2m_v)$. Then the LLR corresponding to the split channel $W_v^{(1)}$ has mean

$$m_1 = \phi^{-1}\!\left(1 - \left[1 - \phi(m_v)\right]^{N_v}\right),$$

where $\phi(\cdot)$ is defined as

$$\phi(x) = \begin{cases} 1 - \dfrac{1}{\sqrt{4\pi x}} \displaystyle\int_{-\infty}^{\infty} \tanh\frac{u}{2}\, e^{-\frac{(u-x)^2}{4x}}\, \mathrm{d}u, & x > 0,\\[2mm] 1, & x = 0. \end{cases}$$

Due to the all-zero codeword assumption, the probability that $u_1$ is incorrectly estimated is calculated as $P_1 = Q\!\left(\sqrt{m_1/2}\right)$, where $Q(t) = \frac{1}{\sqrt{2\pi}}\int_t^{\infty} e^{-u^2/2}\, \mathrm{d}u$. Similarly, the error probability of the entire subblock is $P_B = 1 - \left[1 - Q\!\left(\sqrt{m_v/2}\right)\right]^{N_v}$.
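Under these formulas, $P_1$ and $P_B$ can be evaluated numerically. The sketch below uses a widely used curve-fit approximation of $\phi$ (rather than the exact integral) and inverts it by bisection; the values of $m_v$ and $N_v$ are illustrative assumptions.

import math

def phi(x):
    # curve-fit approximation of the Gaussian-approximation function phi
    if x == 0:
        return 1.0
    if x < 10:
        return math.exp(0.0218 - 0.4527 * x ** 0.86)
    return math.sqrt(math.pi / x) * math.exp(-x / 4) * (1 - 10 / (7 * x))

def phi_inv(y, lo=1e-9, hi=1e4):
    # phi is decreasing on (0, inf), so invert it by bisection
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def Q(t):
    return 0.5 * math.erfc(t / math.sqrt(2.0))

def p1_pb(m_v, N_v):
    m_1 = phi_inv(1 - (1 - phi(m_v)) ** N_v)     # mean LLR of the split channel
    P_1 = Q(math.sqrt(m_1 / 2))                  # L ~ N(m, 2m) => Pe = Q(sqrt(m/2))
    P_B = 1 - (1 - Q(math.sqrt(m_v / 2))) ** N_v
    return P_1, P_B

for m_v in (4.0, 8.0, 16.0):
    print(m_v, p1_pb(m_v, N_v=8))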

Fig. 3: $P_1$ vs. $P_B$ under Gaussian approximation.

The numerical comparison between $P_1$ and $P_B$ under Gaussian approximation is depicted in Fig. 3. It can be observed that for large $N_v$ and small $m_v$, there still exists an obvious difference between $P_1$ and $P_B$. This is natural because, firstly, a small $m_v$ means that the synthetic channel $W_v$ is not very reliable, and thus more than one error is likely to be introduced; secondly, a large $N_v$ increases the probability of including two or more errors as well, so the difference becomes noticeable. However, we conjecture that for any given underlying channel used to implement polarization, a subblock with large $N_v$ in general has a large $m_v$ as well. For larger $N_v$, the split channel $W_v^{(1)}$ is further degraded compared with the synthetic channel $W_v$; since $W_v^{(1)}$ is selected to transmit an unfrozen bit, $W_v$ must be sufficiently reliable as well, for otherwise $u_1$ would have been frozen. It can be seen in Fig. 3 that if $m_v$ is large enough, the difference between $P_1$ and $P_B$ can usually be neglected.

SNR (dB)            1        1.5      2       2.5     3
Included in set S   675840   296391   73789   10888   1007
Incorrect blocks    677211   296573   73810   10888   1007
Accuracy (%)        99.80    99.94    99.97   100     100
Size of set S       110      112      117     124     129

TABLE I: Evaluation of Algorithm 1

To further evaluate Algorithm 1, we examine, through Monte Carlo simulations, the probability that the first incorrect hard decision falls into the set $\mathcal{S}$, as shown in Table I. "Included in set $\mathcal{S}$" denotes the number of blocks for which the first incorrectly estimated unfrozen bit falls into the critical set $\mathcal{S}$, "Incorrect blocks" denotes the total number of blocks that are not correctly recovered, and "Accuracy" is their ratio. It can be observed that the probability that the first error is included in the critical set approaches $100\%$, even at low signal-to-noise ratios (SNRs). Furthermore, the accuracy of Algorithm 1 improves as the SNR increases, which is consistent with our prior analysis.

IV Progressively correcting multiple errors

In this section, invoking the derived critical set, the bit-flipping methodology is adopted to correct the first erroneous unfrozen bit. Furthermore, by iteratively modifying the critical set, we propose a progressive multi-level bit-flipping decoding algorithm that can correct multiple errors in SC decoding.

IV-A Progressive bit-flipping decoding

Now suppose that $\hat{u}_i$ is the first incorrect hard decision and is flipped to $\hat{u}_i \oplus 1$ by the bit-flipping method. Under this condition, all the elements of $u_1^i$ can be viewed as frozen bits. The reason is that, for a split channel $W_N^{(j)}$ with $j > i$, the estimate of $u_j$ is determined once the sequence $\hat{u}_1^{j-1}$ is provided; therefore, whether some $u_k$ with $k \le i$ is a frozen bit or an unfrozen bit no longer makes any difference.

By considering $u_1^i$ as frozen bits, a new full binary tree similar to Fig. 2 can be established immediately. However, this time all leaf nodes corresponding to $u_1^i$ are white, while the colors of the subsequent leaf nodes corresponding to $u_{i+1}^N$ still depend on whether each bit is frozen or unfrozen. Based on this new tree, we can construct a modified critical set using Algorithm 1. The modified critical set implies that if errors occur when estimating $u_{i+1}^N$ under SC decoding, the first incorrect hard decision among them is almost surely included in this set. By applying the bit-flipping operation as done for $\hat{u}_i$, this error can be corrected as well, further improving the performance. Note that, including $\hat{u}_i$, the above scheme has corrected two errors during SC decoding.

Fig. 4: Implementation of progressive bit-flipping decoding for Fig. 2.

Interestingly, this scheme can be extended to correct more errors based on a tree structure. Taking the polar code in Fig. 2 for example, the tree-structure-based implementation of progressive bit-flipping decoding is depicted in Fig. 4, where each node denotes an estimate $\hat{u}_1^N$ serving as a candidate sequence and each edge indicates the unfrozen bit that is flipped. The tree is built via the following steps: first, conventional SC decoding is employed to obtain the root node at level $0$, which denotes the candidate sequence without flipping any bits; next, the critical set $\mathcal{S}$ is constructed to obtain the nodes at level $1$, i.e., every unfrozen bit in $\mathcal{S}$ corresponds to an edge extended from the root node; then, each node at level $1$, say the node obtained by flipping $\hat{u}_i$, constructs a modified critical set by building a full binary tree similar to Fig. 2, with the leaf nodes corresponding to $u_1^i$ colored white and the remaining leaves colored according to their frozen/unfrozen status, thus inducing the edges and nodes at level $2$. Repeating the steps above gives rise to the subsequent levels of the tree.

On the basis of this tree structure, the bit-flipping decoding scheme is implemented as a level-order traversal starting from the root node. In particular, for each node, the edges on the path from the root node constitute the set of unfrozen bits that should be flipped. For instance, a level-2 node in Fig. 4 reached via edges $u_i$ and $u_j$ ($i < j$) means that SC decoding is first implemented to compute $\hat{u}_1^i$, but $\hat{u}_i$ is flipped; then SC decoding is continued to compute $\hat{u}_{i+1}^j$, but $\hat{u}_j$ is flipped as well; finally, SC decoding is implemented to compute $\hat{u}_{j+1}^N$, and thus a candidate sequence is obtained at this node. The "level" in fact indicates the number of unfrozen bits that are flipped; specifically, level $0$ denotes conventional SC decoding without flipping any unfrozen bits. Therefore, progressive bit-flipping decoding can be viewed as a tree search process.

IV-B Pruning technique

To further reduce the search complexity, a node should not generate any child node if it contains an incorrectly flipped unfrozen bit. According to the Gaussian approximation, if all the previously flipped unfrozen bits are correct, the LLRs of the decided unfrozen bits should not be too small compared with their mean values. Based on this observation, we assign a threshold to the $l$-th level of the tree, parameterized by a pair $(\alpha, \beta)$ whose values are optimized through numerical simulations. Let $n_a$ count the unfrozen bits decided at the current node and $n_f$ count those whose LLRs fail to achieve the threshold (unfrozen bits belonging to the critical set are excluded when counting $n_a$ and $n_f$). We define $E_1$ as the event that $n_f$ is too large relative to $n_a$ and the level threshold; if $E_1$ is true, the estimated sequence is supposed to contain at least one error, and the branches extending from the current node are pruned. Otherwise, the current node is allowed to generate child nodes.

If the current node does generate child nodes, then the unfrozen bits that are likely to be correct should not be selected as child nodes. Recall that under the Gaussian approximation, given that $L_i \sim \mathcal{N}(m_i, 2m_i)$, if $L_i$ is larger than its mean value, then $\hat{u}_i$ is supposed to be correct. On this basis, we design a threshold $\omega\, m_i$, where $m_i$ is the mean of $L_i$ and $\omega \in (0, 1)$ is a constant. We define $E_2$ as the event $\{L_i > \omega\, m_i\}$; if $E_2$ is true, then $u_i$ is not selected as a child node.
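The exact functional form of these thresholds is not fully specified above, so the following sketch is one plausible instantiation, not the authors' tuned rule: a bit "fails" if its LLR falls below the fraction $\omega$ of its Gaussian-approximation mean, and a branch is pruned when too many non-critical bits fail. All names and default values are assumptions for illustration.

import numpy as np

def should_prune(llrs, ga_means, in_critical_set, omega=0.5, max_fails=1):
    # Event E1 (one plausible form): prune when the number of already-decided
    # unfrozen bits whose LLR fails the threshold omega * mean exceeds a cap.
    keep = ~np.asarray(in_critical_set)              # critical-set bits excluded
    fails = np.abs(np.asarray(llrs))[keep] < omega * np.asarray(ga_means)[keep]
    return int(fails.sum()) > max_fails

def is_reliable(llr, ga_mean, omega=0.25):
    # Event E2: a bit whose LLR exceeds omega * mean is deemed correct and
    # is therefore not selected as a child node (i.e., never flipped).
    return llr > omega * ga_mean

llrs  = np.array([5.1, 0.8, 6.3, 0.4, 7.2])
means = np.array([6.0, 6.0, 8.0, 6.0, 8.0])
crit  = np.array([False, False, True, False, False])
print(should_prune(llrs, means, crit))               # True: two weak bits fail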

The proposed progressive multi-level bit-flipping decoding algorithm using the above pruning rules is summarized as Algorithm 2, where $\mathcal{F}_l$ denotes the set of unfrozen bits that should be flipped at level $l$.

Input: the received vector $y_1^N$, the unfrozen set $\mathcal{A}$
Output: the recovered sequence $\hat{u}_1^N$
1   $l \leftarrow 0$, run SC decoding to obtain the root node   // initialization
2   while the CRC check fails and $l < l_{\max}$ do
3       $l \leftarrow l + 1$
4       generate the critical sets at level $l$ to form $\mathcal{F}_l$
5       select the next $u_i \in \mathcal{F}_l$ in an increasing order of $i$
6       run SC decoding, flipping the bits on the path associated with $u_i$
7       while the CRC check fails do
8           if $l < l_{\max}$ then
9               construct the modified critical set $\mathcal{S}$ of the current node
10              if $E_1$ is false then
11                  remove from $\mathcal{S}$ the bits for which $E_2$ holds
12                  generate child nodes using the remaining bits in $\mathcal{S}$
13          else
14              if every $u_i \in \mathcal{F}_l$ has been flipped then
15                  go to step 3
16              else
17                  go to step 5
return $\hat{u}_1^N$
Algorithm 2 Progressive bit-flipping decoding
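For readers who prefer code, the following Python skeleton mirrors the level-order search of Algorithm 2 (without the pruning rules, which are omitted for brevity). The callables sc_decode, crc_ok and critical_set_fn are assumed black boxes standing in for the SC decoder with forced flips, the CRC check, and Algorithm 1 applied from a given start position; all names are our own.

from collections import deque

def progressive_flip_decode(llrs, sc_decode, crc_ok, critical_set_fn,
                            max_level=4):
    # Level 0: plain SC decoding without flipping any unfrozen bit.
    u_hat = sc_decode(llrs, flips=())
    if crc_ok(u_hat):
        return u_hat
    queue = deque([()])                  # each node is a tuple of flip indices
    while queue:
        flips = queue.popleft()
        if len(flips) >= max_level:      # the level equals the number of flips
            continue
        # Rebuild the critical set treating all bits up to (and including)
        # the last flipped position as frozen (Section IV-A).
        start = flips[-1] + 1 if flips else 0
        for i in critical_set_fn(start):
            child = flips + (i,)         # one additional flip per child node
            u_hat = sc_decode(llrs, flips=child)
            if crc_ok(u_hat):
                return u_hat
            queue.append(child)          # level-order (breadth-first) traversal
    return sc_decode(llrs, flips=())     # all candidates failed: SC output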

V Simulation results

In this section, the BLER performance and the computational complexity of the proposed progressive bit-flipping algorithm are investigated. Specifically, we focus on transmissions with BPSK modulation over the AWGN channel (the details were given in Section III-C). Polar codes are constructed using the Gaussian approximation as in [11] and then concatenated with a CRC code (see [12]). Accordingly, the coding rate of the polar code counts the CRC bits as information, so the effective information rate is slightly lower.

Fig. 5: BLER performance of Algorithm 2 at two different levels.

In Fig. 5, we compare the BLER performance of Algorithm 2 at low levels with that of CA-SCL decoders with small list sizes. In particular, the pruning rules introduced in Section IV-B are not used here; i.e., each node always chooses to generate its child nodes. We also use the genie-aided SC decoder (also called the oracle-assisted SC decoder), as in [7, 8], to predict the theoretically optimal performance, which serves as a lower bound on the BLER of practical SC flip decoders. A genie-aided SC decoder of order $k$ always corrects the first $k$ incorrect hard decisions met by the SC decoder, but no further errors. As shown in Fig. 5, Algorithm 2 at a given level outperforms the CA-SCL decoder of comparable list size, but with only a fraction of the computational complexity in the medium to high SNR region (see Fig. 7), and the gain grows as the level increases. Furthermore, Algorithm 2 achieves almost the same performance as the genie-aided SC decoder when both are designed to correct the same number of incorrect hard decisions in SC decoding.

Fig. 6: BLER performance of Algorithm 2 at level 4 vs. CA-SCL decoders.

In Fig. 6, we compare the BLER performance of Algorithm 2 at level 4 with that of CA-SCL decoders. The detailed pruning rules and the corresponding parameters are listed in Table II, where a rule is marked as disabled when it is not used. For instance, pruning is disabled at level 0 for all SNRs, which implies that the single node at level 0 always generates its child nodes. We observe that at a higher decoding level, such as level 4, the proposed bit-flipping decoder achieves superior BLER performance, competing with the CA-SCL decoder with a list size of L=32 and outperforming CA-SCL decoders with smaller list sizes. Moreover, the computational complexity is dramatically reduced, and it even degrades to that of SC decoding in the medium to high SNR region (see Fig. 7).

SNR (dB)        1.5        1.75       2          2.25      2.5
(α, β)          (3.6, 2)   (3.6, 2)   (3.6, 2)   (4, 3)    (6, 5)
ω for E_1       0.5        0.5        0.5        0.6       0.6
ω for E_2       0.25       0.25       0.25       0.3       0.3

TABLE II: The parameters used in Algorithm 2 with level 4
Fig. 7: Average complexity of Algorithm 2, normalized by the complexity of standard SC decoding.

The average computational complexity of Algorithm 2 is investigated in Fig. 7. It can be seen that the average complexity decreases rapidly as the SNR increases. The reason is that, as the underlying channel becomes more reliable, flipping only one or two unfrozen bits is sufficient to obtain the correct estimate in most cases, so the search stops early. Note that in the low SNR regime the complexity of the proposed algorithm grows quickly, since more paths need to be searched as the errors become more random. In practical systems, however, the low SNR region is of little interest, because decoding is usually not activated there due to the high BLER.

VI Conclusion

A critical set that with high probability includes the first incorrect hard decision of SC decoding has been proposed. By iteratively modifying the critical set, multi-layer critical sets are established. On this basis, a progressive multi-level bit-flipping decoder that can correct multiple errors in SC decoding is proposed. We show that as the level increases, the BLER performance of the proposed progressive bit-flipping decoder competes with that of the corresponding CA-SCL decoder. Furthermore, the average complexity of the proposed algorithm is much lower than that of an SCL decoder (and is similar to that of SC decoding) at medium to high SNR.

Acknowledgement

This work was supported in part by National Hi-Tech R&D Program of China (No. 2014AA01A702), National Natural Science Foundation of China (No. 61371094, No. 61401391), National Key Basic Research Program of China (No. 2012CB316104), Zhejiang Provincial Natural Science Foundation (No. LR12F01002), the open project of Zhejiang Provincial Key Laboratory of Information Proc., Commun. & Netw., China, HIRP Flagship Projects from Huawei Technologies Co., Ltd (YB2013120029 and YB2015040053), Natural Science Foundation of Fujian Province (No. 2017J01106), and Key Project of Natural Science Fund for Young Scholars in Universities and Colleges of Fujian Province (No. JZ160489).

References

  • [1] E. Arıkan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009.
  • [2] Chairman's notes, 3GPP TSG RAN WG1, 2016.
  • [3] I. Tal and A. Vardy, “List decoding of polar codes,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), pp. 1-5, Aug. 2011.
  • [4] B. Li, H. Shen, and D. Tse, “An adaptive successive cancellation list decoder for polar codes with cyclic redundancy check,” IEEE Commun. Lett., vol. 16, no. 12, pp. 2044-2047, Dec. 2012.
  • [5] K. Chen, K. Niu, and J. R. Lin, “Improved successive cancellation decoding of polar codes,” IEEE Trans. Commun., vol. 61, no. 8, pp. 3100-3107, Aug. 2013.
  • [6] M. Mondelli, S. H. Hassani and R. Urbanke, “Scaling exponent of list decoders with applications to polar codes,” IEEE Trans. Inf. Theory, vol. 61, no. 9, pp. 4838-4851, Sep. 2015.
  • [7] O. Afisiadis, A. Balatsoukas-Stimming, and A. Burg, “A low-complexity improved successive cancellation decoder for polar codes,” in Proc. 48th Asilomar Conf. Signals, Systems and Computers, pp. 2116-2120, Nov. 2014.
  • [8] L. Chandesris, V. Savin, and D. Declercq, “An Improved SCFlip Decoder for Polar Codes,” in Proc. Global Commun. Conf. (GLOBECOM), pp. 1-6, Dec. 2016.
  • [9] A. Alamdar-Yazdi and F. R. Kschischang, “A simplified successive-cancellation decoder for polar codes,” IEEE Commun. Lett., vol. 15, no. 12, pp. 1378-1380, Dec. 2011.
  • [10] S.-Y. Chung, T. J. Richardson, and R. Urbanke, “Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 657-670, Feb. 2001.
  • [11] P. Trifonov, “Efficient design and decoding of polar codes,” IEEE Trans. Commun., vol. 60, no. 11, pp. 3221-3227, Nov. 2012.
  • [12] J. G. Proakis, Digital Communications. McGraw Hill, 1995.