Combating Error Propagation in Window Decoding of Braided Convolutional Codes

01/10/2018 · by Min Zhu, et al. · New Mexico State University, University of Notre Dame, Lunds Tekniska Högskola, Xidian University

In this paper, we study sliding window decoding of braided convolutional codes (BCCs) in the context of a streaming application, where decoder error propagation can be a serious problem. A window extension algorithm and a resynchronization mechanism are introduced to mitigate the effect of error propagation. In addition, we introduce a soft bit-error-rate stopping rule to reduce computational complexity, and the tradeoff between performance and complexity is examined. Simulation results show that, using the proposed window extension algorithm and resynchronization mechanism, the error performance of BCCs can be improved by up to three orders of magnitude with reduced computational complexity.




I Introduction

Braided convolutional codes, first introduced in [1], are a counterpart to braided block codes (BBCs) [2], which can be regarded as a diagonalized version of product codes [3] or expander codes [4]. In contrast to BBCs, BCCs use short constraint length convolutional codes as component codes. The encoding of BCCs can be described by a two-dimensional sliding array, where each symbol is protected by two component convolutional codes. BCCs are a type of parallel-concatenated convolutional code in which the parity outputs of one component encoder are fed back and used as inputs to the other component encoder at the succeeding time unit. Two variants of BCCs were considered in [1]. Tightly braided convolutional codes (TBCCs) are obtained if a dense array is used to store the information and parity symbols. This construction is deterministic and simple but performs relatively poorly due to the absence of randomness. Alternatively, sparsely braided convolutional codes (SBCCs), which employ random permutors, have low density, resulting in improved iterative decoding performance [1]. Moloudi et al. considered SBCCs as spatially coupled turbo-like codes and showed that threshold saturation occurs for SBCCs over the binary erasure channel [5, 6]. SBCCs can operate in bitwise or blockwise modes, according to whether convolutional or block permutors are employed. It was also shown numerically that the free (minimum) distance of bitwise (blockwise) SBCCs grows linearly with the overall constraint length, leading to the conjecture that SBCCs are asymptotically good [1, 6].

Due to their turbo-like structure, BCCs can be decoded with iterative decoding. Analogous to LDPC convolutional codes [7, 8], SBCCs can employ sliding window decoding for low latency operation. Unlike window decoding of LDPC convolutional codes, which typically uses an iterative belief-propagation (BP) message passing algorithm, window decoding of SBCCs is based on the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. It has been shown that blockwise SBCCs with sliding window decoding have capacity-approaching performance [9], but for large frame lengths or streaming applications, SBCCs can suffer from decoder error propagation. That is, once a block decoding error occurs, decoding of the following blocks is affected, which can cause a continuous string of block errors, resulting in unacceptable performance loss.

In this paper, we study several error propagation mitigation techniques for SBCCs. Specifically, a window extension algorithm and a resynchronization mechanism are introduced to combat error propagation. In addition, a soft bit-error-rate (BER) stopping rule is proposed to reduce decoding complexity, and the resulting tradeoff between decoding performance and decoding complexity is explored.

II Continuous Transmission of Braided Convolutional Codes

In this section, we briefly review continuous encoding and sliding window decoding of blockwise SBCCs. For details, please refer to [1] and [9].

II-A Continuous Encoding

Sparsely braided convolutional codes are constructed using an infinite two-dimensional array consisting of one horizontal and one vertical encoder. These two encoders are linked through parity feedback. In this manner, the systematic and parity symbols are “braided” together. In this paper, we limit ourselves to rate R = 1/3 blockwise SBCCs as an example. In this case, the information sequence enters the encoder in a block-by-block manner with a relatively large block size. Fig. 1 is a conceptual illustration of the continuous encoding process for a rate R = 1/3 blockwise SBCC, which utilizes two recursive systematic convolutional (RSC) component encoders, each of rate 2/3, together with three block permutors P^(0), P^(1), and P^(2), each of length N. The information sequence u is divided into blocks u_t of length N symbols. The block u_t at time t is interleaved using P^(0) to form ũ_t, and u_t and ũ_t enter component encoders 1 and 2, respectively. The parity outputs v_t^(1) and v_t^(2) at time t are delayed by one time unit, interleaved using P^(1) and P^(2), respectively, and then enter the opposite component encoders as input sequences at time t + 1. The information sequence u_t, the parity output sequence v_t^(1) of encoder 1, and the parity output sequence v_t^(2) of encoder 2 are sent over the channel. For initialization, at time instant 0, we assume that the delayed parity inputs to both component encoders are all-zero sequences.

Fig. 1: Continuous encoder chain for a rate R = 1/3 blockwise SBCC.
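To make the braiding of information and parity blocks concrete, the following sketch traces the encoder dataflow described above. It is only an illustration of the feedback structure: the toy component encoder simply XORs its two input blocks (it is a stand-in for the actual rate-2/3 RSC encoders), and the assignment of the permutors to particular inputs is our assumption.

```python
def toy_component_encoder(info, fed_back_parity):
    """Stand-in for a rate-2/3 RSC component encoder: the parity block is
    simply the XOR of the two input blocks (NOT a convolutional encoder,
    just a placeholder that preserves the dataflow)."""
    return [a ^ b for a, b in zip(info, fed_back_parity)]

def encode_blockwise_sbcc(info_blocks, perms):
    """Braided encoding dataflow: the parity of each encoder at time t is
    delayed one time unit, permuted, and used as an input to the other
    encoder at time t + 1."""
    p0, p1, p2 = perms                        # the three block permutors
    n = len(info_blocks[0])
    v1_prev, v2_prev = [0] * n, [0] * n       # all-zero initialization at time 0
    transmitted = []
    for u in info_blocks:
        u_tilde = [u[i] for i in p0]          # info block permuted for encoder 2
        in1 = [v2_prev[i] for i in p2]        # delayed, permuted parity of encoder 2
        in2 = [v1_prev[i] for i in p1]        # delayed, permuted parity of encoder 1
        v1 = toy_component_encoder(u, in1)        # parity output of encoder 1
        v2 = toy_component_encoder(u_tilde, in2)  # parity output of encoder 2
        transmitted.append((u, v1, v2))       # systematic + both parities are sent
        v1_prev, v2_prev = v1, v2
    return transmitted
```

Each information block yields three transmitted blocks (systematic plus two parities), consistent with an overall rate of 1/3.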

Transmission can be terminated after a frame consisting of L blocks, resulting in a slight rate loss, or proceed in an unterminated (streaming) fashion, in which case the rate is R = N/(3N) = 1/3, since each length-N information block is transmitted together with the two length-N parity blocks it generates.


II-B Sliding Window Decoding

In order to describe the proposed error propagation mitigation methods, the structure of the sliding window decoder [9] is shown in Fig. 2. The window size is denoted as w. The block at time instant t is the target block for decoding in the window containing the blocks received at times t to t + w − 1. Briefly, the decoding process in a window begins with turbo, or vertical, iterations on the target block at time t, during which the two component convolutional codes pass soft messages on the information bits in that block to each other. Then, soft messages on the parity bits are passed forward, and vertical iterations are performed on the block at time t + 1. This continues until vertical iterations are performed on the last received block in the window. Then the process is repeated in the backward direction (from the last block to the first block in the window), with soft messages being passed back through the two BCJR decoders. This round trip of decoding is called a horizontal iteration. After the prescribed number of horizontal iterations, the target symbols are decoded, and the window shifts forward to the next position, where the symbols at time t + 1 become the target symbols.

Fig. 2: Continuous sliding window decoder for blockwise SBCCs [9].
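The forward/backward schedule described above can be sketched as follows, with the per-block BCJR message exchange abstracted into a callback (vertical_update is a stand-in for the actual component decoders, not part of the original description):

```python
def sliding_window_schedule(window, num_horizontal, vertical_update):
    """One decoding round at the current window position: each horizontal
    iteration is a forward sweep of vertical (turbo) iterations from the
    target block to the newest block, followed by a backward sweep.  The
    soft message passing between blocks is hidden inside the
    vertical_update callback."""
    for _ in range(num_horizontal):
        for k in range(len(window)):              # forward sweep
            vertical_update(window[k])
        for k in reversed(range(len(window))):    # backward sweep
            vertical_update(window[k])
```

For a window of three blocks and one horizontal iteration, the blocks are visited in the order 0, 1, 2, 2, 1, 0, matching the round-trip schedule.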

II-C Error Propagation

Since an encoded block in a blockwise BCC affects the encoding of the next block (see Fig. 1), each time a block of target symbols is decoded, the log-likelihood ratios (LLRs) associated with the decoded symbols also affect the decoding of the next block. Hence, if, after the maximum number of decoding iterations, some unreliable LLRs remain in the target block, causing a block decoding error, those unreliable LLRs can trigger a string of additional block errors, resulting in error propagation. To illustrate this effect, we consider two identical 4-state RSC component encoders, where we assume the encoders are left unterminated at the end of each block. The three block permutors P^(0), P^(1), and P^(2) were chosen randomly, each of the same length N. We assume that transmission stops after a frame of L blocks is decoded and that a uniform decoding schedule (see [9] for details) is used. The bit error rate (BER), block error rate (BLER), and frame error rate (FER) performance for transmission over the AWGN channel with BPSK signalling is given in Fig. 3, where the window size is w, the number of vertical iterations is I_v, the number of horizontal iterations is I_h, and the frame length is L.

From Fig. 3, we see that the rate R = 1/3 blockwise SBCC performs about 0.6 dB away from the Shannon limit. Even so, among the 10000 simulated frames, there were some frames that exhibited error propagation. In order to depict the error propagation phenomenon clearly, we give the bit error distribution per block of one frame with error propagation in Fig. 4(a), at a fixed value of E_b/N_0. We see that, for the smaller number of horizontal iterations, from the 830th block on, the number of error bits is large, and the errors continue to the end of the frame, a clear case of error propagation. With a larger number of horizontal iterations, error propagation starts two blocks later, but the overall effect of increasing the number of iterations is minimal. The bit error distribution per block, based on 10000 simulated frames with two different window sizes, is shown in Fig. 4(b), where we see that increasing the window size reduces the number of error propagation frames from 9 to 1, thus significantly improving performance.

Fig. 3: The BER, BLER, and FER performance of rate R = 1/3 blockwise SBCCs.
(a) One frame with different numbers of iterations.
(b) 10000 frames with different window sizes.
Fig. 4: The error distribution per block for rate R = 1/3 blockwise SBCCs.

For larger frame lengths, and particularly for streaming transmission, error propagation will severely degrade the decoding performance illustrated in Fig. 3. Hence, we now introduce two ways of mitigating the error propagation effect in sliding window decoding of SBCCs.

III Error Propagation Mitigation

In this section, we propose a window extension algorithm and a resynchronization mechanism to mitigate the effect of error propagation in SBCCs.

III-A Window Extension Algorithm

In the paper [9] by Zhu et al., the window size is fixed during the decoding process. Based on the results presented in Fig. 4(b), here we introduce a variable window size concept for sliding window decoding. Before describing the window extension algorithm, we give some definitions. Let L(u_j) denote the decision LLR of the jth information bit in a given block of the current window after a horizontal iteration. Then the average absolute LLR of the N information bits in that block after that horizontal iteration is given by L̄ = (1/N) Σ_j |L(u_j)|.
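As a concrete illustration, the reliability measure above can be computed as follows (a minimal sketch; the function name is ours):

```python
def average_abs_llr(llrs):
    """Average absolute decision LLR of one block: the per-block
    reliability measure used by the window extension criterion."""
    return sum(abs(l) for l in llrs) / len(llrs)
```

For example, average_abs_llr([2.0, -4.0, 6.0]) returns 4.0; a block dominated by small-magnitude LLRs yields a small average and is flagged as unreliable.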

During the decoding process, when the number of horizontal iterations reaches its maximum value, if the average absolute LLR of any of the blocks in the current window is lower than a predefined threshold θ, i.e., if L̄ < θ for some block in the window, the target block is not decoded; instead, the window size is increased by 1, and the decoding process restarts with the horizontal iteration count reset to 0. This process continues until either the target block is decoded or the window size reaches a predefined maximum w_max, in which case the target block is decoded regardless of whether the threshold condition is satisfied. Assuming an initial window size w = 3, Fig. 5 illustrates that, each time the condition is met, the window size increases by one block, up to a maximum window size of w_max = 6.¹ The details of the window extension process are described in Alg. 1.

¹The reason for choosing w_max = 2w is that, if decoding speed is not critical, we can reuse the same hardware needed to implement window size w = 3. In other words, instead of providing additional hardware, the available w = 3 hardware can be used twice to emulate a w = 6 window in a serial, rather than parallel, implementation.

Fig. 5: Decoder with the window extension algorithm.

        Algorithm 1: Window Extension Algorithm

  • Initialization
    Assume that the block at time t is the target block in a window decoder of size w initialized with the channel LLRs of the w received blocks. Let I denote the current number of horizontal iterations, and set I = 0 initially. The maximum number of horizontal iterations I_max, the threshold θ, and the maximum window size w_max are parameters.

  • While I < I_max

    • Perform vertical decoding and horizontal decoding;

    • Every time a horizontal iteration is finished, I = I + 1;

    • If I = I_max

      • Calculate the average absolute LLR L̄_i of each block i in the current window.

      • If L̄_i < θ for any block i and w < w_max
        (1) w = w + 1;
        (2) I = 0;
        (3) Initialize the decoder with the channel LLRs of the w blocks in the extended window.
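The control flow of the window extension procedure can be sketched compactly as follows, with the actual BCJR-based window decoding abstracted behind callbacks (get_window and run_iterations are stand-ins of our own, not part of the algorithm statement):

```python
def decode_with_window_extension(get_window, run_iterations, theta, w_init, w_max):
    """Window extension control loop: after the maximum number of
    horizontal iterations, if any block's average absolute LLR in the
    window is still below the threshold theta, grow the window by one
    block and restart decoding; once the window reaches w_max the target
    block is decoded unconditionally.  get_window(w) returns the channel
    LLR blocks for a size-w window; run_iterations(window) returns the
    per-block average absolute LLRs after decoding.  Returns the window
    size at which the target block was decoded."""
    w = w_init
    while True:
        avg_llrs = run_iterations(get_window(w))
        if all(a >= theta for a in avg_llrs) or w >= w_max:
            return w          # decode the target block and shift the window
        w += 1                # extend the window and restart decoding
```

Because extension is triggered only by unreliable windows, the average window size stays close to w_init when error propagation is rare.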


For the same simulation conditions used in Fig. 3, the BER, BLER, and FER performance of rate R = 1/3 blockwise SBCCs with the window extension algorithm is shown in Fig. 6, where the initial window size is w = 3, the maximum window size is w_max = 6, and the threshold θ is fixed. We see that rate R = 1/3 blockwise SBCCs with window extension show an order of magnitude improvement in BER, BLER, and FER compared to the results of Fig. 3. We also remark that, even though w_max = 6, the average window size is only slightly larger than the initial window size w = 3, since window extension is only employed when error propagation is detected.

Fig. 6: BER/BLER/FER performance comparison of rate R = 1/3 blockwise SBCCs with and without the window extension algorithm.

III-B Resynchronization Mechanism

We see from Fig. 6 that the window extension algorithm greatly reduces the effect of error propagation. However, for very long frames or for streaming, even one occurrence of error propagation can be catastrophic. We now introduce a resynchronization mechanism to address this problem.

As noted above, the first block in a BCC encoder chain has two known input sequences. Therefore, the input LLRs in the first block are more reliable than for the succeeding blocks. Motivated by this observation, and assuming the availability of a noiseless binary feedback channel, we propose that, when the window extension algorithm is unable to stop error propagation, the encoder resets to the initial state and begins the encoding of a new chain. This resynchronization mechanism is described below.

After a target block is decoded in the window extension algorithm, if its average absolute LLR satisfies L̄ < θ, we consider this target block as failed. If we experience a prescribed number of consecutive failed target blocks, we declare an error propagation condition and initiate encoder and decoder resynchronization using the feedback channel. In other words, the encoder 1) sets the initial register states of the two component convolutional encoders to “0”, and 2) begins encoding the next block with two known (all “0”) input sequences together with the new information block. Meanwhile, the decoder makes decisions based on the current LLRs for the remaining blocks in the current window and restarts decoding once new blocks are received. Alg. 2 gives the detailed description.

        Algorithm 2: Resynchronization Algorithm

  • Let I and C denote the current number of horizontal iterations and the counter of consecutive failed target blocks, respectively. Let w denote the current window size, and set I = 0 and C = 0 initially.

  • While I < I_max
    1) Perform vertical decoding and horizontal decoding;
    2) I = I + 1;

  • Calculate the average absolute LLR L̄ of the target block.

  • If L̄ < θ, C = C + 1; else C = 0.

  • If C reaches the prescribed number of consecutive failed target blocks
    Resynchronize the encoder and decoder: 1) the initial register state of each component convolutional encoder is set to “0”; 2) one input sequence of each component encoder is all “0”, and the other input sequence is the new information block; 3) the decoder makes decisions based on the current LLRs for the remaining blocks in the current window and restarts decoding once new blocks are received.
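The trigger logic for declaring an error propagation condition can be sketched as follows (the class name and parameters are illustrative; the decoding itself and the feedback-channel signalling are abstracted away):

```python
class ResyncMonitor:
    """Tracks consecutive failed target blocks (average absolute LLR
    below the threshold) and signals when encoder/decoder
    resynchronization should be requested over the feedback channel."""

    def __init__(self, theta, max_failed):
        self.theta = theta            # reliability threshold on the average |LLR|
        self.max_failed = max_failed  # consecutive failures before resync
        self.failed = 0

    def update(self, avg_abs_llr):
        """Call once per decoded target block; returns True when
        resynchronization should be triggered."""
        if avg_abs_llr < self.theta:
            self.failed += 1
        else:
            self.failed = 0           # a reliable block resets the counter
        if self.failed >= self.max_failed:
            self.failed = 0           # counter restarts after resynchronization
            return True
        return False
```

Requiring several consecutive failed blocks avoids triggering a costly resynchronization on an isolated unreliable block.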


To demonstrate the efficiency of the resynchronization mechanism, the BER, BLER, and FER performance of rate R = 1/3 blockwise SBCCs with the window extension algorithm and the resynchronization mechanism is shown in Fig. 7 for the same simulation conditions used in Fig. 6. We see that, compared to the results of Fig. 3, rate R = 1/3 blockwise SBCCs with window extension and resynchronization gain approximately three orders of magnitude in BER and BLER and about one order of magnitude in FER.²

²Although the resynchronization mechanism terminates error propagation within a frame, thus improving both BER and BLER, it does not further reduce the number of frames in error.

Fig. 7: BER/BLER/FER comparison of rate R = 1/3 blockwise SBCCs with and without window extension and resynchronization.

IV Early Stopping Rule

The decoding complexity of BCCs with sliding window decoding depends mainly on the number of horizontal iterations. Therefore, in order to minimize unnecessary horizontal iterations, we introduce a soft BER stopping rule, which was first proposed for LDPC convolutional codes in [10]. Every time a horizontal iteration finishes, the average estimated BER of the target bits in the current window is obtained using the following steps.

  • Calculate the decision LLR L(u_j) (the sum of the channel LLR, the prior LLR, and the extrinsic LLR) of every information bit u_j in the target block;

  • The average estimated BER of the N target information bits is then given by
    P̂_b = (1/N) Σ_j 1 / (1 + e^{|L(u_j)|}).

If the average estimated BER of the target bits satisfies P̂_b < θ_BER, where θ_BER is a predefined threshold, decoding is stopped and a decision on the target symbols in the current window is made.
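Using the standard mapping from a decision LLR L to an estimated bit error probability 1/(1 + e^{|L|}), the stopping rule can be sketched as follows (the function names and the threshold parameter name are ours):

```python
import math

def estimated_ber(decision_llrs):
    """Soft BER estimate from decision LLRs: a bit with absolute LLR |L|
    is in error with estimated probability 1/(1 + e^{|L|}); the average
    over the target bits gives the estimated BER."""
    return sum(1.0 / (1.0 + math.exp(abs(l))) for l in decision_llrs) / len(decision_llrs)

def should_stop(decision_llrs, theta_ber):
    """Stop the horizontal iterations once the estimated BER of the
    target bits falls below the threshold theta_ber."""
    return estimated_ber(decision_llrs) < theta_ber
```

A completely uninformative bit (LLR of 0) contributes an error probability of 0.5, while large-magnitude LLRs contribute almost nothing, so the average falls below the threshold only when essentially all target bits are reliable.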

Note that the window extension algorithm, the resynchronization mechanism, and the soft BER stopping rule can operate together in a sliding window decoder. We now give an example to illustrate the tradeoff between performance and computational complexity when error propagation mitigation is combined with the stopping rule. Fig. 8 shows the performance of rate R = 1/3 blockwise SBCCs with window extension, resynchronization, and the stopping rule for the same simulation conditions used in Fig. 7. We see that using the stopping rule degrades the BER performance only slightly, but the BLER performance is negatively affected in the high SNR region.³ The average number of horizontal iterations per block is also shown in Fig. 9, where we see that the stopping rule greatly reduces the number of horizontal iterations, especially in the high SNR region.

³The BLER loss at high SNR can be reduced by using a smaller threshold θ_BER, at a cost of some increased complexity.

Fig. 8: BER/BLER comparison of rate R = 1/3 blockwise SBCCs with window extension and resynchronization, with and without the stopping rule.

Fig. 9: Number of horizontal iterations of rate R = 1/3 blockwise SBCCs with window extension and resynchronization, with and without the stopping rule.

V Conclusion

In this paper we investigated the error propagation problem associated with blockwise BCCs. A window extension algorithm and a resynchronization mechanism were proposed to mitigate error propagation, which can have a catastrophic effect on the performance for large frame lengths and continuous streaming operation. The BER and BLER performance of blockwise SBCCs with these two mitigation methods was shown to outperform the original blockwise SBCCs by about three orders of magnitude. Furthermore, a soft BER stopping rule was introduced and shown to significantly reduce decoding complexity with little effect on BER performance.


  • [1] W. Zhang, M. Lentmaier, K. Sh. Zigangirov, and D. J. Costello, Jr., “Braided convolutional codes: a new class of turbo-like codes,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 316-331, Jan. 2010.
  • [2] A. J. Feltström, M. Lentmaier, D. V. Truhachev, and K. S. Zigangirov, “Braided block codes,” IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2640-2658, Jun. 2009.
  • [3] P. Elias, “Error free coding,” IRE Trans. Inf. Theory, vol. 4, no. 4, pp. 29-37, Sep. 1954.
  • [4] M. Sipser and D. A. Spielman, “Expander codes,” IEEE Trans. Inf. Theory, vol. 42, no. 6, pp. 1710-1722, Nov. 1996.
  • [5] S. Moloudi, M. Lentmaier, and A. Graell i Amat, “Spatially coupled turbo-like codes,” IEEE Trans. Inf. Theory, vol. 63, no. 10, pp. 6199-6215, Oct. 2017.
  • [6] S. Moloudi, M. Lentmaier, and A. Graell i Amat, “Finite length weight enumerator analysis of braided convolutional codes,” in Proc. International Symposium on Information Theory and Its Applications, Monterey, CA, USA, Oct. 2016, pp. 488-492.
  • [7] M. Lentmaier, A. Sridharan, D. J. Costello, Jr., and K. S. Zigangirov, “Iterative decoding threshold analysis for LDPC convolutional codes,” IEEE Trans. Inf. Theory, vol. 56, no. 10, pp. 5274-5289, Oct. 2010.
  • [8] A. R. Iyengar, M. Papaleo, P. H. Siegel, J. K. Wolf, A. Vanelli-Coralli, and G. E. Corazza, “Windowed decoding of protograph-based LDPC convolutional codes over erasure channels,” IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2303-2320, April 2012.
  • [9] M. Zhu, D. G. M. Mitchell, M. Lentmaier, D. J. Costello, Jr., and B. Bai, “Braided convolutional codes with sliding window decoding,” IEEE Trans. on Communications, vol. 65, no. 9, pp. 3645-3658, Sept. 2017.
  • [10] N. Ul Hassan, A. E. Pusane, M. Lentmaier, G. P. Fettweis, and D. J. Costello, Jr., “Non-uniform window decoding schedules for spatially coupled LDPC codes,” IEEE Trans. on Communications, vol. 65, no. 2, pp. 501-510, Nov. 2016.