Error Propagation Mitigation in Sliding Window Decoding of Braided Convolutional Codes

04/28/2020 ∙ by Min Zhu, et al. ∙ New Mexico State University, University of Notre Dame, Lunds Tekniska Högskola, Xidian University

We investigate error propagation in sliding window decoding of braided convolutional codes (BCCs). Previous studies of BCCs have focused on iterative decoding thresholds, minimum distance properties, and their bit error rate (BER) performance at small to moderate frame lengths. Here, we consider a sliding window decoder in the context of large frame lengths, or one that continuously outputs blocks in a streaming fashion. In this case, decoder error propagation, due to the feedback inherent in BCCs, can be a serious problem. In order to mitigate the effects of error propagation, we propose several schemes: a window extension algorithm, where the decoder window size can be extended adaptively; a resynchronization mechanism, where we reset the encoder to the initial state; and a retransmission strategy, where erroneously decoded blocks are retransmitted. In addition, we introduce a soft BER stopping rule to reduce computational complexity, and the tradeoff between performance and complexity is examined. Simulation results show that, using the proposed window extension algorithm, resynchronization mechanism, and retransmission strategy, the BER performance of BCCs can be improved by up to four orders of magnitude in the signal-to-noise ratio operating range of interest, and that the soft BER stopping rule can be employed to reduce computational complexity.


I Introduction

Braided convolutional codes (BCCs), first introduced in [1], are a counterpart to braided block codes (BBCs) [2], which can be regarded as a diagonalized version of product codes [6] or expander codes [7]. (A type of BBC, braided Bose-Chaudhuri-Hocquenghem (BCH) codes [3], and the closely related staircase codes [4, 5] have been investigated for high-speed optical communication.) In contrast to BBCs, BCCs use short constraint length convolutional codes as component codes. The encoding of BCCs can be described by a two-dimensional sliding array of encoded symbols, where each symbol is protected by two component convolutional codes. In this sense, BCCs are a type of parallel-concatenated (turbo) code in which the parity outputs of one component encoder are fed back and used as inputs to the other component encoder at the succeeding time unit. Two variants of BCCs, tightly and sparsely braided codes, were considered in [1]. Tightly braided convolutional codes (TBCCs) are obtained if a dense array is used to store the information and parity symbols. This construction is deterministic and simple to implement but performs relatively poorly due to the absence of randomness. Alternatively, sparsely braided convolutional codes (SBCCs) employ random permutors and have "turbo-like" code properties, resulting in improved iterative decoding performance [1]. SBCCs can operate in either a bitwise or blockwise mode, depending on whether convolutional or block permutors are employed. Moloudi et al. characterized SBCCs as a type of spatially coupled turbo code with a regular graph structure and showed that threshold saturation occurs for iterative decoding of SBCCs over the binary erasure channel [8, 9], and Farooq et al. proposed a technique to compute the thresholds of SBCCs on the additive white Gaussian noise (AWGN) channel [10]. It was also shown numerically that the free (minimum) distance of bitwise and blockwise SBCCs grows linearly with the overall constraint length, leading to the conjecture that SBCCs, unlike parallel or serially concatenated codes, are asymptotically good [1, 9, 11].

Due to their turbo-like structure, SBCCs can be decoded with iterative decoding. Analogous to LDPC convolutional codes [12, 13], SBCCs can employ sliding window decoding (SWD) for low latency operation [14]. Unlike SWD of LDPC convolutional codes, which typically uses an iterative belief-propagation (BP) message passing algorithm, SWD of SBCCs is based on the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. It has been shown that blockwise SBCCs with SWD have excellent performance [14], but for large frame lengths or streaming (continuous transmission) applications, it has been observed that SBCCs are susceptible to infrequent but severe decoder error propagation [15]. That is, once a block decoding error occurs, decoding of the following blocks can be affected, which in turn can cause a continuous string of block errors and result in unacceptable performance loss. Although streaming codes have been widely investigated [16, 17, 18, 19], our paper focuses only on the use of capacity-approaching codes and the desire to limit latency in such cases by employing SWD. To our knowledge, the only other work to consider the error propagation problem with SWD of capacity-approaching codes is the recent paper by Klaiber et al. [20]. That paper considered spatially coupled LDPC codes, and the mitigation methods developed there, including adapting the number of iterations and window shifting, are different from the ones we propose for BCCs.

In this paper, we examine the causes of error propagation in SWD of SBCCs and propose several error propagation mitigation techniques. Specifically, based on a prediction of the reliability of a decoded block, a window extension algorithm, a resynchronization mechanism, and a retransmission strategy are introduced to combat the error propagation. In addition, a soft bit-error-rate stopping rule is proposed to reduce decoding complexity and the resulting tradeoff between decoding performance and decoding complexity is explored.

II Review of Braided Convolutional Codes

In this section, we briefly review the encoding and SWD of blockwise SBCCs. For further details, please refer to [1] and [14].

II-A Encoding

SBCCs are constructed using a turbo-like parallel concatenation of two component encoders. However, unlike turbo codes, the two encoders share parity feedback. In this manner, the systematic and parity symbols are "braided" together. In this paper, we restrict our discussion to rate-1/3 blockwise SBCCs, but generalization to other rates and to bitwise SBCCs is straightforward. In this case, the information sequence enters the encoder in a block-by-block manner, typically with a relatively large block size. Fig. 1 shows the encoding process for a rate-1/3 blockwise SBCC, which utilizes two recursive systematic convolutional (RSC) component encoders, each of rate 2/3, together with three block permutors whose length equals the information block length. The information sequence is divided into blocks of equal length. At each time unit, the current information block is interleaved using one of the permutors, and the information block and its interleaved version enter the two component encoders. The parity outputs of the two component encoders are delayed by one time unit, interleaved using the other two permutors, respectively, and then enter the component encoders as the second input sequences at the next time unit. The information block, the parity output block of encoder 1, and the parity output block of encoder 2 are sent over the channel as the encoded block at each time unit.


Fig. 1: Encoder for a rate-1/3 blockwise SBCC.

In order to depict the encoding process conceptually, a chain of encoders that operates at different time instants is illustrated in Fig. 2. At each time instant, there is a turbo-like encoder which consists of two parallel concatenated RSC component encoders. These turbo-like encoders are coupled by feeding the parity sequences generated at the current time instant to the encoders at the next time instant, so that the coupling memory is 1 in this case. For initialization, at time instant 0, we assume that the two parity feedback input sequences are all-zero blocks and that both component encoders start in the all-zero state.
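To make the coupling concrete, the following minimal Python sketch mirrors the encoder chain of Fig. 2 under the stated all-zero initialization. The component RSC encoders and the three block permutors are passed in as arguments rather than implemented, and the routing of the fed-back parity blocks (which delayed parity stream, through which permutor, feeds which encoder) is an assumed cross-feedback arrangement for illustration, not a restatement of Fig. 1.

```python
# Minimal sketch of the blockwise SBCC encoder chain (coupling memory 1).
# rsc1/rsc2 stand in for the rate-2/3 RSC component encoders and perm0/perm1/
# perm2 for the three block permutors; the cross-feedback routing below is an
# illustrative assumption.
import numpy as np

def sbcc_encode_chain(info_blocks, rsc1, rsc2, perm0, perm1, perm2):
    """info_blocks: list of 0/1 arrays of length T.
    rsc1, rsc2: callables mapping (info_in, parity_in, state) -> (parity_out, new_state).
    perm0, perm1, perm2: length-T permutations given as index arrays."""
    T = len(info_blocks[0])
    v1 = np.zeros(T, dtype=int)  # parity output of encoder 1 from the previous time unit
    v2 = np.zeros(T, dtype=int)  # parity output of encoder 2 from the previous time unit
    s1 = s2 = 0                  # component encoder register states (all-zero initialization)
    coded = []
    for u in info_blocks:
        u_tilde = u[perm0]                         # interleaved information block
        p1, s1 = rsc1(u,       v2[perm1], s1)      # encoder 1: info block + fed-back parity
        p2, s2 = rsc2(u_tilde, v1[perm2], s2)      # encoder 2: interleaved info + fed-back parity
        coded.append(np.concatenate([u, p1, p2]))  # transmit info and both parity blocks
        v1, v2 = p1, p2                            # one-time-unit delay of the parity feedback
    return coded
```

A trivial placeholder such as lambda u, v, s: (u ^ v, s) can be used in place of the RSC encoders simply to exercise the data flow of the chain.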


Fig. 2: Encoder chain for a rate-1/3 blockwise SBCC.

Transmission can either be terminated after a frame of encoded blocks by inserting a small number of additional termination blocks, in which case we suffer a slight rate loss, or left unterminated (in a continuous streaming fashion), in which case there is no rate loss.

II-B Sliding Window Decoding

In order to help describe the proposed error propagation mitigation methods, the structure of the sliding window decoder [14] is shown in Fig. 3. The decoding window spans a fixed number of received blocks, referred to as the window size, and the oldest block in the window is the target block for decoding. The decoding process in a window begins with turbo, or vertical, iterations on the target block, during which the two component convolutional codes pass soft messages on the information bits in that block to each other. Then, soft messages on the parity bits are passed forward, and vertical iterations are performed on the next block. This continues until vertical iterations have been performed on the last received block in the window. The process is then repeated in the backward direction (from the last block to the first block in the window), with soft messages being passed back through the two BCJR decoders. This round trip of decoding is called a horizontal iteration. After a fixed maximum number of horizontal iterations, the target symbols are decoded, and the window shifts forward to the next position, where the symbols of the next block become the target symbols. (Other decoding schedules were proposed in [14], but those do not affect the general discussion in this paper.)
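A minimal sketch of this schedule is given below; the placeholder vertical_update hides all of the actual BCJR processing and the message exchange with adjacent blocks, and the parameter names are chosen here for illustration.

```python
# Sketch of one window position of the sliding window decoder (uniform schedule).
def decode_window_position(first, window_size, num_horizontal, num_vertical,
                           vertical_update, decide):
    """vertical_update(t, direction): turbo (vertical) iterations of the two BCJR
    component decoders on block t, including message passing to the adjacent block.
    decide(t): hard decisions on the information bits of block t."""
    last = first + window_size - 1
    for _ in range(num_horizontal):            # horizontal (round-trip) iterations
        for t in range(first, last + 1):       # forward pass: first -> last block
            for _ in range(num_vertical):
                vertical_update(t, "forward")
        for t in range(last, first - 1, -1):   # backward pass: last -> first block
            for _ in range(num_vertical):
                vertical_update(t, "backward")
    return decide(first)                       # decode the target block, then shift the window
```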


Fig. 3: Sliding window decoder for blockwise SBCCs [14].

III Error Propagation

Since an encoded block in a blockwise SBCC affects the encoding of the next block (see Fig. 2), each time a block of target symbols is decoded, the log-likelihood ratios (LLRs) associated with the decoded symbols also affect the decoding of the next block. Hence if, after a fixed maximum number of decoding iterations, some unreliable LLRs remain in the target block and cause a block decoding error, those unreliable LLRs can potentially trigger a string of additional block errors, resulting in error propagation.

III-A Motivation

Example 1: To illustrate this effect, we consider an example of two identical 4-state RSC component encoders whose generator matrix is given by

(1)

where we assume the encoders are left unterminated at the end of each block. The three block permutors are assumed to be chosen randomly with the same size, and we also assume that transmission stops after a frame of a fixed number of blocks is decoded and that a uniform decoding schedule is used (see [14] for details). (In a uniform decoding schedule, the same number of vertical iterations in the forward message passing process, from the first block to the last block in the decoding window, are performed on each block; likewise, in the backward message passing process, from the last block to the first block in the window, the same number of vertical iterations are performed.) The bit error rate (BER), block error rate (BLER), and frame error rate (FER) performance for transmission over the AWGN channel with BPSK signalling is plotted in Fig. 4 as a function of the channel signal-to-noise ratio (SNR), for fixed values of the window size, the number of vertical iterations, the number of horizontal iterations, and the frame length in blocks.

From Fig. 4, we see that the rate-1/3 blockwise SBCC performs about 0.5 dB away from the Shannon limit and 0.4 dB away from the finite-length bound [21] at the lowest BER shown. Even so, among the 10000 simulated frames, several were observed to exhibit error propagation; for example, 9 such frames were observed at the SNR considered in Fig. 5. In order to depict the error propagation phenomenon clearly, we show the bit error distribution per block of one frame with error propagation in Fig. 5(a). We see that, for the smaller number of iterations, from block 830 on, the number of error bits is large, and the errors continue to the end of the frame, a clear case of error propagation. With the larger number of iterations, error propagation starts two blocks later, but we see that the overall effect of increasing the number of iterations is minimal. (In related work on spatially coupled LDPC codes with sliding window decoding, Klaiber et al. [20] have also noted a problem with error propagation and have successfully employed an adaptive number of iterations and window shifting to improve performance in that case.) On the other hand, the bit error distribution per block, based on 10000 simulated frames with two different window sizes, is shown in Fig. 5(b), where we see that increasing the window size from 3 to 4 reduces the number of error propagation frames from 9 to 1, thus significantly improving performance.


Fig. 4: The BER, BLER, and FER performance of a rate SBCC with and .
Fig. 5: The bit error distribution per block for a rate blockwise SBCC with : (a) one frame with different numbers of iterations, , and (b) 10000 frames with different window sizes, , .

III-B A Decoder Model of Error Propagation

Assuming that information is transmitted in frames consisting of many blocks, a significant number of blocks could be affected by error propagation if the frame length is large, thus severely degrading the BLER performance. We now give a brief analysis of how error propagation affects the BLER performance of SWD. (A similar analysis was presented in a recent paper [23] on SWD of spatially coupled low-density parity-check (LDPC) codes.)

Assume that, in any given frame, the decoder operates in one of two states: a random error state, in which block errors occur independently with some probability, and an error propagation state, in which block errors occur with probability 1. Also assume that, at each time unit, the decoder transitions from the random error state to the error propagation state independently with some (typically very small) probability and that, once in the error propagation state, the decoder remains there for the rest of the frame. (A given frame can thus (1) operate entirely in the random error state, where error propagation never occurs, (2) start in the random error state and then at some time transition to the error propagation state, or (3) operate entirely in the error propagation state, where the very first block is decoded incorrectly and block errors continue throughout the rest of the frame.) A state diagram describing this situation is shown in Fig. 6.

Fig. 6: The state diagram describing the operation of a decoder subject to error propagation.

Consider a simulation scenario in which the information block size and the total number of blocks to be simulated are fixed, where the total number of simulated blocks is the product of the number of simulated frames and the frame length, and the total number of simulated symbols is the total number of simulated blocks times the block size. Under normal (random error) decoder operating conditions, the simulated BLER should be independent of the particular combination of frame length and number of simulated frames chosen. When decoder error propagation is possible, however, we now show that, for a fixed total number of simulated blocks, the frame length and the number of simulated frames can affect the simulated BLER.

For a frame of a given length, we express the probability that the decoder first enters the error propagation state at a given time instant (and thus stays in that state until the end of the frame) as

(2)

where interval notation is used to denote the corresponding set of consecutive time units. Similarly, we can write the probability that the decoder stays in the random error state throughout the entire frame as

(3)

Now, given that a frame enters the error propagation state at a particular time, we can express the average BLER as

(4a)
(4b)

where we note that the error propagation state must be preceded by at least one correctly decoded block. Finally, we can write the overall average BLER as

(5)

Looking at (5), it is clear that, if the probability of entering the error propagation state is zero, i.e., we never enter that state, then the average BLER equals the random block error probability, independent of the frame length. This is the normal condition under which Monte Carlo simulations are conducted. However, under error propagation conditions, the simulated BLER will increase as a function of the frame length. We also note that the two model parameters will depend both on the channel SNR and on the decoder window size. In general, both lower SNRs and smaller window sizes result in larger values of the random block error probability and the error propagation probability, making the performance more sensitive to large frame lengths. By contrast, high SNR and a large window size reduce both probabilities, making performance less sensitive to the frame length.
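Since the closed-form expressions (2)-(5) are not reproduced here, the following Monte Carlo sketch of the two-state model of Fig. 6 illustrates the same effect numerically. The symbol names p (random block error probability in state R), q (per-block probability of entering the error propagation state E), and f (frame length in blocks) are chosen here for illustration.

```python
# Monte Carlo sketch of the two-state decoder model of Fig. 6.
import random

def simulated_bler(p, q, f, num_frames):
    """Average simulated BLER over num_frames frames of f blocks each."""
    block_errors = 0
    for _ in range(num_frames):
        in_E = False                              # each frame starts in the random error state R
        for _ in range(f):
            if not in_E and random.random() < q:  # transition R -> E with probability q
                in_E = True
            if in_E or random.random() < p:       # state E: certain error; state R: error w.p. p
                block_errors += 1
    return block_errors / (num_frames * f)

# For a fixed total number of simulated blocks, longer frames yield a higher
# simulated BLER whenever q > 0:
if __name__ == "__main__":
    total_blocks = 10**6
    for f in (100, 1000, 10000):
        print(f, simulated_bler(p=1e-3, q=1e-4, f=f, num_frames=total_blocks // f))
```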

Example 2: Consider a rate-1/3 blockwise BCC with a fixed information block size and window size and different combinations of frame length and number of simulated frames such that the total number of simulated blocks (and hence the total number of simulated symbols) is the same in each case. The BER, BLER, and FER performance is shown in Fig. 7.

From Fig. 7, we see that the simulation runs with a larger frame length and a smaller number of simulated frames exhibit higher error rates than those with a smaller frame length and a larger number of frames, for the same total number of simulated blocks. Also note that, in a true streaming environment (unbounded frame length), the BER will tend to 0.5 and both the BLER and FER will tend to 1.0!

Fig. 7: The BER, BLER, and FER performance of a BCC with =1000, =5, and different values of and such that .

This example makes clear that, for large frame lengths, and particularly for streaming transmission, error propagation will severely degrade the decoding performance illustrated in Fig. 4. In the next section, we look more carefully at the error propagation statistics, and then in Section V we introduce three ways of mitigating error propagation in sliding window decoding of SBCCs.

IV What Causes Error Propagation?

In this section, we investigate the causes of error propagation during sliding window decoding of SBCCs. To this end, we introduce the concept of a superstate to describe the complete state of the encoder at a given time, i.e., the information needed to generate the encoded block from the T-symbol information block. From Fig. 1, with the 4-state RSC component encoders of (1), we see that the superstate consists of the two T-bit parity input sequences from the previous block plus the four component encoder register bits at the beginning of a block, which together determine the output block for a given input block. (The component RSC encoders are not terminated at the end of a block, so the register bits at the beginning of a block are the same as those at the end of the previous block.)
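As an illustration of the bookkeeping this involves, the following sketch records a superstate for T-bit parity input blocks and 4-state (2 register bits) component encoders; the field and method names are illustrative and not taken from the paper.

```python
# Sketch of a superstate record used to track decoder re-merging (cf. Figs. 14 and 15).
from dataclasses import dataclass
import numpy as np

@dataclass
class Superstate:
    parity_in_1: np.ndarray  # T-bit parity input block of component encoder 1
    parity_in_2: np.ndarray  # T-bit parity input block of component encoder 2
    state_1: int             # 2-bit register state of encoder 1 at the block boundary
    state_2: int             # 2-bit register state of encoder 2 at the block boundary

    def differs_from(self, other):
        """Component-wise comparison with the superstate of the correctly encoded sequence."""
        return (not np.array_equal(self.parity_in_1, other.parity_in_1),
                not np.array_equal(self.parity_in_2, other.parity_in_2),
                self.state_1 != other.state_1,
                self.state_2 != other.state_2)
```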

In the following, we give two examples with different permutor sizes to illustrate the causes of error propagation.

Example 3: We first consider the case of a large permutor (block) size. 10000 frames of the rate-1/3 blockwise SBCC from Example 1 were simulated at a fixed channel SNR, with the decoded LLR magnitudes capped at a fixed maximum value. The simulated frames consisted of correct frames, frames with short bursts of one or two block errors, and error-propagation frames. (When consecutive error blocks continue to the end of a frame, we call it error propagation; when the last block in a sequence of one or more consecutive error blocks does not coincide with the end of a frame, we call it a burst error.) The frequency of the burst-error frames and error-propagation frames among the 10000 simulated frames, along with the mean burst length, is shown in Fig. 8. (Note that, since there may be multiple burst errors in a frame, or a frame may contain burst errors along with error propagation, the total number of burst errors may exceed the number of frames containing burst errors.)

Fig. 8: The frequency of the error frames in a rate SBCC with , , , and at dB.

Fig. 9 shows the bit error distribution per block for an example error-propagation frame selected from the 10000 simulated frames. (The example frames shown here and below demonstrate the typical behavior of all the recorded error frames of a given type.) Here, we see that the error propagation starts at block 606, which has 354 errors, and continues to the end of the frame. The number of bit errors in block 607 increases to around 1200. Then, in the remaining blocks, the number of bit errors is around 1500.


Fig. 9: The bit error distribution per block in an example error-propagation frame from a rate SBCC with .

In Fig. 10, we show the decoded LLRs of blocks 605 (0 errors), 606 (354 errors), and 607 (1224 errors) of this example error-propagation frame. We see that the LLR magnitudes of block 605 are mostly around 20, while the LLRs of block 606 range from about -10 to +10 almost uniformly, and the LLR magnitudes of block 607 are mostly around zero. This indicates that when error propagation begins, the average LLR magnitudes in a block quickly deteriorate to around zero, resulting in a sequence of unreliable blocks.

Fig. 10: The LLRs of blocks 605, 606, and 607 in an example error-propagation frame from a rate SBCC with .
Fig. 11: The LLRs of blocks 187, 188, and 189 in an example erroneous frame that does not display error propagation from a rate SBCC with .

We now examine the bit error distribution per block in a typical erroneous frame that does not exhibit error propagation, selected from the same 10000 simulated frames. The example frame selected contains a total of 3 error bits confined to block 188. Fig. 11 shows the decoded LLRs of blocks 187 (0 errors), 188 (3 errors), and 189 (0 errors). In this case, we see that a small number of bit errors in a single block does not trigger error propagation. In this regard, it is instructive to contrast the LLRs of block 606 in Fig. 10, which triggers error propagation, with those of block 188 in Fig. 11, which does not.

In summary, for large block size , a small number of bit errors in a block tends to affect only one or (occasionally) two blocks at a time, while larger numbers of bit errors in a block typically trigger error propagation. Also, when error propagation occurs, the corresponding decoded LLR magnitudes are highly unreliable, which indicates that we can design mitigation measures to detect and combat error propagation based on the decoded LLR magnitudes.

Example 4: We next consider the case of a smaller permutor (block) size. 10000 frames of the rate-1/3 blockwise SBCC from Example 1 were simulated at a fixed channel SNR. The frequency of the burst-error frames and error-propagation frames among the 10000 simulated frames, along with the mean burst length, is shown in Fig. 12. We see that, compared to using a larger permutor (block) size (see Fig. 8), burst-error frames are in the majority, the burst errors are longer on average, and there are relatively few error-propagation frames.


Fig. 12: The frequency of the error frames in a rate SBCC with , , , , and at .

We now examine a typical burst-error frame, which has burst length 14 (from block 879 to block 892), in more detail. Fig. 13 shows the bit error distribution per block along with the decoded LLRs. We see that in this case, with the smaller block size, unlike the case of Fig. 9 with the larger block size, the decoder recovers from the burst of block errors, and error propagation does not occur. However, the average magnitudes of the LLRs in the burst are relatively small (roughly between -10 and +10), which is similar to the LLR behavior observed at the onset of error propagation for the larger block size (block 606 in Fig. 10).

In order to better understand the process of decoder recovery from an error burst, we tracked the superstates obtained from the decoded sequence by making hard decisions on the LLRs of the two parity output blocks (parity inputs for the next block) and the two encoder states after each information block is decoded and compared them to the superstates obtained by encoding the correct sequence of information blocks. Figs. 14-15 show the comparative results of these two superstate sequences, where, in order to highlight the details of the burst-error blocks, we only show the results in their vicinity. (The superstates corresponding to the blocks not shown in Figs. 14-15 are the same in both cases.) From Fig. 14, we see that the parity input block portion of the superstate sequences differs from block 880 to block 892, which agrees exactly with the distribution of burst-error blocks. In other words, starting with block 879 and continuing through block 891, the hard decisions obtained from the parity output block LLRs of both component decoders are incorrect, causing incorrect parity input blocks in the succeeding blocks. Fig. 15 compares the initial encoder state portion of the superstate (obtained by making hard decisions on the final encoder state LLRs of the previous block) in the two cases. Here, the results are somewhat different, with encoder 1 having only 7 different initial states (out of the 13 error blocks), while encoder 2 has only 3 different initial states. In other words, the 100-bit initial parity input block portion of the superstate has a greater influence on the propagation of block errors than does the 2-bit initial encoder state portion, and error propagation only ends when both the parity input blocks and the initial encoder states remerge. Also, although we see here that (particularly for small block sizes) bursts of block errors don’t necessarily result in error propagation and the decoder can recover, additional burst-error blocks can occur later in a long frame or in a streaming application.

Fig. 13: The bit error distribution per block and the LLRs of blocks 877 to 894 in frame 843 of a rate SBCC with .
(a) Encoder 1
(b) Encoder 2
Fig. 14: The difference between the actual sequence of parity input blocks and the correct sequence of parity input blocks in each block (“1” represents “different” and “0” represents “the same”).
(a) Encoder 1
(b) Encoder 2
Fig. 15: The difference between the actual initial encoder state sequences and the correct initial encoder state sequences (“1” represents “different” and “0” represents “the same”).

Examples 3 and 4 show that, for larger permutor (block) sizes, error propagation or single block error frames are the most likely, while for smaller permutor (block) sizes, burst-error frames occur more often. Therefore it is necessary to design mitigation techniques to combat both error propagation and burst errors. Based on the information obtained in Examples 3 and 4, i.e., that the absolute values of the LLRs of the information bits decrease during error propagation or burst errors, algorithms can be designed to combat these error conditions. In addition, it is important to be able to detect error propagation or a burst error early in the process, to avoid having to accept large numbers of decoded block errors. Therefore, the span of blocks over which the LLRs are observed must be carefully chosen.

In the following section, we present three techniques designed to mitigate error propagation and burst errors of finite duration.

V Error Propagation Mitigation

In this section, we propose a window extension algorithm, a resynchronization mechanism, and a retransmission strategy to mitigate the effect of error propagation in sliding window decoding of SBCCs.

V-A Window Extension Algorithm

In [14], window decoding of SBCCs is performed with a fixed window size. Based on the results presented in Fig. 5, we now introduce a variable window size concept for sliding window decoding, where the window size can change from an initial value up to a maximum value. Before describing the window extension algorithm, we give some definitions. Consider the decision LLRs of the information bits in a given block of the current window after a given horizontal iteration. Then the average absolute LLR of the information bits in that block after that horizontal iteration is given by

(6)

Also, we define the observation span as the number of consecutive blocks in the decoding window over which the average absolute LLRs are to be examined.

During the decoding process, the window extension algorithm operates as follows: starting from the initial window size, when the number of horizontal iterations reaches its maximum value, if any of the average absolute LLRs of the first few blocks in the current window (up to the observation span) is lower than a predefined threshold, i.e., if

(7)

then the target block is not decoded, the window size is increased by 1, and the decoding process restarts with horizontal iteration number 1. (When decoding restarts, all the LLRs in the old blocks, except for the channel LLRs, are reinitialized to zero; in other words, the previous intermediate messages are not reused.) This process continues until either the target block is decoded or the window size reaches its maximum value, in which case the target block is decoded regardless of whether (7) is satisfied.

Fig. 16 illustrates how the decoder window size increases by 1 each time (7) is satisfied, up to the maximum window size. Note that when window extension is triggered, the decoding delay, along with the decoding complexity, increases, so that an average latency measure must be adopted to characterize delay. Also, some buffering is required, and the decoder output is no longer continuous. These practical considerations suggest that the maximum window size should not be too large. (Since, during horizontal iterations, messages from a given block are only shared with one adjacent block, the processing can be achieved, in principle, by using the existing hardware with a fixed window size serially, along with additional memory, to increase the effective window size as needed. If error propagation persists given this constraint, window extension can be combined with one of the other mitigation methods, as discussed later in this section.) Full details of the window extension algorithm are given in Algorithm 1 in the appendix; a simplified sketch of the extension test is given below.
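As a minimal sketch of the extension test, assuming per-block arrays of decision LLRs and illustrative names w0 (initial window size), w_max (maximum window size), obs_span (observation span), and llr_threshold (the threshold in (7)):

```python
# Sketch of the window extension test of Section V-A.
import numpy as np

def block_avg_abs_llr(llrs):
    """Average absolute decision LLR of the information bits in one block, cf. (6)."""
    return float(np.mean(np.abs(llrs)))

def needs_extension(window_llrs, obs_span, llr_threshold):
    """True if any of the first obs_span blocks in the window is unreliable, cf. (7)."""
    return any(block_avg_abs_llr(window_llrs[i]) < llr_threshold
               for i in range(min(obs_span, len(window_llrs))))

def decode_with_window_extension(decode_window, w0, w_max, obs_span, llr_threshold):
    """decode_window(w) is a placeholder that runs sliding window decoding with
    window size w (all non-channel LLRs reinitialized) and returns per-block LLRs."""
    w = w0
    while True:
        window_llrs = decode_window(w)
        if w == w_max or not needs_extension(window_llrs, obs_span, llr_threshold):
            return window_llrs  # decode the target block (regardless of (7) if w == w_max)
        w += 1                  # extend the window by one block and restart decoding
```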


Fig. 16: Sliding window decoder with the window extension algorithm.
(a) , , , , and .
(b) , , , , .
Fig. 17: BER (solid curves), BLER (dashed curves), and FER (dotted curves) performance comparison of a rate SBCC with and without the window extension.

For the same simulation parameters used in Example 1, the BER, BLER, and FER performance of a rate-1/3 blockwise SBCC both with and without the window extension algorithm is shown in Fig. 17(a), for fixed values of the initial and maximum window sizes, the observation span, and the LLR threshold. (After some experimentation, the chosen maximum window size was found to give a reasonable tradeoff among complexity, memory requirements, and delay in this example, and the choice of threshold is based on the information regarding typical LLR magnitudes during error bursts and error propagation presented in Figs. 11 and 13.) Note that the higher the threshold, the more often window extension is triggered, which increases decoding complexity, while smaller values of the threshold risk failing to detect error propagation. (Throughout the remainder of this section, we assume the same values of these parameters.) We see that window extension shows an order of magnitude improvement in BER, BLER, and FER compared to using a fixed window size. We also remark that, even though a larger maximum window size is permitted, the average window size is found to be only slightly larger than the initial window size, since window extension is only activated in the few cases when error propagation is detected.

To examine the effect of a smaller block size, the BER, BLER, and FER performance of the rate-1/3 blockwise SBCC of Example 1 both with and without window extension is shown in Fig. 17(b) for a smaller permutor size and correspondingly chosen decoder parameters. We again see that window extension shows almost an order of magnitude improvement in BER, BLER, and FER compared to using a fixed window size, and that the average window size is again only slightly larger than the initial window size.

To further illustrate the performance gains achieved by the window extension algorithm, the frequency of the burst-error frames and error-propagation frames over a total of 10000 frames, along with the mean burst length, is shown in Fig. 18 for the same SNR and block size as in Fig. 8. In this case, compared to Fig. 8, we see that window extension reduces the frequency of both error-propagation frames and length-1 burst-error frames by roughly a factor of 10, while completely eliminating the small number of bursts of length 2. Also, we have observed empirically that the frequency of error frames decreases as we increase the observation span. Therefore, in order to maintain an acceptable tradeoff between performance and decoding complexity (increasing the observation span also increases the complexity of performing the threshold test in (7)), we typically choose

(8)
Fig. 18: The frequency of the error frames in a rate SBCC with window extension for , , and at dB.

To again examine the effect of a smaller block size, Fig. 19 shows the frequency of the burst-error frames and error-propagation frames over a total of 10000 frames, along with the mean burst length, both with and without window extension, for a smaller block size at a fixed SNR. Plots are included for two different values of the observation span. Unlike the large block size case, we see here that window decoding results in many different burst-error lengths. (More detailed information about the error frames is given in Table I, where any frame containing error propagation is counted as an error-propagation frame and the number of burst-error frames includes those with both single and multiple burst errors.) In particular, without window extension, we experience burst errors as long as 691 blocks, a mean burst length of 189.49 blocks, and 19 error-propagation frames. With window extension, the total number of burst-error frames, the maximum length of error bursts, the mean burst length, and the number of error-propagation frames are all reduced, with the larger of the two observation spans performing better, consistent with our choice in (8).

(a) Window decoding of an SBCC without window extension, ,
(b) Window decoding of an SBCC with window extension, , , , ,
(c) Window decoding of an SBCC with window extension, , , , ,
Fig. 19: The frequency of the error frames in a rate SBCC with and without window extension for at dB.
                        Number of      Number of error-     Number of burst-   Largest      Mean
                        error frames   propagation frames   error frames       burst size   burst size
No window extension     74             19                   55                 691          189.49
With window extension   35             6                    29                 455          121.14
With window extension   28             4                    24                 443          120.67
TABLE I: The distribution of error frames for a rate-1/3 SBCC with and without window extension, for the block size and SNR of Fig. 19 (the two window extension rows correspond to the two observation spans considered there).

Considering the effect of an even smaller block size, Fig. 20 shows the frequency of the burst-error frames and error-propagation frames over a total of 10000 frames, along with the mean burst length, with window extension for the block size and SNR of Example 4. Comparing to Fig. 12 without window extension, we see that window extension reduces the frequency of error-propagation frames and reduces the frequency of burst-error frames by about a factor of 4, while the mean burst length stays about the same.

Fig. 20: The frequency of the error frames in a rate SBCC with window extension for at dB.

V-B Resynchronization Mechanism

We see from Fig. 17(a) that the window extension algorithm greatly reduces the effect of error propagation. However, for very long frames or for streaming applications, even one occurrence of error propagation can be catastrophic. We now introduce a resynchronization mechanism to address this problem. (Resynchronization can be employed with or without window extension; it is considered without window extension in Section V-B and with window extension in Section V-C.)

As noted above, the parity input sequences in the first block of an SBCC encoder output sequence are known. Therefore, the input LLRs for the first block are more reliable than those for the succeeding blocks. Motivated by this observation, and assuming the availability of an instantaneous noiseless binary feedback channel, we propose that, when the sliding window decoding algorithm is unable to recover from error propagation, the encoder resets to the known initial (all-zero) state and restarts encoding. This resynchronization mechanism is described below.

In attempting to decode the target block in the window decoding algorithm, if the average absolute LLR of the target block satisfies

(9)

we consider the target block as failed, where the threshold in (9) is the same predefined threshold employed in window extension. If we experience a certain number of consecutive failed target blocks, we then declare an error propagation condition and initiate encoder and decoder resynchronization using the feedback channel. In other words, the encoder 1) sets the initial states of the two component convolutional encoders to "0", and 2) begins encoding the next block with two known (all "0") parity input sequences together with the next information block. Meanwhile, the decoder makes decisions based on the current LLRs for the blocks in the current window and restarts decoding once new blocks are received. Full details of the resynchronization mechanism are given in Algorithm 2 in the appendix; a simplified sketch of the trigger logic is given below.
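A minimal sketch of the trigger logic, assuming an illustrative parameter max_failed for the number of consecutive failed target blocks that declares error propagation and placeholder functions for the decoder and the feedback channel:

```python
# Sketch of the resynchronization trigger of Section V-B.
def run_with_resynchronization(decode_next_target, avg_abs_llr, llr_threshold,
                               max_failed, request_resync):
    """decode_next_target(): decodes one target block and returns its decision LLRs
    (or None at the end of the stream); request_resync(): tells the encoder, over the
    noiseless feedback channel, to reset its component encoder states and to encode the
    next block with all-zero parity inputs. Both are placeholders for the real system."""
    consecutive_failed = 0
    while True:
        llrs = decode_next_target()
        if llrs is None:
            return
        if avg_abs_llr(llrs) < llr_threshold:   # target block declared failed, cf. (9)
            consecutive_failed += 1
        else:
            consecutive_failed = 0
        if consecutive_failed >= max_failed:    # declare error propagation
            request_resync()                    # decoder outputs current decisions and
            consecutive_failed = 0              # restarts once new (resynchronized) blocks arrive
```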

In order to test the efficiency of resynchronization, we simulated the rate-1/3 blockwise SBCC of Example 1 with different permutor (block) sizes and different numbers of consecutive failed target blocks required to trigger resynchronization. Fig. 21(a) shows the BER/BLER performance comparison with and without resynchronization. (Although resynchronization terminates error propagation in a frame, thus improving both the BER and the BLER, it does not reduce the number of frames in error. For this reason, FER results are not included in Figs. 21(a) and 21(b).) We see that, with the help of resynchronization, we obtain about two orders of magnitude improvement in both the BER and the BLER in the typical SNR operating range. (Note that Fig. 18 implies that triggering resynchronization after a single failed target block would not be a good choice here, since the high frequency of single block errors would result in only modest improvements in BER/BLER at a cost of significantly more resynchronization requests, i.e., increased decoding complexity.) We also note that the curves tend to merge as the SNR increases, since error propagation, and thus the need for window extension or resynchronization, is rare under good channel operating conditions.

(a) , , and .
(b) and for and .
Fig. 21: BER (solid curves) and BLER (dashed curves) comparison of a rate SBCC with and without resynchronization.

Fig. 21(b) shows the BER/BLER performance comparison with resynchronization for two different values of the number of consecutive failed target blocks required to trigger resynchronization, for a smaller block size. We see that triggering resynchronization on a single failed target block performs slightly better, which implies that, for short block lengths, resynchronization should be launched as soon as (9) is satisfied by a single target block. (Fig. 19(a) implies that this is a good choice here because of the relative scarcity of single block errors.)

V-C Window Extension plus Resynchronization

Window extension and resynchronization can also be employed together in order to further mitigate the effects of error propagation. Basically, window extension is triggered whenever (7) is satisfied. When the window size reaches its maximum value and (7) is still satisfied, the decoder resets to the initial window size and then checks whether (9) is satisfied. If so, resynchronization is launched. Algorithm 3 in the appendix gives the details of window extension plus resynchronization.

To demonstrate the efficiency of resynchronization combined with window extension, the BER, BLER, and FER performance of a rate-1/3 blockwise SBCC employing both techniques is shown in Fig. 22. We see that, compared to the blockwise SBCC of Example 1, the rate-1/3 blockwise SBCC with window extension and resynchronization gains approximately two orders of magnitude in BER and BLER and about one order of magnitude in FER at typical operating SNRs. (Including window extension along with resynchronization allows improvements in the FER, unlike the results for resynchronization alone.) We also note that, comparing to Fig. 21(b), combining resynchronization with window extension gains almost an order of magnitude in BER and BLER compared to resynchronization alone.

Fig. 22: BER (solid curves), BLER (dashed curves), and FER (dotted curves) comparison of a rate SBCC with window extension combined with (a) resynchronization and (b) retransmission.

V-D Retransmission Strategy

In the resynchronization mechanism, once resynchronization is triggered, decisions are made on the remaining blocks in the current window, where it is likely that errors still exist. In order to eliminate these errors, we now describe a retransmission strategy as an alternative to resynchronization.

After a target block is decoded, if its average absolute LLR satisfies (9), we consider the target block as failed. If there is a sufficient number of consecutive failed target blocks, retransmission is triggered, again employing an instantaneous noiseless binary feedback channel, using the following steps:

  • The encoder sets the initial states of the two component convolutional encoders to “0”;

  • The information blocks corresponding to the failed blocks and the remaining blocks in the window reenter the encoder, in sequence, and the corresponding encoded blocks are retransmitted. (This requires a buffer at the transmitter to store the most recent information blocks, so they are available for re-encoding when a retransmission request is received.) The first retransmitted information block is encoded with two known (all "0") parity input sequences;

  • The decoder is reset to its original state and decoding begins again with the first retransmitted block.

The details of this procedure are given in Algorithm 4 in the appendix.
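A minimal sketch of the re-encoding step performed when a retransmission is requested; buffered_info and first_failed_idx are illustrative names for the transmitter-side buffer of recent information blocks and the position of the first failed block within it.

```python
# Sketch of the re-encoding step of the retransmission strategy (Section V-D).
def handle_retransmission_request(encoder_reset, encode_block, buffered_info, first_failed_idx):
    """encoder_reset(): set both component encoders to the all-zero state, so the first
    re-encoded block uses all-zero parity inputs; encode_block(u): encode one information
    block with the current encoder superstate. Both are placeholders."""
    encoder_reset()
    retransmitted = []
    for u in buffered_info[first_failed_idx:]:   # failed blocks plus the rest of the window
        retransmitted.append(encode_block(u))    # the coupling makes these coded blocks differ
    return retransmitted                         # from the originally transmitted ones
```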

The difference between resynchronization and retransmission is that no blocks are retransmitted in the former case, whereas a group of blocks is retransmitted each time in the latter case. Therefore, unlike resynchronization, retransmission involves some rate loss. However, unlike a conventional hybrid automatic repeat request (HARQ) scheme, the parity feedback (memory) in the encoding process and the fact that the component encoder states are reset to zero result in a different sequence of transmitted blocks (albeit representing the same sequence of information blocks), meaning that techniques such as selective repeat and Chase combining cannot be employed. (We choose to reset the component encoders to the "0" state because BCCs are a type of spatially coupled code and thus benefit from termination at the beginning of a frame. It would also be possible not to reset and to selectively repeat only blocks that satisfy (9), thus improving throughput at a cost of reduced performance. As suggested by a reviewer, this would be an interesting option to investigate in future research.) The average effective rate (or throughput) of the retransmission strategy is given by

(10)

where the effective rate is expressed in terms of the code rate of the SBCC without retransmission and the average number of retransmissions in a frame.

In the following, we give two examples to illustrate the effectiveness of the retransmission strategy.

Example 5: We first consider the rate-1/3 blockwise SBCC of Example 1. The BER/BLER performance with both resynchronization and retransmission is shown in Fig. 23(a). (In Figs. 23(a) and 23(b), the performance is plotted accounting for the fact that the average effective rate changes depending on the channel noise conditions. The chosen trigger values were optimized empirically in both cases.)

(a) , , and .
(b) , , and .
Fig. 23: BER (solid curves) and BLER (dashed curves) comparison of a rate SBCC with both resynchronization and retransmission.

Compared to the blockwise SBCC of Example 1, resynchronization gains about two orders of magnitude and retransmission almost four orders of magnitude in BER, while the gains in BLER are about two orders of magnitude for resynchronization and slightly more for retransmission. We also see that the curves tend to merge as the SNR increases, as we have noted previously, i.e., the error propagation mitigation methods we propose help mainly in a narrow, but very important, range of SNRs, viz., the operating range in many applications.

Example 6: We next consider the rate-1/3 blockwise SBCC of Example 1 with a different choice of parameters. The BER/BLER performance with both resynchronization and retransmission is shown in Fig. 23(b). Compared to the blockwise SBCC of Example 1, resynchronization again gains about two orders of magnitude and retransmission almost four orders of magnitude in BER, while the gains in BLER are almost two and three orders of magnitude, respectively. The frequencies of the burst-error frames and error-propagation frames, along with the mean burst length, are also given in Fig. 24, which shows that both retransmission and resynchronization provide significant performance improvements, but that retransmission is best.

(a) , , original.
(b) , , , resynchronization.
(c) , , , retransmission.
Fig. 24: The frequency of the error frames in a rate SBCC with and without resynchronization and retransmission for at dB.

V-E Window Extension plus Retransmission

Retransmission can also be combined with window extension. Similar to the case of window extension with resynchronization, the decoder tries window extension until the maximum window size is reached and (7) is still satisfied, and then it checks whether the retransmission condition (9) is satisfied. Algorithm 5 in the appendix illustrates the details.

The BER/BLER/FER performance of the rate-1/3 blockwise SBCC of Example 1 employing window extension plus retransmission is shown in Fig. 22. We see that, compared to the rate-1/3 blockwise SBCC of Example 1, the SBCC with window extension and retransmission gains close to one order of magnitude in FER, more than three orders of magnitude in BLER, and four orders of magnitude in BER in the SNR operating range of interest, exceeding the gains obtained with window extension plus resynchronization shown in Fig. 22. This confirms that retransmission eliminates some of the error blocks that remain following resynchronization. Also, comparing Fig. 22 to Fig. 23(b) illustrates the advantage of combining window extension and retransmission.

VI Early Stopping Rule

The decoding complexity of SBCCs with sliding window decoding depends mainly on the number of horizontal iterations. Therefore, in order to minimize unnecessary horizontal iterations, we introduce a soft BER stopping rule, which was first proposed for spatially coupled LDPC codes in [22]. (Other stopping rules, such as the cross-entropy rule from [14], could be employed here. However, since the LLR magnitudes must be used anyway in the mitigation methods, it is easy to use them also to compute the soft BER estimates.)

Every time a horizontal iteration finishes, the average estimated bit error rate of the target bits in the current window is obtained using the following steps:

  • Calculate the decision LLR (the sum of the channel LLR, the prior LLR, and the extrinsic LLR) of every information bit in the target block;

  • Compute the average estimated BER of the target information bits from these decision LLRs (a sketch of this computation is given after this list);

  • If the average estimated BER of the target bits falls below a predefined threshold value, decoding is stopped and a decision on the target symbols in the current window is made.
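A minimal sketch of this test, assuming the usual soft estimate 1/(1 + e^|L|) of the error probability of a bit with decision LLR L (the exact estimate used in [22] is not reproduced here) and an illustrative threshold name theta:

```python
# Sketch of the soft BER stopping test of Section VI.
import numpy as np

def soft_ber_estimate(decision_llrs):
    """Average estimated BER of the information bits in the target block."""
    mags = np.minimum(np.abs(decision_llrs), 50.0)       # cap to avoid overflow in exp
    return float(np.mean(1.0 / (1.0 + np.exp(mags))))

def should_stop(decision_llrs, theta):
    """Stop horizontal iterations once the estimated BER drops below theta."""
    return soft_ber_estimate(decision_llrs) < theta
```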

Note that window extension, resynchronization, and the soft BER stopping rule can operate together in a sliding window decoder. We now give an example to illustrate the tradeoffs between performance and computational complexity when these error propagation mitigation schemes are combined with the soft BER stopping rule. Fig. 25 shows the performance of the rate-1/3 blockwise SBCC of Example 1 with window extension, resynchronization, and the soft BER stopping rule for the same simulation parameters used in Fig. 22 and a fixed stopping threshold. We see that using the stopping rule degrades the BER performance only slightly, but the BLER performance is negatively affected in the high SNR region. (The BLER loss at high SNR can be reduced by using a smaller stopping threshold, at a cost of some increased decoding complexity, since a smaller threshold results in a lower probability that a block will contain some bit errors.) The average number of horizontal iterations per block is shown in Fig. 26, where we see that the soft BER stopping rule greatly reduces the required number of horizontal iterations, especially in the high SNR region.

Fig. 25: BER (solid curves) and BLER (dashed curves) comparison of a rate SBCC with window extension and resynchronization, with and without the soft BER stopping rule.
Fig. 26: Number of horizontal iterations of a rate SBCC with window extension and resynchronization, with and without the soft BER stopping rule.

VII Conclusion

In this paper, we investigated the severe but infrequent error propagation problem associated with blockwise SBCCs and low latency sliding window decoding, which can have a catastrophic effect on performance for large frame lengths and continuous streaming operation. We began by examining the causes of error propagation in sliding window decoding of SBCCs, noting that it is always accompanied by near-zero average LLR magnitudes in the incorrectly decoded blocks. Based on this observation, a window extension algorithm, a resynchronization mechanism, and a retransmission strategy were proposed to mitigate the error propagation. The FER, BLER, and BER of blockwise SBCCs with these three error propagation mitigation methods were shown to improve by up to four orders of magnitude in the SNR operating range of interest. Furthermore, a soft BER stopping rule was introduced and shown to significantly reduce decoding complexity with only a slight effect on BER performance.

References

  • [1] W. Zhang, M. Lentmaier, K. Sh. Zigangirov, and D. J. Costello, Jr., “Braided convolutional codes: a new class of turbo-like codes,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 316-331, Jan. 2010.
  • [2] A. J. Feltström, M. Lentmaier, D. V. Truhachev, and K. S. Zigangirov, “Braided block codes,” IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2640-2658, Jun. 2009.
  • [3] Y. Jian, H. D. Pfister, K. R. Narayanan, R. Rao, and R. Mazahreh, "Iterative hard-decision decoding of braided BCH codes for high-speed optical communication," in Proc. IEEE Global Communications Conference (GLOBECOM), Atlanta, GA, Dec. 9-13, 2013, pp. 2376-2381.
  • [4] B. P. Smith, A. Farhood, A. Hunt, F. R. Kschischang, and J. Lodge, “Staircase codes: FEC for 100 Gb/s OTN,” J. Lightwave Technol., vol. 30, no. 1, pp. 110-117, Jan. 2012.
  • [5] A. Sheikh, A. Graell i Amat, G. Liva, and F. Steiner, “Probabilistic amplitude shaping with hard decision decoding and staircase codes,” J. Lightwave Technol., vol. 36, no. 9, pp. 1689-1697, May, 2018.
  • [6] P. Elias, “Error free coding,” IRE Trans. Inf. Theory, vol. 4, no. 4, pp. 29-37, Sep. 1954.
  • [7] M. Sipser and D. A. Spielman, “Expander codes,” IEEE Trans. Inf. Theory, vol. 42, no. 6, pp. 1710-1722, Nov. 1996.
  • [8] S. Moloudi, M. Lentmaier, and A. Graell i Amat, “Spatially coupled turbo-like codes,” IEEE Trans. Inf. Theory, vol. 63, no. 10, pp. 6199-6215, Oct. 2017.
  • [9] S. Moloudi, M. Lentmaier, and A. Graell i Amat, “Finite length weight enumerator analysis of braided convolutional codes,” in Proc. Int. Symp. Inf. Theory and Its Applications, Monterey, CA, USA, Oct. 30-Nov. 2, 2016, pp. 488-492.
  • [10] M. U. Farooq, S. Moloudi, and M. Lentmaier, “Threshold of braided convolutional codes on the AWGN channel,” in Proc. IEEE Int. Symp. Information Theory, Vail, CO, USA, June 17-22, 2018, pp. 1375-1379.
  • [11] S. Moloudi, M. Lentmaier, and A. Graell i Amat, "Spatially coupled turbo-like codes: A new trade-off between waterfall and error floor," IEEE Trans. Commun., vol. 67, no. 5, pp. 3114-3123, May 2019.
  • [12] M. Lentmaier, A. Sridharan, D. J. Costello, Jr., and K. S. Zigangirov, “Iterative decoding threshold analysis for LDPC convolutional codes,” IEEE Trans. Inf. Theory, vol. 56, no. 10, pp. 5274-5289, Oct. 2010.
  • [13] A. R. Iyengar, M. Papaleo, P. H. Siegel, J. K. Wolf, A. Vanelli-Coralli, and G. E. Corazza, “Windowed decoding of protograph-based LDPC convolutional codes over erasure channels,” IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2303-2320, April 2012.
  • [14] M. Zhu, D. G. M. Mitchell, M. Lentmaier, D. J. Costello, Jr., and B. Bai, "Braided convolutional codes with sliding window decoding," IEEE Trans. Commun., vol. 65, no. 9, pp. 3645-3658, Sept. 2017.
  • [15] M. Zhu, D. G. M. Mitchell, M. Lentmaier, D. J. Costello, Jr., and B. Bai, “Combating error propagation in window decoding of braided convolutional codes,” in Proc. IEEE Int. Symp. Information Theory, Vail, CO, USA, June 17-22, 2018, pp. 1380-1384.
  • [16] M. Nikhil Krishnan, D. Shukla, and P. Vijay Kumar, "Low field-size, rate-optimal streaming codes for channels with burst and random erasures," IEEE Trans. Inf. Theory, March 2020.
  • [17] A. Badr, P. Patil, A. Khisti, W. Tan, and J. Apostolopoulos, "Layered constructions for low-delay streaming codes," IEEE Trans. Inf. Theory, vol. 63, no. 1, pp. 111-141, Jan. 2017.
  • [18] A. Badr, D. Lui, and A. Khisti, "Streaming codes for multicast over burst erasure channels," IEEE Trans. Inf. Theory, vol. 61, no. 8, pp. 4181-4208, Aug. 2015.
  • [19] D. Dudzicz, S. L. Fong, and A. Khisti, "An explicit construction of optimal streaming codes for channels with burst and arbitrary erasures," IEEE Trans. Commun., vol. 68, no. 1, pp. 12-25, Jan. 2020.
  • [20] K. Klaiber, S. Cammerer, L. Schmalen, and S. ten Brink, "Avoiding burst-like error patterns in windowed decoding of spatially coupled LDPC codes," in Proc. IEEE 10th International Symposium on Turbo Codes & Iterative Information Processing (ISTC), Hong Kong, China, Dec. 3-7, 2018, pp. 1-5.
  • [21] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010.
  • [22] N. Ul Hassan, A. E. Pusane, M. Lentmaier, G. P. Fettweis, and D. J. Costello, Jr., "Non-uniform window decoding schedules for spatially coupled LDPC codes," IEEE Trans. Commun., vol. 65, no. 2, pp. 501-510, Nov. 2016.
  • [23] M. Zhu, D. G. M. Mitchell, M. Lentmaier, and D. J. Costello, Jr., “A novel design of spatially coupled LDPC codes for sliding window decoding,” to appear in Proc. IEEE Int. Symp. Information Theory, Los Angeles, CA, USA, June 21-26, 2020.