## I Introduction

In part I of this series of papers [1, 2], we revisited the bidirectional Bahl-Cocke-Jelinek-Raviv (BCJR) soft-in-soft-out (SISO) maximum a posteriori probability (MAP) decoding process of rate-1 binary convolutional codes. We observed an explicit relationship between the encoding and decoding of rate-1 binary convolutional codes and proposed a low-complexity decoder using shift registers in the complex number field. The input to the decoder is the logarithm of the soft symbol estimates of the coded symbols obtained from the received signals, and the output is the logarithm of the soft symbol estimates of the information symbols. The proposed decoder reduced the computational complexity of the SISO MAP forward and backward recursions from exponential to linear without any performance loss.

The last few years have witnessed a drastic increase in the demand for reliable communications, constrained by the scarce available bandwidth, to support high-speed data transmission applications, such as voice, video, email and web browsing. To accommodate such demand, non-binary convolutional codes have been proposed to replace binary convolutional codes in many applications [3, 4, 5]. For example, non-binary turbo codes, which employ non-binary convolutional codes as component codes, achieve lower error-floor and better performance at the waterfall region compared to binary turbo codes [6]. Additionally, non-binary convolutional codes suit situations where bandwidth-efficient higher order (non-binary) modulation schemes are used, as well as situations where non-coherent modulation schemes are used, such as frequency-shift keying [7].

The main obstacle that impedes the practical implementation of non-binary convolutional codes is their high decoding complexity. The decoding of non-binary convolutional codes is not equivalent to the decoding of binary convolutional codes, because a non-binary decoder operates at the symbol level instead of the bit level. Decoding essentially amounts to finding an optimal path in a trellis-based graph. Therefore, the Viterbi algorithm (VA) [8] can be applied to decode non-binary convolutional codes. It provides the maximum-likelihood estimate of the information sequence based on an exhaustive search of the trellis over a fixed-length window. Unfortunately, the standard VA produces hard-decision outputs instead of soft outputs containing the a posteriori probability (APP) of the transmitted symbols. Therefore, the standard VA cannot be used to decode concatenated codes, such as turbo codes. To overcome this problem, the modified soft-output VA (SOVA) was proposed in [9] to decode non-binary convolutional codes. SOVA not only delivers the maximum-likelihood information sequence but also provides the APPs of the transmitted symbols. Therefore, it can be applied to decode concatenated codes. However, according to [10], the computational complexity of the VA and SOVA is proportional to the number of states, which grows exponentially with the constraint length for a given field size [11]. Thus, it grows rapidly for non-binary alphabets, which makes practical implementation tremendously difficult. Furthermore, both the VA and SOVA suffer from considerable performance loss compared to the BCJR MAP algorithm [12], which achieves the optimal symbol error probability.

The BCJR MAP decoding algorithm is a bidirectional decoding algorithm comprising a forward and a backward decoding recursion. The APP of each information symbol is estimated by combining the forward and backward recursions. All the intermediate results of the forward and backward recursions have to be stored before a decision is made, which incurs large memory storage requirements. Furthermore, the computational complexity at each time unit in both recursions is proportional to the number of states. The high computational complexity results in large decoding delays and unacceptably high costs.

Therefore, a low-complexity decoder with good error performance is desirable for the practical implementation of non-binary convolutional codes. In the decoding of turbo codes based on memory-1 convolutional codes in [13], the authors found that the encoder memory at the current time slot is a linear combination of the encoder memory at the previous time slot and the current input. Since the encoder memory at the previous time slot and the current input are independent, the probability mass function (pmf) of the current encoder memory can be calculated as the convolution of the pmf of the encoder memory at the previous time slot and the pmf of the current input. This reduces the computational complexity of the forward and backward recursions at each time slot. To further reduce the complexity, the fast Fourier transform (FFT) is applied to the pmfs involved in the convolutions [13]. However, this simplification only works for memory-1 convolutional codes; the generalization to non-binary convolutional codes with arbitrary memory length is not considered in [13]. Furthermore, the forward and backward recursions still have to be performed on the trellis of the non-binary convolutional code, so all the intermediate results have to be stored and large memory requirements are incurred.

In this paper, we propose a low-complexity decoder for general rate-1 non-binary convolutional codes that achieves exactly the same error performance as the bidirectional BCJR MAP decoding algorithm. We observe an explicit relationship between the BCJR MAP forward/backward decoder of a convolutional code and its encoder. Based on this observation, we propose dual encoders for SISO forward and backward decoding, which are simply implemented using shift registers whose contents are pmfs represented as complex vectors. The bidirectional SISO MAP decoding is then achieved by linearly combining the shift register contents of the forward and backward dual encoders. This reduces the originally exponential computational complexity of the BCJR MAP forward and backward recursions to linear in the constraint length. To further reduce the computational complexity, the FFT [14, 15] is applied. Mathematical proofs and simulation results are provided to validate our proposed decoder.

The rest of this paper is organized as follows. In Section II, we propose a dual encoder for SISO MAP forward decoding of non-binary convolutional codes. The dual encoder for SISO MAP backward decoding is presented in Section III. In Section IV, the bidirectional BCJR MAP decoding is achieved by linearly combining the shift register contents of the forward and backward dual encoders, and simulation results are provided to validate our proposed decoder. In Section V, concluding remarks are drawn. Mathematical proofs are given in the appendices.

## II Dual encoder of SISO MAP forward decoding

In this section, we focus on the SISO MAP forward decoding algorithm. We consider convolutional codes over the finite field with q elements, denoted by GF(q). We focus on the decoding of a single constituent rate-1 convolutional code in GF(q), generated by its generator polynomial. Its encoder C is shown in Fig. 1, where all the additions and multiplications are performed in GF(q). In this paper, the input, output and shift register memories of convolutional encoders are in GF(q). Let u and x denote the information symbol sequence and the codeword sequence, respectively, where L is the frame length. The code sequence is modulated and transmitted through the additive white Gaussian noise (AWGN) channel. The receiver obtains the signal sequence y at the output of the channel.

Let us denote the conditional probability of each code symbol given the corresponding received signal, and further define the following probability mass functions of the code symbols and the information symbols

(1)

(2)

The aim of the decoder is to derive the pmf of the information symbols based on the pmf of the code symbols. To facilitate the exposition, we first consider a simple example: a convolutional code with a memory-1 generator polynomial in GF(q), whose encoder is shown in Fig. 2. We define an encoder in GF(q) described by the inverse generator polynomial (Fig. 3). If the input to this encoder is a codeword generated by the original code, its output is the decoded information sequence. Let m_t denote the memory of the shift register of this encoder at time t; the encoder output is then given by

(3)

Therefore, we have the following relationship

(4)

Note that the memory content and the current input are independent, and equation (4) can be written as

(5)

According to the properties of random variables, since the output is the sum of the memory content and the input symbol in GF(q), the pmf of the output is the convolution of their pmfs and can be calculated as

(6)

where ⊛ denotes the convolution operation.
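To make the convolution step concrete: over a prime-size field, where addition is modulo q, the pmf of the sum of two independent symbols is exactly the circular convolution of their pmfs. A minimal sketch (the function name is ours, and we assume a prime q so that addition in GF(q) is addition mod q):

```python
import numpy as np

def gf_add_pmf(p_a, p_b):
    """pmf of C = A + B in GF(q) for independent A, B, assuming q prime
    so that field addition is addition mod q: the circular convolution
    of the two pmfs."""
    q = len(p_a)
    p_c = np.zeros(q)
    for i in range(q):
        for j in range(q):
            # mass at A = i, B = j contributes to C = (i + j) mod q
            p_c[(i + j) % q] += p_a[i] * p_b[j]
    return p_c
```

For instance, with q = 3, a symbol equiprobable on {0, 1} added to a symbol that is always 1 yields a symbol equiprobable on {1, 2}.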

Its corresponding dual encoder is shown in Fig. 4. As verified mathematically in Appendix A, this dual encoder achieves exactly the same BER as the bidirectional BCJR MAP decoding algorithm.

This dual encoder can be generalized to any rate-1 convolutional code in GF(q). First, we define an encoder in GF(q) described by the inverse generator polynomial, shown in Fig. 5. If the input to this encoder is a codeword generated by the original code, its output is the decoded information sequence. In this encoding process, at each time instant, each encoder memory can be described as a linear combination of input symbols over GF(q). If the linear combination equations of two memories contain one or more common input symbols, we say that the two memories are correlated.

Suppose we map the structure in Fig. 5 to the convolutional structure in Fig. 6, where each symbol is replaced by its pmf vector. The output of the dual encoder in Fig. 6 then differs from the BCJR MAP forward decoding output, as a result of the correlation of the encoder memories. Thus, when memories are correlated, the dual encoder cannot be used as an equivalent MAP forward decoder. To eliminate the memory correlation, we can multiply both the numerator and denominator of the generator polynomial by a common polynomial without actually changing the generator polynomial itself. To obtain such a common polynomial, let us first define the minimum complementary polynomial of a given polynomial as the polynomial of the smallest degree,

(7)

such that

(8)
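If the condition in (7)–(8) is read as finding the smallest-degree polynomial whose product with the generator polynomial takes the sparse form D^n + 1 (a reading we assume here for illustration, not a statement from the paper), such a polynomial can be found by trial division. A GF(2) sketch with integer bitmasks for polynomial coefficients (helper names are ours):

```python
def gf2_divmod(a, m):
    """Quotient and remainder of polynomial a divided by m over GF(2).
    Coefficients are stored as integer bitmasks (bit k = coefficient of D^k)."""
    quot = 0
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        shift = a.bit_length() - 1 - dm
        quot |= 1 << shift
        a ^= m << shift          # subtract (XOR) the shifted divisor
    return quot, a

def min_complementary(g, max_n=64):
    """Smallest n (and cofactor h) with g(D) * h(D) = D^n + 1 over GF(2).
    An assumed reading of the minimum complementary polynomial."""
    for n in range(1, max_n + 1):
        h, r = gf2_divmod((1 << n) | 1, g)
        if r == 0:
            return n, h
    return None
```

For example, g(D) = 1 + D + D^2 (bitmask 0b111) gives n = 3 and h(D) = 1 + D, since (1 + D)(1 + D + D^2) = 1 + D^3 over GF(2).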

Since the divisibility condition in (8) can always be satisfied, the minimum complementary polynomial always exists. Let m_{i,t} denote the memory of the i-th shift register of the resulting encoder at time t. In this encoder, the output is given by

(9)

and the memories of the shift registers of this encoder can be expressed as

(10)

(11)

Note that the additions and multiplications in the above three equations are performed in GF(q). Multiplying a symbol by a fixed nonzero coefficient in GF(q) permutes its pmf: the pmf of the product can be derived by cyclically shifting each element of the original pmf to the position indexed by the corresponding product [16]. Based on (9), the probability of the encoder output can be written as

(12)

Because the shift register memories and the input are mutually independent, equation (12) can be written as

(13)

According to the definition of convolution, the probability mass function of the output can be expressed from (13) as

(14)

Similarly, due to the independence of the shift register memories, the pmf vector of the output can be represented as the convolution of the pmf vectors of the memories and the input

(15)

Similarly, following (10) and (11), the shift register contents of the dual encoder are updated as follows

(16)

(17)

Based on the above analysis, we can derive a simple structure for MAP forward decoding, implemented using a convolutional encoder described by the inverse generator polynomial, as shown in Fig. 7, where ⊛ denotes the convolution operation and P denotes the permutation. The input of the dual encoder is the pmf of the code symbols and its output is the pmf of the information symbols. Here the convolution operation is performed on complex vectors.
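As an illustration of how the register contents evolve, the following sketch implements one time step of a hypothetical memory-1 dual encoder over a prime field, mirroring a decoding recursion of the form u_t = x_t + c·m_t with m_{t+1} = u_t. The coefficient c, the memory-1 structure and the function names are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def perm(p, c, q):
    """pmf of c*X mod q for prime q: element i moves to position (c*i) % q."""
    out = np.zeros(q)
    for i in range(q):
        out[(c * i) % q] = p[i]
    return out

def circ_conv(p_a, p_b):
    """pmf of the mod-q sum of independent symbols: circular convolution."""
    q = len(p_a)
    out = np.zeros(q)
    for i in range(q):
        for j in range(q):
            out[(i + j) % q] += p_a[i] * p_b[j]
    return out

def dual_encoder_step(p_mem, p_in, c, q):
    """One hypothetical time step of a memory-1 dual encoder over GF(q),
    q prime: output pmf = pmf(x_t) convolved with the permuted pmf of
    c * m_t, mirroring u_t = x_t + c*m_t; the register then holds the
    output pmf, mirroring m_{t+1} = u_t."""
    p_out = circ_conv(p_in, perm(p_mem, c, q))
    return p_out, p_out  # (decoder output pmf, new register pmf)
```

With point-mass inputs the step reduces to ordinary field arithmetic: for q = 5, m_t = 2, x_t = 1 and c = 3, the output is concentrated on (1 + 3·2) mod 5 = 2.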

Equations (15), (16) and (17) and Fig. 7 reveal an interesting relationship between the convolutional encoder and the SISO forward decoder for rate-1 convolutional codes in GF(q). This is summarized in the following theorem.

###### Theorem 1

Dual encoder for SISO MAP forward decoding: For a rate-1 convolutional code in GF(q) generated by a given generator polynomial, we define its dual encoder as the encoder with the inverse generator polynomial. Then the SISO MAP forward decoding of the convolutional code can be implemented by its dual encoder operating on complex vectors, as shown in Fig. 7. The output of the dual encoder is the pmf of the information sequence. Note that all the operations in the dual encoder are convolution operations.

See Appendix A.

The complexity of the dual encoder for forward decoding in Fig. 7 is dominated by the convolution operations, each of which scales quadratically with the field size [17]. The complexity can be further reduced by applying the FFT to the probability vectors involved in the convolutions [18]. Given the FFT-transformed pmf vectors, we define the Hadamard product as the element-wise multiplication of two vectors [19]. Since the Fourier transform of the convolution of two functions equals the product of the Fourier transforms of these two functions [20], (15), (16) and (17) can be expressed as

(18)

(19)

(20)

Therefore, we propose an FFT dual encoder for forward decoding, shown in Fig. 8, where ⊙ denotes the element-wise multiplication of two vectors and F denotes the FFT of a vector. Note that all the convolution operations in the dual encoder in Fig. 7 become element-wise multiplications in Fig. 8, which reduces the per-convolution complexity from quadratic in the field size to O(q log q). Note that the output of the FFT dual encoder is exactly the same as the output of the dual encoder for forward decoding.
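The speedup rests entirely on the convolution theorem, so it is easy to check numerically; a small sketch (function names ours):

```python
import numpy as np

def circ_conv_direct(a, b):
    """Circular convolution computed from the definition: O(q^2)."""
    q = len(a)
    return np.array([sum(a[i] * b[(k - i) % q] for i in range(q))
                     for k in range(q)])

def circ_conv_fft(a, b):
    """Same result via the FFT: element-wise (Hadamard) product in the
    frequency domain, O(q log q)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# The two routes agree on random pmf vectors.
rng = np.random.default_rng(0)
a = rng.random(8); a /= a.sum()
b = rng.random(8); b /= b.sum()
assert np.allclose(circ_conv_direct(a, b), circ_conv_fft(a, b))
```

Convolving with a point mass at zero is the identity, which gives a second quick sanity check.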

## III Dual encoder of SISO MAP backward decoding

In this section, we propose a dual encoder for the BCJR MAP backward decoding of rate-1 convolutional codes in GF(q). In the BCJR MAP backward decoding, the received signals are decoded in time-reversed order: given the received signal sequence, the signals are decoded from the last to the first. In addition, in the backward decoding, the decoder has to follow the trellis in the reverse direction. Figs. 9 and 10 show an example encoder and its trellis. Fig. 11 shows the backward trellis, where the input to the decoder is at the right-hand side and its output is at the left-hand side, i.e., it operates in the reverse of the conventional direction.

For ease of exposition, we propose to represent the backward trellis in the forward direction, where the decoder input and output are restored to the conventional order. Specifically, for a given convolutional encoder, if the labeling of the shift registers is reversed and their respective coefficients are swapped accordingly, the resulting encoder is referred to as the reverse-memory labeling encoder. For example, Fig. 12 shows the forward representation of the backward trellis of the example code, and its corresponding reverse-memory labeling encoder is shown in Fig. 13.

It is shown in [1, Theorem 3] that the relationship between the encoders for the forward and backward trellises can be extended to general rate-1 convolutional codes in GF(q), as stated in the following theorem.

###### Theorem 2

Given an encoder with a generator polynomial, the forward representation of its backward trellis can be implemented by its reverse-memory labeling encoder with the same generator polynomial.

This can be proved in the same way as Theorem 3 in [1], so we omit the proof here.

From Theorem 1, we know that the SISO forward decoding of a given convolutional code can be implemented by its dual encoder, described by the inverse generator polynomial constructed with the minimum complementary polynomial. Then, according to Theorem 2, the SISO backward decoding of the convolutional code can be implemented by the reverse-memory labeling version of this dual encoder. By combining Theorems 1 and 2, we obtain the dual encoder for SISO MAP backward decoding, which is summarized in the following theorem.

###### Theorem 3

Dual encoder for SISO MAP backward decoding: Consider a convolutional code generated by a given generator polynomial, and let its minimum complementary polynomial be as defined in Section II. The SISO backward decoding of the code can be implemented by its dual encoder with reverse-memory labeling and time-reversed input, shown in Fig. 14.

See Appendix B.

The computational complexity of the dual encoder for SISO backward decoding is dominated by the convolution operations. As in the forward decoding, we can apply the FFT to further reduce the complexity of the dual encoder for backward decoding. The FFT backward dual encoder is shown in Fig. 15.

## IV The representation of bidirectional SISO MAP decoding

In the previous two sections, dual encoders for SISO MAP forward and backward decoding have been proposed. Based on the derived dual encoder structures, in this section, we represent the bidirectional SISO decoder by linearly combining shift register contents of the dual encoders for SISO MAP forward and backward decoding. We prove mathematically that such linear combining achieves exactly the same output as the bidirectional BCJR MAP decoding.

In the bidirectional BCJR MAP decoding, the APPs derived from the forward and backward recursions are combined at the same state at each time unit to obtain the desired decoding output. Therefore, it is usually assumed that the encoder begins in and ends at the all-zero state [10]. The proposed dual encoder produces the same output as the BCJR MAP algorithm when the forward and backward dual encoders have the same state at each time unit. As will be discussed shortly, this is ensured if the proposed dual encoder begins in and terminates at the all-zero state. To achieve this, tail symbols are added at the end of the code sequence.

Let us consider an encoder in GF(q) described by the inverse generator polynomial. If the input to the encoder is a codeword generated by the original code, the output of the encoder is the decoded information sequence. Let us define the tail symbols as those required to terminate the encoder at the all-zero state. Then, following an analysis similar to that in [1], we can prove that the tail-biting convolutional encoder has the following property.

###### Lemma 1

The tail symbols that terminate the encoder described by the inverse generator polynomial at the all-zero state also terminate the encoder C at the all-zero state.

###### Lemma 2

For a tail-biting convolutional encoder and a given input sequence, we define its backward encoder as the encoder with the same generator polynomial, reverse-memory labeling and time-reversed input. Then the tail-biting encoder and its backward encoder arrive at the same state at any time t.

In the decoding structures introduced in the previous two sections, the input, output and shift register contents of the dual encoders for forward and backward decoding are pmf vectors. To derive the bidirectional SISO decoder output, we need to combine the shift register contents of the dual encoders for forward and backward decoding in an optimal way. Consider the combined pmf of each shift register of the combined dual encoder at time t. Since the forward pmf is obtained from the received signals from time 1 to t and the backward pmf from the received signals from time t+1 onwards, they are independent. Furthermore, as shown in Lemma 2, for a tail-biting encoder, the forward and backward encoders arrive at the same state at any time t. Therefore, in the optimal combining, we have

(21)

Based on the dual encoder structure in Fig. 7, the bidirectional SISO MAP decoding can be implemented by the proposed dual encoder with combined shift register contents. The output of the combined dual encoder is given by

(22)
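Assuming the combining in (21) fuses the two independent state estimates as an element-wise product followed by normalization — a standard way to combine independent pmf estimates of the same quantity, and an assumption on our part rather than a quotation of the paper's formula — the step can be sketched as:

```python
import numpy as np

def combine_pmfs(p_fwd, p_bwd):
    """Fuse independent forward and backward pmf estimates of the same
    shift register state: element-wise product, renormalized.
    Assumed form of the combining in (21), not quoted from the paper."""
    p = np.asarray(p_fwd, dtype=float) * np.asarray(p_bwd, dtype=float)
    total = p.sum()
    if total == 0:
        raise ValueError("forward and backward pmfs have disjoint support")
    return p / total
```

Under this rule a flat forward estimate leaves the backward estimate unchanged, and vice versa, which matches the intuition that an uninformative direction should not alter the decision.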

As shown in the following theorem, such combining will produce exactly the same output as the bidirectional BCJR MAP algorithm.

###### Theorem 4

The output of the combined dual encoder in (22) is exactly the same as the output of the bidirectional BCJR MAP decoding algorithm.

See Appendix C.

To reduce computational complexity, FFT can be applied to (22)

(23)

Next, let us present some simulation results to validate our proposed scheme. BPSK modulation is assumed, and a fixed frame length is employed over AWGN channels.
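In such a simulation, the decoder input pmfs can be formed directly from the channel observations. A sketch for BPSK over AWGN, assuming equiprobable bits and the mapping bit 0 → +1, bit 1 → −1 (variable names are ours):

```python
import numpy as np

def bpsk_bit_pmf(y, noise_var):
    """Posterior pmf [P(b=0|y), P(b=1|y)] of a BPSK bit from an AWGN
    observation y = s + n, with s = +1 for b = 0, s = -1 for b = 1,
    n ~ N(0, noise_var), and equiprobable bits."""
    llr = 2.0 * y / noise_var       # log( P(b=0|y) / P(b=1|y) )
    p0 = 1.0 / (1.0 + np.exp(-llr))
    return np.array([p0, 1.0 - p0])
```

An observation of exactly zero yields a uniform pmf, while strongly positive observations concentrate the mass on bit 0.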

The bit error rate (BER) performance of various 4-state and 16-state convolutional codes is shown in Figs. 16 to 20. The curve "dual encoder forward+backward" refers to the direct summation of the forward and backward dual encoder outputs, and the curve "dual encoder shift register combined output" refers to the optimally combined output (22).

Figs. 16 to 20 show that the direct summation of the forward and backward dual encoder outputs suffers some performance loss compared to the bidirectional BCJR MAP algorithm: the SNR loss relative to the bidirectional BCJR MAP algorithm is 0, 0.1, 0.48, 0.1 and 1 dB for the five codes, respectively. In contrast, the proposed optimal linear combining scheme achieves exactly the same performance as the bidirectional BCJR MAP algorithm.

## V Conclusions

In this paper, we investigated the BCJR MAP decoding of rate-1 convolutional codes in GF(q). We observed an explicit relationship between the SISO BCJR MAP forward and backward decoders of a convolutional code and its encoder. Based on this observation, we proposed dual encoders for forward and backward decoding. The input of the dual encoders is the probability mass function of the code symbols and the output is the probability mass function of the information symbols. The bidirectional SISO decoder is implemented by linearly combining the shift register contents of the dual encoders for forward and backward decoding. The proposed dual encoders significantly reduce the computational complexity of the bidirectional BCJR MAP decoding from exponential to linear in the convolutional code constraint length. To further reduce the complexity, the fast Fourier transform is employed. Mathematical proofs and simulation results validate that the proposed dual encoder with combined shift register contents produces exactly the same output as the BCJR MAP decoding algorithm.

## Appendix A Proof of Theorem 1

We consider the BCJR forward decoding algorithm of a general convolutional code in GF(q). Its dual encoder for forward decoding is described by the inverse generator polynomial. If the state of the dual encoder transits from a given state at time t-1 to a new state at time t with a given input, then the probability of the new state can be expressed as

(24)

According to the generator polynomial of the dual encoder, the dual encoder output is independent of part of the shift register contents, as shown in Fig. 8. Therefore (24) can be written as

(25)

According to the definition of the convolution operation, the probability mass function of the decoder output can be written as

(26)

This proves Theorem 1.

## Appendix B Proof of Theorem 3

We consider the backward decoding of convolutional codes in GF(q). Let the memory of each shift register of the backward encoder and its pmf vector be defined as in Section III. The state probability at each time slot is given by

(28)

(29)

(30)

Note that (29) is derived from (28) because, at each time slot, the dual encoder output for BCJR MAP backward decoding is independent of part of the shift register contents. Based on (30), we can obtain

(31)

This proves Theorem 3.

## Appendix C Proof of Theorem 4

We consider a convolutional code in GF(q). Its dual encoder is described by the inverse generator polynomial. It is assumed that the state of the dual encoder transits from
