Segmentation-Discarding Ordered-Statistic Decoding for Linear Block Codes

01/09/2019 · Chentao Yue et al., The University of Sydney

In this paper, we propose an efficient reliability-based segmentation-discarding decoding (SDD) algorithm for short block-length codes. A novel segmentation-discarding technique is proposed, along with a stopping rule, to significantly reduce the decoding complexity without significant performance degradation compared to ordered statistics decoding (OSD). In the proposed decoder, the list of test error patterns (TEPs) is divided into several segments according to carefully selected boundaries, and every segment is checked separately during the reprocessing stage. Decoding is performed under the constraints of the discarding rule and the stopping rule. Simulation results for different codes show that our proposed algorithm can significantly reduce the decoding complexity compared to the existing OSD algorithms in the literature.


I Introduction

Since 1948, when Shannon introduced the notions of channel capacity and channel coding [1], researchers have been looking for powerful channel codes that can approach the Shannon capacity. Low-density parity-check (LDPC) and turbo codes have been demonstrated to be capacity-approaching and have been widely applied in 3G (3rd-generation) and 4G (4th-generation) mobile communications [2]. The polar code, proposed by Arikan in 2008 [3], has also attracted much attention in the last decade and has been chosen as one of the standard coding schemes for 5G (5th-generation) communications. In addition to the large-bandwidth and high-speed enhanced Mobile BroadBand (eMBB) scenarios, 5G has put forward the demand for ultra-reliable and low-latency communications (uRLLC). For uRLLC, reducing the latency mandates the use of short block-length codes, and conventionally designed moderate/long codes are not suitable [4]. Short code design and the related decoding algorithms have recently rekindled a great deal of interest in industry and academia [5, 6].

Ordered statistics decoding (OSD) was proposed in 1995 as an approximate maximum-likelihood (ML) decoder for block codes [7]. OSD has recently attracted renewed interest because of its potential to be a universal decoding algorithm for all short block-length codes. For a linear block code with minimum distance $d_{\min}$, it is proven that OSD with order $m = \lceil d_{\min}/4 - 1 \rceil$ is asymptotically optimum [7]. However, the decoding complexity is the main disadvantage of OSD, as an order-$m$ OSD needs a candidate list of size $\sum_{i=0}^{m}\binom{k}{i}$ and the overall algorithmic complexity can be up to $O(k^m)$.
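To make this growth concrete, the following Python sketch (illustrative only; `osd_list_size` is our hypothetical helper, not from [7]) tallies the number of TEPs an order-$m$ OSD must check:

```python
from math import comb

def osd_list_size(k: int, m: int) -> int:
    """Number of TEPs checked by an order-m OSD over a k-bit MRB:
    the sum of C(k, i) for i = 0..m."""
    return sum(comb(k, i) for i in range(m + 1))

# For k = 64 (the (128,64) eBCH code used later in this paper):
for m in range(5):
    print(m, osd_list_size(64, m))   # 1, 65, 2081, 43745, 679121
```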

Much previous work has focused on improving the efficiency of OSD, and remarkable progress has been achieved [8, 9, 10, 11, 12, 13, 14, 15]. The Box-and-Match algorithm [10] can greatly reduce the size of the candidate list, but it introduces additional computation due to the matching process. Decoding with different biases over reliability was proposed in [11] to refine the performance, and skipping and stopping rules were used in [12] and [13] to abandon unpromising candidates. All of the above methods can be combined with the iterative information set reduction (IISR) technique in [9] to further reduce the complexity. Recently, an approach proposed in [15] cuts the most reliable basis (MRB) into several partitions and performs independent OSD over each of them, but it overlooks candidates generated across partitions, which results in dramatic performance degradation. A fast OSD algorithm combining the stopping rules from [13] and the sufficient conditions from [14] was also proposed in [16], which can considerably reduce the complexity at high signal-to-noise ratios (SNRs).

In this paper, we propose a new fast decoding algorithm combining a segmentation-discarding technique and an easily calculated stopping rule. First, the list of test error patterns (TEPs) is partitioned into $Q$ segments according to carefully selected boundaries over the MRB. Then, in each reprocessing phase, a segment is discarded if it satisfies a discarding rule. The rule estimates the reliability of each segment by calculating a lower bound on the distance from the received signal to the decoded codeword. Reprocessing starts from the segment with the highest priority and terminates when all segments have been checked or a stopping rule is satisfied. This algorithm can achieve the performance of large-order OSD with significantly reduced complexity. Simulation results show that this degree of complexity reduction is maintained for eBCH codes of various rates. In addition, the complexity and memory overhead due to the segmentation and discarding is negligible.

The rest of this paper is organized as follows. Section II describes the preliminaries. In Section III, the proposed segmentation-discarding algorithm is presented. The analysis of the computational complexity is provided in Section IV. Simulation results are presented in Section V, and conclusions are drawn in Section VI.

II Preliminaries

We consider a binary linear block code $\mathcal{C}(n, k)$, where $k$ and $n$ denote the information block size and the codeword length, respectively. Let $\mathbf{u} = [u_1, \ldots, u_k]$ and $\mathbf{c} = [c_1, \ldots, c_n]$ denote the information sequence and the codeword, respectively. Given the generator matrix $\mathbf{G}$, the encoding operation can be described as $\mathbf{c} = \mathbf{u}\mathbf{G}$.

We assume an additive white Gaussian noise (AWGN) channel and binary phase-shift keying (BPSK) modulation. Let $\mathbf{s} = [s_1, \ldots, s_n]$ denote the modulated signal, where $s_i = (-1)^{c_i}$. At the channel output, the received signal is given by $\mathbf{r} = \mathbf{s} + \mathbf{w}$, where $\mathbf{w}$ is the vector of white Gaussian noise samples with zero mean and variance $N_0/2$.

In general, if the codewords in $\mathcal{C}$ have equal transmission probability, the log-likelihood ratio (LLR) of the $i$-th symbol of the received signal can be calculated as $\ell_i = \ln \frac{\Pr(c_i = 0 \mid r_i)}{\Pr(c_i = 1 \mid r_i)}$, which can be further simplified to $\ell_i = 4r_i/N_0$ [13]. A bitwise hard decision can be used to obtain the codeword estimate according to the following rule:

$$y_i = \begin{cases} 0, & r_i \ge 0 \\ 1, & r_i < 0 \end{cases} \qquad (1)$$

where $y_i$ is the estimate of codeword bit $c_i$.
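As a small illustration of the channel model and decision rule above, the following Python sketch (function names are ours, not from the paper) computes the simplified LLRs, the hard decisions of rule (1), and the per-bit reliabilities:

```python
import numpy as np

def llr(r: np.ndarray, n0: float) -> np.ndarray:
    """Simplified LLR for BPSK over AWGN: l_i = 4*r_i/N0 [13]."""
    return 4.0 * r / n0

def hard_decision(r: np.ndarray) -> np.ndarray:
    """Rule (1): decide bit 0 for a non-negative sample (bit 0 -> +1 mapping assumed)."""
    return (r < 0).astype(int)

def reliability(r: np.ndarray, n0: float) -> np.ndarray:
    """Reliability = |LLR|, i.e. proportional to |r_i|."""
    return np.abs(llr(r, n0))
```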

We consider the scaled magnitude of the log-likelihood ratio as the reliability (or confidence value) of the bitwise decision, defined as $\alpha_i = |\ell_i|$. Utilizing the bit reliabilities, soft-decision decoding can be effectively conducted using the OSD algorithm [7]. In the first step of OSD, a permutation $\lambda_1$ is performed to sort the received signal and the corresponding columns of the generator matrix in descending order of reliability. The sorted received signal vector is $\mathbf{r}' = \lambda_1(\mathbf{r})$, and the corresponding reliability vector $\boldsymbol{\alpha}'$ satisfies

$$\alpha'_1 \ge \alpha'_2 \ge \cdots \ge \alpha'_n. \qquad (2)$$

Next, the systematic-form matrix $\tilde{\mathbf{G}} = [\mathbf{I}_k \;\; \tilde{\mathbf{P}}]$ is obtained by performing Gaussian elimination on $\mathbf{G}' = \lambda_1(\mathbf{G})$, where $\mathbf{I}_k$ denotes the $k$-dimensional identity matrix and $\tilde{\mathbf{P}}$ is the parity sub-matrix. An additional permutation $\lambda_2$ may be necessary during the Gaussian elimination to ensure that the first $k$ columns are linearly independent. Correspondingly, the received signal and reliability vectors are finally sorted to $\tilde{\mathbf{r}} = \lambda_2(\mathbf{r}')$ and $\tilde{\boldsymbol{\alpha}} = \lambda_2(\boldsymbol{\alpha}')$, respectively. A simple greedy search algorithm to perform this permutation can be found in [7].
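The ordering and Gaussian elimination steps can be sketched as follows in Python (a minimal reference implementation, assuming a full-rank, dense 0/1 integer $\mathbf{G}$; a practical decoder would permute the received vector and reliabilities with the same index array):

```python
import numpy as np

def order_and_systematize(G: np.ndarray, r: np.ndarray):
    """Sort columns by descending reliability |r_i| (first permutation),
    then reduce G to systematic form [I_k | P] over GF(2), swapping in
    later columns when a pivot column is linearly dependent (second
    permutation). Returns the systematic matrix and the combined
    column permutation."""
    k, n = G.shape
    perm = np.argsort(-np.abs(r))          # descending reliability
    Gs = G[:, perm] % 2
    for i in range(k):
        # find a column at or after i containing a usable pivot
        piv = next(c for c in range(i, n) if Gs[i:, c].any())
        if piv != i:                        # dependent column: swap it out
            Gs[:, [i, piv]] = Gs[:, [piv, i]]
            perm[[i, piv]] = perm[[piv, i]]
        pr = i + int(np.argmax(Gs[i:, i]))  # first row with a 1 in column i
        if pr != i:
            Gs[[i, pr]] = Gs[[pr, i]]
        for j in range(k):                  # clear column i in all other rows
            if j != i and Gs[j, i]:
                Gs[j] ^= Gs[i]
    return Gs, perm
```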

After this transformation, the first $k$ index positions are associated with the most reliable basis (MRB) [7], and the remaining $n - k$ positions are associated with the redundancy part. For the phase-0 reprocessing, a hard decision is performed on the ordered sequence $\tilde{\mathbf{r}}$ using decision rule (1) to obtain the estimate $\tilde{\mathbf{y}}$. Let $\tilde{\mathbf{y}}_B$ denote the first $k$ positions of $\tilde{\mathbf{y}}$ corresponding to the MRB; the first candidate codeword is then obtained by re-encoding as

$$\tilde{\mathbf{c}}_0 = \tilde{\mathbf{y}}_B \tilde{\mathbf{G}}. \qquad (3)$$

Obviously, $\tilde{\mathbf{c}}_0$ is the transmitted codeword if and only if there are no errors in the MRB positions. Otherwise, a test error pattern (TEP) $\mathbf{e}$ is added to the MRB hard decision before re-encoding, which is equivalent to flipping the bits corresponding to the nonzero positions of $\mathbf{e}$. The decoding operation with bit flipping can be described as

$$\tilde{\mathbf{c}}_{\mathbf{e}} = (\tilde{\mathbf{y}}_B \oplus \mathbf{e})\,\tilde{\mathbf{G}} \qquad (4)$$

where $\tilde{\mathbf{c}}_{\mathbf{e}}$ is the candidate codeword with respect to TEP $\mathbf{e}$.

In the reprocessing of OSD, a number of TEPs are checked to generate codeword candidates until a predetermined maximum number of candidates is reached. For BPSK modulation, finding the best ordered codeword estimate is equivalent to minimizing the weighted Hamming distance (WHD) [17], defined as

$$d_W(\tilde{\mathbf{c}}_{\mathbf{e}}, \tilde{\mathbf{y}}) = \sum_{i\,:\,\tilde{c}_{\mathbf{e},i} \ne \tilde{y}_i} \tilde{\alpha}_i. \qquad (5)$$

Finally, the optimal estimate $\hat{\mathbf{c}}$ corresponding to the initial received sequence is obtained by performing the inverse permutations over the best candidate $\tilde{\mathbf{c}}_{\mathrm{opt}}$, i.e., $\hat{\mathbf{c}} = \lambda_1^{-1}(\lambda_2^{-1}(\tilde{\mathbf{c}}_{\mathrm{opt}}))$.
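The re-encoding step (4), the WHD (5), and the final inverse permutation can be sketched as follows (names are ours):

```python
import numpy as np

def reencode(G_sys: np.ndarray, y_mrb: np.ndarray, e: np.ndarray) -> np.ndarray:
    """Eq. (4): flip the MRB hard decisions with TEP e, then re-encode
    with the systematic generator matrix."""
    return (y_mrb ^ e) @ G_sys % 2

def whd(cand: np.ndarray, y: np.ndarray, alpha: np.ndarray) -> float:
    """Eq. (5): sum the reliabilities of the positions where the candidate
    codeword disagrees with the hard-decision vector."""
    return float(alpha[cand != y].sum())

def unpermute(c_sorted: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Undo the combined ordering permutation: if x_sorted = x[perm],
    this returns the candidate in the original bit order."""
    c = np.empty_like(c_sorted)
    c[perm] = c_sorted
    return c
```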

III A Fast OSD-Based Decoding Algorithm

In this section, we propose a segmentation-discarding decoding (SDD) algorithm that can significantly reduce the decoding complexity. In this algorithm, the TEP list is divided into several segments, and the least reliable segments are discarded according to a discarding rule. In addition, the algorithm is terminated early when a stopping condition is satisfied.

III-A Segmentation

First, the segmentation is conducted to prepare for applying the discarding rule. Specifically, the set of all TEPs generated in phase-$l$ ($1 \le l \le m$) reprocessing, denoted by $E_l$, is divided into $Q$ segments according to $Q - 1$ boundaries over the MRB positions. The boundary position indices satisfy

$$1 < b_{Q-1} < \cdots < b_2 < b_1 \le k \qquad (6)$$

and the corresponding ordered reliabilities satisfy

$$\tilde{\alpha}_{b_1} \le \tilde{\alpha}_{b_2} \le \cdots \le \tilde{\alpha}_{b_{Q-1}} \qquad (7)$$

where $\tilde{\alpha}_{b_u}$ is the ordered reliability of position $b_u$. The $u$-th segment bounded by $b_u$ is given by

$$E_l^{(u)} = \big\{\, \mathbf{e} \in E_l : e_i = 0 \ \text{for } i < b_u, \ \text{and } e_i = 1 \ \text{for some } b_u \le i < b_{u-1} \,\big\} \qquad (8)$$

with the convention $b_0 = k + 1$, where $w(\mathbf{e}) = l$ is the weight of TEP $\mathbf{e}$. The conditions in (8) mean that the TEPs in the $u$-th segment only have nonzero elements over the positions from $b_u$ to $k$, and have at least one nonzero element over the positions from $b_u$ to $b_{u-1} - 1$.

From the perspective of the MRB, the segmentation of the set $E_l$ is equivalent to cutting the MRB positions into $Q$ segments and generating the TEPs accordingly in the phase-$l$ reprocessing, as shown in Fig. 1. The $u$-th MRB segment is called the sub-MRB $B_u$, defined as the positions from boundary $b_u$ to $k$, i.e.,

$$B_u = \{\, b_u, b_u + 1, \ldots, k \,\} \qquad (9)$$

for $1 \le u \le Q - 1$. In particular, $B_Q$ is exactly the MRB, defined as the positions from $1$ to $k$ (i.e., $b_Q = 1$). Let $E_l(B_u)$ denote the set of weight-$l$ TEPs which only have nonzero elements over the positions in $B_u$; the TEP segment $E_l^{(u)}$ can then be easily obtained as

$$E_l^{(u)} = E_l(B_u) \setminus E_l(B_{u-1}) \qquad (10)$$

for $2 \le u \le Q$. In particular, when $u = 1$, the TEP segment $E_l^{(1)}$ is identical to $E_l(B_1)$.
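A small sketch of this partitioning (0-based indices; `starts[u]` plays the role of boundary $b_{u+1}$, and the helper name is ours):

```python
from itertools import combinations

def tep_segments(k: int, l: int, starts: list):
    """Split the weight-l TEPs into segments per (8)-(10). starts must be
    strictly decreasing and end with 0, so that starts[u]..k-1 is the
    sub-MRB B_{u+1}; a TEP falls in segment u when its support lies in
    B_{u+1} but not entirely in the smaller sub-MRB B_u."""
    segments = []
    for u, s in enumerate(starts):
        prev = starts[u - 1] if u > 0 else k   # B_0 is empty
        segments.append([sup for sup in combinations(range(s, k), l)
                         if min(sup) < prev])  # E_l(B_u) \ E_l(B_{u-1})
    return segments

# weight-2 TEPs over a k = 8 MRB with boundaries at positions 6 and 3:
sizes = [len(s) for s in tep_segments(8, 2, [6, 3, 0])]
print(sizes, sum(sizes))   # [1, 9, 18] 28 = C(8,2): a true partition
```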

Fig. 1: Segmentation of MRB positions

The choice of the boundaries in each reprocessing phase can greatly affect the trade-off between performance and complexity when some segments are discarded. We determine the boundaries by considering a deviation from the mean of the reliability values over the MRB. The mean reliability serves as a benchmark for the boundary calculation, and the deviation enables the boundaries to change adaptively with the decoding process. At the beginning of each reprocessing phase, we first estimate the reliability of the first boundary position as

$$\hat{\alpha}_{b_1} = \bar{\alpha}_B - \lambda f\big(d_W^{\min}\big) \qquad (11)$$

where $\bar{\alpha}_B$ is the mean of the ordered reliabilities over the positions from $1$ to $k$, $d_W^{\min}$ is the minimum WHD from the checked codeword candidates to the received sequence, and $\lambda$ is a given parameter. The boundary reliability is tightened and updated adaptively at each reprocessing phase through the offset function $f(\cdot)$. The choice of $f(\cdot)$ will be discussed in Section V.

The first boundary $b_1$ is determined by finding the position over the MRB whose reliability is closest to $\hat{\alpha}_{b_1}$. Then each boundary $b_u$, $2 \le u \le Q - 1$, is sequentially determined as the position among $\{1, \ldots, b_{u-1} - 1\}$ whose reliability is closest to $\hat{\alpha}_{b_u}$, which is estimated as

(12)

The value of $\lambda$ affects the positions of all boundaries. Furthermore, combined with the discarding rule, the trade-off between complexity and performance can be adjusted by choosing different values of $\lambda$.
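Since the ordered reliabilities are sorted, locating a boundary is a simple nearest-value search. A sketch follows (the offset used for the target is a placeholder product $\lambda \cdot d_W^{\min}$, not the paper's tuned $f(\cdot)$, which is specified in Section V):

```python
import numpy as np

def first_boundary(alpha_mrb: np.ndarray, d_w_min: float, lam: float) -> int:
    """Place the first boundary at the MRB position whose reliability is
    closest to the mean reliability minus a WHD-dependent offset, in the
    spirit of (11). alpha_mrb is sorted in descending order, so a binary
    search could replace the linear argmin below."""
    target = alpha_mrb.mean() - lam * d_w_min   # placeholder offset
    return int(np.argmin(np.abs(alpha_mrb - target)))
```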

III-B Discarding and Stopping Rules

In order-$m$ OSD decoding, all weight-$l$ TEPs with $0 \le l \le m$ are checked in phase-$l$ reprocessing. Thus, search strategies can be used to improve the checking efficiency. It is proven that for a reliability-ordered hard-decision sequence $\tilde{\mathbf{y}}$, the following inequalities hold [7]:

$$P_e(i) \le P_e(j) \qquad (13)$$

and

$$P_e(\{i, j\}) \le P_e(\{i', j'\}) \quad \text{for } i \le i' \text{ and } j \le j' \qquad (14)$$

for $1 \le i < j \le n$, where $n$ is the sequence length, $P_e(i)$ is the probability that the hard decision of the $i$-th symbol is in error, and $P_e(\{i, j\})$ is the probability that the hard decisions of both the $i$-th and $j$-th symbols are in error. Equivalent results hold for any number of positions considered. Therefore, one of the regular search orders is to start checking TEPs of the least weight over the least reliable positions [17].
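This ordering property is easy to verify empirically. The short Monte Carlo sketch below (ours; all-zero codeword and unit-energy BPSK assumed) estimates the hard-decision error rate at each position after sorting by reliability; the estimates are non-decreasing in the position index, matching (13):

```python
import numpy as np

def ordered_error_rates(n: int = 16, snr_db: float = 2.0,
                        trials: int = 20000) -> np.ndarray:
    """Empirical P_e(i) after ordering: transmit +1 on every bit, add AWGN,
    sort by |r_i| descending, and count sign errors per sorted position."""
    rng = np.random.default_rng(0)
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10))    # N0/2 with Es = 1
    errors = np.zeros(n)
    for _ in range(trials):
        r = 1.0 + sigma * rng.standard_normal(n)
        errors += (r[np.argsort(-np.abs(r))] < 0)  # hard-decision errors
    return errors / trials

print(np.round(ordered_error_rates(), 4))  # roughly non-decreasing
```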

From (6) and (14), it can be concluded that within the same reprocessing phase the first TEP segments have the highest checking priority, and the priority of the remaining segments diminishes. Therefore, a promising scheme for order-$m$ decoding is to conduct the reprocessing $m + 1$ times in ascending phase order from $0$ to $m$, checking the TEP segments individually from $E_l^{(1)}$ to $E_l^{(Q)}$ in phase-$l$ reprocessing. Since the last few segments in every reprocessing phase contain the TEPs with the least chance of successful re-encoding, some of them can be discarded to reduce the decoding complexity.

We introduce a segment discarding rule utilizing a local lower bound on the WHD in each phase-$l$ reprocessing ($1 \le l \le m$). When the current minimum WHD is lower than the local lower bound, all remaining unprocessed segments in the corresponding reprocessing phase are discarded. Thus, the segments are discarded if the following condition is satisfied:

$$d_W^{\min} < \underline{d}_W^{(l,u)}. \qquad (15)$$

The local lower bound $\underline{d}_W^{(l,u)}$ is estimated from the first checked TEP in segment $E_l^{(u)}$, and it is tightened and updated in every TEP segment checking procedure. When the reprocessing starts checking TEPs from a new segment, $\underline{d}_W^{(l,u)}$ is updated by

$$\underline{d}_W^{(l,u)} = \alpha(\mathbf{e}) - \beta\,\sigma_{\tilde{\alpha}} \qquad (16)$$

where

$$\alpha(\mathbf{e}) = \sum_{i\,:\,e_i \ne 0} \tilde{\alpha}_i \qquad (17)$$

is the sum of the reliabilities over the nonzero positions of $\mathbf{e}$, $\sigma_{\tilde{\alpha}}$ is the standard deviation of the ordered reliabilities $\tilde{\boldsymbol{\alpha}}$, and $\beta$ is a parameter that can adjust the trade-off between performance and complexity. In [13], a similar approach was used to estimate the TEP likelihood from the WHD.
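A sketch of the rule (the exact combination in (16) is our assumption; the structure — reliability mass of the first checked TEP, relaxed by $\beta$ standard deviations — follows the description above):

```python
import numpy as np

def segment_lower_bound(alpha: np.ndarray, tep_support, beta: float) -> float:
    """Assumed form of (16)-(17): the WHD of any codeword generated in this
    segment is bounded below by the reliability its first checked TEP must
    flip, minus beta times the spread of the ordered reliabilities."""
    flipped = alpha[list(tep_support)].sum()   # eq. (17): sum over the support
    return flipped - beta * alpha.std()

def discard_rest_of_phase(d_w_min: float, bound: float) -> bool:
    """Rule (15): if the best WHD found so far already beats the local lower
    bound, the remaining segments of this phase cannot improve it."""
    return d_w_min < bound
```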

When some segments are discarded in each phase-$l$ reprocessing, the combined segmentation and discarding scheme is as depicted in Fig. 2, where the discarded segments are indicated by light-colored blocks.

Fig. 2: Decoding scheme. Light-colored blocks represent segments that are discarded, while dark-colored ones are retained.

Another stopping rule, utilizing the first boundary $b_1$, is used to terminate the decoding in advance. During phase-$l$ reprocessing, the decoding stops and outputs the result immediately if the following condition is satisfied:

$$k - b_1 + 1 < l \qquad (18)$$

for $1 \le l \le m$. This is because, if the sub-MRB $B_1$ does not have enough positions to generate a weight-$l$ TEP, the decoded codeword is already close to the ideal output and no further decoding needs to be conducted.

We present the complete decoding algorithm combining segmentation-discarding and stopping rules in Algorithm 1.

0:    Input: generator matrix $\mathbf{G}$, received sequence $\mathbf{r}$, order $m$, segment number $Q$, and parameters $\lambda$ and $\beta$
0:    Output: optimal codeword estimate $\hat{\mathbf{c}}$
1:  Calculate the reliability values $\boldsymbol{\alpha}$
2:  First permutation: $\mathbf{r}' = \lambda_1(\mathbf{r})$, $\boldsymbol{\alpha}' = \lambda_1(\boldsymbol{\alpha})$, $\mathbf{G}' = \lambda_1(\mathbf{G})$
3:  Gaussian elimination and second permutation: $\tilde{\mathbf{G}} = [\mathbf{I}_k \; \tilde{\mathbf{P}}]$, $\tilde{\mathbf{r}} = \lambda_2(\mathbf{r}')$, $\tilde{\boldsymbol{\alpha}} = \lambda_2(\boldsymbol{\alpha}')$
4:  Perform the hard decision to obtain $\tilde{\mathbf{y}}$
5:  Phase-0 reprocessing: $\tilde{\mathbf{c}}_0 = \tilde{\mathbf{y}}_B \tilde{\mathbf{G}}$
6:  Calculate $d_W^{\min} = d_W(\tilde{\mathbf{c}}_0, \tilde{\mathbf{y}})$ and $\bar{\alpha}_B$
7:  Phase-$l$ reprocessing with $Q$ segments:
8:  for $l = 1$ to $m$ do
9:     for $u = 1$ to $Q$ do
10:        Determine the boundary $b_u$ through (11) and (12)
11:        if $k - b_1 + 1 < l$ then
12:           return $\hat{\mathbf{c}}$
13:        Generate the TEP segment $E_l^{(u)}$
14:        Calculate $\underline{d}_W^{(l,u)}$ and $\sigma_{\tilde{\alpha}}$
15:        if $d_W^{\min} < \underline{d}_W^{(l,u)}$ then
16:           break
17:        Check all TEPs in $E_l^{(u)}$ by re-encoding and evaluating (5); find the local optimum estimate $\tilde{\mathbf{c}}^{*}$ with distance $d_W^{*}$ for $E_l^{(u)}$
18:        if $d_W^{*} < d_W^{\min}$ then
19:           $d_W^{\min} \leftarrow d_W^{*}$, $\hat{\mathbf{c}} \leftarrow \lambda_1^{-1}(\lambda_2^{-1}(\tilde{\mathbf{c}}^{*}))$
20:  return $\hat{\mathbf{c}}$
Algorithm 1 Proposed SDD Algorithm
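For concreteness, the following compact Python sketch wires the pieces of Algorithm 1 together end to end. It is a reference implementation under explicit assumptions: the boundary target uses a placeholder offset instead of the tuned $f(\cdot)$ of Section V, the boundaries below $b_1$ are split evenly rather than by (12), and the segment lower bound uses the assumed form of (16):

```python
import numpy as np
from itertools import combinations

def sdd_decode(G: np.ndarray, r: np.ndarray, m: int = 2, Q: int = 3,
               lam: float = 0.5, beta: float = 1.0) -> np.ndarray:
    """Sketch of Algorithm 1 (SDD). G: full-rank k x n 0/1 integer matrix."""
    k, n = G.shape
    perm = np.argsort(-np.abs(r))                 # step 2: sort by reliability
    Gs, rs = G[:, perm] % 2, r[perm].copy()
    for i in range(k):                            # step 3: GF(2) elimination
        piv = next(c for c in range(i, n) if Gs[i:, c].any())
        if piv != i:                              # dependent column: swap it out
            Gs[:, [i, piv]] = Gs[:, [piv, i]]
            perm[[i, piv]], rs[[i, piv]] = perm[[piv, i]], rs[[piv, i]]
        pr = i + int(np.argmax(Gs[i:, i]))
        Gs[[i, pr]] = Gs[[pr, i]]
        for j in range(k):
            if j != i and Gs[j, i]:
                Gs[j] ^= Gs[i]
    alpha, y = np.abs(rs), (rs < 0).astype(int)   # step 4: hard decision
    best = y[:k] @ Gs % 2                         # step 5: phase-0 re-encoding
    d_min = float(alpha[best != y].sum())         # step 6
    for l in range(1, m + 1):                     # steps 8-19
        target = alpha[:k].mean() - lam * d_min   # placeholder for (11)
        b1 = int(np.argmin(np.abs(alpha[:k] - target)))
        if k - b1 < l:                            # stopping rule (18)
            break
        starts = np.linspace(b1, 0, Q).astype(int)  # even split stands in for (12)
        discard = False
        for u, s in enumerate(starts):
            prev = int(starts[u - 1]) if u > 0 else k
            first = True
            for sup in combinations(range(int(s), k), l):
                if min(sup) >= prev:
                    continue                      # TEP belongs to an earlier segment
                if first:                         # discarding rule (15)-(17), assumed form
                    first = False
                    if d_min < alpha[list(sup)].sum() - beta * alpha[:k].std():
                        discard = True
                        break
                e = np.zeros(k, dtype=int)
                e[list(sup)] = 1
                cand = (y[:k] ^ e) @ Gs % 2       # re-encode, eq. (4)
                d = float(alpha[cand != y].sum()) # WHD, eq. (5)
                if d < d_min:
                    d_min, best = d, cand
            if discard:
                break                             # drop the rest of this phase
    out = np.empty(n, dtype=int)
    out[perm] = best                              # undo the permutation
    return out
```

As a quick sanity check, decoding `r = 1 - 2*c + noise` for a random codeword `c` of a small code (e.g., the (8,4) extended Hamming code) recovers `c` at moderate noise levels.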

IV Computational Complexity

We estimate the algorithm complexity by evaluating the number of floating-point operations (FLOPs) and binary operations (BOPs) of each step. The total computational complexity mainly depends on the following terms:

  • Sorting (the first permutation): the merge sort algorithm can efficiently generate and perform the first permutation with an average complexity of $O(n \log n)$ FLOPs [7].

  • Gaussian elimination: the operation to obtain the systematic generator matrix $\tilde{\mathbf{G}}$ from $\mathbf{G}'$ can be done with $O\big(n \min(k, n-k)^2\big)$ BOPs [7].

  • Re-encoding: re-encoding uses sign operations and parallel XOR operations [7], which can be represented as approximately $n - k$ BOPs per checked TEP.

  • Number of candidates: for OSD-based decoding, the total number of checked candidates $N_a$ greatly affects the complexity, since $N_a$ re-encodings are required.

  • Segment boundaries and distance lower bound: the search for a boundary $b_u$ can be regarded as a one-dimensional look-up operation with complexity $O(\log k)$, so the total cost of the boundary calculations is $O(mQ \log k)$ FLOPs, while each distance lower bound is calculated with only a few FLOPs per segment.

In a complete decoding procedure, the sorting and the Gaussian elimination are performed once, the re-encoding is repeated $N_a$ times during the reprocessing, the boundaries are calculated at most $mQ$ times, and the distance lower bound is updated at most $mQ$ times. Therefore, the total computational complexity can be estimated as

$$C_{\text{total}} \approx \underbrace{O(n \log n)}_{\text{sorting}} + \underbrace{O\big(n \min(k, n-k)^2\big)}_{\text{Gaussian elimination}} + \underbrace{N_a \cdot O(n - k)}_{\text{re-encoding}} + \underbrace{O(mQ \log k)}_{\text{segmentation}}. \qquad (19)$$

The last term, i.e., the $O(mQ \log k)$ segmentation overhead, can be ignored since it is small in comparison with the other terms when $Q$ is not large. This implies that the complexity due to the segmentation and discarding is negligible.
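A back-of-the-envelope tally of the terms in (19) can be scripted directly (indicative constants only; the per-TEP re-encoding cost of about $n - k$ BOPs is our reading of the accounting above):

```python
from math import comb, log2

def sdd_complexity(n: int, k: int, m: int, Q: int, n_cand: float) -> dict:
    """Rough operation counts for the terms of (19)."""
    return {
        "sorting_flops":      n * log2(n),              # merge sort
        "gauss_elim_bops":    n * min(k, n - k) ** 2,   # systematize G
        "reencode_bops":      n_cand * (n - k),         # per-TEP re-encoding
        "segmentation_flops": m * Q * log2(k),          # look-ups + bound updates
    }

# Worst-case candidate count for order-4 decoding of the (128,64) code:
n_max = sum(comb(64, i) for i in range(5))
print(sdd_complexity(128, 64, 4, 4, n_max))
```

The printout makes the point of this section concrete: the segmentation term is several orders of magnitude below the re-encoding term.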

As a comparison, we also derive the extra computation of the fast OSD algorithm from [16]. The probabilistic necessary condition (PNC) in [16] requires parallel XOR operations and is checked at least once during each reprocessing phase. The probabilistic sufficient condition (PSC) requires approximately one FLOP and is checked for each TEP [16]. Therefore, the overall extra computation due to the PNC and PSC in the fast OSD [16] is at least

$$C_{\text{extra}} \ge m\,C_{\text{PNC}} + N_a \qquad (20)$$

where $C_{\text{PNC}}$ denotes the XOR cost of a single PNC check. Compared with (20), our proposed algorithm is more efficient in terms of computational complexity, since only $O(mQ \log k)$ extra operations are introduced.

V Simulation Results and Comparisons

In this section, we present several simulation results and comparisons for length-128 extended BCH (eBCH) codes with different rates. The form of the offset function $f(\cdot)$ in (11) significantly affects the performance of the proposed algorithm. By simulation, we find that the best decoding efficiency is obtained when the offset function has the following form:

(21)

Substituting the offset function (21) into (11), the first boundary $b_1$ is the position whose reliability is closest to

(22)
Fig. 3: Performance and complexity comparison in decoding the (128,64,22) eBCH code.

The performance and complexity of various decoders for the (128,64,22) eBCH code are depicted in Fig. 3. For our proposed algorithm, we set the segment number $Q$ and the parameters $\lambda$ and $\beta$ separately for order-4 and order-3 decoding. The original OSD algorithm [7], the recently proposed fast OSD approach [16], and the normal approximation of the Polyanskiy-Poor-Verdú (PPV) bound [18] are included in the simulation as benchmarks for comparison. The simulation results show that our proposed SDD algorithm exhibits nearly identical performance to its simulated counterparts; however, the complexity in terms of the average number of checked candidates differs. Compared to the fast OSD approach, our algorithm requires fewer than half as many TEP candidates.

Fig. 4: Performance and complexity comparison in decoding the (128,22,48) eBCH code.

The same simulation is conducted for the lower-rate (128,22,48) eBCH code. For this case, we set $Q$ and the parameters $\lambda$ and $\beta$ separately for order-5, order-4, and order-3 decoding. As shown in Fig. 4, the performance and complexity are compared for the different approaches, and a significant improvement is brought by the proposed SDD algorithm. At low SNRs, our decoder achieves the same performance using roughly one third of the number of candidates required by fast OSD, and a particularly significant complexity reduction can be observed at high SNRs as well. Note that the PPV normal approximation is not included in this simulation because of its inaccuracy at low coding rates.

Fig. 5: Performance and complexity comparison in decoding the (64,16,24) eBCH code.

The simulation results for decoding the (64,16,24) eBCH code are depicted in Fig. 5. For the proposed SDD, we set $Q$, $\lambda$, and $\beta$ separately for order-3 and order-2 decoding. In this length-64 regime, the proposed SDD also outperforms the fast OSD in terms of decoding complexity, while achieving near-optimal performance. The average numbers of checked candidates $N_a$ in decoding the above three codes are recorded in Table I, Table II, and Table III.

Fig. 6: Order-3 decoding of the (128,64,22) eBCH code with different values of the parameters $\lambda$ and $\beta$ and of the segment number $Q$.

For completeness, we have also studied the impact of the segment number $Q$ and the parameters $\lambda$ and $\beta$ on the performance and complexity of order-3 decoding for the (128,64,22) eBCH code. As depicted in Fig. 6, the performance decreases gradually as $\lambda$ increases, and the average number of candidates is reduced accordingly. $\beta$ affects the performance at high SNRs, and the simulations suggest that there is an optimal value of $\beta$. Changing $Q$ also affects the decoding efficiency, because more segments bring more discarding options. Choosing different parameters adjusts the trade-off between performance and complexity to meet different decoding requirements.

Apart from the class of BCH codes, the proposed SDD also has the potential to be a universal decoding approach for all linear block codes in the short block-length regime. We compare the decoding performance of various length-32 codes with a fixed coding rate of 0.5, as depicted in Fig. 7. The (32,16) eBCH code, the CCSDS standard LDPC code, and the (32,16) polar code are decoded by the SDD decoder as well as by their corresponding traditional decoders (SPA for the LDPC code and SCL for the polar code) [19]. From the simulation results, the eBCH code performs best among the three codes, and the polar code is slightly inferior. The block-error-rate performance of the LDPC code is the worst, under both SDD and SPA decoding. For both the LDPC code and the polar code, the proposed SDD outperforms the corresponding traditional decoder.

Fig. 7: Decoding performance comparison for length-32 codes
SNR(dB) 0 1 2 3
Order-3 fast OSD 20107 9775 2452 310
Order-3 proposed algorithm 6194 3762 1016 158
Order-4 fast OSD 70262 31917 5164 489
Order-4 proposed algorithm 29992 13777 2821 258
TABLE I: Comparison of the average number of checked candidates $N_a$ between the fast OSD and the proposed algorithm for the (128,64,22) eBCH code
SNR(dB) -5 -4 -3 -2 -1
Order-3 fast OSD 1409 1043 621 282 104
Order-3 proposed algorithm 640 485 289 132 52
Order-4 fast OSD 5567 3475 1718 661 312
Order-4 proposed algorithm 1255 1072 591 240 77
Order-5 fast OSD 10753 6172 2635 929 289
Order-5 proposed algorithm 3116 2328 1243 464 128
TABLE II: Comparison of the average number of checked candidates $N_a$ between the fast OSD and the proposed algorithm for the (128,22,48) eBCH code
SNR(dB) -2 -1 0 1
Order-2 fast OSD 55.9 31.6 15.1 6.1
Order-2 proposed algorithm 36.4 21.0 10.7 4.9
Order-3 fast OSD 69.8 36.0 15.9 6.2
Order-3 proposed algorithm 54.4 28.2 13.0 5.7
TABLE III: Comparison of the average number of checked candidates $N_a$ between the fast OSD and the proposed algorithm for the (64,16,24) eBCH code

VI Conclusion

In this paper, we proposed a new fast segmentation-discarding decoding (SDD) algorithm for short block-length codes based on ordered reliability. Two techniques were combined in the proposed approach: 1) an adaptive segmentation and discarding rule to discard unpromising TEPs, and 2) a stopping rule to terminate the decoding once a good estimate has been found.

From the simulation results, we conclude that the proposed algorithm can significantly reduce the decoding complexity of OSD for short block-length eBCH codes of multiple rates. By adjusting the parameters, the trade-off between performance and complexity can be tuned. In addition, the proposed algorithm has the potential to be a universal decoding approach for any linear code in the short block-length regime, with near-optimal performance.

References

  • [1] C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, no. 4, pp. 623–656, Oct 1948.
  • [2] S. Lin and D. J. Costello, Error control coding.   Pearson Education India, 2004.
  • [3] E. Arikan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, July 2009.
  • [4] M. Shirvanimoghaddam, M. S. Mohamadi, R. Abbas, A. Minja, B. Matuz, G. Han, Z. Lin, Y. Li, S. Johnson, and B. Vucetic, “Short block-length codes for ultra-reliable low-latency communications,” arXiv preprint arXiv:1802.09166, 2018.
  • [5] G. Liva, L. Gaudio, T. Ninacs, and T. Jerkovits, “Code design for short blocks: A survey,” arXiv preprint arXiv:1610.00873, 2016.
  • [6] J. Van Wonterghem, A. Alloum, J. J. Boutros, and M. Moeneclaey, “Performance comparison of short-length error-correcting codes,” in 2016 Symposium on Communications and Vehicular Technologies (SCVT), Nov 2016, pp. 1–6.
  • [7] M. P. C. Fossorier and S. Lin, “Soft-decision decoding of linear block codes based on ordered statistics,” IEEE Transactions on Information Theory, vol. 41, no. 5, pp. 1379–1396, Sep 1995.
  • [8] ——, “Computationally efficient soft-decision decoding of linear block codes based on ordered statistics,” IEEE Transactions on Information Theory, vol. 42, no. 3, pp. 738–750, May 1996.
  • [9] M. P. C. Fossorier, “Reliability-based soft-decision decoding with iterative information set reduction,” IEEE Transactions on Information Theory, vol. 48, no. 12, pp. 3101–3106, Dec 2002.
  • [10] A. Valembois and M. Fossorier, “Box and match techniques applied to soft-decision decoding,” IEEE Transactions on Information Theory, vol. 50, no. 5, pp. 796–810, May 2004.
  • [11] W. Jin and M. P. C. Fossorier, “Reliability-based soft-decision decoding with multiple biases,” IEEE Transactions on Information Theory, vol. 53, no. 1, pp. 105–120, Jan 2007.
  • [12] Y. Wu and C. N. Hadjicostis, “Soft-decision decoding of linear block codes using preprocessing and diversification,” IEEE Transactions on Information Theory, vol. 53, no. 1, pp. 378–393, 2007.
  • [13] ——, “Soft-decision decoding using ordered recodings on the most reliable basis,” IEEE Transactions on Information Theory, vol. 53, no. 2, pp. 829–836, 2007.
  • [14] W. Jin and M. Fossorier, “Probabilistic sufficient conditions on optimality for reliability based decoding of linear block codes,” in Proceedings of the 2006 IEEE International Symposium on Information Theory, 2006, pp. 2235–2239.
  • [15] S. E. Alnawayseh and P. Loskot, “Ordered statistics-based list decoding techniques for linear binary block codes,” EURASIP Journal on Wireless Communications and Networking, vol. 2012, no. 1, p. 314, 2012.
  • [16] J. Van Wonterghem, A. Alloum, J. J. Boutros, and M. Moeneclaey, “On performance and complexity of OSD for short error correcting codes in 5G-NR,” June 2017.
  • [17] A. Valembois and M. Fossorier, “A comparison between ‘most-reliable-basis reprocessing’ strategies,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. 85, no. 7, pp. 1727–1741, 2002.
  • [18] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307–2359, 2010.
  • [19] M. Helmling, S. Scholl, F. Gensheimer, T. Dietz, K. Kraft, S. Ruzika, and N. Wehn, “Database of Channel Codes and ML Simulation Results,” www.uni-kl.de/channel-codes, 2017.