Efficient decoding of polar codes with some 16×16 kernels

01/12/2020 ∙ by Grigorii Trofimiuk, et al. ∙ Peter the Great St. Petersburg Polytechnic University

A decoding algorithm for polar codes with binary 16×16 kernels with polarization rate 0.51828 and scaling exponents 3.346 and 3.450 is presented. The proposed approach exploits the relationship between the considered kernels and the Arikan matrix to significantly reduce the decoding complexity without any performance loss. Simulation results show that polar (sub)codes with 16×16 kernels can outperform polar codes with the Arikan kernel, while having lower decoding complexity.


1 Introduction

Polar codes are a novel class of error-correcting codes, which achieve the symmetric capacity of a binary-input discrete memoryless channel and admit low-complexity construction, encoding and decoding algorithms [1]. However, the performance of polar codes of practical length is quite poor. The reasons for this are the presence of imperfectly polarized subchannels and the suboptimality of the successive cancellation (SC) decoding algorithm. To improve performance, the successive cancellation list (SCL) decoding algorithm [2], as well as various code constructions, were proposed [3, 4, 5].

Polarization is a general phenomenon, and is not restricted to the case of the Arikan matrix [6]. One can replace it by a larger matrix, called a polarization kernel, which can provide a higher polarization rate. Polar codes with large kernels were shown to provide asymptotically optimal scaling exponent [7]. Many kernels with various properties were proposed [6, 8, 9, 10], but, to the best of our knowledge, no efficient decoding algorithms for kernels with polarization rate greater than 0.5 were presented, except [11], where an approximate algorithm was introduced. Therefore, polar codes with large kernels are believed to be impractical due to very high decoding complexity.

In this paper we present reduced complexity decoding algorithms for 16×16 polarization kernels with polarization rate 0.51828 and scaling exponents 3.346 and 3.450. We show that with these kernels increasing the list size in the SCL decoder provides a much more significant performance gain compared to the case of the Arikan kernel, and ultimately the proposed approach results in lower decoding complexity compared to polar codes with the Arikan kernel with the same performance.

The proposed approach exploits the relationship between the considered kernels and the Arikan matrix. Essentially, the log-likelihood ratios (LLRs) for the input symbols of the considered kernels are obtained from the LLRs computed via the Arikan recursive expressions.

2 Background

2.1 Channel polarization

Consider a binary input memoryless channel with transition probabilities $W(y|c)$, $c \in \mathbb{F}_2$, $y \in \mathcal{Y}$, where $\mathcal{Y}$ is the output alphabet. For a positive integer $n$, denote by $[n]$ the set of integers $\{0, 1, \dots, n-1\}$. A polarization kernel $K$ is a binary invertible $l \times l$ matrix, which is not upper-triangular under any column permutation. The Arikan kernel is given by
$$F = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$
An $(n = l^m, k)$ polar code is a linear block code generated by $k$ rows of the matrix $G_m = M^{(m)} K^{\otimes m}$, where $M^{(m)}$ is a digit-reversal permutation matrix, corresponding to the mapping of each index onto its base-$l$ digit reversal. The encoding scheme is given by $c_0^{n-1} = u_0^{n-1} G_m$, where $u_i, i \in \mathcal{F}$, are set to some pre-defined values, e.g. zero (frozen symbols), $|\mathcal{F}| = n - k$, and the remaining values are set to the payload data.
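As a concrete illustration of this encoding scheme, the following minimal Python sketch builds $G_m = M^{(m)} K^{\otimes m}$ for the Arikan kernel and encodes a vector in which frozen and payload positions are already filled in. The helper names (kron_power, digit_reversal, encode) are ours, not the paper's.

```python
import numpy as np

def kron_power(K, m):
    """m-fold Kronecker power of kernel K over GF(2)."""
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, K) % 2
    return G

def digit_reversal(n, l):
    """Permutation reversing the base-l digits of each index 0..n-1."""
    m = round(np.log(n) / np.log(l))
    perm = []
    for i in range(n):
        digits = [(i // l**t) % l for t in range(m)]
        perm.append(sum(d * l**(m - 1 - t) for t, d in enumerate(digits)))
    return perm

def encode(u, K):
    """c = u * M * K^{⊗m} over GF(2); digit reversal is an involution,
    so applying it to u directly matches the definition."""
    l = K.shape[0]
    m = round(np.log(len(u)) / np.log(l))
    v = u[digit_reversal(len(u), l)]
    return v.dot(kron_power(K, m)) % 2

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)        # Arikan kernel
u = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=np.uint8)  # frozen + payload
print(encode(u, F))
```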

It is possible to show that a binary input memoryless channel $W$ together with matrix $G_m$ gives rise to $n$ bit subchannels with capacities approaching $0$ or $1$, and the fraction of noiseless subchannels approaching the capacity of $W$ [6]. Selecting $\mathcal{F}$ as the set of indices of low-capacity subchannels enables almost error-free communication. It is convenient to define probabilities

$$W_m^{(i)}\left(u_0^i\middle|y_0^{n-1}\right) = \sum_{u_{i+1}^{n-1} \in \mathbb{F}_2^{n-i-1}} W_m\left(u_0^{n-1}\middle|y_0^{n-1}\right), \qquad (1)$$

where $W_m\left(u_0^{n-1}\middle|y_0^{n-1}\right) = \prod_{j=0}^{n-1} W\left(y_j\middle|\left(u_0^{n-1} G_m\right)_j\right)$.
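For intuition, (1) can be evaluated by brute force, summing the likelihood of every continuation of the current prefix. The sketch below (with a hypothetical channel callback W(y, c)) is exponential in $n$ and is meant only to make the definition concrete, not to be a practical implementation.

```python
from itertools import product

def subchannel_prob(prefix, y, G, W):
    """Brute-force W_m^{(i)}(u_0^i | y_0^{n-1}) per (1): sum over all tails
    u_{i+1}^{n-1} of prod_j W(y_j | c_j), where c = u G over GF(2)."""
    n = len(y)
    total = 0.0
    for tail in product((0, 1), repeat=n - len(prefix)):
        u = list(prefix) + list(tail)
        c = [sum(u[k] * G[k][j] for k in range(n)) % 2 for j in range(n)]
        p = 1.0
        for cj, yj in zip(c, y):
            p *= W(yj, cj)
        total += p
    return total

# example channel: BSC with crossover probability 0.1
W_bsc = lambda y, c: 0.9 if y == c else 0.1
```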

Let us further define $W^{(i)}\left(u_0^i\middle|y_0^{l-1}\right) = W_1^{(i)}\left(u_0^i\middle|y_0^{l-1}\right)$, where the kernel $K$ being used will be clear from the context. We also need the probabilities $W_\lambda^{(i)}$ for the Arikan matrix $F^{\otimes\lambda}$. Due to the recursive structure of $F^{\otimes\lambda}$, one has

$$W_\lambda^{(2i)}\left(v_0^{2i}\middle|y_0^{N-1}\right) = \sum_{v_{2i+1} \in \mathbb{F}_2} W_{\lambda-1}^{(i)}\left(v_{0,e}^{2i+1} \oplus v_{0,o}^{2i+1}\middle|y_0^{N/2-1}\right) W_{\lambda-1}^{(i)}\left(v_{0,o}^{2i+1}\middle|y_{N/2}^{N-1}\right),$$
$$W_\lambda^{(2i+1)}\left(v_0^{2i+1}\middle|y_0^{N-1}\right) = W_{\lambda-1}^{(i)}\left(v_{0,e}^{2i+1} \oplus v_{0,o}^{2i+1}\middle|y_0^{N/2-1}\right) W_{\lambda-1}^{(i)}\left(v_{0,o}^{2i+1}\middle|y_{N/2}^{N-1}\right), \qquad (2)$$

where $N = 2^\lambda$, and $v_{0,e}^{j}$, $v_{0,o}^{j}$ denote the subvectors of $v_0^{j}$ with even and odd indices, respectively. A trellis-based algorithm for computing these values was presented in [12].

At the receiver side, one can successively estimate

$$\hat u_i = \begin{cases} \arg\max_{u_i \in \mathbb{F}_2} W_m^{(i)}\left(\hat u_0^{i-1}.u_i\middle|y_0^{n-1}\right), & i \notin \mathcal{F}, \\ \text{the frozen value}, & i \in \mathcal{F}. \end{cases} \qquad (3)$$

This is known as the successive cancellation (SC) decoding algorithm.
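A literal (and equally exponential) rendering of rule (3), reusing subchannel_prob from the sketch above; frozen symbols are assumed to be zero. Practical decoders replace the brute-force probabilities by the recursion (2); this sketch only fixes the decision order.

```python
def sc_decode(y, G, W, frozen):
    """Successive cancellation per (3): extend the path with the more
    likely value of u_i, unless position i is frozen (then u_i = 0)."""
    u = []
    for i in range(len(y)):
        if i in frozen:
            u.append(0)
        else:
            p0 = subchannel_prob(u + [0], y, G, W)
            p1 = subchannel_prob(u + [1], y, G, W)
            u.append(0 if p0 >= p1 else 1)
    return u
```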

3 Computing kernel input symbol LLRs

3.1 General case

Our goal is to efficiently compute the probabilities $W^{(i)}\left(u_0^i\middle|y_0^{l-1}\right)$ for a given polarization transform $K$. Let us assume for the sake of simplicity that $m = 1$, i.e. $n = l$. The corresponding task will be referred to as kernel processing.

We propose to introduce approximate probabilities

$$\widetilde W^{(i)}\left(u_0^i\middle|y_0^{l-1}\right) = \max_{u_{i+1}^{l-1} \in \mathbb{F}_2^{l-i-1}} W_1\left(u_0^{l-1}\middle|y_0^{l-1}\right). \qquad (4)$$

This is the probability of the most likely continuation of path $u_0^i$ in the code tree, without taking into account possible freezing constraints on symbols $u_{i+1}^{l-1}$. Note that the same probabilities were introduced in [11, 13], and shown to provide a substantial reduction of the complexity of sequential decoding of polar codes.

Decoding can be implemented using the log-likelihood ratios $\bar S^{(i)}\left(u_0^{i-1}\middle|y_0^{l-1}\right) = \ln\frac{\widetilde W^{(i)}\left(u_0^{i-1}.0\middle|y_0^{l-1}\right)}{\widetilde W^{(i)}\left(u_0^{i-1}.1\middle|y_0^{l-1}\right)}$. Hence, kernel LLRs can be approximated by

$$\bar S^{(i)}\left(u_0^{i-1}\middle|y_0^{l-1}\right) = \max_{u_{i+1}^{l-1}} \ln W_1\left(u_0^{i-1}.0.u_{i+1}^{l-1}\middle|y_0^{l-1}\right) - \max_{u_{i+1}^{l-1}} \ln W_1\left(u_0^{i-1}.1.u_{i+1}^{l-1}\middle|y_0^{l-1}\right), \qquad (5)$$

where $u_0^{i-1}.b.u_{i+1}^{l-1}$ denotes the vector obtained by appending $b$ and $u_{i+1}^{l-1}$ to the path $u_0^{i-1}$. The above expression means that $\bar S^{(i)}$ can be computed by performing ML decoding of the coset of the code generated by the last $l - i - 1$ rows of the kernel $K$, assuming that all $u_{i+1}^{l-1}$ are equiprobable.
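The ML-decoding view of (5) can be made explicit as follows. This sketch works in the max-log (correlation) domain, takes the vector of channel LLRs as input, and returns twice the value of (5), which does not affect sign decisions; the function names are ours.

```python
import math
from itertools import product

def approx_llr(prefix, llrs, K):
    """Approximate kernel LLR per (5): ML decoding over all continuations,
    i.e. over the coset of the code generated by the last rows of K.
    Uses the correlation metric sum_j (-1)^{c_j} L_j, which equals
    2*ln W_1(u|y) up to a constant that cancels in the difference."""
    l = len(llrs)
    def best_score(pfx):
        best = -math.inf
        for tail in product((0, 1), repeat=l - len(pfx)):
            u = list(pfx) + list(tail)
            c = [sum(u[k] * K[k][j] for k in range(l)) % 2 for j in range(l)]
            best = max(best, sum((-1) ** cj * Lj for cj, Lj in zip(c, llrs)))
        return best
    return best_score(prefix + [0]) - best_score(prefix + [1])
```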

3.2 Binary algorithm

Straightforward evaluation of (5) for an arbitrary kernel has complexity exponential in $l$. However, there is a simple explicit recursive procedure for computing these values in the case of the Arikan matrix $F^{\otimes\lambda}$.

Let $N = 2^\lambda$. Consider the encoding scheme $c_0^{N-1} = v_0^{N-1} F^{\otimes\lambda}$. Similarly to (4), define approximate probabilities

$$\widetilde W_\lambda^{(i)}\left(v_0^i\middle|y_0^{N-1}\right) = \max_{v_{i+1}^{N-1}} W_\lambda\left(v_0^{N-1}\middle|y_0^{N-1}\right)$$

and modified log-likelihood ratios

$$\bar S_\lambda^{(i)}\left(v_0^{i-1}\middle|y_0^{N-1}\right) = \ln\frac{\widetilde W_\lambda^{(i)}\left(v_0^{i-1}.0\middle|y_0^{N-1}\right)}{\widetilde W_\lambda^{(i)}\left(v_0^{i-1}.1\middle|y_0^{N-1}\right)}.$$

It can be seen that

$$\bar S_\lambda^{(2i)}\left(v_0^{2i-1}\middle|y_0^{N-1}\right) = \operatorname{sgn}(a)\operatorname{sgn}(b)\min(|a|, |b|), \qquad (6)$$
$$\bar S_\lambda^{(2i+1)}\left(v_0^{2i}\middle|y_0^{N-1}\right) = (-1)^{v_{2i}} a + b, \qquad (7)$$

where $a = \bar S_{\lambda-1}^{(i)}\left(v_{0,e}^{2i-1} \oplus v_{0,o}^{2i-1}\middle|y_0^{N/2-1}\right)$, $b = \bar S_{\lambda-1}^{(i)}\left(v_{0,o}^{2i-1}\middle|y_{N/2}^{N-1}\right)$, and the recursion terminates with the channel LLRs $\bar S_0^{(0)}(y_j) = \ln\frac{W(y_j|0)}{W(y_j|1)}$. Then the log-likelihood of a path (path score) can be obtained as [14]

$$R\left(v_0^i\middle|y_0^{N-1}\right) = R\left(v_0^{i-1}\middle|y_0^{N-1}\right) + \tau\!\left(\bar S_\lambda^{(i)}\left(v_0^{i-1}\middle|y_0^{N-1}\right), v_i\right), \qquad (8)$$

where $R\left(v_0^{-1}\middle|y_0^{N-1}\right)$ can be set to $0$, $v_0^{-1}$ is an empty sequence, and

$$\tau(S, v) = \begin{cases} 0, & \operatorname{sgn}(S) = (-1)^v, \\ -|S|, & \text{otherwise}. \end{cases}$$

It can be verified that

$$R\left(v_0^i\middle|y_0^{N-1}\right) = \ln \widetilde W_\lambda^{(i)}\left(v_0^i\middle|y_0^{N-1}\right) + \gamma, \qquad (9)$$

where $\gamma$ is a constant which does not depend on $v_0^i$.
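In code, (6)-(8) amount to the familiar min-sum updates plus a penalty accumulator; a minimal sketch:

```python
import math

def f(a, b):
    """Update (6): sgn(a) * sgn(b) * min(|a|, |b|)."""
    return math.copysign(min(abs(a), abs(b)), a * b)

def g(a, b, v):
    """Update (7): (-1)^v * a + b."""
    return b - a if v else b + a

def tau(S, v):
    """Penalty from (8): 0 if the hard decision on S agrees with v,
    otherwise -|S|."""
    return 0.0 if (S < 0) == bool(v) else -abs(S)

# path score update per (8): R_new = R_old + tau(S, v_i)
```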

It was suggested in [15] to express the values $\bar S^{(i)}$ via $\bar S_\lambda^{(j)}$ for some $j$. One can represent the kernel as $K = T F^{\otimes\lambda}$, where $T$ is an $l \times l$ matrix. Let $v_0^{l-1} = u_0^{l-1} T$. Then $u_0^{l-1} K = v_0^{l-1} F^{\otimes\lambda}$, so that the kernel LLRs can be obtained from the LLRs arising in the Arikan transform.

Observe that it is possible to reconstruct $u_i$ from $v_0^{h_i}$, where $h_i$ is the position of the last non-zero symbol in the $i$-th row of $\left(T^{-1}\right)^T$. Recall that successive cancellation decoding of polar codes with an arbitrary kernel requires one to compute the values $\widetilde W^{(i)}\left(u_0^i\middle|y_0^{l-1}\right)$. However, fixing the values $u_0^i$ may impose constraints on $v_0^{h_i}$, which must be taken into account while computing these probabilities.
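Since $F^{\otimes\lambda}$ is its own inverse over GF(2), the factor $T$ is easy to compute; a small sketch (names ours), with $h_i$ then following from $T^{-1}$:

```python
import numpy as np

def arikan(lam):
    """F^{⊗λ} over GF(2)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(lam):
        G = np.kron(G, F) % 2
    return G

def factor_T(K):
    """T such that K = T · F^{⊗λ} mod 2; uses (F^{⊗λ})^{-1} = F^{⊗λ}."""
    lam = int(np.log2(K.shape[0]))
    return K.dot(arikan(lam)) % 2
```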

Indeed, the vectors $u_0^{l-1}$ and $v_0^{l-1}$ satisfy the equation

$$\widehat T \left(v_{l-1}, v_{l-2}, \dots, v_0\right)^T = \left(u_0, u_1, \dots, u_{l-1}\right)^T,$$

where $\widehat T$ is obtained by transposing $T^{-1}$ and reversing the order of columns in the obtained matrix. By applying elementary row operations, matrix $\widehat T$ can be transformed into a minimum-span form $\widehat T'$, such that the first and last non-zero elements of the $i$-th row are located in columns $b_i$ and $e_i$, respectively, where all $e_i$ are distinct. This enables one to obtain the symbols of vector $u_0^{l-1}$ as

$$u_{\sigma(i)} = \bigoplus_{j=b_i}^{e_i} \widehat T'_{i,j}\, v_{l-1-j}, \qquad (10)$$

where $\sigma$ is a permutation of $[l]$ induced by the row operations. Let $Z\left(u_0^i\right)$ denote the set of vectors $v_0^{h_i}$ such that (10) holds for the given $u_0^i$. It can be seen that¹

$$W^{(i)}\left(u_0^i\middle|y_0^{l-1}\right) = \sum_{v_0^{h_i} \in Z\left(u_0^i\right)} W_\lambda^{(h_i)}\left(v_0^{h_i}\middle|y_0^{l-1}\right), \qquad (11)$$

¹The method given in [10] is a special case of this approach.

Similarly, we can rewrite the above expression for the case of the approximate probabilities

$$\widetilde W^{(i)}\left(u_0^i\middle|y_0^{l-1}\right) = \max_{v_0^{h_i} \in Z\left(u_0^i\right)} \widetilde W_\lambda^{(h_i)}\left(v_0^{h_i}\middle|y_0^{l-1}\right). \qquad (12)$$

Let $R$ be the path score defined in (8). Hence, one obtains

$$\bar S^{(i)}\left(u_0^{i-1}\middle|y_0^{l-1}\right) = \max_{v_0^{h_i} \in Z\left(u_0^{i-1}.0\right)} R\left(v_0^{h_i}\middle|y_0^{l-1}\right) - \max_{v_0^{h_i} \in Z\left(u_0^{i-1}.1\right)} R\left(v_0^{h_i}\middle|y_0^{l-1}\right). \qquad (13)$$

Observe that computing these values requires considering multiple vectors of input symbols of the Arikan transform $F^{\otimes\lambda}$. Let $\mathcal{D}_i$ be a decoding window, i.e. the set of indices of Arikan input symbols $v_j$, $j \leq h_i$, which are not determined by the symbols $u_0^i$. The number of such vectors, which determines the decoding complexity, is $2^{|\mathcal{D}_i|}$. In general, the decoding windows can be large for an arbitrary kernel, making straightforward processing impractical.
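A direct rendering of (13) then enumerates the $2^{|\mathcal{D}_i|}$ assignments of the free Arikan inputs. In the sketch below, expand and score are hypothetical callbacks that encode (10) (together with the already estimated symbols) and the path score $R$ of (8), respectively.

```python
from itertools import product

def kernel_llr(window_size, expand, score):
    """Evaluate (13): difference of the best path scores over all
    2^{|D_i|} completions, for the hypotheses u_i = 0 and u_i = 1."""
    def best(ui):
        return max(score(expand(ui, free))
                   for free in product((0, 1), repeat=window_size))
    return best(0) - best(1)
```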

4 Efficient processing of kernels

To minimize the complexity of the proposed approach one needs to find kernels with small decoding windows, while preserving the required polarization rate (0.51828 in our case) and scaling exponent. By a computer search based on the heuristic algorithm presented in [8], we found a kernel $K_1$ with BEC scaling exponent 3.346 [8]. Furthermore, to minimize the size of the decoding windows, we derived another kernel $K_2 = \Pi K_1$, where $\Pi$ is a permutation matrix corresponding to a permutation of the rows of $K_1$, with scaling exponent 3.450. Both kernels have polarization rate 0.51828.

Phase i | Cost for K_1 | Cost for K_2
   0    |      15      |      15
   1    |       1      |       1
   2    |       3      |       3
   3    |      21      |       1
   4    |     127      |       7
   5    |      48      |      67
   6    |      95      |      24
   7    |       1      |      47
   8    |     127      |       1
   9    |       1      |       1
  10    |       1      |       1
  11    |       1      |       1
  12    |       1      |       7
  13    |       1      |       1
  14    |       3      |       3
  15    |       1      |       1

Table 1: Per-phase processing costs for the input symbols of kernels $K_1$ and $K_2$, viewed as functions of the input symbols of $F^{\otimes 4}$ via (10)

Table 1 presents, for both kernels, the cost of computing each input symbol $u_i$ from the right-hand side of expression (10). It can be seen that the maximal cost, and hence the maximal decoding window size, is considerably smaller for $K_2$ than for $K_1$. Note that by applying the row permutation to $K_1$, we have reduced the decoding windows, but increased the scaling exponent. Below we present efficient methods for computing some input symbol LLRs for these kernels.
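The decoding windows themselves follow from the minimum-span form of Section 3.2. The following sketch performs the GF(2) row reduction that makes all row end positions distinct, assuming an invertible input matrix; it is a generic illustration, not the authors' search procedure.

```python
import numpy as np

def minimum_span_form(A):
    """Repeatedly cancel shared trailing ones until the last non-zero
    positions e_i of all rows are distinct (A must be invertible)."""
    A = A.copy() % 2
    r = A.shape[0]
    changed = True
    while changed:
        changed = False
        ends = [int(np.nonzero(A[i])[0].max()) for i in range(r)]
        for i in range(r):
            for k in range(i + 1, r):
                if ends[i] == ends[k]:
                    A[k] = (A[k] + A[i]) % 2   # strictly decreases ends[k]
                    changed = True
                    break
            if changed:
                break
    return A

A = np.array([[1, 1, 1, 1], [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]],
             dtype=np.uint8)
print(minimum_span_form(A))
```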

4.1 Processing of kernel $K_1$

It can be seen that for most phases $i$ the symbol $u_i$ coincides with a single Arikan input symbol $v_j$, i.e. the LLR for $u_i$ is directly given by the corresponding Arikan LLR $\bar S_4^{(j)}$.

4.1.1 Phase 3

In the case of $i = 3$, expressions (13) and (10) imply that the decoding window $\mathcal{D}_3$ is non-trivial, and the LLR for $u_3$ is given by a maximization of path scores over the vectors $v_0^{h_3}$ compatible with (10), where the already estimated symbols $u_0^2$ determine the remaining positions.

To obtain this LLR one should compute:

  • the first required intermediate LLR, with 1 operation,

  • the next group of intermediate LLRs; this can be done with 1 summation, since one of the operands coincides with an already computed value,

  • the remaining intermediate LLRs of the inner layers,

  • two further intermediate LLRs, with 2 operations,

  • two further intermediate LLRs, with 2 operations,

  • the final combination in (13), with 1 operation.

The total number of operations is 21.

4.1.2 Phase 4

The decoding window is given by $\mathcal{D}_4$, and the LLR for $u_4$ is obtained via (13), where the maximization is performed over the set of vectors $v_0^{h_4}$ satisfying (10) for the already estimated symbols $u_0^3$.

Instead of an exhaustive enumeration of the vectors in (13), we propose to exploit the structure of the kernel to identify common subexpressions (CSE) in the formulas for the two maxima, which can be computed once and used multiple times. In some cases computing these subexpressions reduces to the decoding of well-known codes, which can be implemented with appropriate fast algorithms. Furthermore, we observe that the number of possible values of these subexpressions is smaller than the number of different vectors to be considered. This results in a further complexity reduction. A more detailed description of the CSE technique can be found in [16]. To demonstrate this approach, we consider computing the LLR for phase $i = 4$ of $K_1$.

This requires considering 16 vectors $v_0^{h_4}$ satisfying (10). According to (8), the path score of each such vector decomposes into a sum of two terms. Observe that the relevant set of vectors forms a coset of a first-order Reed–Muller code, where the coset is determined by the already estimated symbols, so that (10) holds for each element. Furthermore, the first term of the path score is a correlation of a sign vector with the corresponding LLRs. Assume for the sake of simplicity that the already estimated symbols are zero. Then the first term can be obtained for each candidate vector via the fast Hadamard transform (FHT) [17] of the vector of the corresponding LLRs, and the second one does not need to be computed, since it cancels in (13).
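A standard in-place FHT is sketched below; for a length-8 LLR vector it uses exactly 8·log2(8) = 24 additions/subtractions, matching the operation count used later in this section, and its outputs are the correlations with all first-order Reed–Muller sign patterns.

```python
def fht(x):
    """Iterative fast (Walsh-)Hadamard transform, n log2 n operations."""
    x = list(x)
    h = 1
    while h < len(x):
        for start in range(0, len(x), 2 * h):
            for j in range(start, start + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

print(fht([1.0, -2.0, 0.5, 3.0, -1.0, 0.0, 2.0, -0.5]))
```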

It remains to compute the second components of the path scores. In a straightforward implementation, one would recursively apply formulas (6) and (7) to compute the intermediate LLRs for all 16 vectors $v_0^{h_4}$. It appears that there are some CSE arising in this computation.

At first, one needs to compute the intermediate values of the innermost layer. Since distinct candidate vectors share the same sub-vectors, these values constitute the first set of CSE. We store them in an array indexed by the corresponding sub-vectors. Computing these values requires a few summations only, instead of a separate summation per candidate vector in a straightforward implementation.

The next step is to compute the values of the following layer, which are obtained from the stored ones via (7). This gives us the second set of CSE, and one can use the values stored in the first array to compute the second one.

Observe that the values for complementary sub-vectors are related by a simple symmetry. That is, one needs to consider only vectors of even weight while computing them. The remaining arrays of partial path scores are obtained in the same manner.

Finally, the path score values can be obtained by combining the stored arrays. Each element of the resulting array corresponds to some vector $v_0^{h_4}$. These values are used in (13) together with the FHT outputs to calculate the LLR for $u_4$.
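The effect of sharing subexpressions can be seen on a toy example: computing all signed sums $\sum_j (-1)^{v_j} L_j$ over $v \in \mathbb{F}_2^k$ by extending shorter prefixes, instead of recomputing each sum from scratch. This generic sketch illustrates the idea; it is not the authors' exact array layout.

```python
def all_signed_sums(L):
    """All 2^k values sum_j (-1)^{v_j} * L[j]; bit j of the result index
    equals v_j.  Each prefix sum is computed once and reused twice."""
    sums = [0.0]
    for Lj in L:
        sums = [s + Lj for s in sums] + [s - Lj for s in sums]
    return sums

print(all_signed_sums([0.5, -1.0, 2.0]))
```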

Let us compute the number of operations required to process phase 4. One needs to compute:

  • the correlations for all candidate vectors via the FHT, with 24 operations,

  • all distinct values constituting the first set of CSE, with 16 operations,

  • all values in the second set of CSE, with 16 operations,

  • all values in the third set of CSE, with 16 operations,

  • all values in the fourth set of CSE, with 16 operations,

  • the path scores for the 16 candidate vectors, with 16 operations,

  • the remaining combining steps,

  • the final LLR, with 1 operation.

The overall complexity is given by 135 operations.

We also employ one more observation to reduce the complexity of computing the correlations. Let $\hat L$ denote the FHT of the vector of LLRs involved at this phase. Observe that part of the transform can be computed with 7 operations, and the remaining values follow from the structure of the transform. Recall that the function $\tau$ is zero for one of the two values of its second argument; therefore, for each candidate there is a hypothesis whose path score requires no additional operations.