# Interleaved Polar (I-Polar) Codes

By inserting interleavers between intermediate stages of the polar encoder, a new class of polar codes, termed interleaved polar (i-polar) codes, is proposed. By the uniform interleaver assumption, we derive the weight enumerating function (WEF) and input-output weight enumerating function (IOWEF) averaged over the ensemble of i-polar codes. The average WEF can be used to calculate the upper bound on the average block error rate (BLER) of a code selected at random from the ensemble of i-polar codes. Also, we propose a concatenated coding scheme that employs P high rate codes as the outer code and Q i-polar codes as the inner code with an interleaver in between. The average WEF of the concatenated code is derived based on the uniform interleaver assumption. Simulation results show that BLER upper bounds can well predict BLER performance levels of the concatenated codes. The results show that the performance of the proposed concatenated code with P=Q=2 is better than that of the CRC-aided i-polar code with P=Q=1 of the same length and code rate at high signal-to-noise ratios (SNRs). Moreover, the proposed concatenated code allows multiple decoders to operate in parallel, which can reduce the decoding latency and hence is suitable for ultra-reliable low-latency communications (URLLC).


## I Introduction

Polar codes [2] are constructed from the generator matrix $\mathbf{G}_2^{\otimes M}$, where $(\cdot)^{\otimes M}$ denotes the $M$th Kronecker power of the kernel $\mathbf{G}_2 = \left[\begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix}\right]$. It has been shown in [2] that the synthesized channels seen by individual bits approach two extremes, either a noiseless channel or a pure-noise channel, as the block length grows large. The fraction of noiseless channels approaches the channel capacity. Therefore, the noiseless channels, termed unfrozen bit channels, are selected for transmitting message bits, while the other channels, termed frozen bit channels, are set to fixed values known by both encoder and decoder. As a result, polar codes are the first family of codes that achieve the capacity of symmetric binary-input discrete memoryless channels under a low-complexity successive cancellation (SC) decoding algorithm as the block length approaches infinity.
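The generator-matrix construction can be made concrete with a short sketch of our own (function names are illustrative, not from the paper): build the Kronecker power of the $2\times 2$ kernel and multiply over GF(2).

```python
# Minimal sketch: polar encoding as a binary matrix product with the
# Kronecker power of the kernel G2 = [[1, 0], [1, 1]].

def kron(a, b):
    """Kronecker product of two 0/1 matrices given as lists of lists."""
    return [[x & y for x in ra for y in rb] for ra in a for rb in b]

def polar_generator(m):
    """m-fold Kronecker power of G2: generator of a length-2^m polar code
    (natural order, no bit reversal)."""
    G = [[1, 0], [1, 1]]
    for _ in range(m - 1):
        G = kron(G, [[1, 0], [1, 1]])
    return G

def polar_encode(u, G):
    """Codeword x = u G over GF(2)."""
    N = len(G)
    return [sum(u[i] & G[i][j] for i in range(N)) % 2 for j in range(N)]
```

Freezing a subset of the positions of `u` to zero then yields the actual code; only the unfrozen positions carry message bits.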

However, the performance of polar codes at short to moderate block lengths is disappointing under the SC decoding algorithm. Later, a successive cancellation list (SCL) decoding algorithm for polar codes was proposed [22], which approaches the performance of the maximum-likelihood (ML) decoder when the list size is large. However, the performance levels of polar codes are still inferior to those of low-density parity-check (LDPC) codes even under the ML decoder. To strengthen polar codes, a serial concatenation of a cyclic redundancy check (CRC) code and a polar code, termed the CRC-aided polar code, was found to be effective in improving the performance under the SCL decoding algorithm [22]. The performance levels of CRC-aided polar codes under the SCL decoding algorithm are better than those of LDPC and turbo codes [22, 17].

As the SCL decoder is capable of achieving the ML performance, it is important to study the block error rate (BLER) of polar codes under the ML decoder. However, in the literature, there are no analytical results regarding the ML performance of polar codes; the BLERs of polar codes rely on simulations, which are time-consuming. A possible way to analyze the BLER performance of a coding scheme is to use a BLER upper bound that is a function of the weight enumerating function (WEF), as was done for turbo codes [5]. However, if the code size is large, obtaining the exact WEF of a polar code is prohibitively complex. Approximations of the WEFs of polar codes are proposed in [25, 27] based on the probabilistic weight distribution (PWD) [11].

In this paper, we propose to randomize the polar code using interleavers between the intermediate stages of the polar code encoder. Codes constructed on the basis of this idea are called interleaved polar (i-polar) codes. The ensemble of i-polar codes is formed by considering all possible interleavers. The regular polar code is just one realization of the ensemble of i-polar codes. Based on the concept of uniform interleaver, i.e., all interleavers are selected uniformly at random from all possible permutations, the average WEF of a code selected at random from the ensemble of i-polar codes can be evaluated. The concept of uniform interleaver has also been used in the analysis of turbo codes [5]. Note that the WEF analysis in this paper is not an approximation to the WEF of a polar code, but is an exact WEF averaged over the ensemble of i-polar codes. Based on the average WEF, a BLER upper bound, termed simple bound [8], can be used to evaluate the BLER performance averaged over the ensemble of codes. Simulation results show that the BLER upper bounds can well predict the ML performance levels of i-polar codes at high SNRs. Also, we will show by simulations that a specific realization of i-polar codes outperforms a regular polar code under the SCL decoder of the same list size.

We also propose a concatenated coding scheme that employs $P$ identical high-rate codes as the outer code and $Q$ identical i-polar codes as the inner code with an interleaver in between. CRC codes are the most popular outer codes employed in the concatenation of polar codes. We propose, as an alternative, to use systematic regular repeat-accumulate (RRA) codes or irregular repeat-accumulate (IRA) codes [16] as the outer component code. The average WEF of the concatenated code is derived based on the uniform interleaver assumption. Simulation results show that the BLER upper bounds can well predict the BLER performance levels of the concatenated codes. One advantage of the proposed concatenated code is that, for $Q > 1$, the code can be decoded using $Q$ SCL decoders working in parallel, which can significantly reduce the decoding latency when $Q$ is large. Analytical and simulation results both show that the performance of the proposed concatenated code with $P=Q=2$ is better than that of the CRC-aided i-polar code with $P=Q=1$ of the same length and code rate at high SNRs. Therefore, the proposed coding scheme is suitable for ultra-reliable low-latency communications (URLLC) [15].

The rest of the paper is organized as follows. We begin with a brief introduction of polar codes in Section II. The construction of i-polar codes is presented in Section III. Section IV presents the WEF and IOWEF analysis of i-polar codes. In Section V, a concatenated coding scheme with the i-polar code as the inner component code is proposed and the WEF of the concatenated code is presented. Analytical and simulation results are given in Section VI. Finally, conclusions are given in Section VII.

Notations: Throughout this paper, matrices and vectors are set in boldface, with upper-case letters for matrices and lower-case letters for vectors. An $N$-tuple vector is denoted with its indices starting from 0 (instead of 1 as in the usual vector representation). The notation $\mathbf{a}_i^j$ means the sub-vector $(a_i, \ldots, a_j)$ if $i \le j$ and the null vector otherwise. Set quantities such as $\mathcal{A}$ are denoted using the calligraphic font, and the cardinality of the set $\mathcal{A}$ is denoted as $|\mathcal{A}|$.

## II Background

A codeword of a polar code of length $N$ without the bit-reversal matrix can be represented by

$$\mathbf{x} = \mathbf{u}\,\mathbf{G}_2^{\otimes M}, \tag{1}$$

where $\mathbf{u}$ is the message-bit vector, $\mathbf{x}$ is the codeword-bit vector, and $N = 2^M$. A polar code of block length $N$ can be represented by a graph with $M$ layers of trellis connections as given in [2], which is called the standard graph of the polar code. It has been shown in [14] that, for a polar code of block length $N$, there exist different graphs obtained by different permutations of the $M$ layers of trellis connections. In this paper, we represent a polar code using the reverse ordering of its standard graph. Figure 1 shows an example graph with reverse ordering of the standard graph, where $\oplus$ represents a modulo-2 adder.

The codeword obtained from (1) is then transmitted via $N$ independent uses of the binary-input discrete memoryless channel (B-DMC) $W: \mathcal{X} \to \mathcal{Y}$, where $\mathcal{X}$ denotes the input alphabet, $\mathcal{Y}$ denotes the output alphabet, and $W(y|x)$ denotes the channel transition probabilities. The conditional distribution of the output $\mathbf{y}$ given the input $\mathbf{x}$, denoted as $W^N(\mathbf{y}|\mathbf{x})$, is given by

$$W^N(\mathbf{y}|\mathbf{x}) = \prod_{i=0}^{N-1} W(y_i|x_i).$$

The distribution of $\mathbf{y}$ conditioned on $\mathbf{u}$, denoted as $W_M(\mathbf{y}|\mathbf{u})$, is given by

$$W_M(\mathbf{y}|\mathbf{u}) = W^N\!\left(\mathbf{y}\,\middle|\,\mathbf{u}\,\mathbf{G}_2^{\otimes M}\right).$$

The polar code of length $N$ transforms the $N$ original identical channels into $N$ synthesized channels, denoted as $W_M^{(i)}$ for $0 \le i \le N-1$, with the transition probabilities given by

$$W_M^{(i)}\!\left(\mathbf{y}, \mathbf{u}_0^{i-1} \,\middle|\, u_i\right) \triangleq \sum_{\mathbf{u}_{i+1}^{N-1} \in \mathcal{X}^{N-i-1}} \frac{1}{2^{N-1}}\, W_M(\mathbf{y}|\mathbf{u}).$$

It has been shown in [2] that as $N$ grows large, the synthesized channels start polarizing: they approach either a noiseless channel or a pure-noise channel. The fraction of noiseless channels is close to the channel capacity. Therefore, the noiseless channels are selected for transmitting message bits while the other channels are set to fixed values known by both encoder and decoder. In the code design, a polar code of dimension $K$ is generated by selecting the $K$ least noisy channels among $W_M^{(i)}$, $0 \le i \le N-1$, and the indices of the least noisy channels are collected in a set $\mathcal{A}$. Define $\mathbf{u}_{\mathcal{A}}$ as the sub-vector of $\mathbf{u}$ formed by the elements of $\mathbf{u}$ with indices in $\mathcal{A}$. Only the sub-vector $\mathbf{u}_{\mathcal{A}}$, termed the unfrozen bits, is employed to transmit message bits. The other bits $\mathbf{u}_{\mathcal{A}^c}$, termed the frozen bits, are set to fixed values known by both encoder and decoder. In this paper, we set the frozen bits to all zeros.

Polar codes can be decoded by the SC decoder, which has decoding complexity $O(N \log N)$ and achieves the capacity as $N$ approaches infinity [2]. However, the SC decoder does not perform well at short to moderate block lengths. The SC decoder has the drawback that if a bit is not correctly detected, it cannot be corrected in later decoding steps. To improve the performance, a more sophisticated SCL decoder was proposed in [22], which performs very close to the ML decoder for a large list size $L$. The SCL decoder of list size $L$ is based on a tree search over the message bits under the complexity constraint that the number of candidates in the list is at most $L$. At the $i$th step, if $i \in \mathcal{A}$, the decoder extends every candidate path in the list along two branches of the binary tree by appending a bit 0 or a bit 1 to each candidate path. Therefore, for every unfrozen bit, the decoder doubles the number of paths. When the number of paths exceeds $L$, only the $L$ most reliable paths are retained. This procedure is repeated until the last bit is reached. At the last step, the most reliable path is selected as the output of the decoder. The SCL decoder degenerates to the SC decoder when $L = 1$. The details of the SCL decoder can be found in [22] based on the probability domain and in [3] based on the log-likelihood ratio (LLR) domain.
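The list-doubling-and-pruning step described above can be illustrated with a toy sketch. The metric update below is a placeholder of our own making, since a real SCL decoder derives path metrics from channel LLRs via the SC recursions:

```python
import heapq

def scl_step(paths, penalty, L):
    """One SCL step at an unfrozen bit: extend every path with bit 0 and
    bit 1, update the path metric (smaller = more reliable), keep the L best.
    `paths` is a list of (bit_list, metric); `penalty(bits, b)` is a stand-in
    for the real LLR-based metric update."""
    doubled = [(bits + [b], m + penalty(bits, b))
               for bits, m in paths for b in (0, 1)]
    return heapq.nsmallest(L, doubled, key=lambda p: p[1])
```

Starting from the single empty path and repeating `scl_step` for every unfrozen bit index reproduces the doubling-then-pruning behavior; with `L = 1` the procedure degenerates to SC decoding.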

## III Interleaved Polar (I-Polar) Codes

A polar code is constructed recursively by the well-known $(\mathbf{u}+\mathbf{v} \,|\, \mathbf{v})$ structure. Without ambiguity, we use '+' to denote binary addition as well as ordinary real-number addition. Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two linear codes of the same length $n$. We define $\mathcal{C}_1 + \mathcal{C}_2 \,|\, \mathcal{C}_2$ as

$$\mathcal{C}_1 + \mathcal{C}_2 \,\big|\, \mathcal{C}_2 = \bigl\{[\mathbf{x}+\mathbf{y},\, \mathbf{y}] \,\big|\, \mathbf{x} \in \mathcal{C}_1,\ \mathbf{y} \in \mathcal{C}_2\bigr\}.$$

As shown by the graph representation of the polar code, a polar code can be described by the following recursive equation

$$\mathcal{C}_{m,j} = \mathcal{C}_{m-1,2j} + \mathcal{C}_{m-1,2j+1} \,\big|\, \mathcal{C}_{m-1,2j+1},$$

for $1 \le m \le M$ and $0 \le j \le 2^{M-m}-1$. The initial conditions are $\mathcal{C}_{0,j} = \{0,1\}$ if $j \in \mathcal{A}$ and $\mathcal{C}_{0,j} = \{0\}$ if $j \in \mathcal{A}^c$. The polar code of length $N = 2^M$ is represented by the code $\mathcal{C}_{M,0}$.

We propose to construct the i-polar code by inserting an interleaver at the output of every upper encoder $\mathcal{C}_{m-1,2j}$. An interleaver can be represented as a permutation matrix $\boldsymbol{\Pi}$. We define $\mathcal{C}\boldsymbol{\Pi}$ as

$$\mathcal{C}\boldsymbol{\Pi} = \{\mathbf{x}\boldsymbol{\Pi} \,|\, \mathbf{x} \in \mathcal{C}\},$$

which represents the code obtained by permuting the code bits of all codewords of $\mathcal{C}$ using the interleaver $\boldsymbol{\Pi}$. Therefore, the i-polar code can be described by the following recursive equation

$$\mathcal{C}_{m,j} = \mathcal{C}_{m-1,2j}\boldsymbol{\Pi}_{m-1,j} + \mathcal{C}_{m-1,2j+1} \,\big|\, \mathcal{C}_{m-1,2j+1}, \tag{2}$$

for $1 \le m \le M$ and $0 \le j \le 2^{M-m}-1$. The initial conditions are $\mathcal{C}_{0,j} = \{0,1\}$ if $j \in \mathcal{A}$ and $\mathcal{C}_{0,j} = \{0\}$ if $j \in \mathcal{A}^c$. Note that the interleavers $\boldsymbol{\Pi}_{0,j}$ are trivial, since they are of size 1. At the $m$th layer, there are $2^{M-m}$ interleavers of size $2^{m-1}$. Figure 2 shows the graph of an i-polar code of length 8, for which three interleavers are required, i.e., $\boldsymbol{\Pi}_{1,0}$, $\boldsymbol{\Pi}_{1,1}$, and $\boldsymbol{\Pi}_{2,0}$, with sizes 2, 2, and 4, respectively. The interleavers $\boldsymbol{\Pi}_{0,j}$ are omitted because they are trivial.
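The recursion (2) can be turned into an encoder sketch: encode the two halves, permute the upper half's output, and combine as $[\mathbf{x}\boldsymbol{\Pi}+\mathbf{y}, \mathbf{y}]$. With identity interleavers this reduces to the regular polar transform. The function below is our illustrative sketch, not code from the paper:

```python
import random

def ipolar_transform(u, rng=None):
    """Apply the recursion C1*Pi + C2 | C2 to a bit vector u of length 2^M.
    rng=None uses identity interleavers (regular polar transform, natural
    order); otherwise each internal node draws a random interleaver, i.e.,
    the result is one sample from the i-polar ensemble."""
    N = len(u)
    if N == 1:
        return list(u)
    h = N // 2
    x = ipolar_transform(u[:h], rng)   # upper encoder output
    y = ipolar_transform(u[h:], rng)   # lower encoder output
    if rng is not None:
        perm = list(range(h))
        rng.shuffle(perm)
        x = [x[p] for p in perm]       # interleave the upper output
    return [xi ^ yi for xi, yi in zip(x, y)] + y
```

For example, `ipolar_transform(u)` with `rng=None` agrees with the matrix product $\mathbf{u}\,\mathbf{G}_2^{\otimes M}$, while passing a seeded `random.Random` yields one realization of the ensemble.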

The following theorem shows that the interleavers do not change the polarization effect.

###### Theorem 1.

I-polar codes have the same polarization effect as polar codes.

###### Proof.

Reading from right to left, Figure 3 illustrates the channel transformation process of the i-polar code, in which the synthesized channels are de-interleaved at the intermediate stages. The channel transformation process of the regular polar code is obtained by replacing the de-interleavers with the direct links represented by dashed lines in Figure 3. For the channel transformation process of the polar code, the figure starts with $N$ copies of the basic single-step transformation and continues in butterfly patterns until the synthesized channels $W_M^{(i)}$, $0 \le i \le N-1$, are obtained. The synthesized channels at the intermediate stages can be represented as a binary tree similar to that shown in Figure 6 of [2]. The root node represents the channel $W$. The root gives birth to an upper channel and a lower channel, which are represented by the two nodes at level 1. Each of these channels in turn gives birth to an upper channel and a lower channel, and so on. Then, based on the concept of a random tree process, the polarization effect is proved in [2]. We want to prove that inserting de-interleavers at the upper channels, as shown in Figure 3, does not change the polarization effect. At each intermediate stage, there are multiple copies of the same transformation. For the i-polar code, after the channel transformation, the copies of the upper channels are de-interleaved. Since each de-interleaver acts only on channels of the same type, its outputs are just re-ordered channels of the same type, which is the same as in the original polar code. Therefore, by induction, further transformation of the re-ordered channels of the same type yields the same synthesized channels as those of the polar code. ∎

The SC or SCL decoder for polar codes can be easily modified to decode i-polar codes. As proved in Theorem 1, the i-polar code and the polar code produce the same synthesized channels. Therefore, the same bit channel selection algorithms as those designed for polar codes can be employed for i-polar codes. It has been shown that bit channel selection algorithms based on the Gaussian approximation (GA) for density evolution, such as those proposed in [24, 6, 7], are effective for binary-input additive white Gaussian noise (BI-AWGN) channels. The Bhattacharyya parameter can also be employed for bit channel selection [29]. In this paper, we employ the bit channel selection algorithm given in [6] for both i-polar and polar codes. For convenience, we give a brief review of the algorithm proposed in [6]. Assume that the all-zero codeword was transmitted, and consider the bit LLR under SC decoding for each synthesized channel. The idea of GA is to approximate the LLR as a Gaussian random variable with mean $\sigma^2/2$ and variance $\sigma^2$, i.e., the variance equals twice the mean (the symmetry condition). Therefore, the p.d.f. of the LLR random variable can be described by the single parameter $\sigma$. The mutual information of a channel whose LLR has parameter $\sigma$ was shown in [23] to be

$$J(\sigma) = 1 - \int_{-\infty}^{\infty} \frac{e^{-(x-\sigma^2/2)^2/(2\sigma^2)}}{\sqrt{2\pi}\,\sigma}\,\log_2\!\left(1+e^{-x}\right) dx,$$

where $\sigma^2$ is the variance of the LLR random variable. We want to find the transformation of mutual information that corresponds to the channel transformation of the polar code. The initial condition is $I_0^{(0)} = J(2/\sigma_w)$, where $\sigma_w^2$ is the noise variance of the AWGN channel. Now, under SC decoding, assuming that the bit of the upper branch is correctly decoded, the LLR of the channel with index $2i+1$ is the sum of two i.i.d. Gaussian random variables with variance $\sigma^2$. It is therefore a Gaussian random variable with variance $2\sigma^2$, and hence the mutual information is given by

$$I_{\mu+1}^{(2i+1)} = J\!\left(\sqrt{2}\,J^{-1}\!\left(I_{\mu}^{(i)}\right)\right), \tag{3}$$

for $0 \le i \le 2^{\mu}-1$. Also, according to Proposition 4 of [2], $I_{\mu+1}^{(2i)} + I_{\mu+1}^{(2i+1)} = 2 I_{\mu}^{(i)}$, and hence

$$I_{\mu+1}^{(2i)} = 2 I_{\mu}^{(i)} - J\!\left(\sqrt{2}\,J^{-1}\!\left(I_{\mu}^{(i)}\right)\right), \tag{4}$$

for $0 \le i \le 2^{\mu}-1$. Through the recursions (3) and (4) with $\mu = 0, 1, \ldots, M-1$, we can calculate $I_M^{(i)}$ for $0 \le i \le N-1$. The subset $\mathcal{A}$ is selected such that if $i \in \mathcal{A}$, then $I_M^{(i)} \ge I_M^{(j)}$ for all $j \in \mathcal{A}^c$.
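The recursions (3) and (4) can be sketched numerically. In the sketch below, the $J$ function is evaluated by direct trapezoidal integration and inverted by bisection; implementation details such as the integration range and the initial parameter $\sigma = 2/\sigma_w$ for the BI-AWGN channel LLR are our assumptions:

```python
import math

def J(sigma, steps=2000):
    """J(sigma): mutual information of a symmetric Gaussian LLR
    (mean sigma^2/2, variance sigma^2), by trapezoidal integration."""
    if sigma < 1e-10:
        return 0.0
    mu = sigma * sigma / 2.0
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    dx = (hi - lo) / steps
    s = 0.0
    for i in range(steps + 1):
        x = lo + i * dx
        w = 0.5 if i in (0, steps) else 1.0
        pdf = math.exp(-(x - mu) ** 2 / (2.0 * sigma * sigma)) \
              / (math.sqrt(2.0 * math.pi) * sigma)
        s += w * pdf * math.log2(1.0 + math.exp(-x))
    return 1.0 - s * dx

def J_inv(I, lo=1e-6, hi=100.0):
    """Invert J by bisection (J is increasing in sigma)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ga_mutual_info(M, sigma_w):
    """Mutual information of the 2^M synthesized channels via (3)-(4),
    starting from I_0 = J(2/sigma_w) for noise variance sigma_w^2."""
    I = [J(2.0 / sigma_w)]
    for _ in range(M):
        nxt = []
        for Ii in I:
            upper = J(math.sqrt(2.0) * J_inv(Ii))  # (3): index 2i+1
            nxt.append(2.0 * Ii - upper)           # (4): index 2i
            nxt.append(upper)
        I = nxt
    return I
```

Selecting the $K$ indices with the largest returned values then yields the unfrozen set. Note that the recursion conserves the total mutual information by construction, which is a useful sanity check.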

Given the same set $\mathcal{A}$, the i-polar code has the same performance as the polar code under the SC decoder, since the synthesized channels $W_M^{(i)}$, for $0 \le i \le N-1$, are the same for both codes, as shown in the proof of Theorem 1. However, the two codes have different performance levels when a more sophisticated decoder, such as the SCL decoder [22] or the stack decoder [19, 20], is employed. We will show that the WEF of the i-polar code is different from that of the polar code, which implies that the two codes have different performance levels under the ML decoder. Indeed, simulation results show that i-polar codes perform better than polar codes under the SCL decoder.

## IV WEF and IOWEF of I-Polar Codes

### IV-A (N, K, A) Ensemble of I-Polar Codes

As described in Section III, at the $m$th layer of the i-polar graph, there are $2^{M-m}$ interleavers of size $2^{m-1}$. For an interleaver of size $q$, there are $q!$ possible choices. The ensemble of i-polar codes is formed by all possible interleaver combinations given the unfrozen bit set $\mathcal{A}$, where the code length is $N$ and the dimension is $K = |\mathcal{A}|$. It is infeasible to exhaustively enumerate the WEFs of i-polar codes over all possible interleavers when the code size is large. To overcome this difficulty, we assume that all interleavers are selected independently at random and that each interleaver satisfies the uniform interleaver assumption used in the analysis of turbo codes [5].

###### Definition 1.

[5] A uniform interleaver of length $n$ is a probabilistic device that maps a given input binary vector of weight $d$ to each of its $\binom{n}{d}$ distinct permutations with equal probability $1/\binom{n}{d}$.

### Iv-B WEF and IOWEF

Given an $(n, k)$ linear block code $\mathcal{C}$, its WEF is defined as

$$A^{\mathcal{C}}(Y) = \sum_{\mathbf{c} \in \mathcal{C}} Y^{w_H(\mathbf{c})} = \sum_{d=0}^{n} A_d^{\mathcal{C}}\, Y^d,$$

where $w_H(\mathbf{c})$ is the Hamming weight of $\mathbf{c}$, $A_d^{\mathcal{C}}$ is the number of codewords of $\mathcal{C}$ with Hamming weight $d$, and $Y$ is a dummy variable. The WEF can be used to compute the exact probability of undetected errors and an upper bound on the BLER [8].
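For small codes, the WEF can be obtained by directly enumerating all $2^k$ codewords. A minimal sketch of our own (the generator matrices in the usage below are illustrative, not from the paper):

```python
from itertools import product

def exhaustive_wef(G):
    """WEF coefficients {d: A_d} of the code generated by the k x n binary
    matrix G (a list of row lists), by enumerating all 2^k codewords."""
    k, n = len(G), len(G[0])
    wef = {}
    for msg in product((0, 1), repeat=k):
        cw = [sum(msg[i] & G[i][j] for i in range(k)) % 2 for j in range(n)]
        d = sum(cw)
        wef[d] = wef.get(d, 0) + 1
    return wef
```

For example, the (3, 1) repetition code gives $A(Y) = 1 + Y^3$ and the (3, 2) single-parity-check code gives $A(Y) = 1 + 3Y^2$. This brute-force route is exactly what becomes prohibitive at larger $k$, motivating the ensemble-average recursion developed next.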

We define the input-output weight enumerating function (IOWEF) of the code $\mathcal{C}$ as

$$A^{\mathcal{C}}(X, Y) = \sum_{w,d} A_{w,d}^{\mathcal{C}}\, X^w Y^d,$$

where $A_{w,d}^{\mathcal{C}}$ denotes the number of codewords of $\mathcal{C}$ generated by an input message word of Hamming weight $w$ whose output codeword has Hamming weight $d$. It should be noted that the WEF is a polynomial in one variable while the IOWEF is a polynomial in two variables. The relation between the WEF and the IOWEF is given by

$$A^{\mathcal{C}}(Y) = A^{\mathcal{C}}(X=1, Y).$$

The WEF depends only on the code $\mathcal{C}$, as only the weights of codewords are enumerated. The IOWEF, however, depends on the encoder, as it enumerates pairs of Hamming weights of the input message word and the output codeword. Since many different encoders generate the same code $\mathcal{C}$, we assume that a specific encoder is employed whenever the IOWEF is considered. The IOWEF can be used to compute an upper bound on the bit error rate (BER) [8]. It is also important for the study of concatenated coding schemes.

Due to the recursive equation (2), the code $\mathcal{C}_{m,j}$ can be represented by the graph shown in Figure 4. The code $\mathcal{C}_{M,0}$ forms the $(N, K, \mathcal{A})$ ensemble of i-polar codes, where $N = 2^M$ and $K = |\mathcal{A}|$. The input of the encoder of $\mathcal{C}_{m,j}$ is a message sub-vector and the output is the corresponding output vector at the $m$th stage of the i-polar encoder. Define the WEF of the i-polar code averaged over the ensemble as $A^{\mathcal{C}_{m,j}}(Y)$. Assume that the WEFs $A^{\mathcal{C}_{m-1,2j}}(Y)$ and $A^{\mathcal{C}_{m-1,2j+1}}(Y)$ are known. Then $A^{\mathcal{C}_{m,j}}(Y)$ is a function of $A^{\mathcal{C}_{m-1,2j}}(Y)$ and $A^{\mathcal{C}_{m-1,2j+1}}(Y)$. The following lemma is important for calculating $A^{\mathcal{C}_{m,j}}(Y)$.

###### Lemma 2.

Let $\mathbf{c}_1$ and $\mathbf{c}_2$ be length-$n$ binary vectors with Hamming weights $d_1$ and $d_2$, respectively. Assume that $\boldsymbol{\Pi}$ is a uniform interleaver. Then the weight distribution of $[\mathbf{c}_1\boldsymbol{\Pi}+\mathbf{c}_2, \mathbf{c}_2]$ averaged over all possible interleavers is given by

$$\mathbb{E}_{\boldsymbol{\Pi}}\!\left[Y^{w_H(\mathbf{c}_1\boldsymbol{\Pi}+\mathbf{c}_2 \,|\, \mathbf{c}_2)}\right] \triangleq \sum_{\boldsymbol{\Pi}} P\{\boldsymbol{\Pi}\}\, Y^{w_H(\mathbf{c}_1\boldsymbol{\Pi}+\mathbf{c}_2 \,|\, \mathbf{c}_2)} = \sum_{k=\max(0,\, d_1+d_2-n)}^{\min(d_1, d_2)} \frac{\binom{d_2}{k}\binom{n-d_2}{d_1-k}}{\binom{n}{d_1}}\, Y^{d_1+2d_2-2k}.$$
###### Proof.

We first derive the weight distribution of $\mathbf{c}_1\boldsymbol{\Pi}+\mathbf{c}_2$. Since the weight of $\mathbf{c}_1$ is $d_1$, there are $\binom{n}{d_1}$ distinct permutations of $\mathbf{c}_1$, each occurring with equal probability $1/\binom{n}{d_1}$. Among these permutations, let $k$ be the number of positions at which the elements of $\mathbf{c}_1\boldsymbol{\Pi}$ and $\mathbf{c}_2$ are both equal to 1. The minimum value of $k$ can easily be shown to be $\max(0, d_1+d_2-n)$ and the maximum value to be $\min(d_1, d_2)$. Given the value $k$, the Hamming weight of $\mathbf{c}_1\boldsymbol{\Pi}+\mathbf{c}_2$ is $d_1+d_2-2k$, and there are a total of $\binom{d_2}{k}\binom{n-d_2}{d_1-k}$ such permutations. Therefore, the probability that $\mathbf{c}_1\boldsymbol{\Pi}+\mathbf{c}_2$ has weight $d_1+d_2-2k$ is $\binom{d_2}{k}\binom{n-d_2}{d_1-k}/\binom{n}{d_1}$. Finally, the additional concatenation of $\mathbf{c}_2$ gives the Hamming weight of $[\mathbf{c}_1\boldsymbol{\Pi}+\mathbf{c}_2, \mathbf{c}_2]$ as $d_1+2d_2-2k$. The proof is completed. ∎

We are ready to calculate the average WEF based on the recursive equation (2).

###### Theorem 3.

Given the WEFs $A^{\mathcal{C}_{m-1,2j}}(Y)$ and $A^{\mathcal{C}_{m-1,2j+1}}(Y)$, the WEF of the code $\mathcal{C}_{m,j}$ averaged over all possible interleavers is

$$A^{\mathcal{C}_{m,j}}(Y) = \sum_{d_1, d_2} A_{d_1}^{\mathcal{C}_{m-1,2j}} A_{d_2}^{\mathcal{C}_{m-1,2j+1}} \sum_{k=\max(0,\, d_1+d_2-n)}^{\min(d_1, d_2)} \frac{\binom{d_2}{k}\binom{n-d_2}{d_1-k}}{\binom{n}{d_1}}\, Y^{d_1+2d_2-2k} \triangleq H_m\!\left(A^{\mathcal{C}_{m-1,2j}}(Y),\, A^{\mathcal{C}_{m-1,2j+1}}(Y)\right),$$

where $n = 2^{m-1}$ is the common length of $\mathcal{C}_{m-1,2j}$ and $\mathcal{C}_{m-1,2j+1}$.

###### Proof.

The averaged WEF of $\mathcal{C}_{m,j}$ can be written as

$$A^{\mathcal{C}_{m,j}}(Y) = \mathbb{E}_{\boldsymbol{\Pi}_{m-1,j}}\!\left[\sum_{\mathbf{c}_1 \in \mathcal{C}_{m-1,2j}}\ \sum_{\mathbf{c}_2 \in \mathcal{C}_{m-1,2j+1}} Y^{w_H(\mathbf{c}_1\boldsymbol{\Pi}_{m-1,j}+\mathbf{c}_2 \,|\, \mathbf{c}_2)}\right] = \sum_{\mathbf{c}_1 \in \mathcal{C}_{m-1,2j}}\ \sum_{\mathbf{c}_2 \in \mathcal{C}_{m-1,2j+1}} \mathbb{E}_{\boldsymbol{\Pi}_{m-1,j}}\!\left[Y^{w_H(\mathbf{c}_1\boldsymbol{\Pi}_{m-1,j}+\mathbf{c}_2 \,|\, \mathbf{c}_2)}\right].$$

By Lemma 2, we have

$$A^{\mathcal{C}_{m,j}}(Y) = \sum_{\mathbf{c}_1 \in \mathcal{C}_{m-1,2j}}\ \sum_{\mathbf{c}_2 \in \mathcal{C}_{m-1,2j+1}}\ \sum_{k=\max(0,\, w_H(\mathbf{c}_1)+w_H(\mathbf{c}_2)-n)}^{\min(w_H(\mathbf{c}_1),\, w_H(\mathbf{c}_2))} \frac{\binom{w_H(\mathbf{c}_2)}{k}\binom{n-w_H(\mathbf{c}_2)}{w_H(\mathbf{c}_1)-k}}{\binom{n}{w_H(\mathbf{c}_1)}}\, Y^{w_H(\mathbf{c}_1)+2w_H(\mathbf{c}_2)-2k}.$$

The number of codeword combinations of $\mathbf{c}_1$ and $\mathbf{c}_2$ with $w_H(\mathbf{c}_1) = d_1$ and $w_H(\mathbf{c}_2) = d_2$ is $A_{d_1}^{\mathcal{C}_{m-1,2j}} A_{d_2}^{\mathcal{C}_{m-1,2j+1}}$. The proof is completed. ∎
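Theorem 3 translates directly into a recursion on weight-coefficient dictionaries. The sketch below (our own, with function names that are not from the paper) computes the ensemble-average WEF starting from the initial conditions $A^{\mathcal{C}_{0,j}}(Y) = 1+Y$ for $j \in \mathcal{A}$ and $1$ otherwise:

```python
from math import comb

def combine_wef(A1, A2, n):
    """One application of H_m from Theorem 3: average WEF of
    C1*Pi + C2 | C2, where A1, A2 map Hamming weight -> (average)
    multiplicity and n is the common length of C1 and C2."""
    out = {}
    for d1, a1 in A1.items():
        for d2, a2 in A2.items():
            for k in range(max(0, d1 + d2 - n), min(d1, d2) + 1):
                p = comb(d2, k) * comb(n - d2, d1 - k) / comb(n, d1)
                w = d1 + 2 * d2 - 2 * k
                out[w] = out.get(w, 0.0) + a1 * a2 * p
    return out

def average_wef(M, unfrozen):
    """Average WEF of the (N, K, A) i-polar ensemble, N = 2^M;
    unfrozen[j] is True iff bit channel j is in A."""
    wefs = [{0: 1.0, 1: 1.0} if f else {0: 1.0} for f in unfrozen]
    n = 1
    while len(wefs) > 1:
        wefs = [combine_wef(wefs[2 * j], wefs[2 * j + 1], n)
                for j in range(len(wefs) // 2)]
        n *= 2
    return wefs[0]
```

As a sanity check, a rate-1 ensemble must return the binomial weight distribution of the whole space, since the code is then all of $\{0,1\}^N$ regardless of the interleavers.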

Similarly, for the average IOWEF, we have the following theorem.

###### Theorem 4.

Given the IOWEFs $A^{\mathcal{C}_{m-1,2j}}(X,Y)$ and $A^{\mathcal{C}_{m-1,2j+1}}(X,Y)$, the IOWEF of the code $\mathcal{C}_{m,j}$ averaged over all possible interleavers is

$$A^{\mathcal{C}_{m,j}}(X, Y) = \sum_{w_1, w_2, d_1, d_2} A_{w_1, d_1}^{\mathcal{C}_{m-1,2j}} A_{w_2, d_2}^{\mathcal{C}_{m-1,2j+1}} \sum_{k=\max(0,\, d_1+d_2-n)}^{\min(d_1, d_2)} \frac{\binom{d_2}{k}\binom{n-d_2}{d_1-k}}{\binom{n}{d_1}}\, X^{w_1+w_2}\, Y^{d_1+2d_2-2k} \triangleq F_m\!\left(A^{\mathcal{C}_{m-1,2j}}(X,Y),\, A^{\mathcal{C}_{m-1,2j+1}}(X,Y)\right),$$

where $n = 2^{m-1}$.

Based on Theorem 3 and Theorem 4, we can compute the average WEF and IOWEF using the following recursive equations

$$A^{\mathcal{C}_{m,j}}(Y) = H_m\!\left(A^{\mathcal{C}_{m-1,2j}}(Y),\, A^{\mathcal{C}_{m-1,2j+1}}(Y)\right), \tag{5}$$

$$A^{\mathcal{C}_{m,j}}(X, Y) = F_m\!\left(A^{\mathcal{C}_{m-1,2j}}(X,Y),\, A^{\mathcal{C}_{m-1,2j+1}}(X,Y)\right), \tag{6}$$

for $1 \le m \le M$ and $0 \le j \le 2^{M-m}-1$, where the initial conditions are given by

$$A^{\mathcal{C}_{0,j}}(Y) = \begin{cases} 1+Y, & j \in \mathcal{A},\\ 1, & j \in \mathcal{A}^c, \end{cases}$$

and

$$A^{\mathcal{C}_{0,j}}(X, Y) = \begin{cases} 1+XY, & j \in \mathcal{A},\\ 1, & j \in \mathcal{A}^c. \end{cases}$$

The interleaver can also be applied at the output of every lower encoder. In this case, the recursive equation becomes

$$\bar{\mathcal{C}}_{m,j} = \bar{\mathcal{C}}_{m-1,2j} + \bar{\mathcal{C}}_{m-1,2j+1}\boldsymbol{\Pi}_{m-1,j} \,\big|\, \bar{\mathcal{C}}_{m-1,2j+1}\boldsymbol{\Pi}_{m-1,j},$$

for $1 \le m \le M$ and $0 \le j \le 2^{M-m}-1$. The initial conditions are $\bar{\mathcal{C}}_{0,j} = \{0,1\}$ if $j \in \mathcal{A}$ and $\bar{\mathcal{C}}_{0,j} = \{0\}$ if $j \in \mathcal{A}^c$. De-interleaving both segments by $\boldsymbol{\Pi}_{m-1,j}^{-1}$ does not change the WEF of the code. Therefore, the following code has the same WEF as $\bar{\mathcal{C}}_{m,j}$:

$$\bar{\mathcal{C}}_{m-1,2j}\bar{\boldsymbol{\Pi}}_{m-1,j} + \bar{\mathcal{C}}_{m-1,2j+1} \,\big|\, \bar{\mathcal{C}}_{m-1,2j+1},$$

where $\bar{\boldsymbol{\Pi}}_{m-1,j} = \boldsymbol{\Pi}_{m-1,j}^{-1}$. Since the mapping $\boldsymbol{\Pi} \mapsto \boldsymbol{\Pi}^{-1}$ is bijective, the interleaver $\bar{\boldsymbol{\Pi}}_{m-1,j}$ is uniform if $\boldsymbol{\Pi}_{m-1,j}$ is uniform. Comparing with (2), $\bar{\mathcal{C}}_{m,j}$, averaged over all interleavers, has the same average WEF as $\mathcal{C}_{m,j}$. Therefore, it is sufficient to consider only the former option.

### Iv-C WEFs of (32, 16) I-Polar Code and Polar Code

The WEFs of the (32, 16) i-polar code and polar code are compared. The set $\mathcal{A}$ used for both the i-polar code and the polar code is given by

$$\mathcal{A} = \{11, 13, 14, 15, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31\}.$$

This set is obtained using the bit channel selection algorithm described previously, which was proposed in [6]. Table I gives the coefficients of the WEFs of all test cases: the Hamming weights, denoted as $d$, are listed in the first column, and the remaining columns give the coefficients of all test cases. The WEF averaged over the ensemble of i-polar codes is denoted as WEF(A), which is computed using the recursive equation (5). Since the code size is small, the WEF of a realization of i-polar codes can be enumerated exhaustively. We take 1000 independent realizations of i-polar codes and compute the WEF of each realization. In Table I, WEF(B) denotes the sample average of the WEFs over the 1000 realizations. The WEF of the polar code is also enumerated exhaustively and is denoted as WEF(C). It can be observed that the sample average WEF(B) is very close to the (analytical) ensemble average WEF(A). Also, among the 1000 realizations, only two distinct WEFs are observed, denoted as WEF type 1 and WEF type 2 in Table I; they appear 991 times and 9 times, respectively. Note that WEF type 1 and WEF type 2 are just the two WEFs that occur with higher probability than the others; there are other WEFs, e.g., the WEF of the polar code, with probabilities so small that they do not appear among the 1000 realizations. Both WEF type 1 and WEF type 2 are close to WEF(A), which means that, with high probability, any realization is as good as the ensemble average WEF(A). The WEF(C) of the polar code concentrates on a smaller number of Hamming weights; i.e., there are no codewords of Hamming weights 10, 14, 18, and 22, as shown in Table I. The reason is that the i-polar code contains interleavers, which have the effect of spreading the Hamming weights of codewords more widely.

For linear codes, the minimum Hamming weight, denoted as $d_{\min}$, and its multiplicity, denoted as $A_{d_{\min}}$, dominate the performance at high SNRs. Table I shows that both the polar code and the i-polar code have 8 codewords of minimum Hamming weight 4. This means that both codes have the same error probability at high SNRs under ML decoding. This phenomenon can be observed from the upper bounds and simulated BLER curves of both codes shown in Figure 6.

The parameters $d_{\min}$ and $A_{d_{\min}}$ for the i-polar and polar codes with block lengths varying from 32 to 480 are shown in Figure 5. The parameters for i-polar codes are obtained through the recursive equation (5). Since there are no analytical WEFs for polar codes, we use the SCL decoder with a large list size to search for the minimum-weight codewords, as proposed in [17]. The results show that both codes have the same minimum Hamming weight. However, the multiplicities $A_{d_{\min}}$ of i-polar codes are smaller than or equal to those of polar codes, which means that, in some cases, i-polar codes perform better than polar codes under ML decoding at high SNRs.

### Iv-D BLER Upper Bound

In this paper, we focus on the performance of codes over BI-AWGN channels. For BI-AWGN channels, the $i$th received signal can be represented as

$$y_i = \sqrt{E_s}\,(1-2c_i) + w_i, \qquad i = 0, 1, \ldots, N-1,$$

where $c_i$ is the $i$th bit of the codeword $\mathbf{c}$, $w_i$ is the zero-mean additive white Gaussian noise, and $E_s$ is the symbol energy. Given the WEF of a code $\mathcal{C}$, the union bound on the BLER over BI-AWGN channels is given by

$$P_{\mathrm{BLER}} \le \sum_{d \neq 0} A_d^{\mathcal{C}}\, Q\!\left(\sqrt{2 d \rho}\right), \tag{7}$$

where $\rho$ is the signal-to-noise ratio and $Q(\cdot)$ is the Gaussian tail function. However, the union bound may be too loose at low SNRs. A tighter upper bound, called the simple bound, was proposed in [8]. For convenience, the bound is given here as

$$P_{\mathrm{BLER}} \le \sum_{d=d_{\min}}^{N-K+1} \min\left\{e^{-N E(\rho,\delta)},\ A_d^{\mathcal{C}}\, Q\!\left(\sqrt{2 d \rho}\right)\right\}, \tag{8}$$

where $\delta = d/N$ and $r(\delta)$ is the normalized exponent of the weight spectrum. For $c_0(\delta) < \rho$,

$$E(\rho,\delta) = \frac{1}{2}\ln\!\left[1 - 2 c_0(\delta) f(\rho,\delta)\right] + \frac{\rho f(\rho,\delta)}{1+f(\rho,\delta)}, \qquad c_0(\delta) < \rho.$$

Otherwise,

$$E(\rho,\delta) = -r(\delta) + \delta\rho.$$

The functions $c_0(\delta)$ and $f(\rho,\delta)$ are given by

$$c_0(\delta) = \left(1-e^{-2r(\delta)}\right)\frac{1-\delta}{2\delta},$$

and

$$f(\rho,\delta) = \sqrt{\rho\, c_0(\delta) + 2\rho + \rho^2} - \rho - 1.$$

It should be noted that a similar bound can be employed to obtain the upper bound on the bit error rate (BER) if the IOWEF of the code is known. However, in this paper, we only focus on the BLER upper bound.

Figure 6 shows the BLER upper bounds for the polar code and the i-polar code based on the WEFs given in Table I. BLER simulations are also conducted using the SCL decoder with a large list size; in this case, we have verified that the performance of the SCL decoder is very close to the ML performance. The BLER upper bounds show that the i-polar code is slightly better than the polar code at low SNRs. Simulation results also show such a slight difference.
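The union bound (7) is straightforward to evaluate once a WEF is available; a minimal sketch (the WEF values in the test are illustrative, not taken from Table I):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_bler(wef, rho):
    """Union bound (7): sum over d != 0 of A_d * Q(sqrt(2 d rho)),
    with `wef` mapping Hamming weight d -> multiplicity A_d and
    `rho` the signal-to-noise ratio."""
    return sum(a * q_func(math.sqrt(2.0 * d * rho))
               for d, a in wef.items() if d != 0)
```

Plugging an ensemble-average WEF into this sum gives an analytical BLER curve; the tighter simple bound (8) additionally caps each term by $e^{-NE(\rho,\delta)}$.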

## V WEF of Concatenated Coding Schemes

Polar codes are weak at short to moderate block lengths. Therefore, concatenated coding schemes are often considered in the design of polar codes. A famous coding scheme is to concatenate a CRC code as the outer code with a polar code as the inner code [22]. Reed-Solomon codes, BCH codes, convolutional codes, and LDPC codes have also been considered as outer codes with a polar code as the inner code [18, 26, 10]. So far, for concatenated coding schemes, most research works focus on asymptotic analysis, i.e., the performance analysis as the block length approaches infinity. The performance levels for codes of finite block lengths rely on simulations, which are time-consuming. We will develop the WEF analysis for concatenated codes, which can then be used to evaluate the BLER performance using the bound given in (8).

We consider a concatenated coding scheme as shown in Figure 7. The encoder consists of $P$ parallel outer encoders of the same code, which is called the outer component code. For the $p$th outer encoder, the input is a message word and the output is a codeword. The $P$ output codewords from the outer encoders form a super-codeword. The super-codeword is then interleaved by an interleaver, represented by a permutation matrix, and the interleaved vector is partitioned into $Q$ blocks of equal size. Then, $Q$ parallel i-polar encoders of the same inner component code are employed to encode the $Q$ input vectors, and finally the $Q$ output codewords form the final super-codeword.
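The encoding pipeline just described, namely $P$ outer codewords, a super-codeword interleaver, then $Q$ inner encoders, can be sketched generically. The toy outer and inner encoders used in the test are stand-ins for the RRA/IRA outer code and the i-polar inner code:

```python
def concat_encode(messages, outer_enc, inner_enc, perm, Q):
    """Encode P message words: outer-encode each, concatenate into a
    super-codeword, interleave with the permutation `perm`, split into
    Q equal blocks, and inner-encode each block."""
    super_cw = [b for msg in messages for b in outer_enc(msg)]
    assert len(perm) == len(super_cw) and len(super_cw) % Q == 0
    interleaved = [super_cw[p] for p in perm]
    blk = len(interleaved) // Q
    return [inner_enc(interleaved[i * blk:(i + 1) * blk]) for i in range(Q)]
```

Because the $Q$ inner blocks are independent after the split, each inner codeword can be decoded by its own SCL decoder in parallel, which is the latency advantage discussed in the introduction.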

As indicated in Figure 7 by dashed blocks, we may represent the super-code corresponding to the $P$ parallel outer encoders as the outer super-code and the super-code corresponding to the $Q$ parallel inner encoders as the inner super-code. The entire system then becomes a simple concatenation of the outer super-code and the inner super-code with an interleaver in between. Given the WEF of the outer component code and the IOWEF of the inner component code, we can calculate the WEF of the outer super-code and the IOWEF of the inner super-code