Toward Terabits-per-second Communications: A High-Throughput Hardware Implementation of G_N-Coset Codes

04/21/2020 · Jiajie Tong et al. · HUAWEI Technologies Co., Ltd.

Recently, a parallel decoding algorithm for G_N-coset codes was proposed. The algorithm exploits two equivalent decoding graphs. For each graph, the inner code part, which consists of independent component codes, is decoded in parallel. The extrinsic information of the code bits is obtained and iteratively exchanged between the graphs until convergence. This algorithm enjoys a higher decoding parallelism than the previous successive cancellation algorithms, owing to the avoidance of serial outer code processing. In this work, we present a hardware implementation of the parallel decoding algorithm that supports a maximum code length of N=16384. We complete the decoder's physical layout in the TSMC 16nm process; its size is 999.936μm × 999.936μm, i.e., approximately 1.00mm². The decoder's area efficiency and power consumption are evaluated for the cases of N=16384, K=13225 and N=16384, K=14161. Scaled to the 7nm process, the decoder's area efficiency exceeds 477Gbps/mm² and 533Gbps/mm², respectively, with five iterations.


I Introduction

I-A Background

High throughput is one of the primary targets for the evolution of mobile communications. The next generation of mobile communications, i.e., 6G, is expected to supply Tb/s throughput [1], which requires a substantial increase over the 5G standards.

G_N-coset codes, defined by Arıkan in [2], are a class of linear block codes with generator matrix G_N. G_N is an N×N binary matrix defined as G_N = F^{⊗n}, in which F = [[1, 0], [1, 1]], N = 2^n, and F^{⊗n} denotes the n-th Kronecker power of F.
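A minimal sketch of this construction, using NumPy (the helper name gn_matrix is ours, not the paper's notation):

```python
import numpy as np

def gn_matrix(n: int) -> np.ndarray:
    """Return G_N = F^(⊗n) over GF(2), where F = [[1, 0], [1, 1]] and N = 2^n."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2  # build the n-th Kronecker power step by step
    return G

# Example: G_8 is an 8x8 lower-triangular binary matrix.
G8 = gn_matrix(3)
```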

Polar codes are a specific type of G_N-coset codes and have been adopted for the 5G control channel. The throughput of polar codes is limited by successive cancellation (SC) decoders, which are serial in nature.

Recently, a parallel decoding framework for G_N-coset codes was proposed in [3]. The code is decoded alternately on two factor graphs, the original graph and a stage-permuted graph, as shown in Fig. 1. The permuted graph is generated by swapping the inner codes and outer codes. The decoder only decodes the inner codes of each graph. In each graph, the inner codes are independent sub-codes that can be decoded in parallel. The code construction under the parallel decoding algorithm differs from that of polar/RM codes and is studied separately in [3].

I-B Motivations and Contributions

This paper introduces an ASIC implementation based on the parallel decoding framework (PDF). We set up a decoder that supports G_N-coset codes of length up to N=16384. It deploys 128 sub-decoders to decode the 128 independent sub-codes in parallel. The target of high throughput and area efficiency is decomposed into reducing the sub-code decoding latency, the worst-case iteration time and the chip area, each of which is optimized separately.

We adopt the proposal in [4], which employs successive cancellation (SC) decoders as the component decoders. It supports soft-in-hard-out decoding, which results in low decoding complexity and reduced interconnection among the component decoders. In this work, we propose hardware-oriented optimizations on LLR generation and quantization. We implement the whole decoder in hardware and present the ASIC layout to evaluate multiple key metrics. The hardware-specific data are obtained from the cell flip ratios in the circuit simulation results and the parasitic capacitance extracted from the layout. With the TSMC 16nm process, the area efficiency exceeds 108 Gbps/mm² with five iterations; the corresponding power consumption and energy efficiency are reported in Section IV. Scaled to the 7nm process, the area efficiency can reach 477 Gbps/mm² with five iterations.

Fig. 1: For G_N-coset codes, equivalent encoding graphs may be obtained based on stage permutations: (a) Arıkan’s original encoding graph [2] and (b) the stage-permuted encoding graph. Each node adds (mod-2) the signals on all incoming edges from the left and sends the result out on all edges to the right.

II Parallel Decoding

A parallel decoding framework is introduced in [3], where three types of component decoders, (i) soft-output SC list (SCL), (ii) soft-output SC permutation list and (iii) soft cancellation (SCAN), are employed to decode the sub-codes (inner codes). To achieve even higher area efficiency, this work adopts SC [4], i.e., without list decoding, as the sub-decoder. In this section, we describe the parallel decoding framework with successive cancellation (PDF-SC) algorithm from the implementation point of view.

II-A Parallel decoding framework (PDF)

We use a pair of indices (i, j) to denote the j-th code bit position of the i-th inner sub-code, and a single index t to denote the corresponding code bit position in the G_N-coset code. They have the following relationship

(1)
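The exact form of the index mapping in (1) is not reproduced here; the sketch below illustrates one plausible arrangement, assuming the codeword is viewed as an N_out × N_in array with inner code i occupying a contiguous block on the original graph and an interleaved (stride-N_out) set of positions on the stage-permuted graph. The function name and this arrangement are our own illustration, not the paper's notation.

```python
def inner_to_codeword_index(i: int, j: int, n_in: int, n_out: int, graph: int) -> int:
    """Map bit j of inner sub-code i to a codeword position t (illustrative only).

    Assumption: on graph 1 the inner codes occupy consecutive blocks of length
    n_in; on the stage-permuted graph 2 they are interleaved with stride n_out.
    """
    if graph == 1:
        return i * n_in + j
    return j * n_out + i

# With N = 16384 split into 128 inner codes of length 128:
t = inner_to_codeword_index(i=5, j=7, n_in=128, n_out=128, graph=1)  # t = 647
```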

The aforementioned parallel turbo-like decoding framework is described in Algorithm 1. In every two iterations, the algorithm alternately decodes the two graphs (line 4 in Algorithm 1) with the inner component decoders. The i-th component decoder is an SC decoder assisted by an error detector.

Note that each component decoder takes a soft input vector but outputs hard code bit estimates and an error detection result. The mismatch between soft input and hard output poses a challenge for iterative decoding, since the SC component decoders in the next iteration cannot directly take the hard output from the previous iteration as input. To solve this problem, [4] proposes to generate soft values from the hard outputs. Specifically, the log-likelihood ratio (LLR) of each code bit in the current iteration is calculated from the hard decoder outputs of the alternate graph (line 5).

For each component decoder, the hard output vector and the soft input vector have the inner code length, and the error detection indicator is a single binary value.

The independent inner sub-codes allow us to instantiate one component decoder per sub-code for the maximum degree of parallelism. After the maximum number of iterations, the algorithm outputs all hard bits.

0:    The received signal ;
0:    The recovered codeword: ;
1:  Initialize: for ;
2:  for iterations:  do
3:     Select decoding graph: ;
4:     for inner component decoders: (in parallel) do
5:        ;
6:        ;
7:     end for
8:  end for
9:  for  do
10:      Assemble the codeword bits; the index mapping is described in (1).
11:  end for
Algorithm 1 Parallel decoding framework.
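The control flow of Algorithm 1 can be summarized by the following sketch. The helpers index_map, llr_generator and sc_component are placeholders standing in for (1), (5)-(6) and Algorithm 2, respectively; only the loop structure mirrors the framework, not the authors' RTL.

```python
import numpy as np

def pdf_decode(y, n_iter, n_sub, n_in, index_map, llr_generator, sc_component):
    """Structural sketch of Algorithm 1 (parallel decoding framework).

    index_map(i, j, g)   -> codeword position of bit j of inner code i on graph g, cf. (1)
    llr_generator(...)   -> input LLRs built from y and previous hard outputs, cf. (5)-(6)
    sc_component(llr_in) -> Algorithm 2: returns (hard code bits, error-detected flag)
    """
    n = n_sub * n_in
    hard_prev = np.zeros(n, dtype=np.uint8)    # hard outputs of iteration T-1
    hard_prev2 = np.zeros(n, dtype=np.uint8)   # hard outputs of iteration T-2
    err = np.ones(n_sub, dtype=bool)           # per-sub-code error indicators

    for t in range(n_iter):
        g = (t % 2) + 1                        # alternate between the two decoding graphs
        hard_new = hard_prev.copy()
        for i in range(n_sub):                 # runs fully in parallel in hardware
            pos = [index_map(i, j, g) for j in range(n_in)]
            llr_in = llr_generator(y[pos], hard_prev[pos], hard_prev2[pos], err[i])
            bits, err[i] = sc_component(llr_in)
            hard_new[pos] = bits
        hard_prev2, hard_prev = hard_prev, hard_new
        if not err.any():                      # early termination: all syndrome checks pass
            break

    return hard_prev                           # recovered codeword estimate
```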

II-B Component decoder

The component decoder [4] is described in Algorithm 2. Before each SC decoding, error detection is performed. This is achieved by applying a syndrome check based on hard decisions (lines 2-5 in Algorithm 2). The cases with detected errors are denoted as Type-1, and otherwise Type-2.

The error detector brings a two-fold advantage. On the one hand, since Type-2 component codes require no further SC decoding, this approach reduces power consumption. If all sub-codes pass error detection, decoding can be terminated early for further power saving. On the other hand, the error detection result provides additional information: the input LLRs of Type-2 component codes are more reliable than those of Type-1. This information can be used to improve the overall performance when estimating the input LLRs from the hard outputs.

0:    The input LLRs; the frozen set;
0:    Binary output; error detection indicator;
1:  ;
2:  Hard decisions:   ;
3:  Vector ;
4:  Syndrome check:
5:  if  then
6:     ;
7:     SC decoding: ;
8:  end if
Algorithm 2 The i-th component decoder.
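A minimal model of Algorithm 2 is sketched below. The syndrome check uses the fact that G_N is its own inverse over GF(2): the hard decisions are mapped back to the message domain and tested against the frozen set, and SC decoding is invoked only when the check fails. The sc_decode helper is a placeholder for any standard SC implementation.

```python
import numpy as np

def component_decode(llr_in, frozen_mask, G, sc_decode):
    """Sketch of the syndrome-check-assisted component decoder (Algorithm 2).

    llr_in      : input LLRs for one inner code
    frozen_mask : boolean array, True at frozen (zero-valued) message positions
    G           : generator matrix of the inner code (its own inverse over GF(2))
    sc_decode   : fallback SC decoder returning hard code bits
    """
    hard = (np.asarray(llr_in) < 0).astype(np.int64)   # hard decisions on input LLRs
    u_hat = (hard @ G.astype(np.int64)) % 2            # map back to the message domain
    error_detected = bool(np.any(u_hat[frozen_mask]))  # nonzero frozen bit => error
    if error_detected:
        hard = sc_decode(llr_in, frozen_mask)          # only Type-1 codes run SC decoding
    return hard, error_detected
```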

II-C Input LLR generator

In each iteration, the input LLRs are calculated by

  • Type-1:

    (2)
  • Type-2:

    (3)

where a set of damping factors is defined for each iteration, and the hard-output term is taken from the alternate graph according to the index mapping in (1). The specific values of the damping factors are optimized in [4].

Since the SC decoder is invariant to input LLR scaling, we can cancel the noise variance during LLR initialization. By multiplying both sides of (2) and (3) by σ²/2 (the reciprocal of the channel LLR scaling factor 2/σ²), the equations are simplified as follows

(4)

where y is the received signal.

We use a pair of new coefficients to replace the damping factors. Hence, (4) can be replaced by (5).

(5)

in which the offset term is determined by the binary error detection indicator and the hard outputs of the previous two iterations, as in (6). The input LLR calculation thus reduces to only one addition operation.

(6)
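The simplified computation can be visualized with the sketch below: the input to each SC core is the received sample plus an offset determined by the error-detection flag and the two most recent hard outputs, so each code bit costs a single addition. The coefficient values and the exact form of the offset are placeholders for illustration; the actual iteration-dependent damping factors are those optimized in [4].

```python
def input_llr(y, hard_prev, hard_prev2, error_detected, w_type1=1.0, w_type2=2.0):
    """One-addition input LLR generation (structure of (5) and (6), illustrative values).

    y             : received samples of this sub-code (channel scaling already cancelled)
    hard_prev     : hard outputs of the previous iteration (alternate graph)
    hard_prev2    : hard outputs from two iterations ago
    error_detected: Type-1 (True) / Type-2 (False) indicator of this sub-code
    w_type1/2     : placeholder magnitudes; the real damping factors are optimized in [4]
    """
    # Map previous hard decisions to BPSK-style signs: bit 0 -> +1, bit 1 -> -1.
    s_prev = 1.0 - 2.0 * hard_prev
    s_prev2 = 1.0 - 2.0 * hard_prev2
    w = w_type1 if error_detected else w_type2       # Type-2 outputs are more reliable
    offset = 0.5 * w * (s_prev + s_prev2)            # offset term of (6), hypothetical form
    return y + offset                                # single addition per code bit, cf. (5)
```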

III An ASIC Implementation

In this section, we present the ASIC implementation of a PDF-SC decoder in the TSMC 16nm process for N=16384. The hardware optimization addresses both the SC decoders for the component codes and the overall parallel decoding framework for G_N-coset codes.

III-A Bit Quantization

Lower-precision quantization is key to higher throughput, thanks to its reduced implementation area and increased clock frequency. As a tradeoff, some performance loss is expected. To maximize throughput while retaining performance, we must determine an appropriate quantization width. Specifically, we use simulation to find the smallest quantization width for which a fixed-point decoder stays within an acceptable performance loss of a floating-point decoder.
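The quantization study assumes a uniform, saturating fixed-point representation of the LLRs; a simple sketch of such a quantizer is shown below. The step size and symmetric clipping are our assumptions, not the exact format used in the implementation.

```python
import numpy as np

def quantize_llr(llr, n_bits, step=0.5):
    """Uniformly quantize LLRs to a signed n_bits representation with saturation."""
    q_max = 2 ** (n_bits - 1) - 1            # e.g. +-15 for 5 bits, +-31 for 6 bits
    q = np.round(np.asarray(llr) / step)
    q = np.clip(q, -q_max, q_max)            # symmetric saturation
    return q * step                          # dequantized value used in BLER simulation

# Example: compare 6-, 5- and 4-bit representations of the same LLR vector.
llrs = np.array([-9.7, -1.2, 0.3, 4.8, 12.5])
for b in (6, 5, 4):
    print(b, quantize_llr(llrs, b))
```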

First, we compare the performance of the component codes under Algorithm 2. We test two component-code cases with different quantization widths. According to Fig. 2, 6-bit quantization achieves the same performance as floating point, 5-bit quantization incurs a small loss, and 4-bit quantization yields a significant loss. Therefore, we set the quantization width to 5 or 6 bits.

Fig. 2: Sub-Decoder Performance Comparison between Floating Point and Fixed Point.

We then simulate the BLER performance of the long codes (N=16384, K=13225) and (N=16384, K=14161) under different quantizations, as shown in Fig. 3. Similarly, 6-bit quantization shows no performance loss, and 5-bit quantization only incurs a small loss for both code rates. Again, 4-bit quantization suffers from noticeable performance degradation.

Fig. 3: Decoder Performance Comparison between Floating Point and Fixed Point.

III-B SC Core Optimization

A component SC decoder is optimized via Rate-0 nodes, Rate-1 nodes [5], single parity check (SPC) nodes and repetition (REP) nodes [6]. The decoder skips all Rate-0 nodes and decodes Rate-1, SPC and REP nodes in parallel for code blocks shorter than 32. If none of these applies, a maximum-likelihood (ML) multi-bit decision [7] is employed for 4-bit blocks.
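The special nodes are identified from the frozen-bit pattern of each sub-tree of the SC decoding tree; a minimal classifier in the spirit of [5], [6] is sketched below (the names and the generic ML fallback are ours).

```python
def classify_node(frozen):
    """Classify an SC sub-tree by its frozen-bit pattern (True = frozen leaf)."""
    n, n_frozen = len(frozen), sum(frozen)
    if n_frozen == n:
        return 'RATE0'   # all bits frozen: output known, node skipped entirely
    if n_frozen == 0:
        return 'RATE1'   # no frozen bits: hard decisions decoded in parallel
    if n_frozen == n - 1 and not frozen[-1]:
        return 'REP'     # repetition code: single information bit
    if n_frozen == 1 and frozen[0]:
        return 'SPC'     # single parity check: only the first bit frozen
    return 'ML'          # otherwise fall back to the 4-bit ML multi-bit decision [7]
```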

The architecture of the SC decoders used here is described in [8], with the supported code length reduced from 32768 to 128 to save area. With the TSMC 16nm technology, an SC core synthesizes to an area of 4,100 μm². Under a 1.05 GHz clock frequency, its latency is shown in Table I.

Information Bits (K)   111    115    119    122
Latency (cycles)       24     19     18     13
Latency (ns)           22.8   18.05  17.1   12.35
TABLE I: Sub-decoder decoding latency for N=128 polar codes.

III-C SC Core Sharing

A unique design that significantly reduces area is “SC core sharing”. In particular, we bind four SC cores into a sub-decoder group. The four SC cores share input/output pins, LLR updating circuits and error-detector-related components. This sharing saves considerable local computation resources and global routing resources, at the cost of a longer latency between iterations; overall, both area efficiency and power efficiency are improved.

In addition, we reuse the adders in the SC core to perform the input LLR addition in (5). These adders are otherwise used for the g-function calculation in each processing element (PE) [9]. Altogether, a considerable amount of area is saved for each sub-decoder group. Fig. 4 shows the architecture of a sub-decoder group, including how the adders in the PEs are reused. We run the synthesis flow with the TSMC 16nm process, and the resulting area of a sub-decoder group is 19,400 μm².
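For reference, the two processing-element operations of an LLR-based SC decoder [9] are shown below in the common min-sum form; the adder in the g-function is the one reused here for the input LLR addition in (5). This is the textbook formulation, not the exact fixed-point circuit.

```python
import numpy as np

def f_function(a, b):
    """Check-node update of the SC processing element (min-sum approximation)."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_function(a, b, u):
    """Variable-node update; its adder is the one reused for the addition in (5).
    u is the partial-sum bit from previously decoded bits."""
    return b + (1 - 2 * u) * a
```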

Fig. 4: The sub-decoder group architecture.

III-D Global Layout

Combining all algorithmic and hardware optimizations, the synthesized decoder ASIC requires approximately 1.00 mm² of area. The global layout is presented in Fig. 5. At the center of the layout is the top logic, including the input channel LLR storage, the finite state machine (FSM) controller, the interleaved connection routing and the output buffer. The aforementioned 32 sub-decoder groups (SG in the figure) are placed around the top logic, highlighted in different colors.

Fig. 5: The global layout of the ASIC for parallel decoding.

IV Key Performance Indicators

Info size  Iter.  Es/N0 (dB)  Latency (ns)  Area Eff. (Gbps/mm²) at 16nm / 10nm / 7nm
13225      4      7.14        91.2          135.07 / 310.66 / 596.47
13225      5      6.82        114           108.06 / 248.53 / 477.17
13225      6      6.55        136.8         90.05  / 207.11 / 397.64
13225      7      6.36        159.6         77.18  / 177.52 / 340.84
13225      8      6.20        182.4         67.53  / 155.33 / 298.23
14161      4      7.79        87.4          150.92 / 347.11 / 666.45
14161      5      7.48        109.25        120.73 / 277.69 / 533.16
14161      6      7.22        131.1         100.61 / 231.41 / 444.30
14161      7      7.06        152.95        86.24  / 198.35 / 380.83
14161      8      6.97        174.8         75.46  / 173.55 / 322.22
TABLE II: Decoder area efficiency in the 16nm process and converted to the 10nm and 7nm processes.
Implementation               This Work  This Work  This Work  This Work  [8]     [8]     [8]         [8]
Construction                 G_N-coset  G_N-coset  G_N-coset  G_N-coset  Polar   Polar   Polar       Polar
Decoding Algorithm           PDF-SC     PDF-SC     PDF-SC     PDF-SC     SC      SC      CA-SC-List  CA-SC-List
List size / Iterations       5          8          5          8          1       1       8           8
Code Length                  16384      16384      16384      16384      32768   32768   16384       16384
Code Rate                    0.807      0.807      0.864      0.864      0.807   0.864   0.807       0.864
Es/N0 (dB) @ target BLER     6.82       6.20       7.48       6.97       5.61    6.49    5.24        6.13
Technology                   All in TSMC 16nm
Clock Frequency (GHz)        1.05       1.05       1.05       1.05       1.00    1.00    1.00        1.00
Throughput (Gbps)            108.06     67.53      120.73     75.46      4.16    4.56    0.91        1.01
Area (mm²)                   1.00       1.00       1.00       1.00       0.35    0.35    0.45        0.45
Area Eff. (Gbps/mm²)         108.06     67.53      120.73     75.46      11.89   13.02   2.02        2.24
TABLE III: Comparison with fabricated ASICs of traditional polar decoders.
Fig. 6: Left: the sub-decoder running times per packet decoding. Center: decoding power efficiency (mW/mm²). Right: energy efficiency (pJ/bit).

The key performance indicators (KPIs) are examined. First of all, we evaluate the area efficiency, i.e., the decoder throughput divided by the decoder area. (The error detector can terminate decoding early, but the worst-case latency is guaranteed by the maximum number of iterations, as required by most practical communication systems.) The proposed decoder can reach up to hundreds of gigabits per second within one square millimeter. The evaluated results are given in Table II; the "Es/N0" column is the value at which the target BLER is reached. With the TSMC 16nm process, the area efficiency for code rates 0.807 and 0.864 is 67.53 Gbps/mm² and 75.46 Gbps/mm², respectively, when the maximum number of iterations is eight. If we reduce the maximum to five iterations by allowing a small performance loss, the area efficiency reaches 108.06 Gbps/mm² and 120.73 Gbps/mm². The estimated area efficiency under the 10nm and 7nm technologies is converted from the 16nm results; the converting ratios, including the cell density ratio and the speed improvement ratio, are obtained from the TSMC process introductions [10, 11]. With the more advanced 7nm process, the area efficiency is as high as 477 Gbps/mm² and 533 Gbps/mm², well above the target given in the EPIC project [12]. Note that this KPI is achieved at code length N=16384, which exhibits significant coding gain over shorter codes [4]. With future technology nodes beyond 7nm, it is promising to achieve the extremely challenging terabits-per-second target.
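The conversion from the 16nm figures to the 10nm and 7nm columns of Table II amounts to multiplying the 16nm area efficiency by combined density-and-speed ratios of roughly 2.3 and 4.4, respectively (derived from the table itself, e.g., 477.17/108.06 ≈ 4.42); a short sketch, with these ratios treated as approximations of the process data in [10, 11]:

```python
def scale_area_efficiency(eff_16nm, ratio_10nm=2.300, ratio_7nm=4.416):
    """Project 16nm area efficiency (Gbps/mm^2) to 10nm and 7nm.

    The ratios are approximate combined cell-density/speed factors implied by
    Table II; the underlying process data come from [10] and [11].
    """
    return eff_16nm * ratio_10nm, eff_16nm * ratio_7nm

# Five-iteration cases from Table II:
print(scale_area_efficiency(108.06))  # ~ (248.5, 477.2) for K = 13225
print(scale_area_efficiency(120.73))  # ~ (277.7, 533.1) for K = 14161
```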

The area efficiency is also compared with a highly optimized, fabricated ASIC polar decoder [8]. (The polar codes in [8] are constructed by Gaussian approximation at the corresponding Es/N0 for each combination of code length N and code rate R; the fabricated SC decoder in [8] supports code length N=32768.) For both code rates, the area efficiency of the proposed decoder (with five iterations) is about nine times that of the polar fast-SC decoder and 53 times that of the CA-SC-List-8 decoder. Detailed comparison results can be found in Table III.

We further evaluate the average running time and power consumption per packet. In the lower SNR region, longer decoding times and higher power consumption are observed. In the higher SNR region, both the decoding time and the power consumption are much smaller, thanks to the built-in error detection and early termination described in Section II-B. Specifically, only 15% of the component codes fail the error detection, while the remaining 85% can skip SC decoding. The power consumption (mW/mm²) and the energy efficiency (pJ/bit) follow a trend similar to the average running time. At high Es/N0, the power consumption is low and the energy efficiency meets the target proposed in [12]. (Note that the average power consumption with eight iterations is lower than that with five iterations. This is due to the lower error detection success rate during the first five iterations; the last three iterations consume much less power, which lowers the average power level.) These results are plotted in Fig. 6. The power consumption and energy efficiency are evaluated with the TSMC 16nm process.

V Conclusions

In this paper, we present an ASIC implementation of a high-throughput decoder for G_N-coset codes. The parallel decoding framework leads to hardware with high area efficiency and low decoding power consumption. An area efficiency of 108.06-120.73 Gbps/mm² (five iterations) is achieved within approximately 1.00 mm² in the TSMC 16nm process, together with low power consumption and an energy efficiency that meets the target proposed in [12]. Scaled to the 7nm process, the area efficiency can reach 477-533 Gbps/mm². This confirms that G_N-coset codes can meet the high-throughput demand of next-generation wireless communication systems.

References

  • [1] W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems: applications, trends, technologies, and open research problems,” IEEE Network, 2019.
  • [2] E. Arıkan, “Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009.
  • [3] X. Wang et al., “On the construction of G_N-coset codes for parallel decoding,” accepted by IEEE Wireless Communications and Networking Conference, 2019.
  • [4] X. Wang et al., “Toward Terabits-per-second Communications: Low-Complexity Parallel Decoding of G_N-Coset Codes,” available on arXiv.
  • [5] A. Alamdar-Yazdi and F. R. Kschischang, “A Simplified Successive-Cancellation Decoder for Polar Codes,” in IEEE Communications Letters, vol. 15, no. 12, pp. 1378-1380, December 2011.
  • [6] S. A. Hashemi, C. Condo, and W. J. Gross, “Fast and flexible successive-cancellation list decoders for polar codes,” IEEE Transactions on Signal Processing, vol. 65, no. 21, pp. 5756–5769, Nov 2017.
  • [7] G. Sarkis, P. Giard, A. Vardy, C. Thibeault and W. J. Gross, “Fast Polar Decoders: Algorithm and Implementation,” in IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946-957, May 2014.
  • [8] X. Liu et al., “A 5.16Gbps decoder ASIC for Polar Code in 16nm FinFET,” 2018 15th International Symposium on Wireless Communication Systems (ISWCS), Lisbon, 2018, pp. 1-5.
  • [9] A. Balatsoukas-Stimming, M. B. Parizi and A. Burg, “LLR-Based Successive Cancellation List Decoding of Polar Codes,” in IEEE Transactions on Signal Processing, vol. 63, no. 19, pp. 5165-5179, Oct. 2015.
  • [10] Taiwan Semiconductor Manufacturing Company Limited, “TSMC 10nm Technology,” [Online]. Available: https://www.tsmc.com/english/dedicatedFoundry/technology/10nm.htm.
  • [11] Taiwan Semiconductor Manufacturing Company Limited, “TSMC 7nm Technology,” [Online]. Available: https://www.tsmc.com/english/dedicatedFoundry/technology/7nm.htm.
  • [12] “EPIC - Enabling practical wireless Tb/s communications with next generation channel coding.” [Online]. Available: https://epic-h2020.eu/results.