I Introduction
I-A Background
High throughput is one of the primary targets for the evolution of mobile communications. The next generation of mobile communication, i.e., 6G, is expected to supply Tb/s throughput [1], which requires a substantial increase in throughput over the 5G standards.
Coset codes, defined by Arıkan in [2], are a class of linear block codes with generator matrix G_N. G_N is an N × N binary matrix defined as G_N = F^{⊗n}, in which F = [1 0; 1 1], N = 2^n, and F^{⊗n} denotes the n-th Kronecker power of F.
Polar codes are a specific type of coset codes, adopted for the 5G control channel. The throughput of polar codes is limited by successive cancellation (SC) decoders, since they are serial in nature.
Recently, a parallel decoding framework for coset codes was proposed in [3]. The code is alternately decoded on two factor graphs, the original graph and its permutation, as shown in Fig. 1. The permuted graph is generated by swapping the inner codes and outer codes. The decoder only decodes the inner codes of each graph; on each graph, the inner codes are independent subcodes that can be decoded in parallel. The code construction under the parallel decoding algorithm is different from polar/RM codes, and is studied separately in [3].
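As a toy illustration of the two-graph view (the 2-D index layout and the sizes below are our own assumptions for illustration, not the construction in [3]): arranging the code bit indices into a matrix, one graph's inner subcodes are the rows, and the permuted graph's inner subcodes are the columns.

```python
import numpy as np

# Toy sizes; the decoder in this paper uses 128 subcodes of length 128.
N_out, N_in = 4, 8
idx = np.arange(N_out * N_in).reshape(N_out, N_in)

# On the original graph, each row is one independent inner subcode.
inner_codes_g1 = [idx[i, :] for i in range(N_out)]

# The permuted graph swaps the roles of inner and outer codes,
# so the columns become the inner subcodes to decode.
inner_codes_g2 = [idx[:, j] for j in range(N_in)]
```

Each list entry can be handed to a separate component decoder, which is what enables the fully parallel hardware described later.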
I-B Motivations and Contributions
This paper introduces an ASIC implementation based on the parallel decoding framework (PDF). We set up a decoder that supports coset codes of length 16384. It deploys 128 subdecoders to decode the 128 independent subcodes in parallel. The targets of high throughput and area efficiency are decomposed into the reduction of subcode decoding latency, worst-case iteration time, and chip area, each optimized respectively.
We adopt the proposal in [4], which employs successive cancellation (SC) decoders as the component decoders. It supports soft-in hard-out decoding, which results in low decoding complexity and reduced interconnection among the component decoders. In this work, we propose hardware-oriented optimizations for LLR generation and quantization. We implement the whole decoder in hardware and present the ASIC layout to evaluate multiple key metrics. The hardware-specific data are obtained from the cell flip ratios in the circuit simulation results and the parasitic capacitance extracted from the layout result. With the implemented TSMC process, the decoder reaches an area efficiency of over 100 Gbps/mm² with low power consumption and energy cost. Scaled to the 7 nm process, the area efficiency can reach several hundred Gbps/mm² with five iterations.
II Parallel Decoding
A parallel decoding framework is introduced in [3], where three types of component decoders, (i) soft-output SC list (SCL), (ii) soft-output SC permutation list and (iii) soft cancellation (SCAN), are employed to decode the subcodes (inner codes). To achieve even higher area efficiency, this work adopts SC [4], i.e., without list decoding, as the subdecoder. In this section, we describe the parallel decoding framework-successive cancellation (PDF-SC) algorithm from the implementation point of view.
II-A Parallel Decoding Framework (PDF)
We use the index pair (i, j) to denote the j-th code bit position of the i-th inner subcode, and a single index to denote the corresponding code bit position in the overall coset code. The two indexings are related by

(1)
The aforementioned parallel turbo-like decoding framework is described in Algorithm 1. In every two iterations, the algorithm alternately decodes the two graphs (line 4 in Algorithm 1) with the inner component decoders. Each component decoder is an SC decoder assisted by an error detector.
Note that each component decoder takes a soft input vector, but outputs hard code bit estimates and an error detection result. The mismatch between soft input and hard output poses a challenge for iterative decoding, since the SC component decoders in the next iteration cannot directly take the hard output from the previous iteration as input. To solve this problem, [4] proposes to generate soft values from the hard outputs. Specifically, the log likelihood ratio (LLR) of each code bit in an iteration is calculated from the hard decoder outputs of the alternate graph (line 5). For each component decoder, the hard output vector and the soft input vector have the subcode length, and the error detection indicator is a binary value.
The independent inner subcodes allow us to instantiate one component decoder per subcode for the maximum degree of parallelism. After the iterations complete, the algorithm outputs all hard bit estimates.
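The loop structure of Algorithm 1 can be sketched as follows. This is a minimal Python paraphrase: the function and variable names are ours, and `make_input_llrs` is a pass-through stub standing in for the real soft-value regeneration of line 5 (described in Section II-C).

```python
def make_input_llrs(channel_llrs, hard, flags, t):
    # Hypothetical stub for line 5 of Algorithm 1: the real update
    # regenerates soft values from the alternate graph's hard outputs
    # (Section II-C); here we simply pass the channel LLRs through.
    return channel_llrs

def pdf_decode(channel_llrs, subdecoders, max_iters):
    # subdecoders[g] is the list of per-subcode component decoders for
    # graph g; each decoder returns (hard_bits, error_detected).
    hard, flags = None, None
    for t in range(max_iters):
        g = t % 2                                   # alternate the two graphs
        llrs = make_input_llrs(channel_llrs, hard, flags, t)
        out = [dec(llrs[i]) for i, dec in enumerate(subdecoders[g])]
        hard = [h for h, _ in out]
        flags = [e for _, e in out]
        if not any(flags):                          # all subcodes pass the
            break                                   # error check: terminate early
    return hard
```

In hardware, the list comprehension over subdecoders corresponds to the 128 physical component decoders running concurrently.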
II-B Component Decoder
The component decoder [4] is described in Algorithm 2. Before each SC decoding, error detection is performed. This can be achieved by applying a syndrome check based on hard decisions (line 25 in Algorithm 2). The cases with detected errors are denoted by Type-1, and otherwise Type-2.
The error detector brings twofold advantages. On the one hand, since Type-2 component codes require no further SC decoding, this approach reduces power consumption. If all subcodes pass error detection, decoding can be terminated early for further power saving. On the other hand, the error detection result provides additional information: the input LLRs of Type-2 component codes are more reliable than those of Type-1. This information can be used to improve the overall performance when estimating the input LLRs from the hard outputs.
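One standard way to realize a syndrome check on hard decisions, sketched here under the assumption that each inner subcode is a polar-type code with a known frozen set, exploits the fact that the transform F^{⊗n} is its own inverse over GF(2): re-applying it to the hard-decision codeword recovers the message vector, and any nonzero frozen position signals an error.

```python
import numpy as np

def polar_transform(x):
    # Apply F^{(⊗)n} over GF(2) in-place via the butterfly network.
    # Since the transform is self-inverse, this maps a hard-decision
    # codeword back to the message/frozen-bit vector.
    x = np.array(x, dtype=np.uint8)
    n, step = x.size, 1
    while step < n:
        for i in range(0, n, 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

def syndrome_check(hard_bits, frozen_mask):
    # Error detected iff any frozen position decodes to a 1.
    u = polar_transform(hard_bits)
    return bool(np.any(u[frozen_mask]))
```

For example, with N = 4 and frozen positions {0, 1, 2} (a repetition code), the codeword [1, 1, 1, 1] passes the check while [0, 1, 1, 1] fails it.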
II-C Input LLR Generator
In each iteration, the input LLRs are calculated by

Type-1: (2)

Type-2: (3)

where a set of damping factors is defined for each iteration, and the code bit index mapping follows (1). The specific values of the damping factors are optimized in [4].
Since the SC decoder is invariant to input LLR scaling, we can cancel the noise variance σ² during LLR initialization: the channel LLR of a received BPSK sample y is 2y/σ², so multiplying both sides of (2) and (3) by σ²/2 simplifies the equations to

(4)

where y is the received signal.
We use a pair of new coefficients to replace the original damping-factor terms; hence (4) can be replaced by (5).

(5)

in which the additive coefficient is determined by the binary error-detection result and the hard outputs of the previous two iterations, as in (6). The input LLR calculation thus reduces to a single addition.

(6)
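A sketch of the resulting one-addition update: the additive term of (5) is looked up from the error-detection bit and the two previous hard decisions, as in (6). The weight values and the zero-on-disagreement rule below are illustrative placeholders, not the optimized damping-factor values of [4].

```python
# Placeholder weights (NOT the optimized values from [4]):
# Type-2 (no detected error, flag 0) is trusted more than Type-1 (flag 1).
W = {0: 2.0, 1: 0.5}

def correction(flag, u_prev, u_prev2):
    # (6), sketched: sign taken from the previous hard decisions,
    # magnitude from the error-detection result.  Returning zero when
    # the two iterations disagree is an assumption for illustration.
    if u_prev != u_prev2:
        return 0.0
    return (1 - 2 * u_prev) * W[flag]

def input_llr(y, flag, u_prev, u_prev2):
    # (5): the whole per-bit input LLR generation is one addition.
    return y + correction(flag, u_prev, u_prev2)
```

In hardware, the table lookup is just a small multiplexer, so the critical path of the LLR generator is a single adder.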
III An ASIC Implementation
In this section, we present the ASIC implementation of a PDF-SC decoder in a TSMC process for code length 16384. The hardware optimization addresses both the SC decoders for the component codes and the overall parallel decoding framework for the coset code.
III-A Bit Quantization
Lower-precision quantization is key to higher throughput, thanks to its reduced implementation area and increased clock frequency. As a trade-off, some performance loss is expected. To maximize throughput while retaining performance, we must determine an appropriate quantization width. Specifically, we use simulation to find the smallest quantization width of a fixed-point decoder that stays within an acceptable performance loss relative to a floating-point decoder.
First, we compare the performance of the component codes under Algorithm 2. We test two component-code cases with different quantization widths. According to Fig. 2, 6-bit quantization achieves the same performance as floating point, 5-bit quantization incurs a small loss, and 4-bit quantization yields a significant loss. Therefore, we set the quantization width to 5 or 6 bits.
We then simulate the BLER performance of the long codes at both code rates under different quantizations, as shown in Fig. 3. Similarly, 6-bit quantization shows no performance loss, and 5-bit quantization incurs only a minor loss for both code rates. Again, 4-bit quantization suffers from performance degradation.
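For concreteness, a saturating fixed-point LLR quantizer of the kind simulated here might look as follows. The split between integer and fractional bits is our assumption; the paper only fixes the total width at 5 or 6 bits.

```python
import numpy as np

def quantize_llr(llr, width, frac_bits):
    # Signed saturating fixed-point quantization: `width` total bits,
    # `frac_bits` of them fractional.  Values are rounded to the grid
    # and clipped (saturated) to the representable range.
    scale = 1 << frac_bits
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    q = np.clip(np.round(np.asarray(llr) * scale), lo, hi)
    return q / scale
```

With 6 total bits and 1 fractional bit, the representable range is [-16.0, +15.5] in steps of 0.5; large channel LLRs saturate rather than wrap, which is what keeps the BLER loss small at narrow widths.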
III-B SC Core Optimization
A component SC decoder is optimized via Rate-0 nodes, Rate-1 nodes [5], single parity check (SPC) nodes and repetition (REP) nodes [6]. The decoder skips all Rate-0 nodes and decodes Rate-1, SPC and REP nodes in parallel for code blocks shorter than 32. If none of these applies, maximum-likelihood (ML) multibit decision [7] is employed for 4-bit blocks.
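The special-node decoders are standard; a minimal reference sketch is given below (Rate-0 nodes need no computation at all, since their output is all zeros, which is why they can simply be skipped).

```python
import numpy as np

def decode_rate1(llrs):
    # Rate-1 node: every position is an information bit, so the
    # per-bit hard decision is already the ML codeword.
    return (np.asarray(llrs) < 0).astype(int)

def decode_rep(llrs):
    # REP node: one info bit repeated across the block; decide on
    # the sum of all LLRs.
    bit = int(np.sum(llrs) < 0)
    return np.full(len(llrs), bit, dtype=int)

def decode_spc(llrs):
    # SPC node: take hard decisions, then flip the least reliable
    # bit if the single parity constraint is violated.
    llrs = np.asarray(llrs)
    bits = (llrs < 0).astype(int)
    if bits.sum() % 2 == 1:
        bits[np.argmin(np.abs(llrs))] ^= 1
    return bits
```

Each of these resolves a whole subtree in one step instead of bit by bit, which is where the latency figures in Table I come from.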
The architecture of the SC decoders used here is described in [8], with the supported code length reduced from 32768 to 128 to save area. With the TSMC technology, an SC core synthesizes to an area of 4,100 μm². Under a 1.05 GHz clock frequency, its latency is shown in Table I.
Information bits (K) | 111  | 115   | 119  | 122
Latency (cycles)     | 24   | 19    | 18   | 13
Latency (ns)         | 22.8 | 18.05 | 17.1 | 12.35
III-C SC Core Sharing
A unique design that significantly reduces area is "SC core sharing". In particular, we bind four SC cores into a subdecoder group. The four SC cores share input/output pins, LLR updating circuits and error-detector-related components. The sharing saves substantial local computation resources and global routing resources, at the cost of increased latency between iterations. Overall, however, both area efficiency and power efficiency are improved.
In addition, we reuse the adders in the SC core to perform the input LLR addition in (5). These adders are otherwise used for the g function calculation in each processing element (PE) [9]. Altogether, a considerable amount of area is saved for each subdecoder group. Fig. 4 shows the architecture of a subdecoder group, including how the adders in the PEs are reused. We run the synthesis flow with the TSMC process, and the resulting area of a subdecoder group is 19,400 μm².
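For context, the two processing-element functions of an LLR-based SC decoder are shown below in their standard hardware-friendly min-sum form; the g function is the addition whose adder is being reused for (5).

```python
import numpy as np

def f_node(a, b):
    # min-sum f function of an SC processing element:
    # f(a, b) = sign(a) * sign(b) * min(|a|, |b|)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_node(a, b, u):
    # g function: g(a, b, u) = b + (1 - 2u) * a.  The core operation
    # is an addition, so the same adder can compute the input LLR
    # update in (5) when the PE is otherwise idle.
    return b + (1 - 2 * u) * a
```

Since the g adders sit idle during the LLR-generation phase between iterations, reusing them costs no throughput.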
III-D Global Layout
Combining all algorithmic and hardware optimizations, the synthesized decoder ASIC requires 1.00 mm² of area (Table III). The global layout is presented in Fig. 5. In the center of the layout is the top logic, including the input channel LLR storage, the finite state machine (FSM) controller, the interleaved connection routing and the output buffer. The aforementioned 32 subdecoder groups (SG in the figure) are placed around the top logic, highlighted in different colors.
IV Key Performance Indicators
Code      | Iterations | Es/N0 (dB) | Latency (ns) | Area Eff. (Gbps/mm²) | 10 nm  | 7 nm
R = 0.807 | 4          | 7.14       | 91.2         | 135.07               | 310.66 | 596.47
R = 0.807 | 5          | 6.82       | 114          | 108.06               | 248.53 | 477.17
R = 0.807 | 6          | 6.55       | 136.8        | 90.05                | 207.11 | 397.64
R = 0.807 | 7          | 6.36       | 159.6        | 77.18                | 177.52 | 340.84
R = 0.807 | 8          | 6.20       | 182.4        | 67.53                | 155.33 | 298.23
R = 0.864 | 4          | 7.79       | 87.4         | 150.92               | 347.11 | 666.45
R = 0.864 | 5          | 7.48       | 109.25       | 120.73               | 277.69 | 533.16
R = 0.864 | 6          | 7.22       | 131.1        | 100.61               | 231.41 | 444.30
R = 0.864 | 7          | 7.06       | 152.95       | 86.24                | 198.35 | 380.83
R = 0.864 | 8          | 6.97       | 174.8        | 75.46                | 173.55 | 322.22
Implementation           | This Work | This Work | This Work | This Work | [8]   | [8]   | [8]    | [8]
Construction             | coset     | coset     | coset     | coset     | Polar | Polar | Polar  | Polar
Decoding Algorithm       | PDF-SC    | PDF-SC    | PDF-SC    | PDF-SC    | SC    | SC    | CA-SCL | CA-SCL
List size / Iterations   | 5         | 8         | 5         | 8         | 1     | 1     | 8      | 8
Code Length              | 16384     | 16384     | 16384     | 16384     | 32768 | 32768 | 16384  | 16384
Code Rate                | 0.807     | 0.807     | 0.864     | 0.864     | 0.807 | 0.864 | 0.807  | 0.864
Es/N0 (dB) @ target BLER | 6.82      | 6.20      | 7.48      | 6.97      | 5.61  | 6.49  | 5.24   | 6.13
Technology               | All in TSMC
Clock Frequency (GHz)    | 1.05      | 1.05      | 1.05      | 1.05      | 1.00  | 1.00  | 1.00   | 1.00
Throughput (Gbps)        | 108.06    | 67.53     | 120.73    | 75.46     | 4.16  | 4.56  | 0.91   | 1.01
Area (mm²)               | 1.00      | 1.00      | 1.00      | 1.00      | 0.35  | 0.35  | 0.45   | 0.45
Area Eff. (Gbps/mm²)     | 108.06    | 67.53     | 120.73    | 75.46     | 11.89 | 13.02 | 2.02   | 2.24
The key performance indicators (KPIs) are examined. First, we evaluate the area efficiency, defined as the decoded throughput per unit chip area. Although the error detector can terminate decoding early, the worst-case latency is guaranteed by the maximum number of iterations, as required by most practical communication systems. The proposed decoder reaches hundreds of gigabits per second within one square millimeter. The evaluated throughput is given in Table II, where the Es/N0 column gives the operating point at which the target BLER is reached. With the implemented TSMC process, the area efficiencies for code rates 0.807 and 0.864 are 67.53 and 75.46 Gbps/mm², respectively, when the maximum number of iterations is eight. If we reduce to five iterations by allowing a performance loss of about 0.5 to 0.6 dB, the area efficiency can reach 108.06 and 120.73 Gbps/mm². The estimated throughput under 10 nm and 7 nm technologies is converted from these results; the converting ratios, including the cell density ratio and the speed improvement ratio, are obtained from the TSMC process introductions [10, 11]. With the more advanced 7 nm process, the area efficiency reaches 477.17 and 533.16 Gbps/mm² with five iterations, much higher than the target given in the EPIC project [12]. Note that the KPI is achieved at code length 16384, which exhibits significant coding gain over shorter codes [4]. With future technologies, it is promising to achieve the extremely challenging Tb/s target.
The area efficiency is also compared with a highly optimized and fabricated polar ASIC decoder in [8] (its polar codes are constructed by Gaussian approximation at the respective design Es/N0 for each code length and code rate). For both code rates, the area efficiency of the proposed decoder (with five iterations) is nine times that of the polar fast-SC decoder, and 53 times that of the CA-SCL-8 decoder; in [8], the fabricated ASIC SC decoder supports code length 32768. Detailed comparison results can be found in Table III.
We further evaluate the average running time and power consumption per packet. In the lower SNR region, longer decoding times and higher power consumption are observed. In the higher SNR region, both decoding time and power consumption are much smaller, thanks to the built-in error detection and early termination described in Section II-B. Specifically, only 15% of the component codes fail the error detection, while the remaining 85% skip SC decoding. The power consumption (mW/mm²) and energy efficiency (pJ/bit) follow a similar trend to the average running time. At high Es/N0, the power consumption is small and the energy efficiency meets the target proposed in [12]. These results are plotted in Fig. 6. Note that the average power consumption with eight iterations is lower than that with five: the error detection success rate is lower during the first five iterations, while the last three iterations consume much less power, which reduces the average power level. The power consumption and energy efficiency are evaluated with the TSMC process.
V Conclusions
In this paper, we present an ASIC implementation of high-throughput decoding for coset codes. The parallel decoding framework leads to hardware with high area efficiency and low decoding power consumption. An area efficiency of up to 120.73 Gbps/mm² is achieved with the implemented TSMC process, with low power consumption and energy cost. Scaled to the 7 nm process, the area efficiency can reach 533.16 Gbps/mm². This confirms that coset codes can meet the high-throughput demand of next-generation wireless communication systems.
References
 [1] W. Saad, M. Bennis, and M. Chen, "A vision of 6G wireless systems: applications, trends, technologies, and open research problems," IEEE Network, 2019.
 [2] E. Arıkan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
 [3] X. Wang et al., "On the construction of coset codes for parallel decoding," accepted by IEEE Wireless Communications and Networking Conference (WCNC), 2019.
 [4] X. Wang et al., "Toward terabits-per-second communications: low-complexity parallel decoding of coset codes," available on arXiv.
 [5] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, Dec. 2011.
 [6] S. A. Hashemi, C. Condo, and W. J. Gross, "Fast and flexible successive-cancellation list decoders for polar codes," IEEE Transactions on Signal Processing, vol. 65, no. 21, pp. 5756–5769, Nov. 2017.
 [7] G. Sarkis, P. Giard, A. Vardy, C. Thibeault and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, May 2014.
 [8] X. Liu et al., "A 5.16 Gbps decoder ASIC for polar code in 16 nm FinFET," 2018 15th International Symposium on Wireless Communication Systems (ISWCS), Lisbon, 2018, pp. 1–5.
 [9] A. Balatsoukas-Stimming, M. B. Parizi and A. Burg, "LLR-based successive cancellation list decoding of polar codes," IEEE Transactions on Signal Processing, vol. 63, no. 19, pp. 5165–5179, Oct. 2015.
 [10] Taiwan Semiconductor Manufacturing Company Limited, "TSMC 10nm Technology." [Online]. Available: https://www.tsmc.com/english/dedicatedFoundry/technology/10nm.htm
 [11] Taiwan Semiconductor Manufacturing Company Limited, "TSMC 7nm Technology." [Online]. Available: https://www.tsmc.com/english/dedicatedFoundry/technology/7nm.htm
 [12] "EPIC: Enabling practical wireless Tb/s communications with next generation channel coding." [Online]. Available: https://epich2020.eu/results