Polar Code Moderate Deviation: Recovering the Scaling Exponent

06/06/2018 ∙ by Hsin-Po Wang, et al.

In 2008 Arikan proposed polar coding [arXiv:0807.3917], which can be summarized as follows: (a) from the root channel W, recursively synthesize a series of channels W_N^(1), …, W_N^(N); (b) judiciously select a subset A of the synthetic channels; (c) transmit information over the synthetic channels indexed by A and freeze the remaining ones. Arikan assigns each synthetic channel a score (called the Bhattacharyya parameter) that determines whether it is selected or frozen. As N grows, the vast majority of the scores become either very high or very low; that is, they polarize. By characterizing how fast they polarize, Arikan showed that polar coding produces a series of codes that achieve capacity on symmetric binary-input memoryless channels. In measuring how the scores polarize, the relations among block length, gap to capacity, and block error probability are studied. In particular, the error exponent regime fixes the gap to capacity and varies the other two; the scaling exponent regime fixes the block error probability and varies the other two; the moderate deviation regime varies all three at once. The latest result in the moderate deviation regime [arXiv:1501.02444, Theorem 7] does not imply the scaling exponent regime as a special case. We give a result that does. (See Corollary 8.)
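Steps (a) and (b) above can be sketched concretely for the binary erasure channel BEC(ε), the one root channel for which the Bhattacharyya recursion is exact: Z(W⁻) = 2Z − Z² and Z(W⁺) = Z². The sketch below (function names are ours, not from the paper) synthesizes all N = 2ⁿ scores and selects the indices with the smallest scores as the information set A; the rest are frozen.

```python
# Minimal sketch of polar-code construction for W = BEC(eps).
# For the BEC the Bhattacharyya parameter evolves exactly as
#   Z(W^-) = 2Z - Z^2   ("worse" synthetic channel)
#   Z(W^+) = Z^2        ("better" synthetic channel)

def bhattacharyya_bec(eps: float, n: int) -> list:
    """Step (a): Bhattacharyya parameters of the N = 2^n synthetic
    channels obtained from the root channel BEC(eps)."""
    z = [eps]  # Z(W) = eps for the BEC
    for _ in range(n):
        # Each channel splits into a worse and a better channel.
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    return z

def select_information_set(z: list, k: int) -> list:
    """Step (b): pick the k indices with the smallest scores;
    the remaining indices are frozen."""
    return sorted(range(len(z)), key=lambda i: z[i])[:k]

if __name__ == "__main__":
    z = bhattacharyya_bec(0.5, 10)  # N = 1024 synthetic channels
    low = sum(zi < 0.01 for zi in z) / len(z)    # nearly noiseless
    high = sum(zi > 0.99 for zi in z) / len(z)   # nearly useless
    print(f"polarized fraction at N=1024: {low + high:.2f}")
```

Running the sketch shows polarization in action: most of the 1024 scores are already near 0 or near 1, and the average score stays exactly ε (since Z⁻ + Z⁺ = 2Z), consistent with the capacity 1 − ε of the BEC.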





