Boost decoding performance of finite geometry LDPC codes with deep learning tactics

05/01/2022
by Guangwen Li, et al.

It is known that a standard min-sum decoder can be unrolled as a neural network after weighting each edge. We adopt a similar decoding framework to seek a low-complexity, high-performance decoder for a class of finite geometry LDPC codes at short and moderate block lengths. We elaborate on how to generate high-quality training data effectively, and we illustrate the strong link between the training loss and the bit error rate of a neural decoder by tracing their evolution curves. Since the objectives of neural networks and error-correction decoders can conflict, we highlight the necessity of restraining the number of trainable parameters, both to ensure training convergence and to reduce decoding complexity. Consequently, for the LDPC codes considered, their rigorous algebraic structure makes it feasible to cut the number of trainable parameters down to only one, while incurring only marginal performance loss in simulation.

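The weighted min-sum decoding that the abstract refers to can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the function name `weighted_min_sum_decode`, the example parity-check matrix, and the default weight value are assumptions, and the single shared scalar weight stands in for the one-trainable-parameter decoder described above, assumed here to have already been learned so that no deep learning framework is needed.

```python
import numpy as np

def weighted_min_sum_decode(H, llr_ch, weight=0.8, max_iters=20):
    """Min-sum decoder with a single shared weight: every
    check-to-variable message is scaled by the scalar `weight`,
    mirroring the one-trainable-parameter setting in the abstract."""
    m, n = H.shape
    # variable-to-check messages, initialised with the channel LLRs
    V = H * llr_ch                       # shape (m, n); zero where no edge
    for _ in range(max_iters):
        # --- check-node update (weighted min-sum) ---
        C = np.zeros_like(V)
        for c in range(m):
            idx = np.flatnonzero(H[c])
            msgs = V[c, idx]
            signs = np.sign(msgs)
            signs[signs == 0] = 1.0
            mags = np.abs(msgs)
            total_sign = np.prod(signs)
            order = np.argsort(mags)     # two smallest magnitudes give
            min1, min2 = mags[order[0]], mags[order[1]]  # leave-one-out min
            for j, v in enumerate(idx):
                other_min = min2 if j == order[0] else min1
                C[c, v] = weight * total_sign * signs[j] * other_min
        # --- variable-node update and tentative decision ---
        col_sum = C.sum(axis=0)
        posterior = llr_ch + col_sum
        hard = (posterior < 0).astype(int)
        if not np.any((H @ hard) % 2):   # all parity checks satisfied
            return hard, posterior
        # extrinsic messages for the next iteration (exclude own check)
        V = H * (llr_ch + col_sum) - C
    return hard, posterior

# Example: (7,4) Hamming code, all-zero codeword sent over BPSK/AWGN
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
rng = np.random.default_rng(0)
sigma = 0.8
y = 1.0 + sigma * rng.standard_normal(7)   # bit 0 maps to +1
llr = 2.0 * y / sigma**2
bits, post = weighted_min_sum_decode(H, llr, weight=0.8)
print(bits)
```

In a trained neural decoder the scalar `weight` (or, more generally, one weight per edge and per iteration) would be a learnable parameter fitted by gradient descent on the training data; the sketch above only shows the forward decoding pass with that parameter fixed.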
