Neural Decoding with Optimization of Node Activations

06/01/2022
by Eliya Nachmani, et al.

The problem of maximum-likelihood decoding of error-correcting codes with a neural decoder is considered. It is shown that the neural decoder can be improved with two novel loss terms on the node activations. The first loss term imposes a sparsity constraint on the node activations, while the second loss term mimics the node activations of a teacher decoder with better performance. The proposed method has the same run-time complexity and model size as the neural Belief Propagation decoder, while improving decoding performance by up to 1.1 dB on BCH codes.
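
As a rough illustration of the idea, the sketch below combines an L1-style sparsity penalty on the decoder's node activations with a distillation ("mimic") term that pulls them toward the activations of a frozen teacher decoder. This is a minimal PyTorch sketch under assumed names and formulations (auxiliary_losses, the weighting factors, and the choice of L1/MSE are illustrative); the paper's exact loss definitions may differ.

import torch
import torch.nn.functional as F

def auxiliary_losses(student_acts, teacher_acts,
                     lambda_sparse=1e-3, lambda_mimic=1.0):
    """student_acts / teacher_acts: lists of per-iteration activation tensors
    taken from the message-passing layers of the student and teacher decoders."""
    # Sparsity term: encourage small / sparse node activations in the student.
    sparse_loss = sum(a.abs().mean() for a in student_acts)
    # Mimic term: match the (frozen) teacher decoder's activations.
    mimic_loss = sum(F.mse_loss(s, t.detach())
                     for s, t in zip(student_acts, teacher_acts))
    return lambda_sparse * sparse_loss + lambda_mimic * mimic_loss

# Hypothetical total training objective: the usual bit-wise decoding loss
# (e.g. binary cross-entropy on the decoded bits) plus the auxiliary terms.
# loss = bce_decoding_loss + auxiliary_losses(student_acts, teacher_acts)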

Related research

11/08/2019 - A Gated Hypernet Decoder for Polar Codes
Hypernetworks were recently shown to improve the performance of message ...

01/08/2018 - Near Maximum Likelihood Decoding with Deep Learning
A novel and efficient neural decoder algorithm is proposed. The proposed...

11/02/2022 - Semi-Deterministic Subspace Selection for Sparse Recursive Projection-Aggregation Decoding of Reed-Muller Codes
Recursive projection aggregation (RPA) decoding as introduced in [1] is ...

11/04/2020 - Learned Decimation for Neural Belief Propagation Decoders
We introduce a two-stage decimation process to improve the performance o...

06/21/2017 - Deep Learning Methods for Improved Decoding of Linear Codes
The problem of low complexity, close to optimal, channel decoding of lin...

12/21/2021 - Adversarial Neural Networks for Error Correcting Codes
Error correcting codes are a fundamental component in modern day communi...

11/21/2018 - Regularizing by the Variance of the Activations' Sample-Variances
Normalization techniques play an important role in supporting efficient ...
