Learning from the Syndrome

10/23/2018
by Loren Lugosch et al.

In this paper, we introduce the syndrome loss, an alternative loss function for neural error-correcting decoders based on a relaxation of the syndrome. The syndrome loss penalizes the decoder for producing outputs that do not correspond to valid codewords. We show that training with the syndrome loss yields decoders with consistently lower frame error rate for a number of short block codes, at little additional cost during training and no additional cost during inference. The proposed method does not depend on knowledge of the transmitted codeword, making it a promising tool for online adaptation to changing channel conditions.
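
The abstract does not give the exact form of the relaxation, but the core idea, a differentiable penalty computed from the parity-check matrix and the decoder's soft outputs alone, can be sketched as follows. This is a minimal NumPy illustration, assuming a tanh-based soft syndrome and a simple per-check penalty of 1 - s_c; the function name and the specific penalty are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def soft_syndrome_loss(llrs, H):
    """Differentiable syndrome-style penalty on decoder output LLRs.

    llrs: length-n array of output log-likelihood ratios
          (convention: positive LLR means bit 0 is more likely).
    H:    binary parity-check matrix of shape (m, n).

    Each check c gets a soft syndrome s_c, the product of tanh(llr_v / 2)
    over the variables v in that check. A confidently satisfied check gives
    s_c near +1, so the per-check penalty (1 - s_c) is near 0. Only the
    code structure H is needed, not the transmitted codeword.
    """
    t = np.tanh(llrs / 2.0)            # soft bit decisions in [-1, 1]
    penalties = []
    for row in H:
        idx = np.flatnonzero(row)      # variables participating in this check
        s_c = np.prod(t[idx])          # soft syndrome component for the check
        penalties.append(1.0 - s_c)    # small when the check is satisfied
    return float(np.sum(penalties))

# Example with a (7,4) Hamming parity-check matrix and hypothetical decoder LLRs.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llrs = np.array([4.0, 3.5, -2.0, 1.0, 5.0, -3.0, 2.5])
print(soft_syndrome_loss(llrs, H))
```

In an actual training setup this penalty would be computed in a framework with automatic differentiation, and, since it requires no knowledge of the transmitted bits, it could in principle be used for the kind of online adaptation the abstract mentions.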
