Autoencoder-Based Error Correction Coding for One-Bit Quantization

09/24/2019
by   Eren Balevi, et al.

This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization at the receiver. Specifically, it is first shown that the optimum error correction code that minimizes the probability of bit error can be obtained by perfectly training a special autoencoder, where "perfectly" refers to converging to the global minimum. However, perfect training is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose utilizing turbo codes as an implicit regularizer, i.e., using a concatenation of a turbo code and an autoencoder. It is empirically shown that this design gives nearly the same performance as the hypothetically perfectly trained autoencoder, and we also provide a theoretical proof of why that is so. The proposed coding method is as bandwidth efficient as the integrated (outer) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to the sparsity of neural networks. Our results show that the proposed coding scheme at finite block lengths outperforms conventional turbo codes even for QPSK modulation. Furthermore, the proposed coding method can make one-bit quantization operational even for 16-QAM.
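
The abstract describes an end-to-end autoencoder trained over an AWGN channel with a one-bit quantizer at the receiver. The following is a minimal PyTorch sketch of that core signal chain only, not the authors' implementation: the outer turbo code, pulse shaping, and modulation are omitted, and the block sizes (k, n), SNR, learning rate, and the straight-through gradient used to train through the sign() quantizer are all illustrative assumptions.

import torch
import torch.nn as nn

k, n = 8, 32                      # message bits and channel uses per block (assumed)
snr_db = 2.0
sigma = 10 ** (-snr_db / 20)      # assumed noise level for unit average symbol energy

class OneBitSTE(torch.autograd.Function):
    """One-bit quantizer: sign() in the forward pass, identity gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
        self.dec = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
    def forward(self, bits):
        x = self.enc(bits)
        # average power constraint per block
        x = x / (x.pow(2).mean(dim=1, keepdim=True).sqrt() + 1e-8)
        y = x + sigma * torch.randn_like(x)   # AWGN channel
        r = OneBitSTE.apply(y)                # one-bit quantization at the receiver
        return self.dec(r)                    # logits for the k message bits

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    bits = torch.randint(0, 2, (256, k)).float()
    loss = loss_fn(model(bits), bits)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    bits = torch.randint(0, 2, (10000, k)).float()
    ber = ((model(bits) > 0).float() != bits).float().mean()
    print(f"BER at {snr_db} dB: {ber:.4f}")

In the paper, an outer turbo code would encode the information bits before they enter the autoencoder, acting as the implicit regularizer discussed above; the sketch trains the autoencoder alone.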

research
02/28/2020

High Rate Communication over One-Bit Quantized Channels via Deep Learning and LDPC Codes

This paper proposes a method for designing error correction codes by com...
research
01/17/2019

AI Coding: Learning to Construct Error Correction Codes

In this paper, we investigate an artificial-intelligence (AI) driven app...
research
10/18/2018

A mathematical theory of imperfect communication: Energy efficiency considerations in multi-level coding

A novel framework is presented for the analysis of multi-level coding th...
research
03/08/2017

Don't Fear the Bit Flips: Optimized Coding Strategies for Binary Classification

After being trained, classifiers must often operate on data that has bee...
research
05/16/2023

Component Training of Turbo Autoencoders

Isolated training with Gaussian priors (TGP) of the component autoencode...
research
04/16/2021

Autoencoder-Based Unequal Error Protection Codes

We present a novel autoencoder-based approach for designing codes that p...
research
06/06/2021

Area-Delay-Efficient FPGA Design of 32-bit Euclid's GCD based on Sum of Absolute Difference

Euclid's algorithm is widely used in the calculation of the GCD (Greatest Common ...
