Capacity-achieving decoding by guessing noise

02/20/2018
by Ken R. Duffy, et al.

We introduce a new algorithm for realizing Maximum Likelihood (ML) decoding in channels with memory. The algorithm is based on the principle that the receiver rank-orders noise sequences from most likely to least likely. Noise sequences are subtracted from the received signal in that order, and the first instance that yields an element of the code-book is the ML decoding. For channels where the noise is independent of the input and determinable from the input and output, we establish that the algorithm is capacity-achieving with uniformly-at-random selected code-books. When the code-book rate is less than capacity, we identify error exponents as the block length becomes large. When the code-book rate exceeds capacity, we identify exponents for the probability that the ML decoding is the transmitted code-word. We determine properties of the complexity of the scheme in terms of the number of computations the receiver must perform. For code rates greater than capacity, this reveals thresholds on the number of guesses within which, if an element of the code-book is identified, it is likely to be the transmitted code-word. A sufficient condition for this to occur is that the normalized code-book rate is less than one minus the min-entropy rate of the noise. Based on this analysis of the noise-guessing approach to ML decoding, we introduce an Approximate ML decoding scheme (AML) in which the receiver abandons the search for an element of the code-book after a fixed number of noise-removal queries, giving an upper bound on complexity. While not an ML decoder, we establish that AML is also capacity-achieving for an appropriate choice of abandonment threshold, and we characterize its complexity, error, and success exponents. Worked examples are presented for binary memoryless and Markovian noise. These indicate that the decoding scheme provides high rates for small block sizes with low complexity.
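
To make the guessing procedure concrete, the following is a minimal Python sketch for a binary symmetric channel with crossover probability below one half, where the most-likely-first noise order reduces to increasing Hamming weight. The toy code-book, block length, and abandonment threshold are illustrative assumptions, not values taken from the paper.

```python
from itertools import combinations

def noise_patterns(n):
    """Yield length-n binary noise patterns in decreasing likelihood order
    for a BSC with crossover probability p < 1/2, i.e. in order of
    increasing Hamming weight."""
    for weight in range(n + 1):
        for flips in combinations(range(n), weight):
            pattern = [0] * n
            for i in flips:
                pattern[i] = 1
            yield tuple(pattern)

def grand_decode(received, codebook, max_queries=None):
    """Guess noise sequences most-likely-first, subtract (XOR) each from
    the received word, and return the first result that lies in the
    code-book: that is the ML decoding. If max_queries is set, abandon
    after that many guesses (the AML variant) and return None."""
    n = len(received)
    for queries, pattern in enumerate(noise_patterns(n), start=1):
        candidate = tuple(r ^ e for r, e in zip(received, pattern))
        if candidate in codebook:
            return candidate
        if max_queries is not None and queries >= max_queries:
            return None  # abandonment threshold reached
    return None

# Illustrative example with a hypothetical two-word code-book, block length 4.
codebook = {(0, 0, 0, 0), (1, 1, 1, 1)}
received = (0, 1, 0, 0)  # (0,0,0,0) sent, one bit flipped by the channel
print(grand_decode(received, codebook))                 # -> (0, 0, 0, 0)
print(grand_decode(received, codebook, max_queries=2))  # -> None (abandoned)
```

Within a given Hamming weight, all patterns are equally likely under this channel, so the arbitrary tie-breaking order in the sketch does not affect the ML property; setting max_queries turns it into the abandonment-based AML variant described above.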


