Capacity-achieving decoding by guessing noise
We introduce a new algorithm for realizing Maximum Likelihood (ML) decoding in channels with memory. The algorithm is based on the principle that the receiver rank orders noise sequences from most likely to least likely. Subtracting noise from the received signal in that order, the first instance that results in an element of the code-book is the ML decoding. For channels where the noise is independent of the input and determinable from the input and output, we establish that the algorithm is capacity-achieving with uniformly-at-random selected code-books. When the code-book rate is less than capacity, we identify error exponents as the block length becomes large. When the code-book rate is beyond capacity, we identify exponents for the probability that the ML decoding is the transmitted code-word. We determine properties of the complexity of the scheme in terms of the number of computations the receiver must perform. For code-book rates beyond capacity, this reveals thresholds on the number of guesses within which, if an element of the code-book is identified, it is likely to be the transmitted code-word. A sufficient condition for this to occur is that the normalized code-book rate is less than one minus the min-entropy rate of the noise. Based on this analysis of the noise guessing approach to ML decoding, we introduce an Approximate ML decoding scheme (AML) in which the receiver abandons the search for an element of the code-book after a fixed number of noise-removal queries, giving an upper bound on complexity. While not an ML decoder, we establish that AML is also capacity-achieving for an appropriate choice of abandonment threshold, and we characterize its complexity, error, and success exponents. Worked examples are presented for binary memoryless and Markovian noise. These indicate that the decoding scheme provides high rates with low complexity even for small block sizes.
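To make the noise-guessing procedure concrete, the following is a minimal sketch for a binary memoryless channel with flip probability below one half, where the most likely noise sequences are those of lowest Hamming weight. The function name grand_decode, the membership test in_codebook, and the max_queries abandonment parameter (standing in for the AML threshold) are illustrative assumptions, not identifiers from the paper.

```python
from itertools import combinations


def grand_decode(received, in_codebook, n, max_queries=None):
    """Noise-guessing decoding sketch for a binary memoryless channel (p < 1/2).

    Putative noise sequences are queried in order of increasing Hamming
    weight, i.e. from most likely to least likely. The first guess whose
    removal yields an element of the code-book is returned as the ML
    decoding. If max_queries is set (the AML variant), the search is
    abandoned after that many queries and None is returned.
    """
    queries = 0
    for weight in range(n + 1):            # 0 flips, then 1 flip, ...
        for flips in combinations(range(n), weight):
            noise = [0] * n
            for i in flips:
                noise[i] = 1
            # "Subtracting" the noise over GF(2) is a bitwise XOR.
            candidate = tuple(r ^ e for r, e in zip(received, noise))
            queries += 1
            if in_codebook(candidate):
                return candidate, queries
            if max_queries is not None and queries >= max_queries:
                return None, queries        # abandonment (AML)
    return None, queries


# Toy usage: a length-3 repetition code-book used purely for illustration.
codebook = {(0, 0, 0), (1, 1, 1)}
decoded, q = grand_decode((1, 0, 1), codebook.__contains__, n=3)
# decoded == (1, 1, 1), found after q = 3 noise-removal queries
```

The sketch enumerates noise patterns explicitly, which is only tractable for short block lengths; ordering guesses by likelihood and bounding their number via an abandonment threshold is the point being illustrated, not an efficient implementation.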