Transversal GRAND for Network Coded Data
This paper considers a transmitter, which uses random linear coding (RLC) to encode data packets. The generated coded packets are broadcast to one or more receivers. A receiver can recover the data packets if it gathers a sufficient number of coded packets. We assume that the receiver does not abandon its efforts to obtain the data packets if RLC decoding has been unsuccessful; instead, it employs syndrome decoding in an effort to repair erroneously received coded packets before it attempts RLC decoding again. A key assumption of most decoding techniques, including syndrome decoding, is that errors are independently and identically distributed within the received coded packets. Motivated by the recently proposed 'guessing random additive noise decoding' (GRAND) framework, we develop transversal GRAND: an algorithm that exploits statistical dependence in the occurrence of errors, complements RLC decoding, and achieves a gain over syndrome decoding in terms of the probability that the receiver will recover the original data packets.
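As background only, the sketch below illustrates the basic GRAND principle that the abstract builds on: candidate noise patterns are tested in order of decreasing likelihood (here, increasing Hamming weight, which assumes a memoryless channel) until the corrected word satisfies all parity checks. This is a hypothetical illustration, not the paper's transversal variant, and the function name, parameters, and the (7,4) Hamming code used in the example are assumptions introduced for demonstration.

```python
import numpy as np
from itertools import combinations

def grand_decode(y, H, max_weight=3):
    """Illustrative basic GRAND over GF(2): test error patterns e in order
    of increasing Hamming weight until H @ (y + e) = 0 (mod 2), i.e. until
    the corrected word is a codeword. 'max_weight' caps the search
    (an abandonment threshold)."""
    n = len(y)
    for w in range(max_weight + 1):          # weight 0 first: y may already be a codeword
        for flips in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            candidate = (y + e) % 2
            if not np.any(H @ candidate % 2):  # zero syndrome -> valid codeword
                return candidate, e
    return None, None                         # abandon: no codeword within the weight budget

# Hypothetical example with a (7,4) Hamming code parity-check matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid codeword of H
received = codeword.copy()
received[2] ^= 1                              # introduce a single bit error
decoded, noise = grand_decode(received, H)    # recovers 'codeword'
```

Transversal GRAND, as described in the abstract, departs from this baseline by ordering the noise queries according to statistical dependence in the occurrence of errors across coded packets, rather than assuming independent and identically distributed errors.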