Lower bounds on the minimax redundancy for Markov chains with large state space

05/02/2018
by   Kedar Shriram Tatwawadi, et al.

For any Markov source, there exist universal codes whose normalized codelength approaches the Shannon limit asymptotically as the number of samples goes to infinity. This paper investigates how fast the gap between the normalized codelength of the "best" universal compressor and the Shannon limit (i.e., the compression redundancy) vanishes non-asymptotically, in terms of the alphabet size and the mixing time of the Markov source. We show that, for Markov sources whose relaxation time is at least 1 + (2+c)/√k, where k is the state space size and c > 0 is a constant, the phase transition for the number of samples required to achieve vanishing compression redundancy occurs precisely at Θ(k²).
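To make the relaxation-time condition concrete, here is a small Python sketch (the function and the example chain are our own illustration, not from the paper). For a reversible chain, the relaxation time is 1/(1 − λ₂), where λ₂ is the second-largest eigenvalue of the transition matrix; the sketch computes it and compares it against the 1 + (2+c)/√k threshold from the abstract.

```python
import numpy as np

def relaxation_time(P):
    """Relaxation time 1 / (1 - lambda_2) of a reversible chain,
    where lambda_2 is the second-largest eigenvalue of P."""
    eigvals = np.sort(np.linalg.eigvals(P).real)[::-1]
    return 1.0 / (1.0 - eigvals[1])

# Illustrative chain: on k states, stay put with probability 1 - a,
# otherwise jump to a uniformly random state. This matrix is symmetric,
# hence reversible; its eigenvalues are 1 and (1 - a) with multiplicity k - 1.
k = 16
a = 0.1
P = (1 - a) * np.eye(k) + a * np.ones((k, k)) / k

t_rel = relaxation_time(P)          # = 1 / a = 10 for this chain
c = 1.0                             # any positive constant
threshold = 1 + (2 + c) / np.sqrt(k)
print(t_rel, t_rel >= threshold)    # this chain is in the "slow-mixing" regime
```

Chains satisfying this lower bound on the relaxation time are the slowly mixing ones for which the paper's Θ(k²) sample-complexity phase transition applies.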

