Convergence of Conditional Entropy for Long Range Dependent Markov Chains

10/28/2021
by Andrew Feutrill, et al.

In this paper we consider the convergence of the conditional entropy to the entropy rate for Markov chains. Convergence of certain statistics of long range dependent processes, such as the sample mean, is known to be slow. Carpio and Daley <cit.> showed that the convergence of the n-step transition probabilities to the stationary distribution is slow, without quantifying the convergence rate. We prove that this slow convergence also applies to an information-theoretic measure, the entropy rate, by showing that its convergence rate is equivalent to the convergence rate of the n-step transition probabilities to the stationary distribution, which is in turn the Markov chain mixing time problem. We then quantify this convergence rate, showing that it is O(n^{2H-2}), where n is the number of steps of the Markov chain and H is the Hurst parameter. Finally, we show that, as a consequence of this slow convergence, the mutual information between past and future is infinite if and only if the Markov chain is long range dependent. This is a discrete analogue of characterisations that have been shown for other long range dependent processes.
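The quantities in the abstract can be illustrated with a small sketch. For a Markov chain, the conditional entropy H(X_{n+1} | X_n) is a mixture of the row entropies of the transition matrix weighted by the n-step distribution, and it converges to the entropy rate as that distribution converges to the stationary distribution. The toy chain below is hypothetical and finite-state, so it mixes geometrically; it only demonstrates the definitions, not the slow O(n^{2H-2}) rate the paper proves for long range dependent (necessarily infinite-state) chains.

```python
import numpy as np

def row_entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical 2-state transition matrix, for illustration only.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Entropy rate of the chain: sum_i pi_i * H(P[i, :]).
entropy_rate = sum(pi[i] * row_entropy(P[i]) for i in range(2))

# Conditional entropy H(X_{n+1} | X_n) under the n-step distribution mu_n,
# starting deterministically in state 0.
mu = np.array([1.0, 0.0])
gaps = []
for n in range(20):
    h_n = sum(mu[i] * row_entropy(P[i]) for i in range(2))
    gaps.append(abs(h_n - entropy_rate))
    mu = mu @ P

# The gap shrinks at the same rate mu_n approaches pi
# (geometric here; O(n^{2H-2}) in the long range dependent setting).
print(gaps[0], gaps[-1])
```

For this chain the stationary distribution is (2/3, 1/3) and the gap decreases monotonically, mirroring the paper's point that the convergence rate of the conditional entropy is tied to the convergence rate of the n-step distribution.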
