Determining the Number of Samples Required to Estimate Entropy in Natural Sequences
Calculating the Shannon entropy of symbolic sequences has been widely studied in many fields. For descriptive statistical problems such as estimating the N-gram entropy of English-language text, a common approach is to use as much data as possible to obtain progressively more accurate estimates. In some instances, however, only short sequences may be available, which raises the question of how many samples are needed to compute entropy reliably. In this paper, we examine this problem and propose a method for estimating the number of samples required to compute the Shannon entropy of a set of ranked symbolic natural events. The result is developed using a modified Zipf-Mandelbrot law and the Dvoretzky-Kiefer-Wolfowitz inequality, and we propose an algorithm that yields an estimate of the minimum number of samples required to obtain an entropy estimate with a given confidence level and degree of accuracy.
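To make the two ingredients concrete, the Python sketch below illustrates (a) the sample-size bound implied by the Dvoretzky-Kiefer-Wolfowitz inequality, P(sup_x |F_n(x) - F(x)| > eps) <= 2 exp(-2 n eps^2), and (b) the Shannon entropy of a Zipf-Mandelbrot model over ranked symbols. This is a minimal illustration, not the paper's algorithm: the exponent s, offset q, vocabulary size, and the choice of eps as a proxy for entropy accuracy are placeholder assumptions, and the paper's mapping from entropy accuracy to the CDF deviation bound is not reproduced here.

    import math

    def dkw_min_samples(epsilon: float, alpha: float) -> int:
        """Smallest n such that sup|F_n - F| <= epsilon holds with
        probability at least 1 - alpha, by the DKW inequality:
            P(sup |F_n - F| > eps) <= 2 exp(-2 n eps^2)."""
        return math.ceil(math.log(2.0 / alpha) / (2.0 * epsilon ** 2))

    def zipf_mandelbrot_pmf(n_symbols: int, s: float, q: float) -> list[float]:
        """Ranked probabilities p_k proportional to 1/(k + q)^s, k = 1..n_symbols
        (illustrative parameters; not fitted to any corpus)."""
        weights = [1.0 / (k + q) ** s for k in range(1, n_symbols + 1)]
        total = sum(weights)
        return [w / total for w in weights]

    def shannon_entropy(p: list[float]) -> float:
        """Shannon entropy in bits."""
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    if __name__ == "__main__":
        alpha = 0.05    # 1 - confidence level (assumed)
        epsilon = 0.01  # allowed CDF deviation (assumed)
        n = dkw_min_samples(epsilon, alpha)
        p = zipf_mandelbrot_pmf(n_symbols=1000, s=1.1, q=2.7)
        print(f"DKW minimum samples for eps={epsilon}, alpha={alpha}: {n}")
        print(f"Entropy of the Zipf-Mandelbrot model: {shannon_entropy(p):.3f} bits")

With these illustrative settings the DKW bound already requires tens of thousands of samples, which conveys the practical point of the abstract: tightening either the confidence level or the accuracy target raises the minimum sample count sharply.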