Similarity of symbol frequency distributions with heavy tails

10/01/2015
by Martin Gerlach, et al.

Quantifying the similarity between symbolic sequences is a traditional problem in Information Theory which requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA and music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf's law). The large number of low-frequency symbols in these distributions poses major difficulties for the estimation of the similarity between sequences; e.g., they hinder an accurate finite-size estimation of entropies. Here we show analytically how the systematic (bias) and statistical (fluctuations) errors in these estimations depend on the sample size N and on the exponent γ of the heavy-tailed distribution. Our results are valid for the Shannon entropy (α=1), its corresponding similarity measures (e.g., the Jensen-Shannon divergence), and also for measures based on the generalized entropy of order α. For small α's, including α=1, the errors decay more slowly than the 1/N decay observed in short-tailed distributions. For α larger than a critical value α* = 1 + 1/γ ≤ 2, the 1/N decay is recovered. We show the practical significance of our results by quantifying the evolution of the English language over the last two centuries using a complete α-spectrum of measures. We find that frequent words change more slowly than less frequent words and that α=2 provides the most robust measure to quantify language change.
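The finite-size bias described above is easy to reproduce numerically. The sketch below, which is an illustration and not the paper's estimator (the function name, seed, and the choice of the plug-in maximum-likelihood estimator are assumptions), draws N samples from a Zipf-like distribution and computes the generalized (Rényi) entropy H_α = log(Σ p_i^α)/(1−α), with the Shannon entropy recovered in the limit α→1:

```python
import numpy as np

def renyi_entropy(counts, alpha):
    """Plug-in (maximum-likelihood) estimator of the generalized
    (Renyi) entropy of order alpha from raw symbol counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        # Shannon entropy as the alpha -> 1 limit
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)   # seed chosen arbitrarily
    gamma = 1.0                      # Zipf exponent (illustrative value)
    N = 10_000                       # sample size
    # numpy's zipf samples k with P(k) ~ k^(-a), requiring a > 1
    samples = rng.zipf(a=1.0 + gamma, size=N)
    counts = np.bincount(samples)[1:]
    for alpha in (0.5, 1.0, 2.0):
        print(f"alpha={alpha}: H_alpha = {renyi_entropy(counts, alpha):.4f}")
```

Repeating this for growing N and comparing against the true entropy of the underlying distribution makes the slow error decay for small α, and the faster 1/N decay for α above the critical value, directly visible.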


research
11/11/2016

Generalized Entropies and the Similarity of Texts

We show how generalized Gibbs-Shannon entropies can provide new insights...
research
07/07/2018

A Note on the Shannon Entropy of Short Sequences

For source sequences of length L symbols we proposed to use a more reali...
research
11/07/2022

Heavy-Tailed Loss Frequencies from Mixtures of Negative Binomial and Poisson Counts

Heavy-tailed random variables have been used in insurance research to mo...
research
05/03/2023

Quantifying the Dissimilarity of Texts

Quantifying the dissimilarity of two texts is an important aspect of a n...
research
08/30/2020

Probability-turbulence divergence: A tunable allotaxonometric instrument for comparing heavy-tailed categorical distributions

Real-world complex systems often comprise many distinct types of element...
research
04/24/2019

Maximum Entropy Based Significance of Itemsets

We consider the problem of defining the significance of an itemset. We s...
research
10/06/2017

Comparing reverse complementary genomic words based on their distance distributions and frequencies

In this work we study reverse complementary genomic word pairs in the hu...
