Information Distance in Multiples

05/20/2009
by Paul M. B. Vitanyi et al.

Information distance is a parameter-free similarity measure based on compression, used in pattern recognition, data mining, phylogeny, clustering, and classification. We extend the notion of information distance from pairs to multiples (finite lists), and study maximal overlap, metricity, universality, minimal overlap, additivity, and normalized information distance in multiples. We use the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program.

Index Terms: Information distance, multiples, pattern recognition, data mining, similarity, Kolmogorov complexity
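The practical recipe the abstract describes, approximating Kolmogorov complexity C(x) by the length of a compressed file, is what underlies the well-known pairwise normalized compression distance (NCD). As a minimal sketch (using Python's standard `zlib` as the stand-in real-world compressor; the function names `C` and `ncd` are illustrative, not from the paper):

```python
import zlib


def C(data: bytes) -> int:
    """Approximate the Kolmogorov complexity of `data` by the
    length of its zlib-compressed form (maximum compression level)."""
    return len(zlib.compress(data, 9))


def ncd(x: bytes, y: bytes) -> float:
    """Pairwise normalized compression distance:

        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))

    Values near 0 indicate similar objects; values near 1, dissimilar ones.
    """
    cx, cy = C(x), C(y)
    cxy = C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


# Two copies of a repetitive text compress together almost as well as alone,
# so their NCD is small; random-looking unrelated data stays far apart.
a = b"the quick brown fox jumps over the lazy dog " * 20
b2 = b"the quick brown fox jumps over the lazy dog " * 20
c = bytes(range(256)) * 4

print(ncd(a, b2))  # small: near-identical inputs
print(ncd(a, c))   # large: unrelated inputs
```

The paper's contribution is to generalize this pairwise setting to finite lists of objects; the sketch above only illustrates the compression-based approximation of C(x) that both settings share.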

Related research

- Normalized Information Distance is Not Semicomputable (06/16/2010)
- Information Distance: New Developments (01/05/2012)
- Similarity of Objects and the Meaning of Words (02/17/2006)
- Information Distance Revisited (07/29/2018)
- On Macroscopic Complexity and Perceptual Coding (05/10/2010)
- Clustering by compression (12/19/2003)
- Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain (05/22/2014)
