1 Introduction: Hierarchy and Other Symmetries in Data Analysis
Herbert A. Simon, Nobel Laureate in Economics, originator of “bounded rationality” and of “satisficing”, believed in hierarchy at the basis of the human and social sciences, as the following quotation shows: “… my central theme is that complexity frequently takes the form of hierarchy and that hierarchic systems have some common properties independent of their specific content. Hierarchy, I shall argue, is one of the central structural schemes that the architect of complexity uses.” ([74], p. 184.)
Partitioning a set of observations [75, 76, 49] leads to some very simple symmetries. This is one approach to clustering and data mining. But such approaches, often based on optimization, are not of direct interest to us here. Instead we will pursue the theme pointed to by Simon, namely that the notion of hierarchy is fundamental for interpreting data and the complex reality which the data expresses. Our work is very different too from the marvelous view of the development of mathematical group theory – but viewed in its own right as a complex, evolving system – presented by Foote [19].
Weyl [80] makes the case for the fundamental importance of symmetry in science, engineering, architecture, art and other areas. As a “guiding principle”, “Whenever you have to do with a structure-endowed entity … try to determine its group of automorphisms, the group of those elementwise transformations which leave all structural relations undisturbed. You can expect to gain a deep insight in the constitution of [the structure-endowed entity] in this way. After that you may start to investigate symmetric configurations of elements, i.e. configurations which are invariant under a certain subgroup of the group of all automorphisms; …” ([80], p. 144).
1.1 About this Article
In section 2, we describe ultrametric topology as an expression of hierarchy. This provides comprehensive background on the commonly used agglomerative hierarchical clustering algorithms, which are of quadratic computational time, i.e. O(n^2), where n is the number of observations.
In section 3, we look at the generalized ultrametric context. This is closely linked to analysis based on lattices. We use a case study from chemical database matching to illustrate algorithms in this area.
In section 4, p-adic encoding, providing a number theory vantage point on ultrametric topology, gives rise to additional symmetries and ways to capture invariants in data.
Section 5 deals with symmetries that are part and parcel of a tree, representing a partial order on data, or equally a set of subsets of the data, some of which are embedded. An application of such symmetry targets from a dendrogram expressing a hierarchical embedding is provided through the Haar wavelet transform of a dendrogram and wavelet filtering based on the transform.
Section 6 deals with new and recent results relating to the remarkable symmetries of massive, and especially high dimensional data sets. An example is discussed of segmenting a financial forex (foreign exchange) trading signal.
1.2 A Brief Introduction to Hierarchical Clustering
For the reader new to analysis of data, a very short introduction is now provided on hierarchical clustering. Along with other families of algorithms, the objective is automatic classification, for the purposes of data mining, or knowledge discovery. Classification, after all, is fundamental in human thinking, and machine-based decision making. But we draw attention to the fact that our objective is unsupervised, as opposed to supervised classification, also known as discriminant analysis or (in a general way) machine learning. So here we are not concerned with generalizing the decision making capability of training data, nor are we concerned with fitting statistical models to data so that these models can play a role in generalizing and predicting. Instead we are concerned with having “data speak for themselves”. That this unsupervised objective of classifying data (observations, objects, events, phenomena, etc.) is a huge task in our society is unquestionably true. One may think of situations when precedents are very limited, for instance.
Among families of clustering, or unsupervised classification, algorithms, we can distinguish the following: (i) array permuting and other visualization approaches; (ii) partitioning to form (discrete or overlapping) clusters through optimization, including graph-based approaches; and – of interest to us in this article – (iii) embedded clusters interrelated in a tree-based way.
1.3 A Brief Introduction to p-Adic Numbers
The real number system, and a p-adic number system for a given prime, p, are potentially equally useful alternatives. p-Adic numbers were introduced by Kurt Hensel in 1898.
Whether we deal with Euclidean or with non-Euclidean geometry, we are (nearly) always dealing with reals. But the reals start with the natural numbers, and from associating observational facts and details with such numbers we begin the process of measurement. From the natural numbers, we proceed to the rationals, allowing fractions to be taken into consideration.
The following view of how we do science or carry out other quantitative study was proposed by Volovich in 1987 [78, 79]. See also the surveys in [15, 22]. We can always use rationals to make measurements. But they will be approximate, in general. It is better therefore to allow for observables being “continuous, i.e. endow them with a topology”. Therefore we need a completion of the field Q of rationals. To complete the field of rationals, we need Cauchy sequences and this requires a norm on Q (because the Cauchy sequence must converge, and a norm is the tool used to show this). There is the Archimedean norm such that: for any x, y ∈ Q, with |x| < |y|, there exists an integer N such that |Nx| > |y|. For convenience here, we write |x|_∞ for this norm. So if this completion is Archimedean, then we have Q_∞ = R, the reals. That is fine if space is taken as commutative and Euclidean.
What of alternatives? Remarkably, all norms are known. Besides the |x|_∞ norm, we have an infinity of norms, |x|_p, labeled by primes, p. By Ostrowski’s theorem [65] these are all the possible norms on Q. So we have an unambiguous labeling, via p, of the infinite set of non-Archimedean completions of Q to a field endowed with a topology.
In all cases, we obtain locally compact completions, Q_p, of Q. They are the fields of p-adic numbers. All these are continua. Being locally compact, they have additive and multiplicative Haar measures. As such we can integrate over them, as for the reals.
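The p-adic norm behind these completions can be made concrete. The following is a minimal Python sketch (the function names are ours, for illustration): |x|_p = p^(−v_p(x)), where v_p(x) is the exponent of p in the rational x, and the strong triangle inequality |x + y|_p ≤ max(|x|_p, |y|_p) can be checked directly on sample rationals.

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x): the exponent of the prime p in the rational x (x != 0)."""
    x = Fraction(x)
    def v(n):  # multiplicity of p in the positive integer n
        k = 0
        while n % p == 0:
            n //= p
            k += 1
        return k
    return v(abs(x.numerator)) - v(x.denominator)

def p_adic_norm(x, p):
    """|x|_p = p**(-v_p(x)); by convention |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    return Fraction(p) ** (-p_adic_valuation(x, p))

# |12|_2 = 1/4 since 12 = 2^2 * 3, while |1/8|_2 = 8: p-adically,
# highly divisible numbers are small.
```

Note that the ordering this norm induces is quite unlike the Archimedean one: 1024 is 2-adically much closer to 0 than 3 is.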
1.4 Brief Discussion of p-Adic and m-Adic Numbers
We will use p to denote a prime, and m to denote a non-zero positive integer. A p-adic number is such that any set of p integers which are in distinct residue classes modulo p may be used as p-adic digits. (Cf. remark below, at the end of section 4.1, quoting from [25]. It makes the point that this opens up a range of alternative notation options in practice.) Recall that a ring does not allow division, while a field does. m-Adic numbers form a ring; but p-adic numbers form a field. So a priori, 10-adic numbers form a ring. This provides us with a reason for preferring p-adic over m-adic numbers.
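The ring-versus-field distinction can be exhibited computationally: 10-adic arithmetic has zero divisors. The following Python sketch (our own illustration, not from the cited literature) constructs a truncated 10-adic idempotent, the classical number ending …890625, by repeated squaring; since e^2 = e with e distinct from 0 and 1, both e and e − 1 are non-zero yet their product vanishes to any chosen 10-adic precision.

```python
# Build a truncated 10-adic idempotent: start from 5 and square repeatedly.
# Each squaring extends the run of stable trailing digits (..., 890625).
N = 40                 # digits of 10-adic precision
e = 5
for _ in range(N):
    e = e * e % 10**N

# e is idempotent mod 10^N, and neither e nor e - 1 is 0 mod 10^N,
# so e * (e - 1) == 0 mod 10^N: zero divisors, hence no field.
```

No such construction is possible p-adically for prime p: there, a non-zero product of non-zero elements never vanishes.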
We can consider various p-adic expansions:

1. ∑_{i=0}^{n} a_i p^i, which defines positive integers. For a p-adic number, we require 0 ≤ a_i ≤ p − 1. (In practice: just write the integer in binary form.)

2. ∑_{i=−m}^{n} a_i p^i defines rationals.

3. ∑_{i=k}^{∞} a_i p^i, where k is an integer, not necessarily positive, defines the field of p-adic numbers.

Q_p, the field of p-adic numbers, is (as seen in these definitions) the field of p-adic expansions.
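For the first of these expansions, the digits a_i of a positive integer are obtained simply by repeated division by p. A minimal Python sketch (the function name is ours):

```python
def p_adic_digits(n, p):
    """Digits a_0, a_1, ..., least significant first, with n = sum a_i * p**i."""
    digits = []
    while n > 0:
        n, a = divmod(n, p)
        digits.append(a)
    return digits

# 10 = 0*1 + 1*2 + 0*4 + 1*8 in base p = 2, so its digits are [0, 1, 0, 1].
```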
The choice of p is a practical issue. Indeed, adelic numbers use all possible values of p (see [6] for extensive use and discussion of the adelic number framework). Consider [14, 37]. DNA (deoxyribonucleic acid) is encoded using four nucleotides: A, adenine; G, guanine; C, cytosine; and T, thymine. In RNA (ribonucleic acid) T is replaced by U, uracil. In [14] a 5-adic encoding is used, since 5 is a prime and thereby offers uniqueness. In [37] a 4-adic encoding is used, and a 2-adic encoding, with the latter based on 2-digit boolean expressions for the four nucleotides (00, 01, 10, 11). A default norm is used, based on a longest common prefix – with p-adic digits from the start or left of the sequence (see section 4.2 below where this longest common prefix norm or distance is used and, before that, section 3.3 where an example is discussed in detail).
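This longest common prefix norm can be sketched in a few lines of Python. The digit assignments below are illustrative only (the cited papers fix their own conventions): nucleotides map to 4-adic digits, or to 2-digit boolean codes, and the distance between two sequences is base^(−length of common prefix).

```python
CODE4 = {'A': 0, 'G': 1, 'C': 2, 'T': 3}                 # illustrative 4-adic digits
CODE2 = {'A': '00', 'G': '01', 'C': '10', 'T': '11'}     # 2-digit boolean codes

def lcp_length(s, t):
    """Length of the longest common prefix of two digit sequences."""
    n = 0
    for a, b in zip(s, t):
        if a != b:
            break
        n += 1
    return n

def prefix_distance(s, t, base):
    """Longest-common-prefix ultrametric: base**-(prefix length); 0 if identical."""
    if s == t:
        return 0.0
    return float(base) ** -lcp_length(s, t)

seq1 = [CODE4[c] for c in "GATTACA"]
seq2 = [CODE4[c] for c in "GATTCCA"]
```

With these assumed digit codes, seq1 and seq2 share a 4-digit prefix, so their distance is 4^−4; the longer the shared prefix, the smaller the distance.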
2 Ultrametric Topology
In this section we mainly explore symmetries related to: geometric shape; matrix structure; and lattice structures.
2.1 Ultrametric Space for Representing Hierarchy
Consider Figures 1 and 2, illustrating the ultrametric distance and its role in defining a hierarchy. An early, influential paper is Johnson [35] and an important survey is that of Rammal et al. [67]. Discussion of how a hierarchy expresses the semantics of change and distinction can be found in [61].
The ultrametric topology was introduced by Marc Krasner [40], the ultrametric inequality having been formulated by Hausdorff in 1934. Essential motivation for the study of this area is provided by [70] as follows. Real and complex fields gave rise to the idea of studying any field with a complete valuation comparable to the absolute value function. Such fields satisfy the “strong triangle inequality” |x + y| ≤ max(|x|, |y|). Given a valued field, defining a totally ordered Abelian (i.e. commutative) group, an ultrametric space is induced through |x − y| = d(x, y). Various terms are used interchangeably for analysis in and over such fields, such as p-adic, ultrametric, non-Archimedean, and isosceles. The natural geometric ordering of metric valuations is on the real line, whereas in the ultrametric case the natural ordering is a hierarchical tree.
2.2 Some Geometrical Properties of Ultrametric Spaces
We see from the following, based on [41] (chapter 0, part IV), that an ultrametric space is quite different from a metric one. In an ultrametric space everything “lives” on a tree.
In an ultrametric space, all triangles are either isosceles with small base, or equilateral. We have here very clear symmetries of shape in an ultrametric topology. These symmetry “patterns” can be used to fingerprint data sets and time series: see [55, 57] for many examples of this.
Some further properties that are studied in [41] are: (i) Every point of a circle in an ultrametric space is a center of the circle. (ii) In an ultrametric topology, every ball is both open and closed (termed clopen). (iii) An ultrametric space is 0-dimensional (see [7, 69]). It is clear that an ultrametric topology is very different from our intuitive, or Euclidean, notions. The most important point to keep in mind is that in an ultrametric space everything “lives” in a hierarchy expressed by a tree.
2.3 Ultrametric Matrices and Their Properties
For an n × n matrix of positive reals, symmetric with respect to the principal diagonal, to be a matrix of distances associated with an ultrametric distance on X, a sufficient and necessary condition is that a permutation of rows and columns satisfies the following form of the matrix:

1. Above the diagonal term, equal to 0, the elements of the same row are non-decreasing.

2. For every index k, if

d(k, k+1) = d(k, k+2) = … = d(k, k+ℓ+1)

then

d(k+1, j) ≤ d(k, j) for k+1 < j ≤ k+ℓ+1

and

d(k+1, j) = d(k, j) for j > k+ℓ+1.

Under these circumstances, ℓ ≥ 0 is the length of the section beginning, beyond the principal diagonal, the interval of columns of equal terms in row k.
To illustrate the ultrametric matrix format, consider the small data set shown in Table 1. A dendrogram produced from this is in Figure 3. The ultrametric matrix that can be read off this dendrogram is shown in Table 2. Finally a visualization of this matrix, illustrating the ultrametric matrix properties discussed above, is in Figure 4.
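These matrix conditions can be verified mechanically. A Python sketch using the (rounded) values of Table 2: we check the ultrametric inequality, the isosceles-with-small-base property of every triangle, and the non-decreasing rows above the diagonal.

```python
# Ultrametric distances read off the dendrogram of Figure 3 (Table 2, rounded).
D = [
    [0.0,    0.6481, 0.6481, 0.6481, 1.1662, 1.1662, 1.1662],
    [0.6481, 0.0,    0.3317, 0.3317, 1.1662, 1.1662, 1.1662],
    [0.6481, 0.3317, 0.0,    0.2449, 1.1662, 1.1662, 1.1662],
    [0.6481, 0.3317, 0.2449, 0.0,    1.1662, 1.1662, 1.1662],
    [1.1662, 1.1662, 1.1662, 1.1662, 0.0,    0.6164, 0.9950],
    [1.1662, 1.1662, 1.1662, 1.1662, 0.6164, 0.0,    0.9950],
    [1.1662, 1.1662, 1.1662, 1.1662, 0.9950, 0.9950, 0.0],
]
n = len(D)

def is_ultrametric(D):
    """d(i,j) <= max(d(i,k), d(k,j)) for every triple."""
    return all(D[i][j] <= max(D[i][k], D[k][j]) + 1e-9
               for i in range(n) for j in range(n) for k in range(n))

def rows_nondecreasing(D):
    """Above the zero diagonal, each row is non-decreasing (for this ordering)."""
    return all(D[i][j] <= D[i][j + 1] + 1e-9
               for i in range(n) for j in range(i + 1, n - 1))
```

Here the rows and columns are already in an order satisfying the conditions; for an arbitrary input ordering a suitable row and column permutation would first have to be found.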
Sepal.Length  Sepal.Width  Petal.Length  Petal.Width  

iris1  5.1  3.5  1.4  0.2 
iris2  4.9  3.0  1.4  0.2 
iris3  4.7  3.2  1.3  0.2 
iris4  4.6  3.1  1.5  0.2 
iris5  5.0  3.6  1.4  0.2 
iris6  5.4  3.9  1.7  0.4 
iris7  4.6  3.4  1.4  0.3 
iris1  iris2  iris3  iris4  iris5  iris6  iris7  

iris1  0  0.6480741  0.6480741  0.6480741  1.1661904  1.1661904  1.1661904 
iris2  0.6480741  0  0.3316625  0.3316625  1.1661904  1.1661904  1.1661904 
iris3  0.6480741  0.3316625  0  0.2449490  1.1661904  1.1661904  1.1661904 
iris4  0.6480741  0.3316625  0.2449490  0  1.1661904  1.1661904  1.1661904 
iris5  1.1661904  1.1661904  1.1661904  1.1661904  0  0.6164414  0.9949874 
iris6  1.1661904  1.1661904  1.1661904  1.1661904  0.6164414  0  0.9949874 
iris7  1.1661904  1.1661904  1.1661904  1.1661904  0.9949874  0.9949874  0 
2.4 Clustering Through Matrix Row and Column Permutation
Figure 4 shows how an ultrametric distance allows a certain structure to be visible (quite possibly, in practice, subject to an appropriate row and column permuting), in a matrix defined from the set of all distances. For the set I, then, this matrix expresses the distance mapping of the Cartesian product, d : I × I → R+, where R+ denotes the non-negative reals. A priori the rows and columns of the function of the Cartesian product set with itself could be in any order. The ultrametric matrix properties establish what is possible when the distance is an ultrametric one. Because the matrix (a 2-way data object) involves one mode (due to set I being crossed with itself; as opposed to the 2-mode case where an observation set is crossed by an attribute set) it is clear that both rows and columns can be permuted to yield the same order on I. A property of the form of the matrix is that small values are at or near the principal diagonal.
A generalization opens up for this sort of clustering-by-visualization scheme. Firstly, we can directly apply row and column permuting to 2-mode data, i.e. to the rows and columns of a matrix crossing indices I by attributes J. A matrix of values, a(i, j), is furnished by the function a acting on the sets I and J. Here, each such term is real-valued. We can also generalize the principle of permuting, such that small values are on or near the principal diagonal, to instead allow similar values to be near one another, and thereby to facilitate visualization. An optimized way to do this was pursued in [45, 44]. Comprehensive surveys of clustering algorithms in this area, including objective functions, visualization schemes, optimization approaches, presence of constraints, and applications, can be found in [46, 43]. See too [12, 53].
For all these approaches, underpinning them are row and column permutations, that can be expressed in terms of the permutation group, S_n, on n elements.
2.5 Other Miscellaneous Symmetries
As examples of various other local symmetries worthy of consideration in data sets consider subsets of data comprising clusters, and reciprocal nearest neighbor pairs.
Given an observation set, I, we define dissimilarities as the mapping d : I × I → R+. A dissimilarity is a positive, definite, symmetric measure (i.e., d(i, j) ≥ 0; d(i, j) = 0 if i = j; d(i, j) = d(j, i)). If in addition the triangular inequality is satisfied (i.e., d(i, j) ≤ d(i, k) + d(k, j), for all i, j, k ∈ I) then the dissimilarity is a distance.
If I is endowed with a metric, then this metric is mapped onto an ultrametric. In practice, there is no need for I to be endowed with a metric. Instead a dissimilarity is satisfactory.
A hierarchy, H, is defined as a binary, rooted, node-ranked tree, also termed a dendrogram [3, 35, 41, 53]. A hierarchy defines a set of embedded subsets of a given set of n objects, indexed by the set I. That is to say, object i in the object set is denoted {i}, and i ∈ I. These subsets are totally ordered by an index function ν, which is a stronger condition than the partial order required by the subset relation. The index function is represented by the ordinate in Figure 3 (the “height” or “level”). A bijection exists between a hierarchy and an ultrametric space.
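This bijection can be illustrated directly: given the node-ranked hierarchy of Figure 3 (merge levels rounded, as in Table 2), the induced ultrametric sets d(i, j) to the lowest level at which i and j first co-occur in a cluster. A minimal Python sketch (the data layout is ours):

```python
# Successive agglomerations (cluster members, level), read off Figure 3.
merges = [
    ({"iris3", "iris4"}, 0.2449),
    ({"iris2", "iris3", "iris4"}, 0.3317),
    ({"iris1", "iris2", "iris3", "iris4"}, 0.6481),
    ({"iris5", "iris6"}, 0.6164),
    ({"iris5", "iris6", "iris7"}, 0.9950),
    ({"iris1", "iris2", "iris3", "iris4", "iris5", "iris6", "iris7"}, 1.1662),
]

def ultrametric(i, j):
    """Level of the smallest cluster containing both i and j (0 if identical)."""
    if i == j:
        return 0.0
    return min(level for members, level in merges if i in members and j in members)
```

Reading off the tree in this way reproduces the values of Table 2, e.g. d(iris1, iris5) = 1.1662, the root level.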
Often in this article we will refer interchangeably to the object set and the associated set of indices, I.
3 Generalized Ultrametric
In this section, we consider an ultrametric defined on the power set or join semilattice. Comprehensive background on ordered sets and lattices can be found in [10]. A review of generalized distances and ultrametrics can be found in [72].
3.1 Link with Formal Concept Analysis
Typically hierarchical clustering is based on a distance (which can often be relaxed to a dissimilarity, not respecting the triangular inequality, and mutatis mutandis to a similarity), defined on all pairs of the object set: d : I × I → R+. I.e., a distance is a positive real value. Usually we require that a distance cannot be 0-valued unless the objects are identical. That is the traditional approach.
A different form of ultrametrization is achieved from a dissimilarity defined on the power set of attributes characterizing the observations (objects, individuals, etc.). Here we have: d : I × I → 2^J, where J indexes the attribute (variables, characteristics, properties, etc.) set.
This gives rise to a different notion of distance, that maps pairs of objects onto elements of a join semilattice. The latter can represent all subsets of the attribute set, J. That is to say, it can represent the power set, commonly denoted 2^J, of J.
As an example, consider, say, objects characterized by 3 boolean (presence/absence) attributes, shown in Figure 5 (top). Define the dissimilarity between a pair of objects in this table as a set of 3 components, corresponding to the 3 attributes, such that if both components are 0, we have 1; if either component is 1 and the other 0, we have 1; and if both components are 1 we get 0. This is the simple matching coefficient [33]. We could use, e.g., Euclidean distance for each of the values sought; but we prefer to treat 0 values in both components as signaling a 1 contribution. We get then d(a, b) = 1, 1, 0, which we will call d1,d2. Then, d(a, c) = 0, 1, 0, which we will call d2. Etc. With the latter we create lattice nodes as shown in the middle part of Figure 5.
a  1  0  1
b  0  1  1
c  1  0  1
e  1  0  0
f  0  0  1
Potential lattice vertices              Lattice vertices found        Level

            d1,d2,d3                           d1,d2,d3                 3
           /    |    \                          /    \
      d1,d2   d2,d3   d1,d3                d1,d2    d2,d3               2
           \    |    /                          \    /
          d1   d2   d3                            d2                    1
The set d1,d2,d3 corresponds to: d(b, e) and d(e, f).
The subset d1,d2 corresponds to: d(a, b), d(a, f), d(b, c), d(b, f) and d(c, f).
The subset d2,d3 corresponds to: d(a, e) and d(c, e).
The subset d2 corresponds to: d(a, c).
Clusters defined by all pairwise linkage at level ≤ 2: a, b, c, f and a, c, e.
Clusters defined by all pairwise linkage at level ≤ 3: a, b, c, e, f.
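The set-valued dissimilarities of this example are easy to compute. A Python sketch of the mapping d : I × I → 2^J for the table above (attribute names d1, d2, d3 as in the text):

```python
data = {"a": (1, 0, 1), "b": (0, 1, 1), "c": (1, 0, 1),
        "e": (1, 0, 0), "f": (0, 0, 1)}

def d(x, y):
    """Set-valued dissimilarity: attribute k contributes unless both values are 1."""
    return {"d%d" % (k + 1)
            for k, (u, v) in enumerate(zip(data[x], data[y]))
            if not (u == 1 and v == 1)}
```

This reproduces d(a, c) = {d2} and d(b, e) = {d1, d2, d3}; ranking the resulting subsets by cardinality gives the lattice levels shown.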
In Formal Concept Analysis [10, 24], it is the lattice itself which is of primary interest. In [33] there is discussion of, and a range of examples on, the close relationship between the traditional hierarchical cluster analysis based on d : I × I → R+, and hierarchical cluster analysis “based on abstract posets” (a poset is a partially ordered set), based on d : I × I → 2^J. The latter, leading to clustering based on dissimilarities, was developed initially in [32].

3.2 Applications of Generalized Ultrametrics
As noted in the previous subsection, the usual ultrametric is an ultrametric distance, i.e. for a set I, d : I × I → R+. The generalized ultrametric is also consistent with this definition, where the range is a subset of the power set: d : I × I → Γ, where Γ is a partially ordered set. In other words, the generalized ultrametric distance is a set. Some areas of application of generalized ultrametrics will now be discussed.
In the theory of reasoning, a monotonic operator is rigorous application of a succession of conditionals (sometimes called consequence relations). However, negation or multiple-valued logic (i.e. encompassing intermediate truth and falsehood) requires support for non-monotonic reasoning.
Thus [28]: “Once one introduces negation … then certain of the important operators are not monotonic (and therefore not continuous), and in consequence the Knaster-Tarski theorem [i.e. for fixed points; see [10]] is no longer applicable to them. Various ways have been proposed to overcome this problem. One such [approach is to use] syntactic conditions on programs … Another is to consider different operators … The third main solution is to introduce techniques from topology and analysis to augment arguments based on order … [the latter include:] methods based on metrics … on quasimetrics … and finally … on ultrametric spaces.”
Convergence to fixed points that is based on a generalized ultrametric system is precisely the study of spherically complete systems and expansive automorphisms discussed in section 4.3 below. In expansive automorphisms we see here again an example of symmetry at work.
3.3 Example of Application: Chemical Database Matching
In the 1990s, the Ward minimum variance hierarchical clustering method became the method of choice in the chemoinformatics community due to its hierarchical nature and the quality of the clusters produced. Unfortunately the method reached its limits once the pharmaceutical companies tried processing datasets of more than 500,000 compounds due to: the processing requirements of the reciprocal nearest neighbor algorithm; the requirement to hold all chemical structure “fingerprints” in memory to enable random access; and the requirement that parallel implementation use a shared-memory architecture. Let us look at an alternative hierarchical clustering algorithm that bypasses these computational difficulties.
A direct application of generalized ultrametrics to data mining is the following. The potentially huge advantage of the generalized ultrametric is that it allows a hierarchy to be read directly off the input data, and bypasses the consideration of all pairwise distances in agglomerative hierarchical clustering. In [62] we study application to chemoinformatics. Proximity and best match finding is an essential operation in this field. Typically we have one million chemicals upwards, characterized by an approximately 1000-valued attribute encoding.
Consider first our need to normalize the data. We divide each boolean (presence/absence) value by its corresponding column sum.
We can consider the hierarchical cluster analysis from abstract posets as based on d : I × I → 2^J. In [33], the median of the distance values is used, as input to a traditional hierarchical clustering, with alternative schemes discussed. See also [32] for an early elaboration of this approach.
Let us now proceed to take a particular approach to this, which has very convincing computational benefits.
3.3.1 Ultrametrization through Baire Space Embedding: Notation
A Baire space [42] consists of countably infinite sequences with a metric defined in terms of the longest common prefix: the longer the common prefix, the closer a pair of sequences. The Baire metric, and simultaneously ultrametric, will be defined in definition 1 in the next subsection. What is of interest to us here is this longest common prefix metric, which additionally is an ultrametric. The longest common prefixes at issue here are those of precision of any value x_ij (i.e., for chemical compound i, and chemical structure code j). Consider two such values, x_ij and y_i′j, which, when the context easily allows it, we will call x and y. Each are of some precision, and we take the integer |K| to be the maximum precision. We pad a value with 0s if necessary, so that all values are of the same precision. Finally, we will assume for convenience that each value lies in the interval [0, 1), and this can be arranged by normalization.

3.3.2 The Case of One Attribute
Thus we consider ordered sets x_k and y_k for k ∈ K. In line with our notation, we can write x_K and y_K for these numbers, with the set K now ordered. (So, k = 1 is the first decimal place of precision; k = 2 is the second decimal place; …; k = |K| is the |K|th decimal place.) The cardinality of the set K is the precision with which a number, x_K, is measured. Without loss of generality, through normalization, we will take all x_K, y_K ∈ [0, 1). We will also consider decimal numbers, only, in this article (hence x_k ∈ {0, 1, 2, …, 9} for all numbers x_K, and for all digits k), again with no loss of generality to non-decimal number representations.
Consider as examples a pair of numbers x_K and y_K, each of precision |K| = 3, agreeing in their first two digits but not their third. For k = 1, we find x_k = y_k. For k = 2, x_k = y_k. But for k = 3, x_k ≠ y_k.
We now introduce the following distance:

B(x_K, y_K) = inf { 2^-n : x_k = y_k for 1 ≤ k ≤ n }   (1)

So for x_K and y_K as in the example above we have B(x_K, y_K) = 2^-2.
The Baire distance is used in denotational semantics, where one considers x_K and y_K as words (of equal length, in the finite case), and then this distance is defined from a common length prefix, or left substring, in the two words. For a set of words, a prefix tree can be built to expedite word matching, and the Baire distance derived from this tree.
We have 0 < B(x_K, y_K) ≤ 1. Identical x_K and y_K have Baire distance equal to 2^-|K|. The Baire distance is a 1-bounded ultrametric.
The Baire ultrametric defines a hierarchy, which can be expressed as a multiway tree, on a set of numbers, x_IK. So the number x_iK, indexed by i, i ∈ I, is of precision |K|. It is actually simple to determine this hierarchy. The partition at level k = 1 has clusters defined as all those numbers indexed by i that share the same 1st digit. The partition at level k = 2 has clusters defined as all those numbers indexed by i that share the same 2nd digit; and so on, until we reach k = |K|. A strictly finer, or identical, partition is to be found at each successive level (since once a pair of numbers becomes dissimilar, x_k ≠ y_k, this non-zero distance cannot be reversed). Identical numbers at level k = 1 have distance ≤ 2^-1. Identical numbers at level k = 2 have distance ≤ 2^-2. Identical numbers at level k = 3 have distance ≤ 2^-3; and so on, to level k = |K|, when distance = 2^-|K|.
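The Baire distance and the induced partitions can be sketched in Python. We work with numbers in [0, 1) written to |K| decimal digits; the formatting convention and sample values below are ours, for illustration.

```python
def digits(x, K):
    """The K decimal digits of x in [0, 1), as a string (0-padded)."""
    return f"{x:.{K}f}"[2:]

def baire(x, y, K):
    """Baire distance 2**-n, n the longest common digit prefix; 2**-K if identical."""
    xs, ys = digits(x, K), digits(y, K)
    if xs == ys:
        return 2.0 ** -K
    n = 0
    while xs[n] == ys[n]:
        n += 1
    return 2.0 ** -n

def partitions(values, K):
    """Partition at each level k = 1..K: values sharing their first k digits."""
    result = []
    for k in range(1, K + 1):
        groups = {}
        for v in values:
            groups.setdefault(digits(v, K)[:k], []).append(v)
        result.append(sorted(groups.values()))
    return result
```

For example, baire(0.478, 0.472, 3) returns 2^-2 = 0.25, since the two numbers agree on two leading digits; and the partitions returned for successive k are strictly finer or identical, as stated above.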
3.3.3 Analysis: Baire Ultrametrization from Numerical Precision
In this section we use (i) a random projection of vectors into a 1-dimensional space (so each chemical structure is mapped onto a scalar value, by design in the interval [0, 1)) followed by (ii) implicit use of a prefix tree constructed on the digits of the set of scalar values. First we will look at this procedure. Then we will return to discuss its properties.

We seek all i, i′ such that, for all j, x_ij = x_i′j to fixed precision.

Recall that K is an ordered set. We impose a user-specified upper limit on precision, |K|.

Now rather than |J| separate tests for equality, a sufficient condition is that ∑_j w_j x_ij = ∑_j w_j x_i′j for a set of weights w_j. What helps in making this sufficient condition for equality work well in practice is that many of the x_ij values are 0: cf. the approximate 8% matrix occupancy rate that holds here. We experimented with a number of such weighting schemes. A first principal component would allow for the definition of the least squares optimal linear fit of the projections. The best choice of w_j values we found, for uniformly distributed values in (0, 1): for each j, w_j ∼ U(0, 1).

Sig. dig.  No. clusters

4  6591 
4  6507 
4  5735 
3  6481 
3  6402 
3  5360 
2  2519 
2  2576 
2  2135 
1  138 
1  148 
1  167 
Table 3 shows, in immediate succession, results for three data sets. The normalizing column sums were calculated and applied independently to each of the three data sets. Insofar as a normalizing column sum is directly proportional, whether calculated on 7500 chemical structures or 1.2 million, this leads to a constant of proportionality, only, between the two cases. As noted, a random projection was used. Finally, identical projected values were read off, to determine clusters.
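The projection-and-hash step can be sketched as follows in Python (a toy illustration with assumed uniform weights w_j ∼ U(0, 1), not the production code of [62]): rows whose random projections agree to the stated number of significant digits are binned into the same cluster.

```python
import random

def baire_cluster(X, digits, seed=0):
    """Bin rows of X by their random projection, written to `digits` decimal digits."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(len(X[0]))]   # weights w_j ~ U(0,1)
    clusters = {}
    for i, row in enumerate(X):
        proj = sum(wj * xj for wj, xj in zip(w, row))
        clusters.setdefault(f"{proj:.{digits}f}", []).append(i)
    return clusters

# Identical rows always collide; lowering `digits` merges nearby rows too.
X = [[0.10, 0.20, 0.0],
     [0.10, 0.20, 0.0],    # duplicate of row 0
     [0.30, 0.00, 0.5]]
```

Reading the clusters off at decreasing precision yields the successively coarser partitions counted in Table 3.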
3.3.4 Discussion: Random Projection and Hashing
Random projection is the finding of a low dimensional embedding of a point set – dimension equals 1, or a line or axis, in this work – such that the distortion of any pair of points is bounded by a function of the lower dimensionality [77]. There is a burgeoning literature in this area, e.g. [16]. While random projection per se will not guarantee a bijection of best match in original and in lower dimensional spaces, our use of projection here is effectively a hashing method ([47] uses MD5 for nearest neighbor search), in order to deliberately find hash collisions – thereby providing a sufficient condition for the mapped vectors to be identical.
Collision of identically valued vectors is guaranteed, but what of collision of nonidentically valued vectors, which we want to avoid?
To prove such a result may require an assumption of what distribution our original data follow. A general class is referred to as a stable distribution [29]: this is a distribution such that a limited number of weighted sums of the variables is also itself of the same distribution. Examples include both Gaussian and longtailed or power law distributions.
Interestingly, however, very high dimensional (or equivalently, very low sample size, i.e. low n) data sets, by virtue of high relative dimensionality alone, have points mostly lying at the vertices of a regular simplex or polygon [55, 27]. This intriguing aspect is one reason, perhaps, why we have found random projection to work well. Another reason is the following: if we work on normalized data, then the values on any two attributes will be small. Hence x_ij and x_i′j are small. Now if the random weight for this attribute is w_j, then the random projections are, respectively, ∑_j w_j x_ij and ∑_j w_j x_i′j. But these terms are dominated by the random weights. We can expect near equal x_ij and x_i′j terms, for all j, to be mapped onto fairly close resultant scalar values.
Further work is required to confirm these hypotheses, viz., that high dimensional data may be highly “regular” or “structured” in such a way; and that, as a consequence, hashing is particularly well-behaved in the sense of non-identical vectors being nearly always collision-free. There is further discussion in [8].
We remark that a prefix tree, or trie, is well-known in the searching and sorting literature [26], and is used to expedite the finding of longest common prefixes. At level one, nodes are associated with the first digit. At level two, nodes are associated with the second digit, and so on through deeper levels of the tree.
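Such a trie makes the level-by-level cluster counts of Table 3 directly readable: the number of nodes at depth k equals the number of clusters at k significant digits. A minimal dict-of-dicts trie in Python (our own sketch, with illustrative digit strings):

```python
def build_trie(words):
    """Insert digit strings into a dict-of-dicts prefix tree."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
    return root

def nodes_at_depth(trie, depth):
    """Number of distinct prefixes of the given length = clusters at that level."""
    if depth == 0:
        return 1
    return sum(nodes_at_depth(child, depth - 1) for child in trie.values())

# For the (equal-length) digit strings 478, 472, 410, 900:
# depth 1 gives 2 clusters, depth 2 gives 3, depth 3 gives 4.
trie = build_trie(["478", "472", "410", "900"])
```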
3.3.5 Simple Clustering Hierarchy from the Baire Space Embedding
The Baire ultrametrization induces a (fairly flat) multiway tree on the given data set.
Consider a partition yielded by identity (over all the attribute set) at a given precision level. Then for precision levels k = 1, 2, …, |K| we have, at each, a partition, such that all member clusters are ordered by reverse embedding (or set inclusion). Call each such sequence of embeddings a chain. The entire data set is covered by a set of such chains. This sequence of partitions is ordered by set inclusion.
The computational time complexity is as follows. Let the number of chemicals be denoted n; the number of attributes is m; and the total number of digits precision is |K|. Consider a particular number of digits precision, k, where 1 ≤ k ≤ |K|. Then the random projection takes O(nm) operations. A sort follows, requiring O(n log n) operations. Then clusters are read off with O(n) operations. Overall, the computational effort is bounded by c1 nm + c2 n log n + c3 n (where c1, c2, c3 are constants), which is equal to O(nm) or O(n log n), whichever is the larger.
Further evaluation and a number of further case studies are covered in [8].
4 Hierarchy in a pAdic Number System
A dendrogram is widely used in hierarchical, agglomerative clustering, and is induced from observed data. In this article, one of our important goals is to show how it lays bare many diverse symmetries in the observed phenomenon represented by the data. By expressing a dendrogram in padic terms, we open up a wide range of possibilities for seeing symmetries and attendant invariants.
4.1 pAdic Encoding of a Dendrogram
We will introduce now the one-to-one mapping of clusters (including singletons) in a dendrogram into a set of p-adically expressed integers (a fortiori, rationals or reals). The field of p-adic numbers is the most important example of ultrametric spaces. Addition and multiplication of p-adic integers (cf. the expansions in subsection 1.4) are well-defined. Inverses exist and no zero-divisors exist.
A terminal-to-root traversal in a dendrogram or binary rooted tree is defined as follows. We use the path x ⊂ q ⊂ q′ ⊂ q″ ⊂ … ⊂ q_root, where x is a given object specifying a given terminal, and q, q′, q″, … are the embedded classes along this path, specifying nodes in the dendrogram. The root node is specified by the class comprising all objects.
A terminal-to-root traversal is the shortest path between the given terminal node and the root node, assuming we preclude repeated traversal (backtrack) of the same path between any two nodes.
By means of terminal-to-root traversals, we define the following p-adic encoding of terminal nodes, and hence objects, in Figure 6.
(2)  
If we choose p = 2 the resulting decimal equivalents could be the same: cf. contributions based on +1·2^1 and −1·2^1 + 1·2^2. Given that the coefficients of the p^j terms are in the set {−1, 0, +1} (implying, for terms corresponding to nodes not on a given path, coefficient 0), the coding based on p = 3 is required to avoid ambiguity among decimal equivalents.
A few general remarks on this encoding follow. For the labeled ranked binary trees that we are considering (for discussion of combinatorial properties based on labeled, ranked and binary trees, see [52]), we require the labels and for the two branches at any node. Of course we could interchange these labels, and have these and labels reversed at any node. By doing so we will have different padic codes for the objects, .
The following properties hold: (i) Unique encoding: the decimal codes for each (lexicographically ordered) are unique for ; and (ii) Reversibility: the dendrogram can be uniquely reconstructed from any such set of unique codes.
The p-adic encoding defined for any object set can be expressed as follows for any object associated with a terminal node:
(3) 
In greater detail we have:
(4) 
Here is the level or rank (root: ; terminal: 1), and is an object index.
In our example we have used: for a left branch (in the sense of Figure 6), for a right branch, and when the node is not on the path from that particular terminal to the root.
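The encoding can be sketched in code if we make the elided conventions explicit as assumptions: coefficient +1 for a left branch, -1 for a right branch, 0 when a node is not on the terminal-to-root path, and a term p^j for the node at level j. The merge-list tree representation and the choice p = 2 are likewise illustrative.

```python
# Sketch of the p-adic encoding of a dendrogram's terminal nodes.
# Assumed convention (the coefficient values are elided in the text):
# +1 for a left branch, -1 for a right branch, 0 when the node at a
# given level is not on the terminal-to-root path; code = sum c_j * p**j.

def padic_codes(merges, n_terminals, p=2):
    """merges[j-1] = (left_set, right_set) for the node at level j,
    listed bottom-up; each side is the set of terminals it contains."""
    codes = {}
    for i in range(1, n_terminals + 1):
        code = 0
        for j, (left, right) in enumerate(merges, start=1):
            if i in left:
                code += p ** j       # c_j = +1 : left branch
            elif i in right:
                code -= p ** j       # c_j = -1 : right branch
            # else c_j = 0 : node at level j is not on this path
        codes[i] = code
    return codes

# Four terminals: (1,2) merged at level 1, (3,4) at level 2,
# and the two resulting clusters joined at the root, level 3.
merges = [({1}, {2}), ({3}, {4}), ({1, 2}, {3, 4})]
print(padic_codes(merges, 4))   # {1: 10, 2: 6, 3: -4, 4: -12}
```

The resulting decimal equivalents are distinct, illustrating the unique encoding property (i), and the signed coefficients at each level suffice to rebuild the merge structure, illustrating reversibility (ii).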
A matrix form of this encoding is as follows, where denotes the transpose of the vector.
Let be the column vector .
Let be the column vector .
Define a characteristic matrix of the branching codes, and , and an absent or nonexistent branching given by , as a set of values where , the indices of the object set; and , the indices of the dendrogram levels or nodes ordered increasingly. For Figure 6 we therefore have:
(5) 
For given level , , the absolute values give the membership function either by node, , which is therefore read off columnwise; or by object index, , which is therefore read off rowwise.
(6) 
Here, x is the decimal encoding, is the matrix with dendrogram branching codes (cf. example shown in expression (5)), and p is the vector of powers of a fixed integer (usually, more restrictively, fixed prime) .
The tree encoding exemplified in Figure 6, and defined with coefficients in equations (3) or (4), (5) or (6), with labels and , was required (as opposed to the choice of 0 and 1, which might have been our first thought) to fully cater for the ranked nodes (i.e. the total order, as opposed to a partial order, on the nodes).
We can consider the objects that we are dealing with to have equivalent integer values. To show that, all we must do is work out the decimal equivalents of the p-adic expressions used above for . As noted in [25], we have equivalence between: a p-adic number; a p-adic expansion; and an element of (the p-adic integers). The coefficients used to specify a p-adic number, [25] notes (p. 69), “must be taken in a set of representatives of the class modulo . The numbers between 0 and are only the most obvious choice for these representatives. There are situations, however, where other choices are expedient.”
We note that the matrix is used in [9]. A somewhat trivial view of how “hierarchical trees can be perfectly scaled in one dimension” (the title and theme of [9]) is that p-adic numbering is feasible, and hence a one-dimensional representation of terminal nodes is easily arranged through expressing each p-adic number by a real number equivalent.
4.2 p-Adic Distance on a Dendrogram
We will now induce a metric topology on the p-adically encoded dendrogram, . It leads to various symmetries, relative to identical norms, for instance, or identical tree distances.
We use the following longest common subsequence, starting at the root: we look for the term in the p-adic codes of the two objects, where is the lowest level such that the values of the coefficients of are equal.
Let us look at the set of padic codes for above (Figure 6 and relations 4.1), to give some examples of this.
For and , we find the term we are looking for to be , and so .
For and , we find the term we are looking for to be , and so .
For and , we find the term we are looking for to be , and so .
This longest common prefix metric is also known as the Baire distance, and has been discussed in section 3.3. In topology the Baire metric is defined on infinite strings [42]. It is more than just a distance: it is an ultrametric, bounded from above by 1, and its infimum is 0, which is relevant for very long sequences, or in the limit for infinite-length sequences. The use of this Baire metric is pursued in [62], based on random projections [77], providing computational benefits over classical hierarchical clustering based on all pairwise distances.
The longest common prefix metric leads directly to a p-adic hierarchical classification (cf. [5]). This is a special case of the “fast” hierarchical clustering discussed in section 3.2.
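A minimal sketch of the longest common prefix (Baire) distance on digit strings follows; the base and the convention d(x, x) = 0 are assumptions, chosen so that the distance is an ultrametric bounded above by 1, as stated above.

```python
# A minimal sketch of the longest-common-prefix (Baire) ultrametric on
# digit strings: d(x, y) = base**(-k), where k is the length of the
# longest common prefix; d(x, x) = 0; and d is bounded above by 1 (k = 0).
# The base is an illustrative assumption.

def baire_distance(x, y, base=2):
    k = 0
    for a, b in zip(x, y):
        if a != b:
            break
        k += 1
    if k == len(x) == len(y):   # identical strings
        return 0.0
    return base ** (-k)

assert baire_distance("4501", "4501") == 0.0
assert baire_distance("4501", "4588") == 0.25   # common prefix "45"
assert baire_distance("4501", "9901") == 1.0    # no common prefix
# Strong triangle (ultrametric) inequality on a small check:
x, y, z = "4501", "4588", "4512"
assert baire_distance(x, z) <= max(baire_distance(x, y), baire_distance(y, z))
```

Sorting strings and grouping by shared prefix, as in the sketch of section 3.2's fast clustering, recovers the hierarchy that this ultrametric induces.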
Besides the longest common prefix metric, there are other related metrics which are simultaneously ultrametrics. In [23], the metric is defined via the integer part of a real number. In [3], for integers we have: where is prime, and order is the exponent (non-negative integer) of in the prime decomposition of an integer. Furthermore let be a series: . ( are the natural numbers.) The order of is the rank of its first non-zero term: order . (The series that is everywhere zero is of order infinity.) Then the ultrametric similarity between series is: .
4.3 Scale-Related Symmetry
Scale-related symmetry is very important in practice. In this subsection we introduce an operator that provides this symmetry. We also term it a dilation operator, because of its role in the wavelet transform on trees (see section 5.3 below, and [58] for discussion and examples). This operator is p-adic multiplication by .
Consider the set of objects with its p-adic coding considered above. Take . (Non-uniqueness of corresponding decimal codes is not of concern to us now, and taking this value for is without any loss of generality.) Multiplication of by gives: . Each level has decreased by one, and the lowest level has been lost. Subject to the lowest level of the tree being lost, the form of the tree remains the same. By carrying out this multiplication operation on all objects, it is seen that the effect is to rise in the hierarchy by one level.
Let us call product with the operator . The effect of losing the bottom level of the dendrogram means that either (i) each cluster (possibly singleton) remains the same; or (ii) two clusters are merged. Therefore the application of to all implies a subset relationship between the set of clusters and the result of applying , .
Repeated application of the operator gives , , , . Starting with any singleton, , this gives a path from the terminal to the root node in the tree. Each such path ends with the null element, which we define to be the p-adic encoding corresponding to the root node of the tree. Therefore the intersection of the paths equals the null element.
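The effect of the dilation operator can be sketched as follows, again under the assumed +1/-1/0 coefficient convention: each application drops the lowest level, objects whose codes become identical correspond to clusters merged one level up, and repeated application reaches the null element (here, the empty code) at the root.

```python
# Sketch of the dilation operator on p-adic dendrogram codes, under the
# assumed coefficient convention +1 / -1 / 0 per level. Each application
# drops the lowest level; codes that become equal correspond to clusters
# that have been merged one level up in the hierarchy.

def dilate(code):
    """code: dict mapping level -> coefficient (+1 or -1). Drop level 1
    and shift every remaining level down by one; the empty dict plays
    the role of the null element at the root."""
    return {lvl - 1: c for lvl, c in code.items() if lvl > 1}

# Codes for 4 terminals of a small dendrogram: (1,2) merge at level 1,
# (3,4) at level 2, and the root is at level 3.
codes = {
    1: {1: +1, 3: +1},
    2: {1: -1, 3: +1},
    3: {2: +1, 3: -1},
    4: {2: -1, 3: -1},
}
once = {i: dilate(c) for i, c in codes.items()}
print(once[1] == once[2])   # True: the level-1 distinction of 1, 2 is lost
print(once[3] == once[4])   # False: 3 and 4 only merge one level higher
```

A third application sends every code to the empty dict, the null element, illustrating that the intersection of all terminal-to-root paths is the root encoding.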
Benedetto and Benedetto [1, 2] discuss as an expansive automorphism of , i.e. form-preserving and locally expansive. Some implications [1] of the expansive automorphism follow. For any , let us take as a sequence of open subgroups of , with , and . This is termed an inductive sequence of , and itself is the inductive limit ([68], p. 131).
5 Tree Symmetries through the Wreath Product Group
In this section the wreath product group, used up to now in the literature as a framework for tree structuring of image or other signal data, is used on a 2-way tree or dendrogram data structure. An example of wreath product invariance is provided by the wavelet transform of such a tree.
5.1 Wreath Product Group Corresponding to a Hierarchical Clustering
A dendrogram like that shown in Figure 6 is invariant, as a representation or structuring of a data set, relative to rotation (alternatively, here: permutation) of left and right child nodes. These rotation (or permutation) symmetries are defined by the wreath product group (see [20, 21, 18] for an introduction and applications in signal and image processing), and can be used with any m-ary tree, although we will treat the binary or 2-way case here.
For the group actions, with respect to which we will seek invariance, we consider independent cyclic shifts of the subnodes of a given node (hence, at each level). Equivalently these actions are adjacency preserving permutations of subnodes of a given node (i.e., for given , with , the permutations of ). We have therefore cyclic group actions at each node, where the cyclic group is of order 2.
The symmetries of are given by structured permutations of the terminals. The terminals will be denoted here by Term . The full group of symmetries is summarized by the following generative algorithm:
1. For level down to 1 do:
2. Select node , a node at level .
3. Permute the subnodes of .
Subnode is the root of subtree . We denote simply by . For a subnode undergoing a relocation action in step 3, the internal structure of subtree is not altered.
The algorithm described defines the automorphism group which is a wreath product of the symmetric group. Denote the permutation at level by . Then the automorphism group is given by:
where wr denotes the wreath product.
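The group actions just described, independent swaps of the two subnodes at each internal node, can be made concrete by enumerating the terminal orderings they generate; the nested-tuple tree representation below is an illustrative assumption.

```python
# Sketch: the group actions above are independent left/right swaps at
# each internal node of a binary dendrogram. Enumerating all combinations
# of swaps yields the terminal orderings that are equivalent under the
# wreath product group: 2**(number of internal nodes) of them.

from itertools import product

def orderings(tree):
    """tree: a terminal label, or a (left, right) pair of subtrees.
    Return all terminal orderings reachable by subnode permutations."""
    if not isinstance(tree, tuple):
        return [[tree]]
    results = []
    for l, r in product(orderings(tree[0]), orderings(tree[1])):
        results.append(l + r)     # keep the two subnodes in place
        results.append(r + l)     # swap the two subnodes
    return results

# Dendrogram ((1,2),(3,4)): 3 internal nodes -> 2**3 = 8 orderings.
tree = ((1, 2), (3, 4))
print(len(orderings(tree)))     # 8
```

Each of the 8 orderings draws the same hierarchy, which is precisely the invariance of the dendrogram representation under the wreath product group actions.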
5.2 Wreath Product Invariance
Call Term the terminals that descend from the node at level . So these are the terminals of the subtree with its root node at level . We can alternatively call Term the cluster associated with level .
We will now look at shift invariance under the group action. This amounts to the requirement for a constant function defined on Term . A convenient way to do this is to define such a function on the set Term via the root node alone, . By definition then we have a constant function on the set Term .
Let us call a space of functions that are constant on Term . That is to say, the functions are constant in clusters that are defined by the subset of objects. Possibilities for that were considered in [58] are:
- a basis vector with components, with 0 values except for value 1 for component ;
- a set (of cardinality ) of dimensional observation vectors.
Consider the resolution scheme arising from moving from Term , Term to Term . From the hierarchical clustering point of view it is clear what this represents: simply, an agglomeration of two clusters called Term and Term , replacing them with a new cluster, Term .
Let the spaces of functions that are constant on subsets corresponding to the two cluster agglomerands be denoted and . These two clusters are disjoint initially, which motivates our taking the two spaces as a couple: .
5.3 Example of Wreath Product Invariance: Haar Wavelet Transform of a Dendrogram
Let us exemplify a case that satisfies all that has been defined in the context of the wreath product invariance that we are targeting. It is the algorithm discussed in depth in [58]. Take the constant function from to be . Take the constant function from to be . Then define the constant function, the scaling function, in to be . Next define the zero mean function, , the wavelet function, as follows:
in the support interval of , i.e. Term , and
in the support interval of , i.e. Term .
Since , we have the zero mean requirement.
We now illustrate the Haar wavelet transform of a dendrogram with a case study.
The discrete wavelet transform is a decomposition of data into spatial and frequency components. In terms of a dendrogram these components are with respect to, respectively, within and between clusters of successive partitions. We show how this works taking the data of Table 4.
Sepal.L  Sepal.W  Petal.L  Petal.W  

1  5.1  3.5  1.4  0.2 
2  4.9  3.0  1.4  0.2 
3  4.7  3.2  1.3  0.2 
4  4.6  3.1  1.5  0.2 
5  5.0  3.6  1.4  0.2 
6  5.4  3.9  1.7  0.4 
7  4.6  3.4  1.4  0.3 
8  5.0  3.4  1.5  0.2 
The hierarchy built on the 8 observations of Table 4 is shown in Figure 7. Here we note the associations of irises 1 through 8 as, respectively: .
Something more is shown in Figure 7, namely the detail signals (denoted ) and overall smooth (denoted ), which are determined in carrying out the wavelet transform, the so-called forward transform.
The inverse transform is then determined from Figure 7 in the following way. Consider the observation vector . Then this vector is reconstructed exactly by reading the tree from the root: . Similarly a path from root to terminal is used to reconstruct any other observation. If is a vector of dimensionality , then so also are and , as well as all other detail signals.
s7  d7  d6  d5  d4  d3  d2  d1  

Sepal.L  5.146875  0.253125  0.13125  0.1375  0.05  0.05  
Sepal.W  3.603125  0.296875  0.16875  0.125  0.05  
Petal.L  1.562500  0.137500  0.02500  0.0000  0.000  0.050  0.00  
Petal.W  0.306250  0.093750  0.050  0.00  0.000  0.00 
This procedure is the same as the Haar wavelet transform, only applied to the dendrogram and using the input data.
This wavelet transform for the data in Table 4, based on the “key” or intermediary hierarchy of Figure 7, is shown in Table 5.
Wavelet regression entails setting small and hence unimportant detail coefficients to 0 before applying the inverse wavelet transform. More discussion can be found in [58].
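The forward and inverse transforms described above can be sketched under one consistent convention, which is our assumption rather than necessarily the exact normalization of [58]: at each agglomeration the smooth is the mean of the two merged cluster vectors and the detail is half their difference, so that each child is recovered as smooth plus or minus detail. The node identifiers and merge list are illustrative; the data are the first four observations (Sepal.L, Sepal.W) of Table 4.

```python
# A sketch of the forward and inverse Haar dendrogram transform in the
# spirit of [58]. Assumed convention: at each agglomeration the smooth
# is the mean of the two merged cluster vectors, and the detail is half
# their difference, so that child = smooth +/- detail reconstructs exactly.

def forward(merges, data):
    """merges: list of (left_id, right_id, new_id), bottom-up; data maps
    terminal ids to vectors. Returns cluster smooths and node details."""
    s = {k: list(v) for k, v in data.items()}
    d = {}
    for left, right, new in merges:
        s[new] = [(a + b) / 2 for a, b in zip(s[left], s[right])]
        d[new] = [(a - b) / 2 for a, b in zip(s[left], s[right])]
    return s, d

def reconstruct(path, s, d):
    """path: (root_id, [(node_id, sign), ...]) from root to a terminal,
    with sign +1 for the left child and -1 for the right child."""
    v = list(s[path[0]])
    for node, sign in path[1]:
        v = [a + sign * b for a, b in zip(v, d[node])]
    return v

data = {1: [5.1, 3.5], 2: [4.9, 3.0], 3: [4.7, 3.2], 4: [4.6, 3.1]}
merges = [(1, 2, 5), (3, 4, 6), (5, 6, 7)]     # node 7 is the root
s, d = forward(merges, data)
# Terminal 1 is the left child of node 5, itself the left child of root 7:
print(reconstruct((7, [(7, +1), (5, +1)]), s, d))   # approximately [5.1, 3.5]
```

Reading the tree from the root and adding signed details along the path reproduces any observation exactly, which is the inverse transform described above; wavelet regression would simply zero small details before this reconstruction.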
6 Remarkable Symmetries in Very High Dimensional Spaces
In the work of [66, 67] it was shown how, as ambient dimensionality increased, distances became more and more ultrametric. That is to say, a hierarchical embedding becomes more and more immediate and direct as dimensionality increases. A better way of quantifying this phenomenon was developed in [55]. What this means is that there is inherent hierarchical structure in high dimensional data spaces.
It was shown experimentally in [66, 67, 55] how points in high dimensional spaces become increasingly equidistant with increase in dimensionality. Both [27] and [13] study Gaussian clouds in very high dimensions. The latter finds that “not only are the points [of a Gaussian cloud in very high dimensional space] on the convex hull, but all reasonablesized subsets span faces of the convex hull. This is wildly different than the behavior that would be expected by traditional lowdimensional thinking”.
That very simple structures come about in very high dimensions is not as trivial as it might appear at first sight. Firstly, even very simple structures (hence with many symmetries) can be used to support fast and perhaps even constant time worst case proximity search [55]. Secondly, as shown in the machine learning framework by [27], there are important implications ensuing from the simple high dimensional structures. Thirdly, [59] shows that very high dimensional clustered data contain symmetries that in fact can be exploited to “read off” the clusters in a computationally efficient way. Fourthly, following [11], what we might want to look for in contexts of considerable symmetry are the “impurities” or small irregularities that detract from the overall dominant picture.
See Table 6, exemplifying the change of topological properties as ambient dimensionality increases. It behoves us to exploit the symmetries that arise when we have to process very high dimensional data.
No. points  Dimen.  Isosc.  Equil.  UM 

Uniform  
100  20  0.10  0.03  0.13 
100  200  0.16  0.20  0.36 
100  2000  0.01  0.83  0.84 
100  20000  0  0.94  0.94 
Hypercube  
100  20  0.14  0.02  0.16 
100  200  0.16  0.21  0.36 
100  2000  0.01  0.86  0.87 
100  20000  0  0.96  0.96 
Gaussian  
100  20  0.12  0.01  0.13 
100  200  0.23  0.14  0.36 
100  2000  0.04  0.77  0.80 
100  20000  0  0.98  0.98 
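The tendency reported in Table 6 can be illustrated (though this is not the exact measurement protocol behind the table) by sampling triples of uniformly distributed points and counting triangles whose two largest sides are nearly equal, i.e. approximately isosceles with small base or equilateral, hence approximately ultrametric. The tolerance and sample sizes are assumptions.

```python
# Illustrative check (not the protocol behind Table 6): sample point
# triples in increasing ambient dimension and measure the fraction of
# triangles whose two largest sides are (almost) equal, i.e. that are
# approximately ultrametric. Tolerance and sample sizes are assumptions.

import math
import random

def ultrametric_fraction(n_points=30, dim=100, n_triples=300, tol=0.05, seed=1):
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    hits = 0
    for _ in range(n_triples):
        a, b, c = rng.sample(pts, 3)
        d = sorted([dist(a, b), dist(b, c), dist(a, c)])
        # ultrametric triangle: the two largest sides nearly equal
        if (d[2] - d[1]) / d[2] < tol:
            hits += 1
    return hits / n_triples

print(ultrametric_fraction(dim=20), ultrametric_fraction(dim=1000))
```

Because pairwise distances concentrate as the dimension grows, the high-dimensional fraction is markedly larger, mirroring the trend of the UM column in Table 6.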
6.1 Application to Very High Frequency Data Analysis: Segmenting a Financial Signal
We use financial futures, circa March 2007, denominated in euros from the DAX exchange. Our data stream is at the millisecond rate, and comprises about 382,860 records. Each record includes: 5 bid and 5 asking prices, together with bid and asking sizes in all cases, and action. We extracted one symbol (commodity) with 95,011 single bid values, on which we now report results. See Figure 8.
Embeddings were defined as follows.
- Windows of 100 successive values, starting at time steps: 1, 1000, 2000, 3000, 4000, …, 94000.
- Windows of 1000 successive values, starting at time steps: 1, 1000, 2000, 3000, 4000, …, 94000.
- Windows of 10000 successive values, starting at time steps: 1, 1000, 2000, 3000, 4000, …, 85000.
The histograms of distances between these windows, or embeddings, in respectively spaces of dimension 100, 1000 and 10000, are shown in Figure 9.
Note how the 10000-length window case results in points that are strongly overlapping. In fact, we can say that 90% of the values in each window are overlapping with the next window. Notwithstanding this major overlap among the clusters involved in the pairwise distances, if we can still find clusters in the data then we have a very versatile way of tackling the clustering objective. Because of the greater cluster concentration that we expect (cf. Table 6) from a greater embedding dimension, we use the 86 points in 10000-dimensional space, notwithstanding the fact that these points are from overlapping clusters.
We make the following supposition based on Figure 8: the clusters will consist of successive values, and hence will be justifiably termed segments.
From the histogram of distances shown at the bottom of Figure 9, we determine the best number of clusters (effectively, histogram peaks) by fitting a Gaussian mixture model and using the Bayesian information criterion (BIC) as an approximate Bayes factor for model selection [36, 64, 71]. Figure 10 shows the succession of outcomes, and indicates a 5-Gaussian fit as best. For this result, we find the means of the Gaussians to be as follows: 517, 885, 1374, 2273 and 3908. The corresponding standard deviations are: 84, 133, 212, 410 and 663. The respective cardinalities of the 5 histogram peaks are: 358, 1010, 1026, 911 and 350. Note that this relates so far only to the histogram of pairwise distances. We now want to determine the corresponding clusters in the input data.
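The BIC-based selection can be sketched as follows; the criterion is computed as k log n minus twice the maximized log-likelihood, and the model minimizing it is chosen. The log-likelihood values below are purely illustrative placeholders, not the fitted values behind Figure 10; only the total count of 3655 pairwise-distance values (the sum of the peak cardinalities above, i.e. 86 points taken in pairs) is taken from the text.

```python
# A minimal sketch of BIC-based model selection, as used to choose the
# number of Gaussian mixture components. The log-likelihood values are
# illustrative placeholders, not the fitted values of Figure 10.

import math

def bic(log_likelihood, n_params, n_obs):
    # Schwarz criterion: smaller is better.
    return n_params * math.log(n_obs) - 2 * log_likelihood

n_obs = 3655                    # number of pairwise distances: 86*85/2
# A k-component 1-D Gaussian mixture has 3k - 1 free parameters
# (k means, k variances, k - 1 mixing weights).
fits = {3: -26500.0, 4: -26210.0, 5: -26090.0, 6: -26085.0}   # placeholders
scores = {k: bic(ll, 3 * k - 1, n_obs) for k, ll in fits.items()}
best = min(scores, key=scores.get)
print(best)   # -> 5
```

The pattern is characteristic: the log-likelihood always improves with more components, but beyond the supported number of peaks the improvement no longer pays for the extra parameters, and BIC turns back up.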
While we have the segmentation of the distance histogram, we need the segmentation of the original financial signal. If we had 2 clusters in the original financial signal, then we could expect up to 3 peaks in the distances histogram (viz., 2 intra-cluster peaks, and 1 inter-cluster peak). If we had 3 clusters in the original financial signal, then we could expect up to 6 peaks in the distances histogram (viz., 3 intra-cluster peaks, and 3 inter-cluster peaks); in general, k clusters give rise to up to k + k(k-1)/2 = k(k+1)/2 peaks. This information is consistent with asserting that the evidence from Figure 10 points to two of these histogram peaks being approximately co-located (alternatively: the distances are approximately the same). We conclude that 3 clusters in the original financial signal is the most consistent number of clusters. We will now determine these.
One possibility is to use principal coordinates analysis (Torgerson’s, Gower’s metric multidimensional scaling) of the pairwise distances. In fact, a 2-dimensional mapping furnishes a very similar pairwise distance histogram to that seen using the full, 10000, dimensionality. The first axis in Figure 11 accounts for 88.4% of the variance, and the second for 5.8%. Note therefore how the scales of the planar representation in Figure 11 point to it being very linear.
Benzécri ([4], chapter 7, section 3.1) discusses the Guttman effect, or Guttman scale, where factors that are not mutually correlated are nonetheless functionally related. When there is a “fundamentally unidimensional underlying phenomenon” (there are multiple such cases here), factors are functions of Legendre polynomials. We can view Figure 11 as consisting of multiple horseshoe shapes. A simple explanation for such shapes is in terms of the constraints imposed by many equal distances when the data vectors are ordered linearly (see [56], pp. 46–47).
Another view of how embedded (hence clustered) data are capable of being well mapped into a unidimensional curve is that of Critchley and Heiser [9]. Critchley and Heiser show one approach to mapping an ultrametric into a linearly or totally ordered metric. We have asserted and then established how hierarchy in some form is relevant for high dimensional data spaces; and then we find a very linear projection in Figure 11. As a consequence we note that the Critchley and Heiser result is especially relevant for high dimensional data analysis.
Knowing that 3 clusters in the original signal are wanted, we could use Figure 11. There are various ways to do so.
We will use an adjacency-constrained agglomerative hierarchical clustering algorithm to find the clusters: see Figure 12. The contiguity-constrained complete link criterion is our only choice here if we are to be sure that no inversions can come about in the hierarchy, as explained in [53]. As input, we use the coordinates in Figure 11. The 2-dimensional representation of Figure 11 relates to over 94% of the variance. The most complete basis was of dimensionality 85. We checked the results of the 85-dimensionality embedding which, as noted below, gave very similar results.
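A minimal sketch of sequence-constrained complete-link agglomeration follows: only temporally adjacent clusters may merge, and cluster dissimilarity is the maximum pairwise distance. This is an illustration of the principle on a toy 1-D signal, not the implementation of [53].

```python
# Sketch of contiguity-constrained (sequence-constrained) complete-link
# agglomerative clustering: only temporally adjacent segments may merge,
# and segment dissimilarity is the maximum pairwise distance between
# their members. The 1-D signal values are toy data.

def constrained_complete_link(points, n_clusters):
    clusters = [[p] for p in points]          # contiguous by construction
    def d(a, b):                              # complete-link criterion
        return max(abs(x - y) for x in a for y in b)
    while len(clusters) > n_clusters:
        # only adjacent pairs are candidate merges
        i = min(range(len(clusters) - 1),
                key=lambda j: d(clusters[j], clusters[j + 1]))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
    return clusters

signal = [1.0, 1.2, 0.9, 5.1, 5.3, 4.8, 5.0, 9.9, 10.2]
print(constrained_complete_link(signal, 3))
```

Because merges respect temporal adjacency, the resulting clusters are guaranteed to be segments of the signal, and the complete-link criterion bounds each segment's maximum internal spread.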
Reading off the 3-cluster memberships from Figure 12 gives, for the signal actually used (with a short initial segment and a short final segment deleted): cluster 1 corresponds to signal values 1000 to 33999 (points 1 to 33 in Figure 12); cluster 2 corresponds to signal values 34000 to 74999 (points 34 to 74 in Figure 12); and cluster 3 corresponds to signal values 75000 to 86999 (points 75 to 86 in Figure 12). This allows us to segment the original time series: see Figure 13. (The clustering of the 85-dimensional embedding differs minimally. Segments are: points 1 to 32; 33 to 73; and 74 to 86.)
To summarize what has been done:
- the segmentation is initially guided by the peak-finding in the histogram of distances;
- with high dimensionality we expect simple structure in a low-dimensional mapping provided by principal coordinates analysis;
- either the original high-dimensional data or the principal coordinates analysis embedding is used as input to a sequence-constrained clustering method in order to determine the clusters;
- the clusters can then be displayed on the original data.
In this case, the clusters are defined using a complete link criterion, implying that these three clusters are determined by minimizing their maximum internal pairwise distance. This provides a strong measure of signal volatility as an explanation for the clusters, in addition to their average value.
7 Conclusions
Among themes not covered in this article is data stream clustering. To provide background and motivation, in [60] we discuss permutation representations of a data stream. Since hierarchies can also be represented as permutations, there is a ready way to associate data streams with hierarchies. In fact, early computational work on hierarchical clustering used permutation representations to great effect (cf. [73]). To analyze data streams in this way, in [57] we develop an approach to ultrametric embedding of time-varying signals, including biomedical, meteorological and financial signals. This work has been pursued in physics by Khrennikov.
Let us now wrap up on the exciting perspectives opened up by our work on the theme of symmetry-finding through hierarchy in very large data collections.
“My thesis has been that one path to the construction of a nontrivial theory of complex systems is by way of a theory of hierarchy.” Thus Simon ([74], p. 216). We have noted symmetry in many guises in the representations used, in the transformations applied, and in the transformed outputs. These symmetries are nontrivial too, in a way that would not be the case were we simply to look at classes of a partition and claim that cluster members were mutually similar in some way. We have seen how the padic or ultrametric framework provides significant focus and commonality of viewpoint.
Furthermore we have highlighted the computational scaling properties of our algorithms. They are fully capable of addressing the data and information deluge that we face, and of providing us with the best interpretative and decision-making tools. The full elaboration of this last point is to be sought in each and every application domain, face to face with old and new problems.
In seeking (in a general way) and in determining (in a focused way) structure and regularity in massive data stores, we see that, in line with the insights and achievements of Klein, Weyl and Wigner, in data mining and data analysis we seek and determine symmetries in the data that express observed and measured reality.
References
 [1] J.J. Benedetto and R.L. Benedetto. A wavelet theory for local fields and related groups. The Journal of Geometric Analysis, 14:423–456, 2004.
 [2] R.L. Benedetto. Examples of wavelets for local fields. In C. Heil, P. Jorgensen, and D. Larson, editors, Wavelets, Frames, and Operator Theory, Contemporary Mathematics Vol. 345, pages 27–47. 2004.
 [3] J.P. Benzécri. L’Analyse des Données. Tome I. Taxinomie. Dunod, Paris, 2nd edition, 1979.
 [4] J.P. Benzécri. L’Analyse des Données. Tome II, Correspondances. Dunod, Paris, 2nd edition, 1979.
 [5] P.E. Bradley. Mumford dendrograms. Computer Journal, 53:393–404, 2010.
 [6] L. Brekke and P.G.O. Freund. p-Adic numbers in physics. Physics Reports, 233:1–66, 1993.
 [7] P. Chakraborty. Looking through newly to the amazing irrationals. Technical report, 2005. arXiv: math.HO/0502049v1.
 [8] P. Contreras. Search and Retrieval in Massive Data Collections. PhD thesis, Royal Holloway, University of London, 2010. Forthcoming.
 [9] F. Critchley and W. Heiser. Hierarchical trees can be perfectly scaled in one dimension. Journal of Classification, 5:5–20, 1988.
 [10] B.A. Davey and H.A. Priestley. Introduction to Lattices and Order. Cambridge University Press, 2nd edition, 2002.
 [11] F. Delon. Espaces ultramétriques. Journal of Symbolic Logic, 49:405–502, 1984.
 [12] S.B. Deutsch and J.J. Martin. An ordering algorithm for analysis of data arrays. Operations Research, 19:1350–1362, 1971.
 [13] D.L. Donoho and J. Tanner. Neighborliness of randomly projected simplices in high dimensions. Proceedings of the National Academy of Sciences, 102:9452–9457, 2005.
 [14] B. Dragovich and A. Dragovich. p-Adic modelling of the genome and the genetic code. Computer Journal, 53:432–442, 2010.
 [15] B. Dragovich, A.Yu. Khrennikov, S.V. Kozyrev, and I.V. Volovich. On p-adic mathematical physics. p-Adic Numbers, Ultrametric Analysis, and Applications, 1:1–17, 2009.
 [16] D. Dutta, R. Guha, P. Jurs, and T. Chen. Scalable partitioning and exploration of chemical spaces using geometric hashing. Journal of Chemical Information and Modeling, 46:321–333, 2006.
 [17] R.A. Fisher. The use of multiple measurements in taxonomic problems. The Annals of Eugenics, pages 179–188, 1936.
 [18] R. Foote. An algebraic approach to multiresolution analysis. Transactions of the American Mathematical Society, 357:5031–5050, 2005.
 [19] R. Foote. Mathematics and complex systems. Science, 318:410–412, 2007.
 [20] R. Foote, G. Mirchandani, D. Rockmore, D. Healy, and T. Olson. A wreath product group approach to signal and image processing: Part I – multiresolution analysis. IEEE Transactions on Signal Processing, 48:102–132, 2000.
 [21] R. Foote, G. Mirchandani, D. Rockmore, D. Healy, and T. Olson. A wreath product group approach to signal and image processing: Part II – convolution, correlations and applications. IEEE Transactions on Signal Processing, 48:749–767, 2000.
 [22] P.G.O. Freund. p-Adic strings and their applications. In B. Dragovich, A. Khrennikov, Z. Rakic, and I. Volovich, editors, Proc. 2nd International Conference on p-Adic Mathematical Physics, pages 65–73. American Institute of Physics, 2006.
 [23] L. Gajić. On ultrametric space. Novi Sad Journal of Mathematics, 31:69–71, 2001.
 [24] B. Ganter and R. Wille. Formal Concept Analysis: Mathematical Foundations. Springer, 1999. Formale Begriffsanalyse. Mathematische Grundlagen, Springer, 1996.
 [25] F.Q. Gouvêa. p-Adic Numbers: An Introduction. Springer, 2003.
 [26] D. Gusfield. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, 1997.
 [27] P. Hall, J.S. Marron, and A. Neeman. Geometric representation of high dimensional, low sample size data. Journal of the Royal Statistical Society B, 67:427–444, 2005.

 [28] P. Hitzler and A.K. Seda. The fixed-point theorems of Priess-Crampe and Ribenboim in logic programming. Fields Institute Communications, 32:219–235, 2002.
 [29] P. Indyk, A. Andoni, M. Datar, N. Immorlica, and V. Mirrokni. Locality-sensitive hashing using stable distributions. In T. Darrell, P. Indyk, and G. Shakhnarovich, editors, Nearest Neighbor Methods in Learning and Vision: Theory and Practice, pages 61–72. MIT Press, 2006.
 [30] A.K. Jain and R.C. Dubes. Algorithms for Clustering Data. Prentice-Hall, 1988.
 [31] A.K. Jain, M.N. Murty, and P.J. Flynn. Data clustering: a review. ACM Computing Surveys, 31:264–323, 1999.
 [32] M.F. Janowitz. An order theoretic model for cluster analysis. SIAM Journal on Applied Mathematics, 34:55–72, 1978.
 [33] M.F. Janowitz. Cluster analysis based on abstract posets. Technical report, 2005–2006. http://dimax.rutgers.edu/melj.
 [34] M. Jansen, G.P. Nason, and B.W. Silverman. Multiscale methods for data on graphs and irregular multidimensional situations. Journal of the Royal Statistical Society B, 71:97–126, 2009.
 [35] S.C. Johnson. Hierarchical clustering schemes. Psychometrika, 32:241–254, 1967.
 [36] R.E. Kass and A.E. Raftery. Bayes factors and model uncertainty. Journal of the American Statistical Association, 90:773–795, 1995.
 [37] A.Yu. Khrennikov. Gene expression from polynomial dynamics in the 2-adic information space. Technical report, 2006. arXiv:q-bio/06110682v2.
 [38] S. V. Kozyrev. Wavelet theory as padic spectral analysis. Izvestiya: Mathematics, 66:367–376, 2002.
 [39] S. V. Kozyrev. Wavelets and spectral analysis of ultrametric pseudodifferential operators. Sbornik: Mathematics, 198:97–116, 2007.
 [40] M. Krasner. Nombres semi-réels et espaces ultramétriques. Comptes Rendus de l’Académie des Sciences, Tome II, 219:433, 1944.
 [41] I.C. Lerman. Classification et Analyse Ordinale des Données. Dunod, Paris, 1981.
 [42] A. Levy. Basic Set Theory. Dover, Mineola, NY, 2002. (Springer, 1979).
 [43] S.C. Madeira and A.L. Oliveira. Biclustering algorithms for biological data analysis: a survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 1:24–45, 2004.
 [44] S.T. March. Techniques for structuring database records. Computing Surveys, 15:45–79, 1983.
 [45] W.T. McCormick, P.J. Schweitzer, and T.J. White. Problem decomposition and data reorganization by a clustering technique. Operations Research, 20:993–1009, 1982.
 [46] I. Van Mechelen, H.H. Bock, and P. De Boeck. Two-mode clustering methods: a structured overview. Statistical Methods in Medical Research, 13:363–394, 2004.
 [47] M.L. Miller, M.A. Rodriguez, and I.J. Cox. Audio fingerprinting: nearest neighbor search in high dimensional binary spaces. Journal of VLSI Signal Processing, 41:285–291, 2005.
 [48] B. Mirkin. Mathematical Classification and Clustering. Kluwer, 1996.
 [49] B. Mirkin. Clustering for Data Mining. Chapman and Hall/CRC, Boca Raton, FL, 2005.
 [50] F. Murtagh. A survey of recent advances in hierarchical clustering algorithms. Computer Journal, 26:354–359, 1983.
 [51] F. Murtagh. Complexities of hierarchic clustering algorithms: state of the art. Computational Statistics Quarterly, 1:101–113, 1984.
 [52] F. Murtagh. Counting dendrograms: a survey. Discrete Applied Mathematics, 7:191–199, 1984.
 [53] F. Murtagh. Multidimensional Clustering Algorithms. Physica-Verlag, Heidelberg and Vienna, 1985.
 [54] F. Murtagh. Comments on: Parallel algorithms for hierarchical clustering and cluster validity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14:1056–1057, 1992.
 [55] F. Murtagh. On ultrametricity, data coding, and computation. Journal of Classification, 21:167–184, 2004.
 [56] F. Murtagh. Correspondence Analysis and Data Coding with R and Java. Chapman and Hall/CRC Press, 2005.
 [57] F. Murtagh. Identifying the ultrametricity of time series. European Physical Journal B, 43:573–579, 2005.
 [58] F. Murtagh. The Haar wavelet transform of a dendrogram. Journal of Classification, 24:3–32, 2007.
 [59] F. Murtagh. The remarkable simplicity of very high dimensional data: application to modelbased clustering. Journal of Classification, 26:249–277, 2009.
 [60] F. Murtagh. Symmetry in data mining and analysis: a unifying view based on hierarchy. Proceedings of the Steklov Institute of Mathematics, 265:177–198, 2009.
 [61] F. Murtagh. The correspondence analysis platform for uncovering deep structure in data and information (sixth Annual Boole Lecture). Computer Journal, 53:304–315, 2010.
 [62] F. Murtagh, G. Downs, and P. Contreras. Hierarchical clustering of massive, high dimensional data sets by exploiting ultrametric embedding. SIAM Journal on Scientific Computing, 30:707–730, 2008.
 [63] F. Murtagh, J.-L. Starck, and M. Berry. Overcoming the curse of dimensionality in clustering by means of the wavelet transform. Computer Journal, 43:107–120, 2000.
 [64] F. Murtagh and J.-L. Starck. Quantization from Bayes factors with application to multilevel thresholding. Pattern Recognition Letters, 24:2001–2007, 2003.
 [65] A. Ostrowski. Über einige Lösungen der Funktionalgleichung φ(x)·φ(y) = φ(xy). Acta Mathematica, 41:271–284, 1918.
 [66] R. Rammal, J.-C. Anglès d’Auriac, and B. Doucot. On the degree of ultrametricity. Journal de Physique Lettres, 46:L945–L952, 1985.
 [67] R. Rammal, G. Toulouse, and M.A. Virasoro. Ultrametricity for physicists. Reviews of Modern Physics, 58:765–788, 1986.
 [68] H. Reiter and J.D. Stegeman. Classical Harmonic Analysis and Locally Compact Groups. Oxford University Press, Oxford, 2nd edition, 2000.
 [69] A.C.M. Van Rooij. Non-Archimedean Functional Analysis. Marcel Dekker, 1978.
 [70] W.H. Schikhof. Ultrametric Calculus. Cambridge University Press, Cambridge, 1984. (Chapters 18, 19, 20, 21).
 [71] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
 [72] A.K. Seda and P. Hitzler. Generalized distance functions in the theory of computation. Computer Journal, 53:443–464, 2010.
 [73] R. Sibson. SLINK: an optimally efficient algorithm for the single-link cluster method. Computer Journal, 16:30–34, 1973.
 [74] H.A. Simon. The Sciences of the Artificial. MIT Press, Cambridge, MA, 1996.
 [75] D. Steinley. K-means clustering: a half-century synthesis. British Journal of Mathematical and Statistical Psychology, 59:1–34, 2006.
 [76] D. Steinley and M.J. Brusco. Initializing K-means batch clustering: a critical evaluation of several techniques. Journal of Classification, 24:99–121, 2007.
 [77] S.S. Vempala. The Random Projection Method. American Mathematical Society, 2004. Vol. 65, DIMACS Series in Discrete Mathematics and Theoretical Computer Science.
 [78] I.V. Volovich. Number theory as the ultimate physical theory. Technical report, 1987. Preprint No. TH 4781/87, CERN, Geneva.
 [79] I.V. Volovich. p-Adic string. Classical and Quantum Gravity, 4:L83–L87, 1987.
 [80] H. Weyl. Symmetry. Princeton University Press, 1983.
 [81] Rui Xu and D. Wunsch. Survey of clustering algorithms. IEEE Transactions on Neural Networks, 16:645–678, 2005.