 # Kernel Treelets

A new method for hierarchical clustering is presented. It combines treelets, a particular multiscale decomposition of data, with a projection on a reproducing kernel Hilbert space. The proposed approach, called kernel treelets (KT), effectively substitutes the correlation coefficient matrix used in treelets with a symmetric, positive semi-definite matrix efficiently constructed from a kernel function. Unlike most clustering methods, which require data sets to be numeric, KT can be applied to more general data and yields a multi-resolution sequence of bases on the data directly in feature space. The effectiveness and potential of KT in cluster analysis is illustrated with some examples.

## Code Repositories

### kernel_treelets

Kernel Treelets, a clustering method based on Treelets


## 1 Introduction

Treelets, introduced by Lee, Nadler, and Wasserman [1, 2], is a method to produce a multiscale, hierarchical decomposition of unordered data. The central premise of Treelets is to exploit sparsity and capture intrinsic localized structures with only a few features, represented in terms of an orthonormal basis. The hierarchical tree constructed by the treelet algorithm provides a scale-based partition of the data that can be used for classification, especially for cluster analysis.

Cluster analysis, also called clustering, is concerned with finding a partition of a set such that the corresponding equivalence classes capture the similarity of its elements. The Treelet approach is an example of hierarchical clustering (HC), a class of methods that provide a nested, multiscale clustering. The typical complexity of HC methods is $O(n^3)$ (where $n$ denotes the number of data points in the dataset), but Treelets, like single linkage HC and complete linkage HC, can be done in $O(n^2)$ operations. Most of these clustering methods are applicable only to numerical datasets. However, many modern datasets do not have a clear numerical representation due, for example, to missing data, length differences, and non-numeric attributes. A typical solution to this problem involves finding a projection from each observation to a numerical vector, as is the case, for example, in text vectorization, array alignment, and missing-data imputation. These particular projections pose considerable challenges and might raise the bias of the model if false assumptions are made.

In this paper we propose a HC method that combines Treelets with a projection on a feature space that is a Reproducing Kernel Hilbert Space (RKHS). We call this method Kernel Treelets (KT). It effectively substitutes the correlation coefficient matrix, used by the original treelet method as a measure of similarity among variables, with a symmetric, positive semi-definite matrix constructed from a (Mercer) kernel function. The intuition behind this approach is that inner products provide a measure of similarity and a projection into an RKHS, done via the so-called kernel trick [10, 11], is a natural and efficient way to construct appropriate similarity matrices for a wide variety of data sets, including those mentioned above. We present some examples that demonstrate the potential of KT as an effective tool for cluster analysis.

## 2 Background Information

We provide in this section a brief description of the Treelet algorithm [1, 2] and the kernel method. Treelets are based on the repeated application of two-dimensional (Jacobi) rotations to a matrix measuring the similarity of variables, so we start by reviewing Jacobi (also called Givens) rotations first.

### 2.1 Jacobi Rotations

A Jacobi rotation matrix $J$ is an orthogonal matrix with at most 4 entries different from the identity or, more generally, a rotation operator on a 2-dimensional subspace generated by two coordinate axes. For a given symmetric matrix $M$ and a chosen entry $(p, q)$, the rotation matrix $J$ is constructed so that

$$[J^T M J]_{pq} = [J^T M J]_{qp} = 0.$$

The construction of $J$ is equivalent to finding the cosine ($c$) and sine ($s$) of the angle of rotation, which satisfy

$$\begin{bmatrix} c & s \\ -s & c \end{bmatrix} \begin{bmatrix} M_{pp} & M_{pq} \\ M_{qp} & M_{qq} \end{bmatrix} \begin{bmatrix} c & -s \\ s & c \end{bmatrix} = \begin{bmatrix} d_1 & 0 \\ 0 & d_2 \end{bmatrix}$$

subject to the constraint $c^2 + s^2 = 1$. The matrix $J$ is then given by:

• $J_{pp} = J_{qq} = c$, $J_{pq} = -s$, and $J_{qp} = s$.

• For the other entries $(i, j)$, $J_{ij} = \delta_{ij}$.

A numerically stable way of computing $c$ and $s$ is as follows:

• Assume $M_{pq} \neq 0$, and compute

$$b = \frac{M_{pp} - M_{qq}}{2M_{pq}}.$$

• Let $\operatorname{sgn}(b)$ be 1 if $b \geq 0$ and $-1$ otherwise; then we define

$$t = \frac{\operatorname{sgn}(b)}{|b| + \sqrt{b^2 + 1}}.$$

• From $t$ we can calculate $c = 1/\sqrt{t^2 + 1}$ and $s = ct$.

The complexity of storing a Givens rotation matrix is $O(1)$ (only $c$, $s$, and the pair $(p, q)$ are needed), and applying a Jacobi rotation to an $n \times n$ matrix takes $O(n)$ time, since only rows and columns $p$ and $q$ change.
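As a concrete illustration, the stable formulas above take only a few lines of NumPy. This is a sketch, not the authors' implementation, and the function names (`jacobi_pair`, `jacobi_matrix`) are ours:

```python
import numpy as np

def jacobi_pair(M, p, q):
    # Numerically stable (c, s) for the rotation that zeroes M[p, q],
    # following the b and t formulas above.
    if M[p, q] == 0.0:
        return 1.0, 0.0              # already zero: identity rotation
    b = (M[p, p] - M[q, q]) / (2.0 * M[p, q])
    t = (1.0 if b >= 0 else -1.0) / (abs(b) + np.hypot(b, 1.0))
    c = 1.0 / np.sqrt(t * t + 1.0)
    return c, c * t                  # c = cos(theta), s = sin(theta)

def jacobi_matrix(n, p, q, c, s):
    # Embed the 2x2 rotation into an n x n identity matrix.
    J = np.eye(n)
    J[p, p] = J[q, q] = c
    J[p, q], J[q, p] = -s, s
    return J
```

Conjugating a symmetric matrix by the resulting `J` annihilates the chosen off-diagonal entry while preserving orthogonality.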

### 2.2 Treelets

The Treelets algorithm [1, 2] was designed to construct a multiscale basis and a corresponding hierarchical clustering over the attributes of datasets in $\mathbb{R}^n$, to exploit sparsity. In its most efficient implementation it is an $O(n^2)$ algorithm. The algorithm starts with a regularization hyper-parameter $\lambda$ and the computation of an (empirical) covariance matrix $A_0$. The initial scaling indices are defined as the set $S_0 = \{1, 2, \ldots, n\}$. With base cases $A_0$ and $S_0$, each step $A_k$ and $S_k$ for $k = 1, \ldots, n-1$ can be constructed inductively as follows:

1. Construct the matrix $M_k$, of the same shape as $A_{k-1}$, entry-wise:

$$[M_k]_{ij} = \sqrt{\frac{[A_{k-1}]_{ij}^2}{[A_{k-1}]_{ii}\,[A_{k-1}]_{jj}}} + \lambda\,|[A_{k-1}]_{ij}|.$$

2. Find the two indices $\alpha_k, \beta_k$ such that

$$\alpha_k, \beta_k = \operatorname*{argmax}_{\alpha, \beta \in S_{k-1},\ \alpha \neq \beta} [M_k]_{\alpha\beta}.$$

3. Calculate the Jacobi rotation matrix $J_k$ for the entry $(\alpha_k, \beta_k)$ and the matrix $A_{k-1}$.

4. Without loss of generality, $\alpha_k$ and $\beta_k$ are interchangeable, so we require that $[J_k^T A_{k-1} J_k]_{\alpha_k \alpha_k} \geq [J_k^T A_{k-1} J_k]_{\beta_k \beta_k}$, and record $\alpha_k$ as the scaling index and $\beta_k$ as the detail index.

5. Define $A_k = J_k^T A_{k-1} J_k$ and $S_k = S_{k-1} \setminus \{\beta_k\}$.
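The five steps above can be sketched in NumPy as follows. This is a plain, unoptimized illustration of the recursion, not the authors' implementation; the helper `_jacobi` repeats the stable formulas of Section 2.1 and all names are ours:

```python
import numpy as np

def _jacobi(A, p, q):
    # (c, s) zeroing A[p, q] in J^T A J (Section 2.1 formulas)
    if A[p, q] == 0.0:
        return 1.0, 0.0
    b = (A[p, p] - A[q, q]) / (2.0 * A[p, q])
    t = (1.0 if b >= 0 else -1.0) / (abs(b) + np.hypot(b, 1.0))
    c = 1.0 / np.sqrt(t * t + 1.0)
    return c, c * t

def treelets(A0, lam=0.0):
    """Run the treelet recursion on an SPSD matrix A0 and return the final
    matrix plus the merge pairs (alpha_k, beta_k): a plain O(n^3) sketch."""
    A = np.array(A0, dtype=float)
    n = A.shape[0]
    S = list(range(n))                   # current scaling indices S_k
    merges = []
    for _ in range(n - 1):
        # Steps 1-2: similarity scores over distinct active pairs
        best, (p, q) = -np.inf, (S[0], S[1])
        for i in S:
            for j in S:
                if i >= j:
                    continue
                denom = A[i, i] * A[j, j]
                corr = np.sqrt(A[i, j] ** 2 / denom) if denom > 0 else 0.0
                score = corr + lam * abs(A[i, j])
                if score > best:
                    best, (p, q) = score, (i, j)
        # Step 3: rotate to zero out A[p, q]
        c, s = _jacobi(A, p, q)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = -s, s
        A = J.T @ A @ J
        # Steps 4-5: the larger-variance index stays as the scaling index
        if A[p, p] < A[q, q]:
            p, q = q, p
        S.remove(q)                      # q becomes the detail index
        merges.append((p, q))
    return A, merges
```

The $O(n^3)$ cost comes from the brute-force search for the maximizing pair at every level; faster variants maintain the row maxima incrementally.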

#### 2.2.1 Treelets Transform and Treelets Basis

The Jacobi rotations produce a Treelet basis for each level $k$. The sequence of matrices $\{J_k\}$ provides a basis $B_k$ for $\mathbb{R}^n$, defined as

$$B_k = J_k^T J_{k-1}^T \cdots J_2^T J_1^T,$$

such that

$$A_k = B_k A_0 B_k^T.$$

So for every vector $v \in \mathbb{R}^n$, there is a $k$th basis representation $B_k v$. Furthermore, there is a compressed $k$th basis representation obtained by dropping insignificant ($< \epsilon$) non-scaling coefficients of $B_k v$. That is, if we define $e_i$ to be the $i$th column of the identity matrix, the compressed $k$th basis representation is given by

$$\tau_k(v) = B_k v - \sum_{\substack{i \notin S_k \\ |B_k v \cdot e_i| < \epsilon}} (B_k v \cdot e_i)\, e_i.$$

#### 2.2.2 Treelets Hierarchical Clustering

Treelets is also a hierarchical clustering method over the attributes. The hierarchical clustering structure is stored in the sequence of pairs $(\alpha_k, \beta_k)$. We start with the trivial clustering where each element is in its own cluster, labeled by itself. For each $k$, we merge the clusters labeled $\alpha_k$ and $\beta_k$ and label the result $\alpha_k$. This is feasible because at each step the set of all cluster labels is exactly $S_k$. This operation gives a hierarchical tree for clustering over the attributes.
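A cut of the resulting hierarchy into flat clusters can be recovered by replaying the recorded merges; a minimal sketch (the function name is ours):

```python
def flat_clusters(n, merges, num_clusters):
    """Cut the treelet hierarchy: replay the first n - num_clusters merges
    (alpha_k, beta_k), folding the cluster labeled beta into alpha."""
    members = {i: [i] for i in range(n)}      # trivial clustering: label -> points
    for alpha, beta in merges[: n - num_clusters]:
        members[alpha].extend(members.pop(beta))
    labels = [0] * n
    for lab, pts in members.items():
        for x in pts:
            labels[x] = lab
    return labels
```

Because each merge removes exactly one label, stopping after $n - c$ merges leaves exactly $c$ clusters.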

### 2.3 Kernel Method

The kernel method allows us to map variables into a new feature space via a kernel function. We now briefly review the basic concepts and ideas of this approach.

A kernel over some set $X$ is defined as a function $K: X \times X \to \mathbb{R}$. A symmetric and positive semi-definite (SPSD) kernel has the properties:

$$K(x_1, x_2) = K(x_2, x_1) \quad \text{for all } x_1, x_2 \in X, \qquad (1)$$

$$\sum_{i=1}^{s} \sum_{j=1}^{s} c_i c_j K(x_i, x_j) \geq 0 \quad \text{for all } \{x_1, \ldots, x_s\} \subseteq X \text{ and all } c_1, \ldots, c_s \in \mathbb{R}. \qquad (2)$$

If $X$ is finite, then $K$ is SPSD if and only if the kernel matrix $K(X, X)$ is an SPSD matrix. In general, $K$ is SPSD if and only if there exists a function $\Phi_K: X \to H$, where $H$ denotes a Hilbert space, such that for all $x_1, x_2 \in X$,

$$K(x_1, x_2) = \langle \Phi_K(x_1), \Phi_K(x_2) \rangle_H. \qquad (3)$$

The space $H$ here is called a reproducing kernel Hilbert space (RKHS). The following are two common examples of SPSD kernels:

1. Radial basis function (RBF) kernel:

$$K(x_1, x_2) = \exp\left\{-\frac{\|x_1 - x_2\|^2}{2\sigma^2}\right\}.$$

2. Polynomial kernel:

$$K(x_1, x_2) = (\alpha \langle x_1, x_2 \rangle + c_0)^r.$$

A kernel on a set $X$ can be restricted to a subset, and the SPSD property is preserved under restriction. If the task is clustering over a finite set, the selected kernel needs only be SPSD on the set of all samples, which is generally finite, so we only need to check that the kernel matrix is SPSD. If we need to extend the clustering outcome to other data, e.g. clustering-boosted classification, then the domain of $K$ has to include the whole data space.
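For a finite sample, the SPSD property can be verified directly on the kernel matrix, for example for the RBF kernel above. A small NumPy check (a sketch; the function name is ours):

```python
import numpy as np

def rbf_kernel_matrix(X, sigma=1.0):
    # Pairwise K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
K = rbf_kernel_matrix(rng.normal(size=(50, 3)))
assert np.allclose(K, K.T)                  # symmetry, property (1)
assert np.linalg.eigvalsh(K).min() > -1e-9  # nonnegative spectrum, property (2)
```

On a finite sample, property (2) is exactly the statement that the eigenvalues of the Gram matrix are nonnegative (up to round-off).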

### 2.4 K Nearest Neighbors (KNN)

The k-nearest neighbors algorithm is a multi-class classification algorithm. Given a number of neighbors $k$ and a metric, the algorithm predicts the label of a test point by the majority vote among the $k$ closest elements of the training data under that metric. If an inner product is specified instead of a distance, we can compute the distance between two points in the following way:

$$\|x_1 - x_2\|^2 = \langle x_1 - x_2, x_1 - x_2 \rangle = \langle x_1, x_1 \rangle + \langle x_2, x_2 \rangle - 2\langle x_1, x_2 \rangle.$$

If the metric is kernelized,

$$\|x_1 - x_2\|^2 = \langle x_1, x_1 \rangle_H + \langle x_2, x_2 \rangle_H - 2\langle x_1, x_2 \rangle_H = K(x_1, x_1) + K(x_2, x_2) - 2K(x_1, x_2).$$
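This kernelized distance is straightforward to compute; for the linear kernel $K(x, y) = \langle x, y \rangle$ it recovers the ordinary Euclidean distance. A minimal sketch (the function name is ours):

```python
import math

def kernel_distance(K, x1, x2):
    # d(x1, x2)^2 = K(x1, x1) + K(x2, x2) - 2 K(x1, x2); clamp tiny
    # negative round-off before taking the square root.
    d2 = K(x1, x1) + K(x2, x2) - 2.0 * K(x1, x2)
    return math.sqrt(max(d2, 0.0))
```

Any SPSD kernel can be plugged in for `K`, which is what makes nearest-neighbor classification usable on non-numeric data.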

### 2.5 Kernel Support Vector Machine (SVM)

Support Vector Machine (SVM) is a classification method based on finding optimal separating hyperplanes. Kernel SVM extends it to nonlinear problems by performing SVM in the RKHS generated by the kernel. When we apply KT only to a small sample, we may use a kernel SVM with the same kernel to assign labels to data outside of this sample. This can be viewed as clustering attributes with Treelets and using SVM to assign labels to the other attributes in the RKHS.

## 3 The KT Model

The task of KT is to find a clustering for some set $X$ given an SPSD kernel $K$ measuring the similarity among its elements. We combine Treelets with kernels by replacing the covariance matrix with the kernel matrix and applying the rest of the steps of the Treelets algorithm. The exact steps are as follows:

1. First we draw a sample $D$ of size $m$ from the uniform distribution on $X$, for some sample size $m \leq |X|$. If more information about $X$ is given, it may be possible to draw a sample that better represents $X$ with a smaller sample size.

2. Then, we calculate the kernel matrix $K(D, D)$. It is an SPSD matrix because $K$ is SPSD, and thus we can apply the Treelets algorithm with hyper-parameter $\lambda$ using $K(D, D)$ instead of the (empirical) covariance matrix. $\lambda$ can be set to 0 or tuned experimentally as in Treelets. In this step, the Treelets method provides a hierarchical clustering tree over the columns of $K(D, D)$, each of which corresponds to an observation in $D$.

3. If $D = X$, we are finished with the step above. Otherwise, we need to cluster the elements of $X \setminus D$ based on the clusters we have for the elements of $D$. We use kernel SVM to complete this task. Given $D$ and its corresponding cluster labels, we train the kernel SVM with the same kernel $K$, and then apply it to predict the cluster labels of $X \setminus D$. K-nearest neighbors with the distance induced by the kernel,

$$d(v_1, v_2)^2 = K(v_1, v_1) + K(v_2, v_2) - 2K(v_1, v_2),$$

is an alternative to kernel SVM.
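Step 3 can be carried out with scikit-learn's `SVC` using a precomputed kernel matrix, so the same kernel drives both the treelet clustering and the label extension. A sketch under the assumption that the sample already carries treelet cluster labels; the function name `extend_clusters` and the index layout are ours:

```python
import numpy as np
from sklearn.svm import SVC

def extend_clusters(K_full, sample_idx, sample_labels):
    # K_full: kernel matrix over all of X; sample_idx: indices of the sample D
    # whose treelet cluster labels are known. Train a kernel SVM with the same
    # (precomputed) kernel and predict labels for the remaining points.
    n = K_full.shape[0]
    rest_idx = np.setdiff1d(np.arange(n), sample_idx)
    svm = SVC(kernel="precomputed")
    svm.fit(K_full[np.ix_(sample_idx, sample_idx)], sample_labels)
    pred = np.empty(n, dtype=int)
    pred[sample_idx] = sample_labels
    if rest_idx.size:
        pred[rest_idx] = svm.predict(K_full[np.ix_(rest_idx, sample_idx)])
    return pred
```

With `kernel="precomputed"`, `fit` expects the square Gram matrix of the training sample and `predict` expects the cross-kernel between test and training points, so no explicit feature representation of the data is ever needed.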

### 3.1 Theory

We now prove that the kernel projection is equivalent to working with a symmetric positive semi-definite matrix defined by the inner product in $H$ and evaluated through the kernel. We also suggest a definition of a clustering setting and clustering equivalence that allows us to connect the results of the clustering analysis for the original set with those of the transformed, projected set.

###### Lemma 1.

For every finite dataset $D = \{d_1, \ldots, d_n\}$ and SPSD kernel $K$, there exists an orthonormal Hilbert basis $B$ in the RKHS $H$ such that

$$[\Phi_K(d_i)]_B = \begin{bmatrix} \delta_i \\ 0 \end{bmatrix},$$

where $\delta_i \in \mathbb{R}^n$ and the matrix $[\delta_1\ \delta_2\ \cdots\ \delta_n]$ is symmetric and positive semi-definite.

###### Proof.

We apply the Gram-Schmidt orthogonalization process to a maximal linearly independent subset of $\{\Phi_K(d_i)\}_{i=1}^n$ and get a set of orthonormal vectors $\{u_1, \ldots, u_\eta\}$, where

$$\eta = \dim\left(\operatorname{span}\{\Phi_K(d_i) : i = 1, 2, \ldots, n\}\right) \leq n.$$

We may extend this set to an orthonormal Hilbert basis $\hat{B}$. Then $[\Phi_K(d_i)]_{\hat{B}}$ is 0 for all entries after $\eta$ and consequently after $n$, so there exists $\hat{d}_i \in \mathbb{R}^n$ such that

$$[\Phi_K(d_i)]_{\hat{B}} = \begin{bmatrix} \hat{d}_i \\ 0 \end{bmatrix}.$$

As $[\hat{d}_1\ \hat{d}_2\ \cdots\ \hat{d}_n]$ is a square matrix, we may compute its singular value decomposition

$$[\hat{d}_1\ \hat{d}_2\ \cdots\ \hat{d}_n] = U \Sigma V^T.$$

We can now define a new orthonormal Hilbert basis $B$ through the change of basis matrix $\begin{bmatrix} VU^T & 0 \\ 0 & I \end{bmatrix}$. Let $\delta_i = VU^T \hat{d}_i$ for all $i$; then

$$[\Phi_K(d_i)]_B = \begin{bmatrix} VU^T & 0 \\ 0 & I \end{bmatrix} [\Phi_K(d_i)]_{\hat{B}} = \begin{bmatrix} VU^T \hat{d}_i \\ 0 \end{bmatrix} = \begin{bmatrix} \delta_i \\ 0 \end{bmatrix}.$$

The projected data in the basis $B$ are the $\delta_i$, and the matrix

$$[\delta_1\ \delta_2\ \cdots\ \delta_n] = VU^T [\hat{d}_1\ \hat{d}_2\ \cdots\ \hat{d}_n] = VU^T U \Sigma V^T = V \Sigma V^T$$

is symmetric and positive semi-definite. ∎

###### Corollary 1.

If we denote by $\Psi: X \to \mathbb{R}^n$ the map such that for all $v \in X$,

$$\begin{bmatrix} \Psi(v) \\ * \end{bmatrix} = [\Phi_K(v)]_B,$$

or in other words, $\Psi(v)$ is the vector of the first $n$ components of $\Phi_K(v)$ in the basis $B$, then for all $d_i \in D$,

$$\begin{bmatrix} \Psi(d_i) \\ 0 \end{bmatrix} = [\Phi_K(d_i)]_B,$$

that is, $\Psi(d_i) = \delta_i$. From the lemma, we have that $\Psi(D) = [\delta_1\ \delta_2\ \cdots\ \delta_n]$ is symmetric and positive semi-definite and

$$\langle \Psi(D), \Psi(D) \rangle = [\delta_1\ \delta_2\ \cdots\ \delta_n]^2 = \langle \Phi_K(D), \Phi_K(D) \rangle_H.$$

#### 3.1.1 Clustering Equivalences

A clustering setting is a pair $(D, M)$ where $D$ is a finite ordered dataset and $M$ is a measurement (a similarity function) on $D$. We define an equivalence on clustering settings: $(D_1, M_1) \sim (D_2, M_2)$ if and only if $M_1(D_1, D_1) = M_2(D_2, D_2)$. For any measurement-based clustering method, using measurement $M_1$ on $D_1$ provides exactly the same clustering outcome on the labels as using $M_2$ on $D_2$. As an example of a clustering equivalence, if the kernel $K$ corresponds to the projection $\Phi_K$, then $K(D, D) = \langle \Phi_K(D), \Phi_K(D) \rangle_H$, and therefore $(D, K) \sim (\Phi_K(D), \langle \cdot, \cdot \rangle_H)$.

#### 3.1.2 Kernel Treelets Equivalences

For a dataset $D$ and a kernel $K$, we already know that there is a clustering equivalence $(D, K) \sim (\Phi_K(D), \langle \cdot, \cdot \rangle_H)$. From the corollary of Lemma 1, $\langle \Psi(D), \Psi(D) \rangle = \langle \Phi_K(D), \Phi_K(D) \rangle_H$, which provides the equivalence $(\Phi_K(D), \langle \cdot, \cdot \rangle_H) \sim (\Psi(D), \langle \cdot, \cdot \rangle)$. As a conclusion, $(D, K) \sim (\Psi(D), \langle \cdot, \cdot \rangle)$, which implies that a clustering method measured with the inner product on the dataset $\Psi(D)$ provides a clustering of $D$ measured with the kernel $K$. Therefore, Treelets on $\Psi(D)$ without centering provides a hierarchical clustering of the attributes of $\Psi(D)$ based on the attribute inner products (the covariance matrix) which, as $\Psi(D)$ is symmetric, is also a hierarchical clustering of the observations of $\Psi(D)$ based on their inner products. According to the clustering-setting equivalence, this hierarchical clustering is equivalent to a hierarchical clustering of $D$ based on the kernel $K$. Furthermore, a property of Treelets is that $\Psi(D)$ does not actually need to be computed. The "covariance matrix" of $\Psi(D)$ without centering has an easier computation:

$$\operatorname{Cov}(\Psi(D)) = \Psi(D)\Psi(D)^T = \Psi(D)^2 = \langle \Psi(D), \Psi(D) \rangle = \langle \Phi_K(D), \Phi_K(D) \rangle_H = K(D, D).$$

So we may avoid the costly spectral decomposition needed to compute $\Psi(D)$ and define the $A_0$ of Treelets as

$$A_0 = \operatorname{Cov}(\Psi(D)) = K(D, D).$$
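This identity is easy to check numerically: taking $\Psi(D)$ to be the symmetric PSD square root of $K(D, D)$ (consistent with Lemma 1), its uncentered "covariance" reproduces the kernel matrix exactly. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
# RBF Gram matrix K(D, D) for a small dataset D
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)

# Psi(D): the symmetric PSD square root of K, as in Lemma 1
w, V = np.linalg.eigh(K)
Psi = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# The uncentered "covariance" of Psi(D) recovers K(D, D)
assert np.allclose(Psi @ Psi.T, K, atol=1e-10)
```

In practice KT never forms `Psi`; the point of the identity is precisely that $A_0 = K(D, D)$ can be used directly.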

### 3.2 Complexity

The complexity of this algorithm is dominated by the $O(m^2)$ kernel evaluations needed to form $K(D, D)$ plus the cost of the Treelets iteration, where each kernel evaluation has cost $C_K$ ($C_K = O(p)$ for $p$-dimensional numeric data). In this model, the choice of kernel determines the expected outcome of the prediction, and the choice of sample determines the variability of the outcome. A small sample size speeds up the algorithm at the cost of possibly producing false clusterings from unrepresentative samples, while a large sample size slows down the algorithm and can also produce numerical issues: data points are more likely to be close to orthogonal as the dimension of the projected space grows, and the Treelets method is forced to stop if all remaining components are almost orthogonal. The optimal sample size depends on the floating-point accuracy and the computation time allowed, and should be as large as possible without exceeding these limits.

## 4 Examples

We implemented KT and the following examples in Python with the packages NumPy and Scikit-learn; plots were generated with Matplotlib. The Treelets part of our implementation is not optimized, so its runtime in the following examples is higher than that of the efficient implementation designed by Lee et al. The hyperparameter $\lambda$ is set to 0 for all the experiments below.

### 4.1 Clustering for 6 Datasets

To illustrate how KT works as a hierarchical clustering method, we use an example from scikit-learn which consists of 6 datasets, each of which has 1500 two-dimensional data points, so we can visualize each dataset and each cluster by plotting each observation as a point in the plane. Each of the first five datasets consists of data drawn from multiple shapes with an error in distance. The sixth dataset consists of a uniform random sample, included to show how the clustering methods behave on uniformly distributed data. Figure 1 shows how KT with different kernels works on these datasets compared to the performance of some other clustering methods. The number of clusters and the hyper-parameters are tuned for each method, and the sample size is set to 1000 for each KT method. Each row of this image represents a dataset and each column represents a clustering method. The method each column represents and its runtime on each dataset are recorded in Table 1.

In this experiment, KT with the RBF kernel is the only method that produces clusterings closest to human intuition for all of the first five datasets. The sixth dataset is a uniform distribution, on which we may see how KT is affected by the relative density deficiency in some areas due to sampling. The high performance on the first five datasets is expected, as these datasets are to some extent Euclidean distance-based, which corresponds to the assumptions behind the RBF kernel. Fig. 2 shows how the number of sample points affects the clustering result. Each column represents KT using the RBF kernel with a different sample size. The hyper-parameter is tuned for the full-sample case and used for all other sample sizes. Notice that as KT1500 uses the full sample, it does not trigger the kernel SVM step, whereas KT1499 does. Their numbers of clusters and runtimes are recorded in Table 2. From here we can see that more sample data implies more runtime and a more stable outcome. The minimum numbers of samples required for the first 5 datasets are 1000, 100, 1000, 200, and 50, respectively, which shows that different datasets require different amounts of samples to capture their shape. Furthermore, the fourth dataset shows that the optimal hyper-parameter depends on the number of samples. The RBF kernel can be considered a weighted average of distance and connectivity, where a larger $\sigma$ means a higher weight on distance. For the same $\sigma$, as the sample size gets larger, the clustering result becomes more distance-based rather than connectivity-based, indicating that the optimal $\sigma$ for those sample sizes is actually smaller.

### 4.2 Clustering for Social Network Dataset

To illustrate how KT works in network analysis, we use an example from the Stanford Network Analysis Project. This is a dataset consisting of 'circles' (or 'friends lists') from Facebook. The surveyed individuals are the vertices, and two vertices are joined by an edge if the corresponding individuals are friends. The edges are undirected and unweighted, and the total number of edges is 88234. We use KT to do clustering on this dataset with full sample size ($D = V$). Denote the set of vertices of the graph as $V$, and define a kernel function $K$ such that

$$K(v_1, v_2) = \begin{cases} 1045 & v_1 = v_2, \\ 1 & v_1, v_2 \text{ are connected}, \\ 0 & \text{otherwise.} \end{cases}$$

The number 1045 is computed and chosen as the largest degree among all vertices. Notice that $K$ is an SPSD kernel on $V$ because $K(V, V)$ is a symmetric matrix that is diagonally dominant with positive diagonal, as

$$\sum_{j \neq i} |K(V, V)_{i,j}| = \deg(v_i) \leq \max_\zeta \deg(v_\zeta) = 1045 = K(V, V)_{i,i}.$$

To estimate the performance of KT as a multi-scale clustering method on this dataset, we use the following evaluation. For each cluster partition in the hierarchy, we compute its matching matrix and the corresponding true positive rate and false positive rate. The matching matrix, a type of confusion matrix, is a 2-by-2 matrix recording the numbers of true positives, true negatives, false positives, and false negatives for pairwise associations. The true positive rate measures the proportion of pairs of nodes assigned to the same cluster given that the two nodes are connected, and the false positive rate measures the proportion of pairs of nodes assigned to the same cluster given that the two nodes are not connected. Each pair of true positive rate and false positive rate produces a point on the plane, and interpolating the points of all clustering results in the hierarchy (in order) produces the receiver operating characteristic (ROC) curve; the numerical integral of this curve over the unit interval is known as the area under the curve (AUC). Figure 3 demonstrates the performance of KT on the dataset, which provides good clusterings as reflected by its high AUC.
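The pairwise evaluation described above reduces to counting, over all pairs of nodes, how often co-clustering agrees with adjacency. A brute-force $O(n^2)$ sketch (the function name is ours):

```python
import itertools

def pairwise_rates(labels, adjacency):
    # True/false positive rates of a flat clustering against a graph:
    # a pair is "predicted positive" when both nodes share a cluster,
    # and "actually positive" when they are connected by an edge.
    tp = fp = pos = neg = 0
    for i, j in itertools.combinations(range(len(labels)), 2):
        same = labels[i] == labels[j]
        if adjacency[i][j]:
            pos += 1
            tp += same
        else:
            neg += 1
            fp += same
    return tp / pos, fp / neg
```

Evaluating this at every cut of the hierarchy yields one (FPR, TPR) point per partition, which traces out the ROC curve.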

### 4.3 Clustering for Dataset with Missing Information

To illustrate how KT works on a dataset with missing information, we use the Mice Protein Expression (MPE) dataset from the UCI Machine Learning Repository as an example. This is a dataset consisting of 1080 observations for 8 classes of mice, each containing the expression levels of 77 different proteins, with some of the entries not available. We use KT to do clustering on this dataset. First we normalize the attributes so that each of them has empirical mean 0 and standard deviation 1. Then we define an RBF-type kernel for data with missing entries such that for all observations $u, v$,

$$K(u, v) = \exp\left\{-\frac{32}{|E_{uv}|} \sum_{i \in E_{uv}} \|u_i - v_i\|^2\right\},$$

where $E_{uv}$ is the set of indices available (not missing) in both $u$ and $v$. We check that $E_{uv} \neq \emptyset$ so that the kernel is well-defined. The number 32 is a parameter tuned by experiment. We compare the predicted clusters and the true labels according to pairwise scores. Fig. 4 shows how KT performs compared to KMeans clustering. We measure the true positive rate as the proportion of pairs of records assigned to the same cluster given that they come from mice of the same type, and the false positive rate as the proportion of pairs of records assigned to the same cluster given that they come from mice of different types. As in the network example, we draw the ROC curve and calculate its AUC, using KMeans with multiple numbers of clusters for comparison. The AUC of KT is much higher than the AUC of KMeans, demonstrating that KT is a much better clustering method for this dataset than KMeans.
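A kernel of this form is simple to implement by masking the missing entries; a sketch with NaN encoding for missing values (the name `masked_rbf` and the `gamma` parameter are ours, with `gamma` playing the role of the tuned constant 32):

```python
import numpy as np

def masked_rbf(u, v, gamma=32.0):
    # RBF-type kernel for records with missing entries encoded as NaN:
    # average the squared differences over the indices E_uv that are
    # observed in both records.
    mask = ~(np.isnan(u) | np.isnan(v))
    assert mask.any(), "E_uv must be nonempty for K(u, v) to be defined"
    diff = u[mask] - v[mask]
    return float(np.exp(-gamma * np.mean(diff ** 2)))
```

Note that because the average is taken only over the commonly observed indices, no imputation of the missing entries is required before clustering.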

## 5 Conclusion

In this paper we describe a novel approach, kernel treelets (KT), for hierarchical clustering. The method relies on applying the treelet algorithm to a matrix measuring similarities among variables in a feature space, a reproducing kernel Hilbert space. We show with some examples that KT is as useful as other hierarchical clustering methods and is especially competitive for datasets without a numerical matrix representation or with missing data. The KT approach also shows significant potential for semi-supervised learning tasks and as a pre-processing or post-processing step in deep learning. Work in these directions is underway.

## References

•  Ann B Lee and Boaz Nadler. Treelets— a tool for dimensionality reduction and multi-scale analysis of unstructured data. In Artificial Intelligence and Statistics, pages 259–266, 2007.
•  Ann B. Lee, Boaz Nadler, and Larry Wasserman. Treelets: an adaptive multi-scale basis for sparse unordered data. The Annals of Applied Statistics, 2(2):435–471, 2008.
•  Robert C. Tryon. Cluster analysis: Correlation profile and orthometric (factor) analysis for the isolation of unities in mind and personality. Edwards brother, Incorporated, lithoprinters and publishers, 1939.
•  Stephen C Johnson. Hierarchical clustering schemes. Psychometrika, 32(3):241–254, 1967.
•  Robin Sibson. Slink: an optimally efficient algorithm for the single-link cluster method. The computer journal, 16(1):30–34, 1973.
•  Daniel Defays. An efficient algorithm for a complete link method. The Computer Journal, 20(4):364–366, 1977.
•  David S Doermann. An introduction to vectorization and segmentation. In International Workshop on Graphics Recognition, pages 1–8. Springer, 1997.
•  Yongjin Wang, Foteini Agrafioti, Dimitrios Hatzinakos, and Konstantinos N Plataniotis. Analysis of human electrocardiogram for biometric recognition. EURASIP journal on Advances in Signal Processing, 2008(1):148658, 2007.
•  Fiona M Shrive, Heather Stuart, Hude Quan, and William A Ghali. Dealing with missing data in a multi-question depression scale: a comparison of imputation methods. BMC medical research methodology, 6(1):57, 2006.
•  T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Data Mining, Inference, and Prediction. Springer, New York, second edition, 2016.
•  S. Theodoridis. Machine Learning. A Bayesian and Optimization Perspective. Academic Press, London, 2015.
•  Mark A Aizerman. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837, 1964.
•  Naomi S Altman. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3):175–185, 1992.
•  Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144–152. ACM, 1992.
•  Travis E Oliphant. A guide to NumPy, volume 1. Trelgol Publishing USA, 2006.
•  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
•  John D. Hunter. Matplotlib: A 2d graphics environment. Computing In Science & Engineering, 9(3):90–95, 2007.
•  Jure Leskovec and Julian J Mcauley. Learning to discover social circles in ego networks. In Advances in neural information processing systems, pages 539–547, 2012.
•  Clara Higuera, Katheleen J Gardiner, and Krzysztof J Cios. Self-organizing feature maps identify proteins critical to learning in a mouse model of down syndrome. PloS one, 10(6):e0129126, 2015.