 # A Simple Approach to Sparse Clustering

Consider the problem of sparse clustering, where it is assumed that only a subset of the features is useful for clustering purposes. In the framework of the COSA method of Friedman and Meulman, subsequently improved in the form of the Sparse K-means method of Witten and Tibshirani, we introduce a natural and simpler hill-climbing approach. The new method is shown to be competitive with these two methods and others.

## Code Repositories

### SAS_Hill_Climb

Source code for the paper "A Simple Approach to Sparse Clustering".


## 1 Introduction

Consider a typical setting for clustering $n$ items based on pairwise dissimilarities, with $\delta(i,j)$ denoting the dissimilarity between items $i$ and $j$. For concreteness, we assume that $\delta(i,j) = \delta(j,i) \ge 0$ and $\delta(i,i) = 0$ for all $i, j \in [n] := \{1, \dots, n\}$. In principle, if we want to delineate $\kappa$ clusters, the goal is (for example) to minimize the average within-cluster dissimilarity. In detail, a clustering of $[n]$ into $\kappa$ groups may be expressed as an assignment function $C : [n] \to [\kappa]$, meaning that $C(i)$ indexes the cluster that observation $i$ is assigned to. Let $\mathcal{C}_n^\kappa$ denote the class of clusterings of $n$ items into $\kappa$ groups. For $C \in \mathcal{C}_n^\kappa$, its average within-cluster dissimilarity is defined as

$$\Delta[C] := \sum_{k\in[\kappa]} \frac{1}{|C^{-1}(k)|} \sum_{i,j\in C^{-1}(k)} \delta(i,j). \tag{1}$$

This dissimilarity coincides, up to a factor of 2, with the within-cluster sum of squares commonly used in K-means type clustering algorithms, with $\delta(i,j) = \|x_i - x_j\|^2$. If, under the Euclidean setting, we further define the cluster centers

$$\mu_k = \frac{1}{|C^{-1}(k)|} \sum_{i\in C^{-1}(k)} x_i \quad \text{with } k\in[\kappa], \tag{2}$$

then the within-cluster dissimilarity can be rewritten as follows,

$$\Delta[C] = \sum_{k\in[\kappa]} \frac{1}{|C^{-1}(k)|} \sum_{i,j\in C^{-1}(k)} \|x_i - x_j\|^2 = 2 \sum_{k\in[\kappa]} \sum_{i\in C^{-1}(k)} \|x_i - \mu_k\|^2. \tag{3}$$

Since this paper also deals with non-Euclidean settings, we will use the more general within-cluster dissimilarity defined in (1). The resulting optimization problem is the following:

$$\text{Given } (\delta(i,j) : i,j\in[n]), \text{ minimize } \Delta[C] \text{ over } C\in\mathcal{C}_n^\kappa. \tag{4}$$
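As a quick sanity check of the objective in (1) and its Euclidean specialization, here is a small NumPy sketch; the function and variable names are ours, not from the paper.

```python
import numpy as np

def within_cluster_dissimilarity(D, C, kappa):
    """Average within-cluster dissimilarity, eq. (1): for each cluster,
    sum the pairwise dissimilarities and divide by the cluster size."""
    total = 0.0
    for k in range(kappa):
        idx = np.flatnonzero(C == k)
        if idx.size > 0:
            total += D[np.ix_(idx, idx)].sum() / idx.size
    return total

# With delta(i,j) = ||x_i - x_j||^2, eq. (1) equals twice the
# within-cluster sum of squares around the cluster means.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
C = np.array([0] * 5 + [1] * 5)
wcss = sum(((X[C == k] - X[C == k].mean(axis=0)) ** 2).sum() for k in range(2))
assert np.isclose(within_cluster_dissimilarity(D, C, 2), 2 * wcss)
```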

This problem is combinatorial and quickly becomes computationally intractable, even for small datasets. A number of alternative proposals have been suggested (ESL), ranging from hierarchical clustering approaches to K-medoids.

Following in the footsteps of friedman2004clustering , we consider a situation where we have at our disposal not one but $p$ measures of pairwise dissimilarity on the same set of items, with $\delta_a(i,j)$ denoting the $a$-th dissimilarity between items $i$ and $j$. Obviously, these measures of dissimilarity could be combined into a single measure of dissimilarity, for example,

$$\delta(i,j) = \sum_{a\in[p]} \delta_a(i,j). \tag{5}$$

Our working assumption, however, is that only a few of these measures of dissimilarity are useful for clustering purposes, but we do not know which ones. This is the setting of sparse clustering, where the number of useful measures is typically small compared to the whole set of available measures.

We assume henceforth that all dissimilarity measures are equally important (for example, when we do not have any a priori knowledge on the relative importance of these measures) and that they all satisfy

$$\sum_{i,j\in[n]} \delta_a(i,j) = 1, \quad \forall a\in[p], \tag{6}$$

which, in practice, can be achieved via normalization, meaning,

$$\delta_a(i,j) \leftarrow \frac{\delta_a(i,j)}{\sum_{i,j} \delta_a(i,j)}. \tag{7}$$
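In code, the normalization (7) is a one-liner per feature; a minimal NumPy sketch, using hypothetical nonnegative matrices standing in for dissimilarities:

```python
import numpy as np

# p = 4 hypothetical dissimilarity matrices on n = 8 items.
rng = np.random.default_rng(1)
dissims = [np.abs(rng.normal(size=(8, 8))) for _ in range(4)]

# Normalization (7): rescale each feature's matrix so its entries sum to 1,
# which enforces the convention (6).
dissims = [D / D.sum() for D in dissims]
assert all(np.isclose(D.sum(), 1.0) for D in dissims)
```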

This assumption is important when combining measures in the standard setting (5) and in the sparse setting (8) below.

Suppose for now that we know that at most $s$ measures are useful among the $p$ measures that we are given. For $S \subset [p]$, define the $S$-dissimilarity as

$$\delta_S(i,j) = \sum_{a\in S} \delta_a(i,j), \tag{8}$$

and the corresponding average within-cluster $S$-dissimilarity for the cluster assignment $C$ as

$$\Delta_S[C] := \sum_{k\in[\kappa]} \frac{1}{|C^{-1}(k)|} \sum_{i,j\in C^{-1}(k)} \delta_S(i,j). \tag{9}$$

If the goal is to delineate $\kappa$ clusters, then a natural objective is the following:

$$\text{Given } (\delta_a(i,j) : a\in[p],\ i,j\in[n]), \text{ minimize } \Delta_S[C] \text{ over } S\subset[p] \text{ of size } s \text{ and over } C\in\mathcal{C}_n^\kappa. \tag{10}$$

In words, the goal is to find the $s$ measures (which play the role of features in this context) that lead to the smallest optimal average within-cluster dissimilarity. The problem stated in (10) is at least as hard as the problem stated in (4) and, in particular, is computationally intractable even for small item sets.
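Note that $\delta_S$ is additive over $S$, so $\Delta_S[C] = \sum_{a\in S} \Delta_a[C]$; hence, for a fixed clustering $C$, the inner minimization over $S$ in (10) reduces to sorting the per-feature dissimilarities. A small sketch (the toy data and function names are ours):

```python
import numpy as np

def per_feature_delta(D, C, kappa):
    """Delta_a[C] for a single feature's dissimilarity matrix D."""
    total = 0.0
    for k in range(kappa):
        idx = np.flatnonzero(C == k)
        if idx.size > 0:
            total += D[np.ix_(idx, idx)].sum() / idx.size
    return total

def best_feature_set(dissims, C, kappa, s):
    """For fixed C, the minimizing S in (10) is simply the s features
    with the smallest Delta_a[C]."""
    deltas = np.array([per_feature_delta(D, C, kappa) for D in dissims])
    return np.argsort(deltas)[:s]

rng = np.random.default_rng(2)
# Toy data: feature 0 separates the two clusters, feature 1 is noise.
X = np.vstack([rng.normal([0, 0], 0.1, (5, 2)), rng.normal([3, 0], 0.1, (5, 2))])
dissims = [(X[:, None, a] - X[None, :, a]) ** 2 for a in range(2)]
dissims = [D / D.sum() for D in dissims]  # normalization (6)-(7)
C = np.array([0] * 5 + [1] * 5)

# Delta_S[C] decomposes as the sum of the per-feature Delta_a[C].
assert np.isclose(per_feature_delta(dissims[0] + dissims[1], C, 2),
                  sum(per_feature_delta(D, C, 2) for D in dissims))
# The informative feature has the smaller within-cluster dissimilarity.
assert best_feature_set(dissims, C, kappa=2, s=1)[0] == 0
```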

###### Remark 1.

In many situations, but not all, measurements of possibly different types are taken from each item $i$, resulting in a vector of measurements $x_i = (x_{i1}, \dots, x_{ip})$. This vector is not necessarily in a Euclidean space, although this is an important example — see Section 2.2 below. We recover our setting when, for each attribute $a$, we have available a dissimilarity measure $\delta_a(i,j)$ between $x_{ia}$ and $x_{ja}$. This special case justifies our using the terms ‘feature’ and ‘attribute’ when referring to a dissimilarity measure.

## 2 Related work

The literature on sparse clustering is much smaller than that of sparse regression or classification. Nonetheless, it is substantial and we review some of the main proposals in this section. We start with the contributions of friedman2004clustering and witten2010framework , which inspired this work.

### 2.1 COSA, Sparse K-means and Regularized K-means

friedman2004clustering propose clustering objects on subsets of attributes (COSA), which (in its simplified form) amounts to the following optimization problem

$$\text{minimize } \sum_{k\in[\kappa]} \alpha\big(|C^{-1}(k)|\big) \sum_{i,j\in C^{-1}(k)} \sum_{a\in[p]} \big(w_a\,\delta_a(i,j) + \lambda\, w_a \log w_a\big), \tag{11}$$
$$\text{over any clustering } C \text{ and any weights } w_1,\dots,w_p \ge 0 \text{ subject to } \sum_{a\in[p]} w_a = 1. \tag{12}$$

Here $\alpha$ is some function and $\lambda \ge 0$ is a tuning parameter. When $\alpha(m) = 1/m$, the objective function can be expressed as

$$\sum_{a\in[p]} \big(w_a \Delta_a[C] + \lambda\, w_a \log w_a\big). \tag{13}$$

When $\lambda = 0$, the minimization of (13) over (12) results in any convex combination of attributes with smallest average within-cluster dissimilarity. If this smallest dissimilarity is attained by only one attribute, then all the weight will concentrate on this attribute, with $w_a = 1$ for this attribute and $w_a = 0$ for the others. In general, $\lambda > 0$, and the term it multiplies is the negative entropy of the weights seen as a distribution on $[p]$. This penalty term encourages the weights to spread out over the attributes. Minimizing over the weights first leads to

$$\text{minimize } \Delta_{\mathrm{cosa}}[C] := \min_w \sum_{a\in[p]} \big(w_a \Delta_a[C] + \lambda\, w_a \log w_a\big) \tag{14}$$
$$\text{over any clustering } C, \tag{15}$$

where the minimum is over the $w_a$'s satisfying (12). (Note that $\lambda$ needs to be tuned.) The minimization is carried out using an alternating strategy: starting with an initialization of the weights (say, all equal, $w_a = 1/p$ for all $a$), the procedure alternates between optimizing with respect to the clustering assignment $C$ and optimizing with respect to the weights $w$. (There is a closed-form expression for the latter, derived in that paper.) The procedure stops when it reaches a local minimum. witten2010framework observe that an application of COSA rarely results in a sparse set of features, meaning that the weights are typically spread out. They propose an alternative method, which they call Sparse K-means, which, under (6), amounts to the following optimization problem
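For completeness, the closed-form weight update alluded to above can be recovered by a standard Lagrange-multiplier computation (our sketch, not quoted from the original paper): minimizing (13) over the simplex constraint (12) gives a 'softmin' of the per-attribute dissimilarities,

```latex
% Stationarity of the Lagrangian
%   L(w, \nu) = \sum_a (w_a \Delta_a[C] + \lambda w_a \log w_a) + \nu (\sum_a w_a - 1)
% gives \Delta_a[C] + \lambda (\log w_a + 1) + \nu = 0, i.e. w_a \propto e^{-\Delta_a[C]/\lambda}:
w_a \;=\; \frac{\exp\big(-\Delta_a[C]/\lambda\big)}{\sum_{b \in [p]} \exp\big(-\Delta_b[C]/\lambda\big)},
\qquad a \in [p].
```

As $\lambda \to 0$ the weights concentrate on the attribute with smallest $\Delta_a[C]$, and as $\lambda \to \infty$ they spread toward uniform, matching the behavior described above.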

$$\text{maximize } \sum_{a\in[p]} w_a \Big(\frac{1}{n} - \Delta_a[C]\Big), \tag{16}$$
$$\text{over any clustering } C \text{ and any weights } w_1,\dots,w_p \ge 0 \tag{17}$$
$$\text{with } \|w\|_2 \le 1,\ \|w\|_1 \le s. \tag{18}$$

The $\ell_1$ penalty on $w$ results in sparsity for small values of the tuning parameter $s$, which is chosen via the gap statistic of tibshirani2001estimating . The $\ell_2$ penalty is also important: without it, the solution would put all the weight on a single attribute, the one with the smallest average within-cluster dissimilarity. A similar minimization strategy is proposed, which also results in a local optimum.

As will be seen in later sections, Sparse K-means is indeed effective in practice. However, its asymptotic consistency remains unknown. sun2012regularized propose Regularized K-means clustering for high-dimensional data and prove its asymptotic consistency. This method minimizes a regularized within-cluster sum of squares with an adaptive group lasso penalty on the cluster centers:

$$\text{minimize } \frac{1}{n} \sum_{k\in[\kappa]} \sum_{i\in C^{-1}(k)} \|x_i - \mu_k\|^2 + \sum_{a\in[p]} \lambda_a \sqrt{\mu_{1a}^2 + \cdots + \mu_{\kappa a}^2}, \tag{19}$$
$$\text{over any clustering } C \text{ and any set of centers } \mu_1, \mu_2, \dots, \mu_\kappa. \tag{20}$$

### 2.2 Some methods for the Euclidean setting

Consider $n$ points in space, denoted $x_1, \dots, x_n \in \mathbb{R}^p$, that we want to cluster. A typical dissimilarity is the squared Euclidean metric, $\delta(i,j) = \|x_i - x_j\|^2$. Decomposing this into coordinate components, with $x_i = (x_{i1}, \dots, x_{ip})$, and letting $\delta_a(i,j) = (x_{ia} - x_{ja})^2$, we have

$$\delta(i,j) = \sum_{a\in[p]} \delta_a(i,j). \tag{21}$$

A normalization would lead us to consider a weighted version of these dissimilarities. But assuming that the data has been normalized to have (Euclidean) norm 1 along each coordinate, (6) holds and we are within the framework described above.
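Concretely, in the Euclidean case the per-feature dissimilarities can be built directly from the data matrix; a short NumPy sketch of the decomposition (21) on hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))  # n = 20 items with p = 5 coordinates

# Per-coordinate dissimilarities delta_a(i,j) = (x_ia - x_ja)^2 ...
dissims = [(X[:, None, a] - X[None, :, a]) ** 2 for a in range(X.shape[1])]

# ... which sum to the full squared Euclidean dissimilarity, eq. (21).
full = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
assert np.allclose(full, sum(dissims))

# Normalizing each coordinate's matrix then puts us back in the setting of (6).
dissims = [D / D.sum() for D in dissims]
assert all(np.isclose(D.sum(), 1.0) for D in dissims)
```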

This Euclidean setting has drawn most of the attention. Some papers propose to perform clustering after reducing the dimensionality of the data (ghosh2002mixture, ; liu2002, ; tamayo2007metagene, ). However, the preprocessing step of dimensionality reduction is typically independent of the end goal of clustering, making such approaches non-competitive.

A model-based clustering approach is based on maximizing the likelihood. Under the sparsity assumption made here, the likelihood is typically penalized. Most papers assume a Gaussian mixture model. Let $f(\cdot\,; \mu, \Sigma)$ denote the density of the normal distribution with mean $\mu$ and covariance matrix $\Sigma$. The penalized negative log-likelihood (when the goal is to obtain $\kappa$ clusters) is of the form

$$-\sum_{i\in[n]} \log\Big[\sum_{k\in[\kappa]} \pi_k\, f(x_i; \mu_k, \Sigma_k)\Big] + p_\lambda(\Theta), \tag{22}$$

where $\Theta$ gathers all the parameters, meaning the mixture weights $\pi_k$, the group means $\mu_k$, and the group covariance matrices $\Sigma_k$. For instance, assuming that the data have been standardized so that each feature has sample mean 0 and variance 1, pan2007penalized use

$$p_\lambda(\Theta) = \lambda \sum_{k\in[\kappa]} \|\mu_k\|_1. \tag{23}$$

This may be seen as a convex relaxation of

$$p_\lambda(\Theta) = \lambda \sum_{a\in[p]} \sum_{k\in[\kappa]} \mathbb{I}\{\mu_{ka} \neq 0\} = \lambda \sum_{k\in[\kappa]} \|\mu_k\|_0. \tag{24}$$

Typically, this optimization will result in some coordinates set to zero and thus deemed not useful for clustering purposes. In another variant, wang2008variable use

$$p_\lambda(\Theta) = \lambda \sum_{a\in[p]} \max_{k\in[\kappa]} |\mu_{ka}|. \tag{25}$$

To shrink the difference between every pair of cluster centers for each variable $a$, guo2010pairwise use the pairwise fusion penalty

$$p_\lambda(\Theta) = \lambda \sum_{a\in[p]} \sum_{1\le k < k' \le \kappa} |\mu_{ka} - \mu_{k'a}|. \tag{26}$$

Taking into account the covariance matrices, and assuming they are diagonal, xie2008penalized use

$$p_\lambda(\Theta) = \lambda_1 \sum_{k\in[\kappa]} \sum_{a\in[p]} |\mu_{ka}| + \lambda_2 \sum_{k\in[\kappa]} \sum_{a\in[p]} |\sigma_{ka}^2 - 1|. \tag{27}$$

The assumption that the covariance matrices are diagonal is common in high-dimensional settings and was demonstrated to be reasonable in the context of clustering (fraley2006mclust, ). Note that none of these proposals make the optimization problem (22) convex or otherwise tractable. The methods are implemented via an EM-type approach.

Another line of research on sparse clustering is based on coordinate-wise testing for mixing. This constitutes the feature selection step; the clustering step typically amounts to applying a clustering algorithm to the resulting feature space. For example, jin2014important use a Kolmogorov-Smirnov test against the normal distribution, while jin2015phase use a (chi-squared) variance test. The latter is also done in (azizyan2013, ) and in (verzelen2014detection, ). This last paper also studies the case where the covariance matrix is unknown and proposes an approach via moments. In a nonparametric setting, chan2010using use coordinate-wise mode testing.

## 3 Our method: Sparse Alternate Sum (SAS) Clustering

Hill-climbing methods are iterative in nature, making ‘local’, that is, ‘small’ changes at each iteration. They have been studied in the context of graph partitioning, e.g., by kernighan1970efficient and carson2001hill , among others. In the context of sparse clustering, we find the K-medoids variant of aggarwal1999fast , which includes a hill-climbing step.

Many of the methods cited in Section 2 use alternate optimization in some form (e.g., EM), which can be interpreted as hill-climbing. Our method is instead directly formulated as a hill-climbing approach, making it simpler and, arguably, more principled than COSA or Sparse K-means.

### 3.1 Our approach: SAS Clustering

Let $A$ be an algorithm for clustering based on dissimilarities: formally, $A$ takes as input a dissimilarity matrix $\delta$ on $n$ items and returns a clustering $A(\delta) \in \mathcal{C}_n^\kappa$. Note that $A$ could be a hill-climbing method for graph partitioning, or K-medoids (or K-means if we are provided with points in a vector space rather than dissimilarities), or a spectral method, namely, any clustering algorithm that applies to dissimilarities. (In this paper, we use K-means for numerical data and K-medoids for categorical data, with Hamming distances as dissimilarities.) For $S \subset [p]$, define

$$\delta_S = \big(\delta_a(i,j) : a\in S;\ i,j\in[n]\big) \quad\text{and}\quad \delta = \delta_{[p]}. \tag{28}$$

Our procedure is described in Algorithm 1.

The use of the algorithm $A$ in Step 1 is an attempt to minimize $\Delta_S[C]$ over $C$. The minimization in Step 2 is over $S \subset [p]$ of size $s$, and it is trivial: the minimizing $S$ is simply made of the indices $a$ corresponding to the $s$ smallest $\Delta_a[C]$. For the choice of the parameters $\kappa$ and $s$, any standard method for tuning the parameters of a clustering algorithm applies, for example, optimization of the gap statistic of tibshirani2001estimating . We note that the initialization phase, by itself, is a pure coordinate-wise approach that has analogs in the Euclidean setting, as mentioned in Section 2.2. The hill-climbing process is the iteration phase.
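The alternation just described can be sketched in a few dozen lines; this is our minimal reconstruction (with Lloyd's K-means plus a deterministic farthest-first initialization as a stand-in for the base algorithm A), not the authors' code.

```python
import numpy as np

def kmeans(X, kappa, n_iter=50):
    """Lloyd's K-means with a farthest-first initialization (a deterministic
    stand-in for the base clustering algorithm A)."""
    centers = [X[0]]
    for _ in range(kappa - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    C = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        C = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for k in range(kappa):
            if (C == k).any():
                centers[k] = X[C == k].mean(axis=0)
    return C

def delta_a(D, C, kappa):
    """Per-feature average within-cluster dissimilarity, as in eq. (9)."""
    return sum(D[np.ix_(C == k, C == k)].sum() / max((C == k).sum(), 1)
               for k in range(kappa))

def sas_cluster(X, kappa, s, max_iter=20):
    """SAS sketch: alternate Step 1 (cluster on the selected features) with
    Step 2 (keep the s features with the smallest Delta_a[C])."""
    dissims = [(X[:, None, a] - X[None, :, a]) ** 2 for a in range(X.shape[1])]
    dissims = [D / D.sum() for D in dissims]      # normalization (6)-(7)
    C = kmeans(X, kappa)                          # initialize on all features
    S_prev = None
    for _ in range(max_iter):
        deltas = [delta_a(D, C, kappa) for D in dissims]
        S = tuple(np.argsort(deltas)[:s])         # Step 2: trivial minimization
        if S == S_prev:
            break                                 # no change: local minimum
        C = kmeans(X[:, list(S)], kappa)          # Step 1: re-cluster on S
        S_prev = S
    return C, S

rng = np.random.default_rng(0)
# Feature 0 carries the two clusters; features 1-3 are pure noise.
X = np.vstack([rng.normal([0, 0, 0, 0], [0.1, 1, 1, 1], (15, 4)),
               rng.normal([10, 0, 0, 0], [0.1, 1, 1, 1], (15, 4))])
C, S = sas_cluster(X, kappa=2, s=1)
assert S == (0,)
assert len(set(C[:15])) == 1 and len(set(C[15:])) == 1 and C[0] != C[15]
```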

###### Remark 2.

We tried another initialization in Algorithm 1 consisting of drawing a feature set at random. We found that the algorithm behaved similarly. (Results not reported here.)

Compared with COSA and Sparse K-means, and other methods based on penalties, we note that the choice of features in our SAS algorithm is much simpler, using a hill-climbing approach instead.

### 3.2 Number of iterations needed

A first question of interest is whether the iterations improve on the purely coordinate-wise method, defined as the method that results from stopping after one pass through Steps 1-2 in Algorithm 1 (no iteration). Although this is bound to vary with each situation, we examine an instance where the data come from a mixture of three Gaussians with sparse means. In detail, the setting comprises 3 clusters with 30 observations each, drawn from normal distributions whose mean vectors have 50 nonzero entries and 450 zeros. We assume that $\kappa$ and $s$ are both given, and we run the SAS algorithm and record the Rand indexes (rand1971objective, ) and symmetric differences $|S \,\triangle\, \widehat{S}|$ at the end of each iteration of Steps 1-2. The setting is repeated 400 times. The means and confidence intervals under four different regimes are shown in Figure 1. At least in this setting, the algorithm converges in a few iterations and, importantly, these few iterations bring significant improvements over the purely coordinate-wise algorithm.

Figure 1: Means (and 95% confidence intervals of the means) of Rand indexes and symmetric differences.

### 3.3 Selection of the sparsity parameter

We consider the problem of selecting $\kappa$, the number of clusters, as outside the scope of this work, as it is intrinsic to the problem of clustering and has been discussed extensively in the literature — see (tibshirani2001estimating, ; kou2014estimating, ) and references therein. Thus we assume that $\kappa$ is given. Besides $\kappa$, our algorithm has one tuning parameter: the sparsity parameter $s$, which is the number of features deemed useful for clustering, meaning, the cardinality of the set $S$ in (10).

Inspired by the gap statistic of tibshirani2001estimating , which was designed for selecting the number of clusters in standard K-means clustering, we propose a permutation approach for selecting $s$. Let $\Delta_s^{\mathrm{obs}}$ denote the average within-cluster dissimilarity of the clustering computed by the algorithm on the original data with input number of features $s$. Let $\Delta_s^{\mathrm{perm}}$ denote the same quantity but obtained from a random permutation of the data — a new sample is generated by independently permuting the observations within each feature. The gap statistic (for $s$) is then defined as

$$\mathrm{gap}(s) = \mathbb{E}\big(\log \Delta_s^{\mathrm{perm}}\big) - \log \Delta_s^{\mathrm{obs}}. \tag{29}$$

In practice, the expectation is estimated by Monte Carlo, generating $B$ random permuted datasets. A large gap statistic indicates a large discrepancy between the observed amount of clustering and that expected under a null model (here, a permutation of the data) with no salient clusters.
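A Monte Carlo version of this gap statistic can be sketched as follows, written so that larger values indicate stronger clustering, following the convention of tibshirani2001estimating . To keep it self-contained, the "fit with s features" is simplified to K-means on the first s columns (a hypothetical stand-in for a full SAS run), and all names are ours.

```python
import numpy as np

def kmeans(X, kappa, n_iter=30):
    """Small Lloyd's K-means with a deterministic farthest-first init."""
    centers = [X[0]]
    for _ in range(kappa - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    C = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        C = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for k in range(kappa):
            if (C == k).any():
                centers[k] = X[C == k].mean(axis=0)
    return C

def within_ss(X, C, kappa):
    """Within-cluster sum of squares (our stand-in for Delta_s)."""
    return sum(((X[C == k] - X[C == k].mean(axis=0)) ** 2).sum()
               for k in range(kappa) if (C == k).any())

def gap_statistic(X, s, kappa, n_perm=10, seed=0):
    """gap(s) ~= mean_b log(Delta_s^perm_b) - log(Delta_s^obs)."""
    rng = np.random.default_rng(seed)
    Xs = X[:, :s]
    obs = np.log(within_ss(Xs, kmeans(Xs, kappa), kappa))
    perms = []
    for _ in range(n_perm):
        # Null model: permute the observations independently within each feature.
        Xp = np.column_stack([rng.permutation(Xs[:, a]) for a in range(s)])
        perms.append(np.log(within_ss(Xp, kmeans(Xp, kappa), kappa)))
    return float(np.mean(perms) - obs)

rng = np.random.default_rng(1)
# Features 0-1 carry two clusters; features 2-5 are pure noise.
X = np.vstack([rng.normal([ 3,  3, 0, 0, 0, 0], [0.3] * 2 + [1] * 4, (20, 6)),
               rng.normal([-3, -3, 0, 0, 0, 0], [0.3] * 2 + [1] * 4, (20, 6))])
# The gap is larger at the true number of useful features than at s = p.
assert gap_statistic(X, s=2, kappa=2) > gap_statistic(X, s=6, kappa=2)
```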

The optimization of the gap statistic over $s \in [p]$ is a discrete optimization problem. An exhaustive search would involve computing $p$ gap statistics, each requiring multiple runs of the SAS algorithm. This is feasible when $p$ and $n$ are not too large. (In our experiments, we follow the choice of the number of permutations made in the code that comes with (witten2010framework, ).) See Algorithm 2, which allows for coarsening the grid.

To illustrate the effectiveness of choosing $s$ via the gap statistic, we computed the gap statistic for each $s \in [p]$ in the same setting as that of Section 3.2. The result of the experiment is reported in Figure 2. Note that, in this relatively high-SNR setting, the gap statistic achieves its maximum at the correct number of features.

Figure 2: A plot of the gap statistic for each s ∈ [p] for a Gaussian mixture with 3 components (30 observations in each cluster) in dimension p = 500.

In this experiment, at least, the gap statistic appears to be unimodal as a function of $s$. If that holds, a golden section search can be used in place of an exhaustive grid search, at a much smaller computational cost.

## 4 Numerical experiments

We performed a number of numerical experiments, both on simulated data and on real (microarray) data, to compare our method with other proposals. Throughout this section, we standardize the data coordinate-wise, we assume that the number of clusters $\kappa$ is given, and we use the gap statistic of tibshirani2001estimating to choose the tuning parameter $s$ in our algorithm.

### 4.1 A comparison of SAS Clustering with Sparse K-means and IF-PCA-HCT

We compare our Algorithm 1 with IF-PCA-HCT (jin2014important, ) and Sparse K-means (witten2010framework, ) in the setting of Section 3.2. We note that IF-PCA-HCT was specifically designed for that model and that Sparse K-means was shown to numerically outperform a number of other approaches, including standard K-means, COSA (friedman2004clustering, ), model-based clustering (raftery2006variable, ), the penalized log-likelihood approach of (pan2007penalized, ) and the classical PCA approach. We use the gap statistic to tune the parameters of SAS Clustering and Sparse K-means. (SAS_gs uses a grid search while SAS_gss uses a golden section search.) IF-PCA-HCT is tuning-free — it employs the higher criticism to automatically choose the number of features.

In Table 1a, we report the performance of these three methods in terms of the Rand index (rand1971objective, ) for various combinations of the simulation parameters. Each situation was replicated 50 times. As can be seen from the table, SAS Clustering outperforms IF-PCA-HCT, and performs at least as well as Sparse K-means, and sometimes much better. We examine one dataset from this situation in depth and plot the weights resulting from Sparse K-means on this dataset; see Figure 3. As seen in this figure, and also as mentioned in (witten2010framework, ), Sparse K-means generally results in more features with non-zero weights than the truth. These extraneous features, even with small weights, may negatively impact the clustering result. In this specific example, the Rand index from Sparse K-means is 0.763, while our approach gives a Rand index of 0.956.

While both SAS Clustering and Sparse K-means use the gap statistic to tune their parameters, IF-PCA-HCT tunes itself analytically without resorting to permutation or resampling, and (not surprisingly) has the smallest computational time of the three methods. However, as can be seen from Table 1a, the clustering results given by IF-PCA-HCT are far worse than those resulting from the other two methods. In Table 1b, we report the performance of SAS Clustering and Sparse K-means in terms of running time, under the same setting as in Table 1a but with the tuning parameters of both methods given (so that the comparison is fair). As can be seen in Table 1b, SAS Clustering shows a clear advantage over Sparse K-means in terms of running time, and as $p$ increases, the advantage becomes more pronounced. (Note that both SAS and Sparse K-means are implemented in R and, in particular, the code is not optimized.)

Figure 3: A typical example of the weights that Sparse K-means returns.

### 4.2 A more difficult situation (same covariance)

In Section 4.1, the three groups had identity covariance matrix. In this section, we continue comparing our approach with Sparse K-means and IF-PCA-HCT in a more difficult situation, where each of the 3 clusters has 30 points sampled from a different $p$-variate normal distribution, with mean vectors

$$\mu_1 = [1.02, 1.04, \dots, 2, \underbrace{0, \dots, 0}_{p-50 \text{ zeros}}],$$
$$\mu_2 = [1.02 + \delta_\mu, 1.04 + \delta_\mu, \dots, 2 + \delta_\mu, \underbrace{0, \dots, 0}_{p-50 \text{ zeros}}],$$
$$\mu_3 = [1.02 + 2\delta_\mu, 1.04 + 2\delta_\mu, \dots, 2 + 2\delta_\mu, \underbrace{0, \dots, 0}_{p-50 \text{ zeros}}],$$

and the same diagonal covariance matrix $\Sigma$ across groups, a random diagonal matrix with eigenvalues in a fixed range. We used 50 repeats and varied $\delta_\mu$ over a range of values. The results are reported in Table 2a. We see there that, in this setting, our method is clearly superior to Sparse K-means and IF-PCA-HCT. We also report the symmetric difference between the estimated feature set $\widehat{S}$ and the true feature set $S$, as can be seen in Table 2b. Our algorithm is clearly more accurate in terms of feature selection.

### 4.3 A more difficult situation (different covariances)

In both Section 4.1 and Section 4.2, the three groups have the same covariance matrix. In this section, we continue comparing our approach with Sparse K-means and IF-PCA-HCT in an even more difficult situation, where the mean vectors are the same as in Section 4.2, but now the covariances differ: $\Sigma_1$, $\Sigma_2$ and $\Sigma_3$ are random matrices with eigenvalues in three different ranges, respectively. We used 50 repeats in this simulation. The results, reported in Table 3, are consistent with those of Section 4.2: our method clearly outperforms Sparse K-means and IF-PCA-HCT, both in terms of clustering and in terms of feature selection.

Notice that the 3 clusters are well separated in the first 50 features, as can be seen from the construction of the data, but when the 450 noise features are present in the dataset, the task of clustering becomes difficult. See Figure 4(b) for an example, where we project a representative dataset onto the first two principal components of the whole data matrix. However, if we are able to select the first 50 features and apply a classical clustering algorithm, then we achieve better results. See Figure 4(a), where we project the same dataset onto the first two principal components of the data submatrix consisting of the first 50 columns (features). To illustrate the comparison, we also plot in Figure 4 the clustering results of the three methods.

### 4.4 Clustering non-Euclidean data

In the previous simulations, all the datasets were Euclidean. In this section, we apply our algorithm to categorical data (with the Hamming distance) and compare its performance with Sparse K-medoids. (We modified the Sparse K-means function in the R package ‘sparcl’, essentially replacing K-means with K-medoids, so that it can be used to cluster categorical data.) In this example, we generate three clusters of equal size from three different distributions on the Hamming space of dimension $p$. Each distribution is a product of Bernoulli distributions, whose success probabilities take a distinct value on a small set of informative coordinates and a common value on the remaining coordinates. See Table 4, where we compare these two methods in terms of the Rand index for various combinations of the simulation parameters. Each situation was replicated 50 times. As can be seen from the table, SAS Clustering significantly outperforms Sparse K-medoids in most situations. We examined why, and it turns out that Sparse K-medoids works well if the tuning parameter $s$ in equation (18) is given; however, the gap statistic often fails to give a good estimate of $s$ in this categorical setting. We are not sure why.
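In this categorical setting the per-feature dissimilarities are simply disagreement indicators. A short sketch with hypothetical Bernoulli parameters (the paper's exact probabilities are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n_per = 30, 10
# Three hypothetical product-Bernoulli distributions that differ only in
# the first 5 coordinates (the analogue of sparse means).
probs = np.full((3, p), 0.5)
probs[0, :5], probs[2, :5] = 0.05, 0.95
X = np.vstack([(rng.random((n_per, p)) < probs[g]).astype(int) for g in range(3)])
groups = np.repeat([0, 1, 2], n_per)

# Per-feature Hamming dissimilarity delta_a(i,j) = 1{x_ia != x_ja},
# followed by the normalization (7).
dissims = [(X[:, None, a] != X[None, :, a]).astype(float) for a in range(p)]
dissims = [D / D.sum() if D.sum() > 0 else D for D in dissims]

def delta_a(D, C, kappa=3):
    return sum(D[np.ix_(C == k, C == k)].sum() / max((C == k).sum(), 1)
               for k in range(kappa))

# Informative coordinates carry less within-group disagreement than noise ones.
deltas = np.array([delta_a(D, groups) for D in dissims])
assert deltas[:5].mean() < deltas[5:].mean()
```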

### 4.5 Comparisons as the number of clusters κ increases

In Sections 4.1-4.4, we fixed the number of clusters at $\kappa = 3$ and considered the effects of cluster separation ($\delta_\mu$), sparsity ($s$) and cluster shape (identity covariance, same and different covariance matrices across groups). In this section, we continue to compare our approach with Sparse K-means and IF-PCA-HCT as the number of clusters $\kappa$ increases. The set-up here is different from that of the sections above. We sample $\kappa$ sub-centers from a 50-variate normal distribution (the constant 0.4 was chosen to make the task of clustering neither too easy nor too difficult) and concatenate each of the sub-centers with 450 zeros to obtain $\kappa$ random centers $\mu_1, \dots, \mu_\kappa$ of length 500, which carry at least 450 noise features. Once the centers are generated, we construct $\kappa$ clusters, each with the same number of observations (a larger number in the second set-up), sampled from the respective normal distributions. Each setting is repeated over multiple runs. The means and confidence intervals for the different $\kappa$'s are shown in Figure 5(a) and Figure 5(b). Once again, the results are consistent with the earlier ones in that SAS Clustering outperforms IF-PCA-HCT and performs at least as well as Sparse K-means across the different $\kappa$'s. We also notice that the clustering results of all three methods improve as $\kappa$ increases. This can be explained by the increased effective sample size as $\kappa$ increases.

### 4.6 Applications to gene microarray data

We compare our approach with others on real data from genetics. Specifically, we consider the same microarray datasets (listed in Table 5) used by jin2014important to evaluate their IF-PCA method. Each of these 10 datasets consists of measurements of expression levels of $p$ genes in $n$ patients from $\kappa$ different classes (e.g., normal, diseased). We notice from Table 5 that $p$ is much greater than $n$, illustrating a high-dimensional setting. We also mention that, although the true labels are given by the groups the individuals belong to, they are only used as the ground truth when we report the classification errors of the different methods in Table 6. For detailed descriptions of, and access to, these 10 datasets, we refer the reader to (jin2014important, ).

In Table 6, we report the classification errors of 10 different methods on these datasets. Among these 10 methods, the results for K-means, K-means++ (arthur2007k, ), hierarchical clustering, SpectralGem (lee2010spectral, ) and IF-PCA-HCT (jin2014important, ) are taken from (jin2014important, ). We briefly mention that K-means++ is Lloyd's algorithm for K-means but with a more careful initialization than a purely random one; hierarchical clustering is applied to the normalized data matrix directly, without feature selection; and SpectralGem is a PCA-type method. In addition to these 5 methods, we also include 3 others: AHP-GMM (wang2008variable, ), an adaptively hierarchically penalized Gaussian-mixture-model-based clustering method; Regularized K-means (sun2012regularized, ); and Sparse K-means (witten2010framework, ).

We can offer several comments. First, our method is overall comparable to Sparse K-means and IF-PCA, which in general outperform the other methods. It is interesting to note that SAS_gss outperforms SAS_gs on a couple of datasets. However, we caution the reader against drawing hard conclusions from these numbers, as some of the datasets are quite small relative to their very high dimension; the Brain dataset is an example. Second, for Breast Cancer, Prostate Cancer, SRBCT and SuCancer, all methods perform poorly, with even the best error rate remaining high. However, we note that even when the task is classification, where class labels in the training sets are given, these datasets are still hard for some well-known classification algorithms (dettling2004bagboosting, ; yousefi2010reporting, ). Third, we notice that clustering results for the Leukemia and Lymphoma datasets were also reported in (sun2012regularized, ). The error rate on Lymphoma given by Regularized K-means in (sun2012regularized, ) is the same as reported here; however, their error rate on Leukemia is smaller than the result reported here. This is because they applied preprocessing techniques to screen out some inappropriate features and also imputed missing values using 5 nearest neighbors on this dataset. Interestingly, wang2008variable also reported a better error rate on the SRBCT data using their AHP-GMM method. However, they split the data into a training set and a testing set, fit the penalized Gaussian mixture model, and report the training error and testing error separately.

## 5 Conclusion

We presented here a simple method for feature selection in the context of sparse clustering. The method is arguably more natural and simpler to implement than COSA or Sparse K-means. At the same time, it performs comparably or better than these methods, both on simulated and on real data.

At the moment, our method does not come with any guarantees, other than that of achieving a local minimum if the iteration is stopped when no improvement is possible. Just like other iterative methods based on alternating optimization, such as Lloyd’s algorithm for K-means, proving a convergence to a good local optimum (perhaps even a global optimum) seems beyond reach at the moment. COSA and Sparse K-means present similar challenges and have not been analyzed theoretically. IF-PCA has some theoretical guarantees developed in the context of a Gaussian mixture model (jin2014important, ) — see also jin2015phase . More theory for sparse clustering is developed in (azizyan2013, ; verzelen2014detection, ; chan2010using, ).

## References

• (1) T. Hastie, R. Tibshirani, J. Friedman, The elements of statistical learning, Springer, New York, 2009.
• (2) J. H. Friedman, J. J. Meulman, Clustering objects on subsets of attributes (with discussion), Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66 (4) (2004) 815–849.
• (3) D. M. Witten, R. Tibshirani, A framework for feature selection in clustering, Journal of the American Statistical Association 105 (490).
• (4) R. Tibshirani, G. Walther, T. Hastie, Estimating the number of clusters in a data set via the gap statistic, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63 (2) (2001) 411–423.
• (5) W. Sun, J. Wang, Y. Fang, et al., Regularized k-means clustering of high-dimensional data and its asymptotic consistency, Electronic Journal of Statistics 6 (2012) 148–167.
• (6) D. Ghosh, A. M. Chinnaiyan, Mixture modelling of gene expression data from microarray experiments, Bioinformatics 18 (2) (2002) 275–286.
• (7) J. S. Liu, et al., Bayesian clustering with variable and transformation selections, Bayesian Statistics 7 (2003) 245–275.

• (8) P. Tamayo, et al., Metagene projection for cross-platform, cross-species characterization of global transcriptional states, Proceedings of the National Academy of Sciences 104 (14) (2007) 5959–5964.
• (9) W. Pan, X. Shen, Penalized model-based clustering with application to variable selection, The Journal of Machine Learning Research 8 (2007) 1145–1164.

• (10) S. Wang, J. Zhu, Variable selection for model-based high-dimensional clustering and its application to microarray data, Biometrics 64 (2) (2008) 440–448.
• (11) J. Guo, E. Levina, G. Michailidis, J. Zhu, Pairwise variable selection for high-dimensional model-based clustering, Biometrics 66 (3) (2010) 793–804.
• (12) B. Xie, et al., Penalized model-based clustering with cluster-specific diagonal covariance matrices and grouped variables, Electronic journal of statistics 2 (2008) 168.
• (13) C. Fraley, A. E. Raftery, Mclust version 3: an r package for normal mixture modeling and model-based clustering, Tech. rep., DTIC Document (2006).
• (14) J. Jin, W. Wang, Important feature PCA for high dimensional clustering, arXiv preprint arXiv:1407.5241.
• (15) J. Jin, Z. T. Ke, W. Wang, Phase transitions for high dimensional clustering and related problems, arXiv preprint arXiv:1502.06952.
• (16) M. Azizyan, A. Singh, L. Wasserman, Minimax theory for high-dimensional Gaussian mixtures with sparse mean separation, Neural Information Processing Systems (NIPS).
• (17) N. Verzelen, E. Arias-Castro, Detection and feature selection in sparse mixture models, arXiv preprint arXiv:1405.1478.
• (18) Y.-b. Chan, P. Hall, Using evidence of mixed populations to select variables for clustering very high-dimensional data, Journal of the American Statistical Association 105 (490) (2010) 798–809.
• (19) B. W. Kernighan, S. Lin, An efficient heuristic procedure for partitioning graphs, Bell System Technical Journal 49 (2) (1970) 291–307.

• (20) T. Carson, R. Impagliazzo, Hill-climbing finds random planted bisections, in: Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, Society for Industrial and Applied Mathematics, 2001, pp. 903–909.
• (21) C. C. Aggarwal, J. L. Wolf, P. S. Yu, C. Procopiuc, J. S. Park, Fast algorithms for projected clustering, in: ACM SIGMoD Record, Vol. 28, ACM, 1999, pp. 61–72.
• (22) W. M. Rand, Objective criteria for the evaluation of clustering methods, Journal of the American Statistical association 66 (336) (1971) 846–850.
• (23) J. Kou, Estimating the number of clusters via the gud statistic, Journal of Computational and Graphical Statistics 23 (2) (2014) 403–417.
• (24) A. E. Raftery, N. Dean, Variable selection for model-based clustering, Journal of the American Statistical Association 101 (473) (2006) 168–178.
• (25) D. Arthur, S. Vassilvitskii, k-means++: The advantages of careful seeding, in: Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, Society for Industrial and Applied Mathematics, 2007, pp. 1027–1035.
• (26) A. B. Lee, D. Luca, K. Roeder, et al., A spectral graph approach to discovering genetic ancestry, The Annals of Applied Statistics 4 (1) (2010) 179–202.
• (27) M. Dettling, Bagboosting for tumor classification with gene expression data, Bioinformatics 20 (18) (2004) 3583–3593.
• (28) M. R. Yousefi, J. Hua, C. Sima, E. R. Dougherty, Reporting bias when using real data sets to analyze classification performance, Bioinformatics 26 (1) (2010) 68–76.