A robust and sparse K-means clustering algorithm

01/29/2012, by Yumi Kondo et al.

In many situations where the interest lies in identifying clusters one might expect that not all available variables carry information about these groups. Furthermore, data quality (e.g. outliers or missing entries) might present a serious and sometimes hard-to-assess problem for large and complex datasets. In this paper we show that a small proportion of atypical observations might have serious adverse effects on the solutions found by the sparse clustering algorithm of Witten and Tibshirani (2010). We propose a robustification of their sparse K-means algorithm based on the trimmed K-means algorithm of Cuesta-Albertos et al. (1997). Our proposal is also able to handle datasets with missing values. We illustrate the use of our method on microarray data for cancer patients where we are able to identify strong biological clusters with a much reduced number of genes. Our simulation studies show that, when there are outliers in the data, our robust sparse K-means algorithm performs better than other competing methods both in terms of the selection of features and also the identified clusters. This robust sparse K-means algorithm is implemented in the R package RSKC which is publicly available from the CRAN repository.

1 Robust and Sparse K-Means

The K-means clustering algorithm finds clusters $C_1, \ldots, C_K$ that minimize the within-clusters sum of squares

$$\sum_{k=1}^{K} \frac{1}{n_k} \sum_{i, i' \in C_k} d(i, i'), \qquad (1)$$

where $C_1, \ldots, C_K$ are the disjoint sets of cluster indices, $n_k$ is the number of observations in the $k$-th cluster, and

$$d(i, i') = \sum_{j=1}^{p} d_{i, i', j}$$

is the (additive) dissimilarity measure between the $i$-th and $i'$-th observations. When our observations are vectors $\mathbf{x}_i \in \mathbb{R}^p$, $i = 1, \ldots, n$, and $d(i, i')$ is the squared Euclidean distance between the $i$-th and $i'$-th points we have $d_{i, i', j} = (x_{ij} - x_{i'j})^2$, $j = 1, \ldots, p$, and

$$\frac{1}{n_k} \sum_{i, i' \in C_k} d(i, i') = 2 \sum_{i \in C_k} \|\mathbf{x}_i - \bar{\mathbf{x}}_k\|^2,$$

where $\bar{\mathbf{x}}_k$ is the sample mean of the observations in the $k$-th cluster. A popular algorithm to find local solutions to (1) is as follows (Lloyd, 1982). First randomly select $K$ initial “centers” $\mu_1, \ldots, \mu_K$ and iterate the following two steps until convergence:

  (a) Given cluster centers $\mu_1, \ldots, \mu_K$, assign each point to the cluster with the closest center.

  (b) Given a cluster assignment, update the cluster centers to be the sample mean of the observations in each cluster.

Although this algorithm decreases the objective function at each iteration, it may become trapped in a local minimum. Hence, it is started several times and the best solution is returned.
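A minimal R sketch of this two-step iteration is given below; it is illustrative only (the function and variable names are ours, empty clusters are not handled, and in practice one would rely on kmeans() or the packages discussed later):

```r
# Illustrative Lloyd-type K-means iteration (not the authors' implementation).
lloyd_kmeans <- function(x, K, max_iter = 100) {
  x <- as.matrix(x)
  # Step 0: randomly select K initial centers among the observations.
  centers <- x[sample(nrow(x), K), , drop = FALSE]
  cluster <- rep(NA_integer_, nrow(x))
  for (iter in seq_len(max_iter)) {
    # Step (a): assign each point to the cluster with the closest center.
    d2 <- sapply(seq_len(K), function(k) rowSums(sweep(x, 2, centers[k, ])^2))
    new_cluster <- as.integer(max.col(-d2))   # index of the smallest squared distance
    if (identical(new_cluster, cluster)) break
    cluster <- new_cluster
    # Step (b): update each center to the sample mean of its cluster
    # (no safeguard for empty clusters, for brevity).
    for (k in seq_len(K)) centers[k, ] <- colMeans(x[cluster == k, , drop = FALSE])
  }
  list(cluster = cluster, centers = centers)
}

set.seed(1)
toy <- rbind(matrix(rnorm(100, 0), ncol = 2), matrix(rnorm(100, 4), ncol = 2))
str(lloyd_kmeans(toy, K = 2))
```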

Cuesta-Albertos et al. (1997) proposed a modification of this algorithm in order to obtain outlier-robust clusters. The main idea is to replace step (b) above by

  (b') Given a cluster assignment, trim the proportion $\alpha$ of observations with the largest distances to their cluster centers, and update the cluster centers to the sample mean of the remaining observations in each cluster.

The tuning parameter $\alpha$ regulates the amount of trimming and is selected by the user.
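The trimmed update of step (b') can be sketched in R as follows (again our own illustrative code: a fraction alpha of the observations farthest from their current centers is set aside before the means are recomputed):

```r
# Illustrative trimmed center update: drop the alpha-fraction of points that are
# farthest from their current cluster centers before recomputing the means.
trimmed_update <- function(x, cluster, centers, alpha) {
  d2 <- rowSums((x - centers[cluster, , drop = FALSE])^2)  # distance to own center
  keep <- d2 <= quantile(d2, probs = 1 - alpha)            # trim the largest distances
  for (k in seq_len(nrow(centers))) {
    idx <- which(cluster == k & keep)
    if (length(idx) > 0) centers[k, ] <- colMeans(x[idx, , drop = FALSE])
  }
  list(centers = centers, trimmed = which(!keep))
}
```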

Since the total sum of squares

$$\frac{1}{n} \sum_{i=1}^{n} \sum_{i'=1}^{n} d(i, i')$$

does not depend on the cluster assignments, minimizing the within-cluster sum of squares is equivalent to maximizing the between-cluster sum of squares

$$\frac{1}{n} \sum_{i=1}^{n} \sum_{i'=1}^{n} d(i, i') \;-\; \sum_{k=1}^{K} \frac{1}{n_k} \sum_{i, i' \in C_k} d(i, i').$$
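This equivalence is just the usual decomposition of the total sum of squares. A short R check (our own illustration, using an arbitrary cluster assignment) confirms that the pairwise-distance form above agrees with the familiar mean-centred sums of squares:

```r
set.seed(2)
xj <- rnorm(30)                         # one feature
cl <- sample(1:3, 30, replace = TRUE)   # an arbitrary cluster assignment

pairwise_ss <- function(v) sum(outer(v, v, function(a, b) (a - b)^2))

total  <- pairwise_ss(xj) / length(xj)                   # (1/n) sum_{i,i'} d(i,i')
within <- sum(tapply(xj, cl, function(v) pairwise_ss(v) / length(v)))

all.equal(total,  2 * sum((xj - mean(xj))^2))            # pairwise = 2 * centred SS
all.equal(within, 2 * sum((xj - ave(xj, cl))^2))         # same identity within clusters
# The between-cluster sum of squares is the difference 'total - within',
# which therefore does not depend on how the pairwise distances are written.
```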

The SK-means algorithm of Witten and Tibshirani (2010) introduces non-negative weights $w_j$, $j = 1, \ldots, p$, one for each feature, and then solves

$$\max_{C_1, \ldots, C_K,\, \mathbf{w}} \;\sum_{j=1}^{p} w_j \left( \frac{1}{n} \sum_{i=1}^{n} \sum_{i'=1}^{n} d_{i, i', j} \;-\; \sum_{k=1}^{K} \frac{1}{n_k} \sum_{i, i' \in C_k} d_{i, i', j} \right) \qquad (2)$$

subject to $\|\mathbf{w}\|_2 \le 1$, $\|\mathbf{w}\|_1 \le l_1$ and $w_j \ge 0$, $j = 1, \ldots, p$, where the bound $l_1$ determines the degree of sparsity (in terms of non-zero weights) of the solution. The optimization problem in (2) can be solved by iterating the following steps:

  (a) Given weights $\mathbf{w}$ and cluster centers $\mu_1, \ldots, \mu_K$, solve

    $$\min_{C_1, \ldots, C_K} \; \sum_{k=1}^{K} \sum_{i \in C_k} \sum_{j=1}^{p} w_j \,(x_{ij} - \mu_{kj})^2,$$

    which is obtained by assigning each point to the cluster with the closest center using weighted squared Euclidean distances.

  (b) Given weights $\mathbf{w}$ and cluster assignments $C_1, \ldots, C_K$, update the cluster centers to be the weighted sample mean of the observations in each cluster.

  (c) Given cluster assignments $C_1, \ldots, C_K$ and centers $\mu_1, \ldots, \mu_K$, solve

    $$\max_{\|\mathbf{w}\|_2 \le 1,\; \|\mathbf{w}\|_1 \le l_1,\; w_j \ge 0} \; \sum_{j=1}^{p} w_j \, B_j,$$

    where $B_j = \frac{1}{n} \sum_{i=1}^{n} \sum_{i'=1}^{n} d_{i, i', j} - \sum_{k=1}^{K} \frac{1}{n_k} \sum_{i, i' \in C_k} d_{i, i', j}$. There is a closed-form expression for the vector of weights that solves this optimization problem (see Witten and Tibshirani, 2010); a sketch of this closed form is given just after this list.
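As we read Witten and Tibshirani (2010), this closed form amounts to soft-thresholding the vector of between-cluster sums of squares $B_1, \ldots, B_p$ and renormalising, with the threshold chosen so that the $l_1$ constraint holds. The sketch below is our own translation of that idea (the function names and the simple bisection are ours; the sparcl and RSKC packages contain the reference implementations):

```r
# Sketch of the weight update of step (c): w = S(B, delta) / ||S(B, delta)||_2,
# with S the soft-thresholding operator and delta >= 0 chosen so that ||w||_1 <= l1.
soft <- function(b, delta) sign(b) * pmax(abs(b) - delta, 0)

update_weights <- function(b, l1) {
  w <- soft(b, 0); w <- w / sqrt(sum(w^2))
  if (sum(abs(w)) <= l1) return(w)            # delta = 0 is already feasible
  lo <- 0; hi <- max(abs(b))                  # bisection on the threshold delta
  for (it in 1:50) {
    delta <- (lo + hi) / 2
    w <- soft(b, delta); w <- w / sqrt(sum(w^2))
    if (sum(abs(w)) > l1) lo <- delta else hi <- delta
  }
  w
}

b <- c(10, 8, 0.5, 0.2, 0.1)   # hypothetical per-feature between-cluster sums of squares B_j
round(update_weights(b, l1 = 1.2), 3)
```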

A naive first attempt to incorporate robustness to this algorithm is to use trimmed K-means with weighted features, and then optimize the weights using the trimmed sample. In other words, to replace step (b) above with the trimmed update (b') and to calculate the $B_j$, $j = 1, \ldots, p$, in step (c) without the observations flagged as outliers. A potential problem with this approach is that if an observation is outlying in a feature that received a small weight in steps (a) and (b'), it might not be trimmed. In this case, the variable where the outlier is more evident will receive a very high weight in step (c) (because this feature will be associated with a very large $B_j$). This may in turn cause the weighted K-means steps above to form a cluster containing this single point, which will then not be downweighted (since its distance to the cluster centre will be zero). This phenomenon is illustrated in the following example.

We generated a synthetic data set in which only the first two features contain information on the clusters. The observations follow a multivariate normal distribution with covariance matrix equal to the identity, and the cluster centers differ only in these first two coordinates. Figure 1 contains the scatterplot of the two clustering features. Colors and shapes indicate the true cluster labels.

Figure 1: Scatterplot of the two clustering features of the synthetic data set. (a) True cluster labels; (b) partition found by the naive robustification; (c) partition found by Sparse K-means; (d) partition found by RSK-means.

To illustrate the problem mentioned above, we replaced the 4th and 5th entries of the first 3 observations with large outliers. The naive robustification described above returns a disappointing partition because it fails to correctly identify the clustering features, placing all the weight on the noise ones. The result is shown in Figure 1 (b). As expected, sparse K-means also fails in this case, assigning all the weight to the noise features and forming one small cluster with the 3 outliers (see Figure 1 (c)). Finally, Figure 1 (d) shows the partition found by the Robust Sparse K-means algorithm described below.

The key step in our proposal is to use two sets of trimmed observations which we call the weighted and un-weighted trimmed sets. By doing this, zero weights will not necessarily mask outliers in the noise features. Our algorithm can be described as follows:

  1. Perform Trimmed K-means on the weighted data set:

    1. Given weights $\mathbf{w}$ and cluster centers $\mu_1, \ldots, \mu_K$, solve

      $$\min_{C_1, \ldots, C_K} \; \sum_{k=1}^{K} \sum_{i \in C_k} \sum_{j=1}^{p} w_j \,(x_{ij} - \mu_{kj})^2,$$

      which is obtained by assigning each point to the cluster with the closest center using weighted squared Euclidean distances.

    2. Given weights $\mathbf{w}$ and cluster assignments, trim the proportion $\alpha$ of observations with the largest (weighted) distances to their cluster centers, and update the cluster centers to the sample mean of the remaining observations in each cluster.

    3. Iterate the two steps above until convergence.

  2. Let $O_W$ be the subscripts of the cases labelled as outliers in the final step of the weighted trimmed K-means procedure above.

  3. Using the partition returned by trimmed K-means, calculate the unweighted cluster centers $\tilde{\mu}_k$, $k = 1, \ldots, K$. For each observation $\mathbf{x}_i$, let $\tilde{d}_i$ be the unweighted distance to its cluster centre, i.e. $\tilde{d}_i = \|\mathbf{x}_i - \tilde{\mu}_{k(i)}\|^2$, where $k(i)$ denotes the cluster to which the $i$-th observation is assigned. Let $O_E$ be the subscripts of the observations with the largest distances $\tilde{d}_i$ (the same proportion $\alpha$ as above).

  4. Form the set of trimmed points $O = O_W \cup O_E$.

  5. Given cluster assignments $C_1, \ldots, C_K$, centers $\mu_1, \ldots, \mu_K$ and trimmed points $O$, find a new set of weights $\mathbf{w}$ by solving

    $$\max_{\|\mathbf{w}\|_2 \le 1,\; \|\mathbf{w}\|_1 \le l_1,\; w_j \ge 0} \; \sum_{j=1}^{p} w_j \, B_j,$$

    where the $B_j$, $j = 1, \ldots, p$, are calculated without the observations in $O$.

We call this algorithm RSK-means, and it is readily available on-line in the R package RSKC from the CRAN repository: http://cran.r-project.org.
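A minimal usage sketch follows. We are confident of the main arguments of RSKC() (the data d, the number of clusters ncl, the trimming proportion alpha and the $l_1$ bound L1), but the exact signature and the names of the returned components should be checked against the package documentation; the simulated data here are ours and only mimic the example above.

```r
# install.packages("RSKC")   # from CRAN
library(RSKC)

set.seed(3)
n <- 60; p <- 5
x <- matrix(rnorm(n * p), n, p)
x[1:20, 1:2]  <- x[1:20, 1:2]  - 2     # three clusters defined by the first two features
x[41:60, 1:2] <- x[41:60, 1:2] + 2
x[1:3, 4:5]   <- 25                    # a few large outliers placed in noise features

# ncl = number of clusters, alpha = trimming proportion, L1 = l1 bound on the weights
res <- RSKC(d = x, ncl = 3, alpha = 0.1, L1 = 2)
str(res)   # inspect the returned cluster labels, feature weights and flagged cases
```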

The RSK-means algorithm requires the selection of three parameters: the $l_1$ bound, the trimming proportion $\alpha$ and the number of clusters $K$. Regarding the first two, we recommend considering different combinations and comparing the corresponding results. In our experience, a relatively small trimming proportion $\alpha$ suffices for most applications. The choice of the $l_1$ bound determines the degree of sparsity and hence can be tuned to achieve a desired number of selected features. In practice, one can also consider several combinations of these parameters and compare the different results. To select the number of clusters $K$ we recommend using the Clest algorithm of Dudoit and Fridlyand (2002). This is the default method implemented in the RSKC package.
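For instance, the $l_1$ bound can be calibrated by trying a small grid of values and counting the resulting non-zero weights. The sketch below assumes (check ?RSKC) that the fitted object stores the feature weights in a component called weights, and reuses the simulated matrix x from the previous snippet:

```r
# Sketch: tune the l1 bound so that roughly the desired number of features is selected.
library(RSKC)
l1_grid <- seq(1.2, 2.2, by = 0.2)
n_selected <- sapply(l1_grid, function(l1) {
  fit <- RSKC(d = x, ncl = 3, alpha = 0.1, L1 = l1)
  sum(fit$weights > 0)    # assumed component name; see the package documentation
})
cbind(L1 = l1_grid, nonzero_weights = n_selected)
```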

2 Example - Does this data set have missing values?

In this section we consider the problem of identifying clinically interesting groups, and the associated set of discriminating genes, among breast cancer patients using microarray data. We used a data set consisting of 4751 gene expressions for 78 primary breast tumors. The dataset was previously analyzed by van’t Veer et al. (2002) and it is publicly available at: http://www.nature.com/nature/journal/v415/n6871/suppinfo/415530a.html. These authors applied a supervised classification technique and found a subset of 70 genes that helps to discriminate patients who develop distant metastases within 5 years from the others.

To further explore these data we applied both the Sparse and the Robust Sparse K-means algorithms. We set the $l_1$ bound to 6 in order to obtain approximately 70 non-zero weights. As in Hardin et al. (2007), we used a dissimilarity measure based on Pearson’s correlation coefficient between cases. This can be done easily using the relationship between the squared Euclidean distance of a pair of standardized rows and their correlation coefficient. More specifically, let $\tilde{\mathbf{x}}_i$ denote the standardized $i$-th case, where we subtracted the average of the $i$-th case and divided by the corresponding standard deviation. Then we have

$$\|\tilde{\mathbf{x}}_i - \tilde{\mathbf{x}}_l\|^2 = 2\,(p-1)\,\bigl(1 - \operatorname{cor}(\mathbf{x}_i, \mathbf{x}_l)\bigr).$$

Hence, as a dissimilarity measure, we used the squared Euclidean distance calculated on the standardized data. To avoid the potentially damaging effect of outliers we standardized each observation using its median and MAD instead of the sample mean and standard deviation.
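Both the identity and the robust standardization are easy to reproduce in R. The snippet below (ours) assumes the usual sample standard deviation with denominator $p - 1$, which is what R's sd() computes:

```r
set.seed(4)
p <- 20
x1 <- rnorm(p); x2 <- 0.6 * x1 + rnorm(p)

std  <- function(v) (v - mean(v)) / sd(v)       # classical standardization of a case
rstd <- function(v) (v - median(v)) / mad(v)    # robust version: median and MAD

# Squared Euclidean distance between standardized cases vs. Pearson correlation
all.equal(sum((std(x1) - std(x2))^2), 2 * (p - 1) * (1 - cor(x1, x2)))

# The dissimilarity actually used: squared distance between robustly standardized cases
d12 <- sum((rstd(x1) - rstd(x2))^2)
```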

Since we have no information on the number of potential outliers in these data, we ran Clest with four trimming proportions, ranging from $\alpha = 0$ to $\alpha = 10/78$. Note that using $\alpha = 0$ corresponds to the original sparse K-means method. Interestingly, Clest selects $K = 2$ for all four trimming proportions mentioned above.

It is worth noting that all four procedures return clusters closely associated with an important clinical outcome: the estrogen receptor status of the sample (ER-positive or ER-negative). There is clinical consensus that ER-positive patients show a better response to treatment of metastatic disease (see Barnes et al., 1998).

To evaluate the strength of the 70 identified genes we considered the level of agreement between the clustering results obtained using these genes and the clinical outcome. To measure this agreement we use the classification error rate (CER) (see Section 3 for its definition). Since outliers are not considered members of either of the two clusters, we exclude them from the CER calculations. However, it is important to note that, given a trimming proportion $\alpha$, the robust sparse K-means algorithm will always flag a pre-determined number of observations as potential outliers. The actual status of these suspicious cases is decided by comparing their weighted distances to the cluster centers with a threshold based on the distances to the cluster centers of all the observations. Specifically, outliers are those cases above the corresponding median plus 3.5 MADs of these weighted distances. Using other reasonable values instead of 3.5 produced the same conclusions. The resulting CERs are 0.06 for the robust sparse K-means with $\alpha = 10/78$, 0.08 for the other two positive trimming proportions, and 0.10 for the non-robust sparse K-means ($\alpha = 0$). To simplify the comparison, in what follows we restrict our attention to SK-means and RSK-means with $\alpha = 10/78$.
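In R, this final flagging rule is a one-line threshold. The sketch below is ours, with dW standing for the weighted distances of all observations to their cluster centers and suspects for the cases trimmed by the algorithm:

```r
# Confirm trimmed cases as outliers only if their weighted distance exceeds
# median(dW) + k * MAD(dW), computed over all observations (k = 3.5 in the paper).
flag_outliers <- function(dW, suspects, k = 3.5) {
  cutoff <- median(dW) + k * mad(dW)
  suspects[dW[suspects] > cutoff]
}
```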

These sparse clustering methods can also be used to identify genes that play an important role in classifying patients. In this regard, 62% of the genes with positive weights agree in both methods. On the other hand, 10% of the largest positive weights in each group (i.e. those larger than the corresponding median non-zero weight) are different. The complete list of genes selected by each method, with the corresponding weights, is included in the supplementary materials available on-line from our website

http://www.stat.ubc.ca/~matias/pubs.html.

To compare the strength of the clusters found by each of the two methods we use the following silhouette-like index to measure how well separated they are. For each observation $\mathbf{x}_i$, let $a_i$ and $b_i$ be the squared distances to the closest and second closest cluster centers, respectively. The “silhouette” of each point is defined as $s_i = (b_i - a_i)/b_i$. Note that $0 \le s_i \le 1$, with larger values indicating a stronger cluster assignment. The average silhouette of each cluster measures the overall strength of the corresponding cluster assignments. Figure 2 contains the silhouettes of the clusters found by each method. Each bar corresponds to an observation, with its height representing its silhouette. The numbers inside each cluster denote the corresponding average silhouette for that cluster. Darker lines identify outlying observations according to the criterion mentioned above.
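A direct implementation of this index, following the definition above (our own sketch, with centers taken as a K x p matrix of cluster centers):

```r
# Silhouette-like index based on distances to cluster centers:
# a_i = squared distance to the closest center, b_i = to the second closest,
# s_i = (b_i - a_i) / b_i, so that 0 <= s_i <= 1.
center_silhouette <- function(x, centers) {
  d2 <- sapply(seq_len(nrow(centers)),
               function(k) rowSums(sweep(as.matrix(x), 2, centers[k, ])^2))
  sorted <- t(apply(d2, 1, sort))
  a <- sorted[, 1]; b <- sorted[, 2]
  (b - a) / b
}
```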

Figure 2: Silhouette plots for the partitions returned by (a) SK-means and (b) RSK-means with $\alpha = 10/78$. Each bar corresponds to an observation and its height corresponds to its silhouette value. For each cluster, points are ordered according to their silhouette values. The numbers inside each cluster block indicate the corresponding average silhouette (over the observations in that cluster). Higher values indicate stronger clusters. Darker bars indicate potentially outlying cases.

Note that both clusters identified by RSK-means are considerably stronger than those found by SK-means (the average silhouettes are 0.91 and 0.91, versus 0.85 and 0.79, respectively). Also note that the average silhouettes of the outliers identified by SK-means (0.44 and 0.55 for each cluster, respectively) are higher than those identified by RSK-means (0.42 and 0.30 for each cluster, respectively). One can conclude that the outliers identified by RSK-means are more extreme (further on the periphery of their corresponding clusters) than those found by SK-means.

3 Numerical results

In this section we report the results of a simulation study to investigate the properties of the proposed robust sparse K-means algorithm (RSK-means) and compare it with K-means (KM), trimmed K-means (TKM), and sparse K-Means (SKM). For this study we used our implementation of RSK-means in the R package RSKC, which is publicly available on-line at http://cran.r-project.org.

Our simulated datasets contain $n = 60$ observations on $p = 500$ features, generated from multivariate normal distributions with covariance matrix equal to the identity. We form three clusters of equal size by setting the mean vector of each cluster to be $\mu = c\,\mathbf{e}$, where $\mathbf{e}$ is the vector with its first 50 entries equal to 1 followed by 450 zeroes, and the constant $c$ takes a different value for each of the three clusters (the second one being 0). Note that the clusters are determined by the first 50 features only (which we call clustering features), the remaining 450 being noise features.

We will assess the performance of the different clustering procedures with respect to two outcomes: the identification of the true clusters and the identification of the true clustering features. To measure the degree of cluster agreement we use the classification error rate (CER) proposed by Chipman and Tibshirani (2005). Given two partitions of the data set (clusterings), the CER is the proportion of pairs of cases that are together in one partition and apart in the other. To assess the correct identification of the clustering features we adapt the average precision measure (reference?). This is computed as follows: sort the features in decreasing order according to their weights and count the number of true clustering features appearing among the 50 highest-ranking features.
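Both measures are simple to compute. The following sketch (ours) implements the CER as the proportion of discordant pairs and the average-precision-style count described above:

```r
# Classification error rate (CER): proportion of pairs of observations that are
# together in one partition but apart in the other.
cer <- function(labels1, labels2) {
  same1 <- outer(labels1, labels1, "==")
  same2 <- outer(labels2, labels2, "==")
  disagree <- same1 != same2
  sum(disagree[upper.tri(disagree)]) / choose(length(labels1), 2)
}

# "Average precision"-style measure: number of true clustering features among the
# q features with the largest weights (q = number of true clustering features).
top_feature_hits <- function(weights, true_features) {
  q <- length(true_features)
  top <- order(weights, decreasing = TRUE)[seq_len(q)]
  sum(top %in% true_features)
}
```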

We consider the following two contamination configurations:

  • Model 1: We replace a single entry of one of the noise features with an outlier at 25.

  • Model 2: We replace a single entry of one of the clustering features with an outlier at 25.

Several other configurations where we vary the number, value and location of the outliers were also studied and are reported in the accompanying supplementary materials. The general conclusions of all our studies agree with those reported here.

The $l_1$ bound for each algorithm was selected in such a way that approximately 50 features would receive positive weights when used on clean datasets. More specifically, we generated 50 data sets without outliers and, for each algorithm, considered the following 11 bounds: 5.7, 5.8, …, 6.7. The one resulting in the number of non-zero weights closest to 50 was recorded. In our simulation study we used the corresponding average of selected bounds for each algorithm. For both SK-means and RSK-means this procedure yielded an $l_1$ bound of 6.2. The proportion of trimmed observations in TKM and RSK-means was set equal to 1/60, the true proportion of outliers.

We generated 100 datasets from each model. Figure 3 shows the boxplots of the CERs between the true partition and the partitions returned by the algorithms. When the data do not contain any outliers we see that the performances of K-means, SK-means, and RSK-means are comparable. The results for Models 1 and 2 show that the outliers affect the performance of K-means and SK-means, and also that the presence of 450 noise features upsets the performance of TK-means. However, the RSK-means algorithm retains small values of CER under both types of contamination.

Figure 3: Boxplots of CERs calculated between the true partition and the partitions returned by the four algorithms, for (a) clean data, (b) Model 1 and (c) Model 2. They correspond to 100 simulated data sets of size $n = 60$ with $p = 500$ features (of which 50 are true clustering features). “K”, “SK”, “TK” and “RSK” denote K-means, Sparse K-means, Trimmed K-means and Robust Sparse K-means, respectively.

To compare the performance of the different algorithms with regard to the features they select, we consider the median weight assigned to the true clustering features. The results are shown in Figure 4. We can see that when there are no outliers in the data both the SK-means and RSK-means algorithms assign very similar weights to the correct clustering features. The presence of a single outlier, however, causes the SK-means algorithm to assign much smaller weights to the clustering features.

Figure 4: Boxplots of the median proportion of weight placed on the 50 clustering features over the 100 simulations, for (a) clean data, (b) Model 1 and (c) Model 2. The dotted line represents the ideal weight for each clustering feature. K = K-means, SK = Sparse K-means, TK = Trimmed K-means, RSK = Robust Sparse K-means.

Table 1 below contains the average number of non-zero weights returned by each algorithm, and the average number of true clustering features among the 50 features receiving the highest weights. When the data are clean, both SK-means and RSK-means return approximately 50 clustering features, and these are among the ones with the highest weights. A single outlier (under either contamination Model 1 or 2) results in SK-means selecting almost all 500 features, while RSK-means remains unaffected.

              Algorithm    Non-zero weights   Average precision
No outliers   RSK-means    49.2  (1.28)       48.9 (0.31)
              SK-means     49.3  (1.12)       48.9 (0.26)
Model 1       RSK-means    49.2  (1.00)       48.9 (0.31)
              SK-means     498.5 (1.18)       47.5 (0.64)
Model 2       RSK-means    49.2  (1.11)       49.0 (0.17)
              SK-means     494.0 (44.0)       47.8 (1.00)

Table 1: The first column contains the average number of non-zero weights over the 100 data sets (SD in parentheses). The second column shows the average number of true clustering features among the 50 features with the largest weights (SD in parentheses).

4 Conclusion

In this paper we propose a robust algorithm to simultaneously identify clusters and features using K-means. The main idea is to adapt the Sparse K-means algorithm of Witten and Tibshirani (2010) by trimming a fixed proportion of observations that are farthest away from their cluster centers (using the approach of the Trimmed K-means algorithm of Cuesta-Albertos et al. (1997)). Sparsity is obtained by assigning weights to features and imposing an upper bound on the $l_1$ norm of the vector of weights. Only those features for which the optimal weights are positive are used to determine the clusters. Because possible outliers may contain atypical entries in features that are being downweighted, our algorithm also considers the distances from each point to its cluster center computed using all available features. Our simulation studies show that the performance of our algorithm is very similar to that of Sparse K-means when there are no outliers in the data and, at the same time, that it is not severely affected by the presence of outliers. We used our algorithm to identify relevant clusters in a data set containing gene expressions for breast cancer patients, and we were able to find interesting biological clusters using a very small proportion of genes. Furthermore, the clusters found by the robust algorithm are stronger than those found by Sparse K-means. Our Robust Sparse K-means algorithm is implemented in the R package RSKC, which is available at the CRAN repository.

References

  • Barnes et al. (1998) D. Barnes, R. Millis, L. Beex, S. Thorpe, and R. Leake. Increased use of immunohistochemistry for oestrogen receptor measurement in mammary carcinoma: the need for quality assurance. European Journal of Cancer, 34, 1998.
  • Chipman and Tibshirani (2005) H. Chipman and R. Tibshirani. Hybrid hierarchical clustering with applications to microarray data. Biostatistics, 7(2):286–301, 2005.
  • Cuesta-Albertos et al. (1997) J. A. Cuesta-Albertos, A. Gordaliza, and C. Matran. Trimmed k-means: An attempt to robustify quantizers. The Annals of Statistics, 25(2):553–576, 1997.
  • Dudoit and Fridlyand (2002) S. Dudoit and J. Fridlyand. A prediction-based resampling method for estimating the number of clusters in a dataset. Genome Biology, 3(7), 2002.
  • Hardin et al. (2007) J. Hardin, A. Mitani, L. Hicks, and B. VanKoten. A robust measure of correlation between two genes on a microarray. BMC Bioinformatics, 8:220, 2007.
  • Lloyd (1982) S. P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–136, 1982.
  • MacQueen (1967) J. MacQueen. Some methods for classification and analysis of multivariate observations. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, eds. L. M. LeCam and J. Neyman, 1967.
  • Steinhaus (1956) H. Steinhaus. Sur la division des corps matériels en parties. Bull. Acad. Polon. Sci., 4(12):801–804, 1956.
  • van’t Veer et al. (2002) L. J. van’t Veer, H. Dai, M. J. van de Vijver, Y. D. He, A. A. M. Hart, M. Mao, H. L. Peterse, K. van der Kooy, M. J. Marton, A. T. Witteveen, G. J. Schreiber, R. M. Kerkhoven, C. Roberts, P. S. Linsley, R. Bernards, and S. H. Friend. Gene expression profiling predicts clinical outcome of breast cancer. Nature, 415, 2002.
  • Witten and Tibshirani (2010) D. M. Witten and R. Tibshirani. A framework for feature selection in clustering. Journal of the American Statistical Association, 105(490):713–726, 2010.