Subspace clustering of high-dimensional data: a predictive approach

03/05/2012 · by Brian McWilliams, et al. · Imperial College London

In several application domains, high-dimensional observations are collected and then analysed in search of naturally occurring data clusters which might provide further insights about the nature of the problem. In this paper we describe a new approach for partitioning such high-dimensional data. Our assumption is that, within each cluster, the data can be approximated well by a linear subspace estimated by means of a principal component analysis (PCA). The proposed algorithm, Predictive Subspace Clustering (PSC), partitions the data into clusters while simultaneously estimating cluster-wise PCA parameters. The algorithm minimises an objective function that depends upon a new measure of influence for PCA models. A penalised version of the algorithm is also described for carrying out simultaneous subspace clustering and variable selection. The convergence of PSC is discussed in detail, and extensive simulation results and comparisons to competing methods are presented. The comparative performance of PSC has been assessed on six real gene expression data sets, for which PSC often provides state-of-the-art results.




1 Introduction

In recent years, vast amounts of digital information have been generated, and they continue to grow ever more rapidly. For instance, within the realm of genomic research, high-throughput microarray technologies provide quantitative measurements of tens of thousands of genes for each biological entity under observation, such as a sample tissue; these measurements are then analysed to identify gene expression patterns that are predictive of a clinical outcome or to detect sub-populations. A number of other technologies, such as those developed for digital imaging, surveillance, and e-commerce, amongst many others, all generate observations which can be seen as random vectors living in a large dimensional space.

Very often, even though the available observations are high dimensional, it has been found that their intrinsic dimensionality is in fact much smaller. For instance, although the number of pixels or voxels in a digital image can be extremely large, it is often the case that only a few important dimensions are able to capture some salient aspects of the available images, and those are sufficient to detect meaningful patterns. In genomics, despite the fact that the expression levels of tens of thousands of genes are being measured, a much smaller number of dimensions may suffice to identify different underlying biological processes which characterise the available samples.

The abundance of such high-dimensional data and the fact that their low-dimensional representations are often interpretable and informative has caused projection-based dimensionality reduction techniques to become popular. When each data point is treated as an independent realisation from a given distribution with support in a low-dimensional subspace, Principal Component Analysis (PCA) is commonly used to recover the low-dimensional representation of the data [12]. PCA and related techniques, however, assume that the available data points are drawn from a single underlying distribution that is typical of the population from which the observations have been obtained. Whereas this can be a valid assumption in some cases, there are also many applications in which the underlying population is suspected to be highly heterogeneous; in those cases, each observation may have been drawn from one of many alternative distributions, and a single low-dimensional representation of the data would not be sufficient.

Partitioning the sample data points according to the low-dimensional subspace that best describes them has become an important research question in various domains. In cancer genomics, for instance, the biological tissues for which high-dimensional gene expression profiles have been observed may be representative of a number of cancer sub-types; the identification of these sub-populations is important because each one of them may be associated with a distinct clinical outcome, such as life expectancy after therapy [34]. In the analysis of digital images of human faces, each subspace may be associated with a particular person for whom a number of images have been collected, for instance under different lighting conditions, and the identification of these subspaces will aid face recognition [16]. When the observations are deemed to be representative of distinct low-dimensional subspaces, the problem of subspace clustering consists in the simultaneous estimation of these subspaces and of the unobserved membership of data points to subspaces [29]. A formal definition of this problem, as well as a brief survey of existing models and algorithms, is provided in Section 2.

In this work we introduce a novel approach to subspace clustering and present extensive comparisons to competing methods using both simulated and real high-dimensional genomic data sets. We exploit the fact that PCA provides the low-dimensional representation of the data that minimises the reconstruction error, and propose a criterion derived from the out-of-sample error as the building block for a subspace clustering algorithm. An overview of the contributions presented here is in order.

First, for a single PCA model, we propose a computationally efficient approach to detecting influential observations, namely data points exerting a larger effect on the estimated PCA parameters than other points. Our motivation for detecting influential observations is that, under the assumption that alternative low-dimensional representations of a data point might exist, a measure of influence will provide a useful metric for assigning data points to subspaces. A common way to quantify the influence of a data point consists of examining the changes in the model parameters when the parameters have been re-estimated after removal of that observation [2]. In the context of ordinary least squares (OLS), this is equivalent to computing the leave-one-out (LOO) estimate of the regression coefficients with respect to each observation. The LOO prediction error is then evaluated using the remaining observations, and influential observations are identified as those with a large LOO prediction error relative to other observations. Following up on previous work done in the context of partial least squares regression [15], here we propose a closed-form expression for computing LOO errors under a PCA model, known as the PRESS statistic, which only requires a single PCA model to be fitted to the entire data set. Armed with this analytical expression for the PCA PRESS statistic, we propose a notion of the predictive influence that an observation exerts on the PCA model, elaborating on our previous work [16]; intuitively, data points that are highly influential under a particular PCA model may have been generated by a different model. This methodology is presented in Section 3.

Building on this notion of predictive influence, we develop a clustering algorithm that works by inferring the distinct low-dimensional spaces that are representative of each cluster. The optimality criterion we propose for driving the data partitioning process is such that the total within-cluster predictive influence is minimised. The resulting algorithm, called Predictive Subspace Clustering (PSC) because of its direct dependence upon out-of-sample prediction errors, is an iterative one and is guaranteed to converge to a local minimum. The convergence of the proposed algorithm has been studied, and we provide a detailed account. Although inferring the unknown number of clusters is a notoriously difficult problem, model selection can be somewhat naturally incorporated into the proposed subspace clustering framework by making use of the PCA PRESS statistic. Furthermore, motivated by genomic applications in which the detection of a small number of informative variables is important, we also discuss a variation of the PSC algorithm which provides sparse low-dimensional representations of the data in each cluster. Forcing sparse solutions within our clustering algorithm is accomplished by taking a penalised approach to PCA. These developments are presented in Section 4.

In Section 5 we present extensive simulation results based on artificial data and discuss a number of situations where the proposed approach is expected to provide superior clustering results, compared to standard clustering methods as well as other subspace algorithms that have been proposed in the literature. Different scenarios are considered in which both the number of subspaces and their dimensions are allowed to vary, and where some dimensions may be totally uninformative for clustering purposes and only contribute noise. The difficult model selection problem, that is, the problem of learning both the number of clusters and the dimensionality of each subspace, is also discussed.

In Section 6 the performance of the proposed PSC algorithm is compared to other subspace clustering algorithms using six high-dimensional genomic data sets in which several thousand gene expression measurements have been made on various biological samples. For these data sets, which are all publicly available, both the number of clusters and the cluster memberships can be considered known, and this information can be used to test and compare the performance of competing clustering methods. It has often been observed that only certain variables (i.e. genes) are important for determining the separation between clusters; it is therefore desirable to be able to identify and remove any uninformative variables. Our experimental results demonstrate that the assumptions underlying the proposed PSC algorithm seem plausible for real gene expression data sets, on which we have obtained state-of-the-art clustering performance.

2 Subspace clustering: problem definition and existing methods

We assume to have observed $N$ data points, $\{x_i\}_{i=1}^N$, where each $x_i \in \mathbb{R}^P$ and the dimension $P$ is usually very large. Each point is assumed to belong to one of $K$ non-overlapping clusters, $\{\mathcal{C}_k\}_{k=1}^K$. We further assume that the points in the $k$-th cluster lie in an $R_k$-dimensional subspace $\mathcal{S}_k$, where $R_k \ll P$. As in [29], we assume that each subspace is defined in the following way

$$\mathcal{S}_k = \{\, x \in \mathbb{R}^P : x = \mu_k + V_k y \,\}, \qquad k = 1, \ldots, K, \quad (1)$$
with $y \in \mathbb{R}^{R_k}$ and $V_k \in \mathbb{R}^{P \times R_k}$, where $V_k$ is a basis for $\mathcal{S}_k$ whose columns are restricted to be mutually orthonormal. The point $y$ is the low-dimensional representation of $x$, and $\mu_k$ is an arbitrary point in $\mathcal{S}_k$, typically chosen to be $0$.
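
As a concrete illustration of model (1), the sketch below (NumPy, with hypothetical helper and variable names) draws points that lie exactly on a randomly chosen low-dimensional subspace through the origin:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_subspace_cluster(n, p, r, rng):
    """Draw n points x = mu + V y lying on an r-dimensional subspace of R^p."""
    # Random orthonormal basis V (p x r) obtained via a QR decomposition.
    V, _ = np.linalg.qr(rng.standard_normal((p, r)))
    mu = np.zeros(p)                      # offset, chosen as 0 as in the text
    Y = rng.standard_normal((n, r))       # low-dimensional representations y
    return mu + Y @ V.T, V

X, V = sample_subspace_cluster(n=50, p=20, r=3, rng=rng)
# Every sampled point is reproduced exactly by projecting onto span(V):
residual = X - (X @ V) @ V.T
print(np.allclose(residual, 0.0))  # True
```

Since the points are generated inside the subspace, projecting them back onto the basis reconstructs them exactly; with added noise the residual would instead measure the distance to the subspace.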

When only one cluster exists (i.e. $K = 1$), a subspace of this form can be estimated by fitting a single global PCA model. Alternatively, when $K > 1$ and the assignment of points to clusters is known, each one of the $K$ subspaces of form (1) can be estimated by fitting a PCA model independently in each cluster. However, the cluster assignments are typically unknown, and the problem consists in simultaneously partitioning the data into $K$ clusters and estimating cluster-specific subspaces. There are several fundamental difficulties associated with this problem: (a) identifying the true subspaces depends on recovering the true clusters, and vice versa; (b) subspaces can intersect at several locations, which causes difficulties when attempting to assign points to subspaces at these intersections, so standard clustering techniques such as $K$-means may not be suitable; (c) the subspace parameters and the cluster assignments depend on both the number of clusters and the dimensionality of their respective subspaces, which poses difficult estimation challenges.

Furthermore, in problems where $P$ is large, some of the variables may be uninformative for clustering. In some applications, there might also be an interest in selecting a specific subset of dimensions that are highly discriminative between clusters and can be more easily interpreted. In such situations, we may be interested in carrying out sparse subspace clustering by adding some form of regularisation during the estimation of the eigenvectors in $V_k$, such that each eigenvector contains a predetermined number of zero coefficients. To the best of our knowledge, the problem of variable selection in subspace clustering has not been widely studied until now.

A variety of approaches have been proposed to solve the subspace clustering problem. Several methods are based on generalising the widely used $K$-means algorithm to $K$-subspaces [4, 30]. These methods iteratively fit PCA models to the data and assign points to clusters until the PCA reconstruction error in each cluster is minimised. Although the approach based on minimising the within-cluster PCA reconstruction error is simple and has shown promising results, it is also prone to over-fitting. For instance, the data may be corrupted by noise or lie on the intersection between subspaces, so points within clusters may be geometrically far apart; we illustrate this problem using artificial data in Section 5. Furthermore, each subspace may have a different intrinsic dimensionality. The PCA reconstruction error decreases monotonically as the dimensionality increases, so points may be wrongly assigned to the cluster with the largest dimensionality. Such an approach therefore limits the number of dimensions to be the same in each cluster. Other approaches to subspace clustering have been taken based on mixtures of probabilistic PCA [28]. The recently proposed mixtures of common $t$-factor analysers (MCtFA) attempts to overcome the problems posed by over-fitting and potential outliers by assuming that the clusters share common latent factors which, instead of being normally distributed, follow a multivariate $t$-distribution [1].

A different class of subspace clustering algorithms involves the computation of a measure of distance between each pair of points which captures the notion that points may lie on different subspaces. The distances are then used to construct an affinity matrix which is partitioned using standard spectral clustering techniques [13]. There have been several successful approaches to defining such a distance measure. The method of Generalized PCA (GPCA) fits polynomials of varying order to the data and measures the distances between the gradients of the polynomials computed at each point [14]. Sparse subspace clustering (SSC) obtains a local representation of the subspace at each point as a sparse weighted sum of all other points [9]; this is obtained by minimising the reconstruction error subject to a constraint on the norm of the weights so that the few non-zero weights correspond to points lying on the same subspace. Spectral curvature clustering (SCC) constructs a multi-way distance between randomly sampled groups of points by measuring the volume of the simplex formed by each group [7]; points which lie on the same subspace will define a simplex of volume zero. Spectral local best flats (SLBF) estimates a local subspace for each point by fitting a PCA model to its nearest neighbours; it then computes pair-wise distances between the locally estimated subspaces corresponding to each point [35]. Although spectral methods have achieved state-of-the-art results in some application domains, such as clustering digital images, they come with their own limitations: computing local subspaces for each point can be computationally intensive and requires additional tuneable parameters.

3 Detecting influential observations in PCA

3.1 Influential observations in ordinary least squares

An influential observation is defined as one which, either individually or together with several other observations, has a demonstrably larger impact on the model estimates than is the case for most other observations [2]. Unlike outliers, influential observations are not necessarily apparent through simple visualisations of the data, and therefore a number of approaches have been proposed to detect them [21]. Several popular methods rely on the computation of the leave-one-out (LOO) residual error [2, 18]. For instance, in the context of ordinary least squares (OLS), a common strategy consists in quantifying the effects that removing a single observation and re-fitting the model using the remaining observations has on the estimated regression coefficients [6].

When $N$ data points $\{(z_i, y_i)\}_{i=1}^N$ are available, where each $z_i \in \mathbb{R}^Q$ is a covariate vector and $y_i$ is the associated univariate response, the LOO error for the $i$-th observation is defined as $e_{-i} = y_i - z_i^\top \hat{\beta}_{-i}$, where the regression coefficients $\hat{\beta}_{-i}$ have been estimated using all but the $i$-th observation. The sum of squared LOO errors is then obtained as

$$\sum_{i=1}^{N} e_{-i}^2. \quad (2)$$
A naive computation of (2) requires fitting $N$ OLS models, each one using $N-1$ observations. For each model fit, the inverse of the sample covariance matrix, $(Z_{-i}^\top Z_{-i})^{-1}$, where $Z_{-i}$ denotes the design matrix $Z$ with the $i$-th row removed, has to be estimated. In order to contain the number of computations needed to evaluate (2), each LOO error can instead be found in closed form after fitting a single regression model with all the data points. This is particularly beneficial when either $N$ or $Q$ is large. Since each $Z_{-i}^\top Z_{-i}$ differs from $Z^\top Z$ by only one pair of observations, $(z_i, y_i)$, a formulation of each LOO error that depends only on the full-data fit is obtained by applying the Sherman–Morrison theorem [25], as follows

$$e_{-i} = \frac{e_i}{1 - h_i}, \qquad h_i = z_i^\top (Z^\top Z)^{-1} z_i. \quad (3)$$
In this formulation, each $e_{-i}$ is readily available as a function of the in-sample residual $e_i$, without the need to explicitly remove any observations and without having to re-compute inverse covariance matrices. When this approach is taken, Eq. (2) is known as the Predicted REsidual Sum of Squares (PRESS) statistic [2].
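
The closed-form computation can be checked numerically. The sketch below (NumPy, with illustrative variable names) compares the PRESS obtained from a single fit via the leverage values against $N$ explicit leave-one-out fits:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q_dim = 40, 5                     # Q_dim: number of covariates (toy choice)
Z = rng.standard_normal((N, Q_dim))
y = Z @ rng.standard_normal(Q_dim) + 0.1 * rng.standard_normal(N)

# Single fit: leverages h_i from the hat matrix give e_{-i} = e_i / (1 - h_i).
H = Z @ np.linalg.solve(Z.T @ Z, Z.T)         # hat matrix
e = y - H @ y                                  # in-sample residuals
press_closed = np.sum((e / (1.0 - np.diag(H))) ** 2)

# Naive check: N explicit leave-one-out fits.
press_naive = 0.0
for i in range(N):
    mask = np.arange(N) != i
    beta_i, *_ = np.linalg.lstsq(Z[mask], y[mask], rcond=None)
    press_naive += (y[i] - Z[i] @ beta_i) ** 2

print(np.isclose(press_closed, press_naive))  # True
```

The two quantities agree to numerical precision, while the closed form requires only one matrix inversion instead of $N$.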

In the following Section we propose an analogous PRESS statistic for PCA and describe a methodology for detecting influential observations under a fitted PCA model. This approach will then be used to define an objective function for subspace clustering in Section 4.

3.2 An analytic PCA PRESS statistic

PCA is a ubiquitous method for dimensionality reduction when dealing with high-dimensional data. It relies on the assumption that the high-dimensional observations can be well approximated by a much lower-dimensional linear subspace, which is found by estimating the best low-rank linear approximation of the original data [12]. One way of achieving this is by estimating a set of $R$ mutually orthonormal vectors $v_1, \ldots, v_R$ which minimise the reconstruction error, defined as
$$\sum_{i=1}^{N} \left\| x_i - V V^\top x_i \right\|^2, \qquad V = [v_1, \ldots, v_R]. \quad (4)$$
On defining a matrix $X \in \mathbb{R}^{N \times P}$ with rows $x_i^\top$, which we assume to be mean-centred, the vectors which minimise (4) are obtained by computing the singular value decomposition (SVD) of $X$, given by $X = U \Sigma V^\top$. Here, $U$ and $V$ are orthonormal matrices whose columns are the left and right singular vectors of $X$, respectively, and $\Sigma$ is a diagonal matrix whose entries are the singular values of $X$ in descending order.

It can also be noted that, when taking the first principal component, the residual error can be written as a quadratic function of $v_1$,

$$e(v_1) = \sum_{i=1}^{N} \left\| x_i - s_i v_1 \right\|^2, \quad (5)$$

where $s_i = x_i^\top v_1$. This formulation suggests that the eigenvector $v_1$ can be rewritten in the form of a least squares estimator,

$$v_1 = (s^\top s)^{-1} X^\top s, \quad (6)$$

where we have defined the vector $s = X v_1 = (s_1, \ldots, s_N)^\top$; analogous expressions exist for the remaining eigenvectors, $v_2, \ldots, v_R$. Clearly, each such estimator depends on the eigenvector itself, and the eigenvectors are obtained as usual, using the SVD. This least squares interpretation will provide a starting point for deriving an efficient PRESS statistic for PCA.
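
The optimality of the SVD solution for (4) can be illustrated numerically. The sketch below (NumPy, with toy dimensions chosen for illustration) verifies that the leading right singular vectors yield a reconstruction error no larger than that of an arbitrary orthonormal basis of the same rank:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, R = 100, 10, 2
# Nearly rank-R data, mean-centred as the text assumes.
X = rng.standard_normal((N, R)) @ rng.standard_normal((R, P))
X += 0.01 * rng.standard_normal((N, P))
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:R].T                              # first R right singular vectors
recon_err = np.sum((X - X @ V @ V.T) ** 2)

# Any other rank-R orthonormal basis does at least as badly:
Q, _ = np.linalg.qr(rng.standard_normal((P, R)))
other_err = np.sum((X - X @ Q @ Q.T) ** 2)
print(recon_err <= other_err)  # True
```

This is a direct consequence of the Eckart–Young theorem: no rank-$R$ projection can beat the one built from the top $R$ right singular vectors.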

In the context of PCA, the LOO error has often been used to drive model selection as well as to detect influential observations [19, 5]. When $R$ principal components are considered, this reconstruction error is given by

$$e_{-i} = x_i - \hat{V}_{-i} \hat{V}_{-i}^\top x_i, \quad (7)$$

where $\hat{V}_{-i}$ holds the first $R$ right singular vectors of the SVD estimated using all but the $i$-th observation of $X$, and the sum of all squared LOO reconstruction errors is

$$\sum_{i=1}^{N} \left\| e_{-i} \right\|^2. \quad (8)$$
The usual approach for the evaluation of (8) requires $N$ SVD computations to be performed. Clearly, this approach is computationally expensive when either $N$ or $P$ is large, especially if (8) has to be evaluated a number of times.
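
For reference, the naive evaluation that the approximation avoids looks as follows; this sketch (NumPy, with illustrative names) refits the SVD once per held-out observation:

```python
import numpy as np

def pca_press_naive(X, R):
    """LOO PCA PRESS: refit the SVD N times, once per held-out row."""
    total = 0.0
    for i in range(X.shape[0]):
        Xi = np.delete(X, i, axis=0)
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        V = Vt[:R].T                         # basis estimated without row i
        e = X[i] - V @ V.T @ X[i]            # LOO reconstruction error (7)
        total += e @ e
    return total

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 8))
X -= X.mean(axis=0)
print(pca_press_naive(X, R=2) > 0)  # True
```

Each call costs $N$ SVDs, which is exactly the burden the closed-form approximation of this section removes.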

We propose a novel closed-form approximation of (8) which is computationally cheap and whose approximation error is negligible for any practical purposes. The only assumption we make is that, for a sufficiently large sample size $N$, the error made by estimating the eigenvectors using the SVD of the reduced data matrix with $N-1$ rows, that is, after removal of the $i$-th observation, is sufficiently small compared to the analogous estimation using the full data matrix. Under this assumption, we have that $\hat{V}_{-i} \approx \hat{V}$ for $i = 1, \ldots, N$. We note here that a similar assumption has been made in other applications involving high-dimensional streaming data, where it is called Projection Approximation Subspace Tracking (PAST) [33].

With only one principal component, this approximated PCA PRESS statistic can be written as


where the quantities are defined as before. Since the eigenvector can be expressed as the solution to a least squares fit, as in (6), its leave-one-out counterpart can be obtained in closed form through least squares estimation. Using the relationships

we arrive at an expression for the leave-one-out estimate by applying a single-observation down-dating operation, as follows


The advantage of this reformulation is that Eq. (10) can be evaluated using the same recursive form given by Eq. (3). Using the same recursive approach, we then obtain an expression for the leave-one-out eigenvector in terms of the original eigenvector $v_1$,


where the remaining quantities are obtained from the SVD of the complete data matrix, computed in the usual way. Using this recursive expression, the LOO reconstruction error is obtained in closed form


By substitution, using the expression for the reconstruction error, we then obtain


which can be computed in closed form, without the need for explicit LOO steps. This construction can be easily extended to the case of $R$ principal components. When $R > 1$, we have

Adding and subtracting from both sides we obtain

where the additional terms are defined analogously. Therefore, with $R$ principal components, the approximated PCA PRESS statistic is

Figure 1: The top plot shows the mean squared approximation error between the leave-one-out cross-validation estimate and our analytic PRESS statistic as a function of the number of samples, $N$. We report averages over Monte Carlo simulations. It can be seen that the empirical approximation error scales according to the theoretical error, which is shown as a dashed line. The bottom plot shows that the difference in computational time between the explicit LOO evaluation and the analytic PRESS increases super-linearly with $N$.

This expression only depends on quantities estimated using a single PCA model fit, and can be computed cheaply. The approximated PRESS is found to be close to the true PCA PRESS under the assumption that the error made by estimating the principal components using $N-1$ instead of $N$ observations is small. In related work, we have previously shown how the approximation error decreases with the number of observations [15]. This result can also be checked by simulation. Figure 1 shows the mean squared approximation error as a function of $N$ using data simulated under a PCA model with a fixed number of variables. It can be seen that the decrease in the approximation error follows the theoretical error (plotted as a dashed line), and the difference in computational time increases super-linearly.

3.3 A measure of predictive influence in PCA

In this section we make use of the closed-form PCA PRESS statistic of Eq. (14) and propose an analytic influence measure for the detection of influential observations under a fitted PCA model. Specifically, we introduce the notion of the predictive influence of an observation under a given PCA model, which quantifies how much influence that observation exerts on the LOO prediction error of the model. By making explicit use of the out-of-sample reconstruction error, this influence measure is particularly robust against over-fitting, compared to the in-sample reconstruction error that is minimised by PCA, as in Eq. (4).

As before, we assume that a PCA model has been fitted to the data and that $\hat{V}$ is the orthonormal matrix whose columns are the first $R$ right singular vectors of $X$. Analytically, the predictive influence of an observation $x_i$ is defined as the gradient of the PCA PRESS given in Eq. (14) with respect to $x_i$, that is


Using the results from the previous section, an analytical expression for the gradient in Eq. (15) can be derived, and is found to be


The full details of this derivation are given in Appendix A. It can be seen that the predictive influence of a point under a PCA model has a closed form which is related to the leave-one-out error of that point.

It was observed in [8] that single-deletion methods for identifying influence, such as the PRESS, are only sensitive to large errors. Instead, they propose observing the change in the error as each observation is weighted between 0 (equivalent to leave-one-out cross-validation) and 1 (equivalent to the standard residual) and then computing the derivative of the residual with respect to the weight. Our proposed predictive influence, on the other hand, measures the sensitivity of the prediction error in response to an incremental change in the observation. The rate of change of the PCA PRESS at this point is given by the magnitude of the predictive influence vector. If the magnitude of the predictive influence is large, a small change in the observation will result in a large change in the prediction error relative to other points; removing such a point from the model would therefore yield a large improvement in the prediction error. We can then identify the most influential observations as those for which the increase in the PRESS is largest relative to other observations. Since we take the gradient of the PRESS with respect to the observations, we arrive at a quantity which is more sensitive to individual influential observations than the original PRESS function. In this work, the predictive influence measure is used in the context of subspace clustering, which is introduced next.
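
Since the predictive influence is defined as a gradient of the PRESS, it can be approximated numerically by finite differences on the naive LOO statistic. The sketch below (NumPy; the data set and helper names are illustrative, not the paper's) shows that a point lying far from the fitted subspace has a predictive influence of much larger magnitude than a typical inlier:

```python
import numpy as np

def pca_press_naive(X, R):
    """Explicit LOO PCA PRESS (one SVD per held-out row)."""
    total = 0.0
    for i in range(X.shape[0]):
        Xi = np.delete(X, i, axis=0)
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        V = Vt[:R].T
        e = X[i] - V @ V.T @ X[i]
        total += e @ e
    return total

def predictive_influence_fd(X, R, i, eps=1e-5):
    """Finite-difference gradient of the PRESS with respect to observation i."""
    g = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[i, j] += eps
        Xm[i, j] -= eps
        g[j] = (pca_press_naive(Xp, R) - pca_press_naive(Xm, R)) / (2 * eps)
    return g

rng = np.random.default_rng(5)
# 19 points close to a 2-D subspace of R^5, plus one far off-subspace point.
B, _ = np.linalg.qr(rng.standard_normal((5, 2)))
X = rng.standard_normal((19, 2)) @ B.T + 0.01 * rng.standard_normal((19, 5))
X = np.vstack([X, 5.0 * rng.standard_normal(5)])
g_in = predictive_influence_fd(X, R=2, i=0)     # a typical inlier
g_out = predictive_influence_fd(X, R=2, i=19)   # the off-subspace point
print(np.linalg.norm(g_out) > np.linalg.norm(g_in))  # True
```

The analytic expression (16) delivers the same information in closed form, without the many SVD refits that this numerical check requires.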

4 Predictive subspace clustering

4.1 The PSC algorithm

The proposed algorithm for subspace clustering relies on the following observation. If the cluster assignments were known and a separate PCA model was fit to the data in each cluster, then the predictive influence of a point belonging to cluster would be smaller when evaluated using the correct PCA model for that cluster, than using any of the remaining PCA models. In this respect, the predictive influence provides a metric that can be used to drive the clustering process. Since our clustering algorithm is based on a measure of predictive ability of the estimated subspaces, we call it Predictive Subspace Clustering (PSC).

The objective of the clustering algorithm is to partition the $N$ observations into $K$ non-overlapping clusters, where each cluster $\mathcal{C}_k$ contains $N_k$ observations with $\sum_{k=1}^{K} N_k = N$, and the points in each cluster lie on a subspace of the form described in Eq. (1). In our proposal, this is accomplished by searching for the cluster allocations and PCA model parameters that minimise the following objective function,


where each term is the predictive influence of a point under the PCA model of its cluster. Since the cluster allocations and PCA model parameters depend on each other, there is no analytic solution to this problem, and we must resort to an iterative procedure.

This problem can be approached by considering two related optimisation tasks. At a generic iteration, given the subspace parameters estimated in the previous iteration, we search for the cluster assignments minimising (17),


Using the predictive influences computed at the end of the previous iteration, for all points and all clusters, a solution for (18) is found by assigning each point to the cluster for which the magnitude of the predictive influence is smallest, that is


Then, in the second step, given the new cluster assignments , we re-estimate the parameters of the subspaces minimising (17) by solving


Using the current cluster assignments, a solution for Eq. (20) is found by re-fitting all $K$ PCA models because, from Eq. (16), minimising the PCA reconstruction error is also seen to minimise the predictive influences. Once all the parameters have been re-estimated, the predictive influence measures are updated for the subsequent iteration.

The algorithm is initialised by generating an initial random partitioning, which is used to estimate the initial PCA models, find the initial parameters, and compute the corresponding predictive influences. At each iteration, the algorithm alternates between the two steps described above until convergence is reached and the objective function can be reduced no further.
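
The alternating structure of the algorithm can be sketched as follows. Note that, purely to keep the example short, the per-point reconstruction error under each cluster's PCA model stands in for the predictive influence; PSC itself scores points via Eq. (19):

```python
import numpy as np

def psc_sketch(X, K, R, n_iter=50, seed=0):
    """Alternate between (a) scoring every point under each cluster's PCA
    model and (b) refitting the per-cluster subspace bases, mirroring the
    two steps of the PSC loop (with reconstruction error as a simplified
    stand-in for the predictive influence)."""
    rng = np.random.default_rng(seed)
    N, P = X.shape
    labels = rng.integers(K, size=N)          # random initial partition
    for _ in range(n_iter):
        bases = []
        for k in range(K):
            Xk = X[labels == k]
            if Xk.shape[0] >= R:
                _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
                bases.append(Vt[:R].T)        # refit the cluster's subspace
            else:
                # Guard: re-seed an empty (or too small) cluster randomly.
                Q, _ = np.linalg.qr(rng.standard_normal((P, R)))
                bases.append(Q)
        scores = np.stack(
            [np.sum((X - X @ V @ V.T) ** 2, axis=1) for V in bases], axis=1)
        new_labels = scores.argmin(axis=1)    # assignment step
        if np.array_equal(new_labels, labels):
            break                             # converged: assignments stable
        labels = new_labels
    return labels

# Two 1-dimensional subspaces in R^5, spanned by the first two coordinates.
rng = np.random.default_rng(6)
A = np.outer(rng.standard_normal(30), np.eye(5)[0])
B = np.outer(rng.standard_normal(30), np.eye(5)[1])
X = np.vstack([A, B]) + 0.01 * rng.standard_normal((60, 5))
labels = psc_sketch(X, K=2, R=1)
print(labels.shape)  # (60,)
```

Swapping the scoring line for the predictive influence magnitudes of Eq. (16) turns this sketch into the assignment rule (19) that PSC actually uses.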

4.2 Convergence of the PSC algorithm

In this section we demonstrate that, for any initial configuration, the PSC algorithm is guaranteed to converge to a local minimum of the objective function (17). In order to keep the notation simple, and without any loss of generality, we only discuss the case of the first principal component. Furthermore, we consider the set of all PCA parameters at a generic iteration of the algorithm, together with the set of clustering assignments at the same iteration. With this notation in place, in order to show that the algorithm converges we must demonstrate that, at each iteration, the following inequalities are satisfied:

  • (a) the objective function (17) does not increase after the first step, (18);

  • (b) the objective function (17) does not increase after the second step, (20).

Checking that the first inequality holds after the first step of the algorithm is straightforward because, by definition, the cluster assignments obtained by the assignment rule (19) directly minimise the objective function when the PCA parameters are held fixed. The second inequality, however, requires a more elaborate argument.

Assuming that the clustering memberships are given and held fixed, we first note that solving (20) is equivalent to solving the following problem


separately for each cluster. The following lemma provides an alternative formulation of this optimisation problem.

Lemma 1.

Solving the minimisation problem (21) is equivalent to solving the following maximisation problem


where the weight matrix is diagonal, with diagonal elements given by a function of each observation's leverage.

The proof of this lemma is provided in Appendix B. The maximisation in (22) can be recognised as an eigenproblem in which each observation has been weighted by a function of its leverage under the PCA model. We refer to the parameters that solve this problem as the optimal parameters. If such a solution were available, it would satisfy inequality (b) above, so that

The optimal solution depends on the diagonal elements of the weight matrix, which in turn depend on the PCA parameters and are non-linear functions of them; therefore a closed-form solution to (22) cannot be found simply by an eigendecomposition. However, using this formulation, we are able to prove that the PCA parameters at the current iteration are closer to the optimal solution than the parameters from the previous iteration.

Lemma 2.

For a single cluster, we define the error between the optimal parameters obtained by solving (22) directly and the PCA parameters from the previous iteration as

Analogously, the error between the optimal parameters and the PCA parameters obtained at the current iteration is defined as

These two error terms satisfy the inequality


The proof of this lemma is provided in Appendix C. We are now ready to formulate the main result stating the convergence of the PSC algorithm.

Theorem 1.

Starting with any cluster configuration, at each iteration of the PSC algorithm the objective function (17) is never increased, and the algorithm converges to a local minimum.


The proof of this theorem simply follows from the observations made above, and the two lemmas, which show that the inequalities (a) and (b) are satisfied at each iteration.

4.3 Sparse subspace clustering

As mentioned in Section 1, there may be many unimportant variables that only contribute noise and could therefore be discarded. Moreover, in some applications, there might be the need to select only a handful of informative variables that best represent the subspace in each cluster. Since the principal components involve linear combinations of all variables in $X$, we present a variation of the PSC algorithm that builds on a sparse PCA model estimation procedure.

Following the strategy described by [24], sparse loading vectors can be obtained by imposing an penalty on the PCA objective function (4). When , this new optimisation problem becomes


where denotes the squared Frobenius norm. This problem can be solved by first obtaining and where , and are the first left and right singular vectors and corresponding singular value computed from the SVD of . A sparse solution is obtained by applying the following iterative soft thresholding procedure to the elements of :


The updates (25) and (26) are applied iteratively until the change in between iterations falls below some threshold. Subsequent sparse loadings can be found by deflating the data matrix and repeating the above steps as in [24]. The complexity of this procedure when sparse principal components are required is , which keeps the computational burden relatively low.
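The alternating thresholding scheme can be sketched as follows. This is a minimal illustration in the spirit of [24]: the exact updates (25)-(26) and the parameterisation of the soft-thresholding level are assumed rather than reproduced, and the function names are ours.

```python
import numpy as np

def soft_threshold(x, lam):
    # Elementwise soft thresholding: shrink towards zero by lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank1(X, lam, tol=1e-6, max_iter=100):
    """Sparse rank-one approximation: alternate a soft-thresholded
    loading update with a unit-norm score update until convergence."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = U[:, 0], s[0] * Vt[0]   # initialise from the leading singular triplet
    for _ in range(max_iter):
        v_new = soft_threshold(X.T @ u, lam)       # sparse loading update
        if np.linalg.norm(v_new) == 0:             # everything thresholded away
            v = v_new
            break
        u = X @ v_new / np.linalg.norm(X @ v_new)  # unit-norm score update
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    return u, v
```

Subsequent sparse loadings would then be obtained by deflating `X` and repeating, as described in the text.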

A penalised version of the PSC algorithm is obtained by modifying the second step of the algorithm so that (20) is replaced accordingly; in the case of the first principal component, this amounts to solving


where the parameter which controls the level of sparsity, , can in principle be different for each cluster. It can be seen that there are inequality constraints, one for each of the subspaces. This sparse version of the PSC algorithm is detailed in Algorithm 1. Clearly, when the sparsity parameters are taken to be zero, we obtain the unpenalised version of the algorithm and no variable selection is performed.

Input: , , , , , for
Output: ,
while not converged do
       for  do
             // Compute the sparse subspace parameters
             for  do
                   while change in  do
                   end while
             end for
             Compute PRESS, ;
             for  do
                   Compute predictive influence, ;
             end for
       end for
      // Assign points to clusters
       for  do
       end for
      // Check for convergence
       if  then
       end if
end while
Algorithm 1 The penalised PSC algorithm.
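As a point of reference, the alternating structure shared by PSC and -subspaces can be sketched as below. This simplified stand-in reassigns points by plain PCA reconstruction error rather than by the predictive influence, and omits both the sparsity step and the PRESS-based convergence check, so it is closer to -subspaces than to a full implementation of Algorithm 1; all names are ours.

```python
import numpy as np

def k_subspaces(X, K, q, n_iter=100, seed=0):
    """Alternate cluster-wise PCA fits with reassignment by PCA
    reconstruction error (PSC instead reassigns by predictive influence)."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    labels = rng.integers(K, size=n)
    for _ in range(n_iter):
        models = []
        for k in range(K):
            Xk = X[labels == k]
            if Xk.shape[0] <= q:  # guard against (near-)empty clusters
                Xk = X[rng.choice(n, size=q + 1, replace=False)]
            mu = Xk.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xk - mu, full_matrices=False)
            models.append((mu, Vt[:q].T))  # cluster mean and q-dim basis
        # Residual of every point under every cluster's PCA model
        R = np.column_stack([
            np.linalg.norm((X - mu) - (X - mu) @ B @ B.T, axis=1)
            for mu, B in models
        ])
        new_labels = R.argmin(axis=1)
        if np.array_equal(new_labels, labels):  # fixed point reached
            break
        labels = new_labels
    return labels
```

The full penalised PSC replaces the SVD step with the sparse PCA procedure above and the residual matrix with the predictive influence.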

4.4 Model selection

Model selection in subspace clustering consists of learning both the number of clusters and the dimensionality of each subspace. Each of these two problems is particularly challenging on its own. Since the PSC algorithm is a non-probabilistic one, traditional model selection techniques for learning the number of clusters, such as the Bayesian Information Criterion and related methods, do not apply. Moreover, it has already been noted that such methods, when they can be used, do not always select optimal models that maximise clustering accuracy [1].

In the subspace clustering literature, the problem of model selection is still in its infancy. A method called second-order difference (SOD) has recently been proposed to determine the correct number of clusters [35]. The SOD method can be seen as an extension of the gap statistic originally proposed by [27], and works as follows. For each value of up to a maximum number of clusters, , the cluster assignments and corresponding subspaces as in (1) are estimated. For each value of , we compute the distance between points and subspaces within each cluster, as follows

The second derivative of the within-cluster residual error with respect to the number of clusters is approximated by

and this quantity is used as a search criterion. The optimal number of clusters, is then chosen to be the value that maximises this criterion,

This optimal choice corresponds to the point beyond which increasing the number of clusters has less effect on the within-cluster residual error.
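Assuming the standard central second-difference approximation to the second derivative (the exact normalisation used in [35] is not reproduced above), the SOD selection rule can be sketched as:

```python
import numpy as np

def sod_select(W):
    """Pick the number of clusters from within-cluster errors W, where
    W[i] is the residual error obtained with i+1 clusters.  The second
    derivative at k is approximated by the central difference
    W[k-1] - 2*W[k] + W[k+1], and its maximiser is returned."""
    W = np.asarray(W, dtype=float)
    sod = W[:-2] - 2.0 * W[1:-1] + W[2:]  # second difference at interior points
    return int(np.argmax(sod)) + 2        # convert interior index back to k
```

For a typical "elbow"-shaped error curve, the maximiser sits at the point beyond which adding clusters stops reducing the error appreciably.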

Apart from the SOD method, our proposed PCA PRESS statistic (14) also naturally provides an alternative criterion for model selection. As with SOD, we initially assume that the dimensionality of each subspace is known and is the same for all subspaces, so that for all . Using a varying number of clusters up to a pre-determined maximum, , we run the PSC algorithm and, for each , record the corresponding value of the PCA PRESS statistic, that is . The optimal number of clusters is found as

This approach is somewhat similar to SOD; however, instead of measuring the within-cluster PCA residual error, which may suffer from over-fitting, we use the within-cluster LOO cross-validation error, which removes the need to compute the second derivative with respect to . The PCA PRESS is robust against over-fitting and has been found to work well in the experiments presented in the next section.

The problem of learning the subspace dimensions has not been well studied in the subspace clustering literature, and is very much an open question. However, the PRESS has often been used to perform model selection in standard PCA, and seems to be well suited for this problem. With a given fixed , at each generic iteration of the PSC algorithm, we evaluate the PCA PRESS using all values of from one to a pre-determined maximum, , and then select the set of subspace dimensions that minimises the PCA PRESS at that iteration. Although computationally more expensive, this strategy has often been found successful in learning these parameters. Some experimental results are reported in the next section.

Finally, an important feature of the PSC algorithm is its ability to estimate sparse PCA models. In order to obtain sparse models, we need to specify additional parameters which control the number of variables to be retained within each cluster. This introduces further complications to the issue of model selection. In the context of sparse predictive modelling, it has often been observed that prediction-based methods such as the PRESS do not perform well for selecting the optimal sparsity parameter. Recently, subset resampling methods such as stability selection have shown promising results for accurately selecting regularisation parameters [17]. However, implementing such data resampling schemes within the PSC algorithm would be computationally prohibitive.

5 Simulation experiments

We start by presenting a number of simple simulation studies in low dimensions to illustrate the type of clusters that can be detected by the PSC algorithm, and to compare its performance to existing methods. Clusters of data points were generated and, within each cluster, the points were distributed uniformly on a one-, two- or three-dimensional linear subspace embedded in a three-dimensional space. To define each subspace, we generated a set of orthonormal basis vectors, each of dimension , where each element was sampled from a standard normal distribution. For each cluster we then sampled -dimensional points from a uniform distribution, which were then projected onto its corresponding subspace.
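The generating mechanism just described can be sketched as follows; the range of the uniform distribution is an assumption (it is not specified in the text), as is the function name.

```python
import numpy as np

def sample_subspace_cluster(n, P, q, rng):
    """Sample n points lying exactly on a random q-dimensional linear
    subspace of R^P.  The basis is the Q factor of a Gaussian matrix;
    the uniform range (-1, 1) is an assumption."""
    V, _ = np.linalg.qr(rng.standard_normal((P, q)))  # orthonormal basis, P x q
    Z = rng.uniform(-1.0, 1.0, size=(n, q))           # coordinates within the subspace
    return Z @ V.T                                    # embed into R^P
```

Taking the Q factor of a Gaussian matrix is a standard way to draw a basis whose span is uniformly distributed over q-dimensional subspaces.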

In particular, we present four simulation scenarios consisting of points which lie on: (a) two straight lines; (b) a straight line and a 2-D plane; (c) two 2-D planes; (d) a straight line, a 2-D plane and a 3-D sphere. A typical realisation of each scenario is illustrated in Figure 2, and for each case we show the original data points in dimensions (left), the clustering assignments using -means clustering in the original three-dimensional space (centre), and the clustering assignment using PSC (right). It can be noted that the subspaces always intersect, so points belonging to different clusters will lie close to each other at these intersections. The -means algorithm, which uses the Euclidean distance between points and has been applied directly to the three-dimensional observations, consistently fails to recover the true clusters, as expected in these cases. On the other hand, PSC correctly recovers both the true clusters and the intrinsic dimensionality of the subspaces. Despite the simplicity of these initial illustrations, they highlight the benefits of subspace clustering.

(a) Two lines
(b) Two subspaces: Line and plane
(c) Two planes
(d) Three subspaces: Line, plane and sphere
Figure 2: Four illustrations of data clusters existing in subspaces of and the clusters recovered by two algorithms, K-means and the proposed PSC algorithm. In these examples PSC recovers the true cluster assignments and estimates the subspaces correctly.

A Monte Carlo simulation study was carried out to compare the performance of six competing clustering methods: the proposed PSC algorithm, the -means algorithm (as a benchmark), GPCA [14], -subspaces [30], SCC [7] and Mixture of Common t-Factor Analysers (MCtFA) [1]. In our studies, in order to make the comparisons fair, we provide each of these methods with the true number of clusters and the true dimensionality of each subspace. In addition to the four simulation settings considered above, we also considered an additional scenario, (e), consisting of four distinct subspaces: a 5-D hyperplane, a 4-D hypersphere and two lines embedded in dimensions. Since the dimensionality of this scenario is large, in order to use the GPCA algorithm we must first perform PCA to reduce the dimensionality to .

Results for this comparison are given in Table 1, which reports the mean clustering accuracy over the Monte Carlo simulations. It can be seen that PSC achieves a consistently high clustering accuracy in all scenarios with a small standard error (reported in parentheses). SCC also performs extremely well for scenarios (a)-(c) but does more poorly for (d) and (e). The adjusted Rand index (ARI) for the same experiments is reported in Table 2. In this case, the relative difference in performance between PSC and two other subspace clustering algorithms, GPCA and -subspaces, increases significantly; PSC compares favourably in all scenarios. As before, SCC also shows performance comparable to PSC for (a)-(c), but does less well in (d)-(e). As expected, -means always performs poorly.
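For reference, clustering accuracy must be computed up to a relabelling of the clusters. One common implementation (an assumed detail, not taken from the paper) matches predicted to true labels with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true, pred):
    """Fraction of correctly clustered points under the best one-to-one
    matching of predicted to true cluster labels."""
    true, pred = np.asarray(true), np.asarray(pred)
    t_ids, p_ids = np.unique(true), np.unique(pred)
    # Contingency table: C[i, j] = points with predicted label i, true label j
    C = np.zeros((len(p_ids), len(t_ids)), dtype=int)
    for i, p in enumerate(p_ids):
        for j, t in enumerate(t_ids):
            C[i, j] = np.sum((pred == p) & (true == t))
    row, col = linear_sum_assignment(-C)  # maximise the matched counts
    return C[row, col].sum() / true.size
```

Unlike the ARI, this measure does not correct for chance agreement, which is why both are reported in the tables.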

Algorithm a b c d e
PSC 0.980 (0.095) 0.999 (0.003) 1.00 (0.000) 0.942 (0.042) 0.943 (0.089)
GPCA 0.846 (0.226) 0.786 (0.199) 0.921 (0.176) 0.791 (0.088) 0.837 (0.068)
-means 0.503 (0.010) 0.525 (0.028) 0.595 (0.097) 0.635 (0.079) 0.702 (0.034)
-subspaces 0.702 (0.132) 0.804 (0.151) 0.829 (0.149) 0.766 (0.107) 0.877 (0.076)
SCC 0.999 (0.004) 1.00 (0.000) 1.00 (0.000) 0.754 (0.079) 0.893 (0.073)
MCtFA 0.812 (0.158) 0.984 (0.071) 0.883 (0.162) 0.942 (0.118) 0.974 (0.049)
Table 1: Mean clustering accuracy for six competing clustering algorithms obtained using five simulated (non-sparse) data sets over Monte Carlo simulations. Standard errors are reported in parentheses.
Algorithm a b c d e
PSC 0.960 (0.189) 0.999 (0.006) 1.00 (0.000) 0.877 (0.181) 0.943 (0.226)
GPCA 0.693 (0.451) 0.572 (0.397) 0.843 (0.351) 0.545 (0.179) 0.576 (0.171)
-means 0.007 (0.021) 0.055 (0.055) 0.190 (0.194) 0.198 (0.176) 0.213 (0.093)
-subspaces 0.401 (0.263) 0.609 (0.302) 0.659 (0.297) 0.484 (0.234) 0.687 (0.186)
SCC 0.998 (0.008) 1.00 (0.000) 1.00 (0.000) 0.454 (0.175) 0.719 (0.188)
MCtFA 0.625 (0.315) 0.966 (0.142) 0.767 (0.323) 0.879 (0.246) 0.936 (0.119)
Table 2: Mean ARI for six competing clustering algorithms obtained using five simulated (non-sparse) data sets over Monte Carlo simulations. Standard errors are reported in parentheses.

Finally, we explored the performance of these algorithms on a more challenging set of artificial data. Firstly, compared to the previous examples, we increased the dimensionality of all datasets to . Secondly, in order to construct the low-dimensional subspaces, we generated sparse loading vectors in which only randomly chosen variables, out of , had non-zero coefficients, so that the remaining variables do not contribute to clustering. Gaussian noise with zero mean and variance was added. Each of the loading vectors in each cluster had the same number of non-zero elements, but the sparsity pattern in each vector was different. This data-generating mechanism allows us to test the ability to estimate subspace parameters and simultaneously recover cluster assignments when there is a large number of noisy, irrelevant variables. Both the true level of sparsity and the true number of clusters are assumed known. For this test case, we compared the performance of PSC using three different levels of sparsity: (correct sparsity), and (no sparsity). We compare against all of the previous methods but replace -means with Sparse -means (SKM), which is also able to perform sparse clustering [32], using the correct sparsity level.
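One simple way to generate sparse, mutually orthonormal loading vectors is to draw disjoint random supports, a simplifying assumption made here (the text only requires the sparsity patterns to differ); the function name is ours.

```python
import numpy as np

def sparse_basis(P, q, n_active, rng):
    """Sparse, mutually orthonormal loading vectors: each of the q
    columns has n_active non-zero entries.  Disjoint supports make
    orthogonality automatic (requires q * n_active <= P)."""
    perm = rng.permutation(P)
    V = np.zeros((P, q))
    for j in range(q):
        idx = perm[j * n_active:(j + 1) * n_active]  # disjoint support for column j
        v = rng.standard_normal(n_active)
        V[idx, j] = v / np.linalg.norm(v)            # unit-norm column
    return V
```

Overlapping supports would require an explicit orthogonalisation step, which in general destroys exact sparsity; disjoint supports sidestep this.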

Algorithm a b c d e
PSC - 10 vars 0.776 (0.067) 0.887 (0.058) 0.941(0.034) 0.781 (0.045) 0.812 (0.033)
PSC - 100 vars 0.731 (0.111) 0.740 (0.121) 0.549(0.061) 0.781 (0.045) 0.812 (0.031)
PSC - P vars 0.643 (0.149) 0.708 (0.150) 0.533(0.068) 0.680 (0.099) 0.712 (0.041)
GPCA 0.500 (0.021) 0.500 (0.014) 0.524 (0.009) 0.634 (0.034) 0.711 (0.016)
SKM - 10 vars 0.575 (0.081) 0.667 (0.137) 0.787 (0.147) 0.590 (0.027) 0.637 (0.027)
-subspaces 0.499 (0.005) 0.499 (0.004) 0.498 (0.005) 0.560 (0.006) 0.632 (0.005)
SCC 0.607 (0.078) 0.729 (0.091) 0.832 (0.067) 0.612 (0.033) 0.733 (0.034)
MCtFA 0.596 (0.115) 0.650 (0.156) 0.673 (0.161) 0.574 (0.091) 0.663 (0.104)
Table 3: Mean clustering accuracy for six competing clustering algorithms obtained using five simulated (sparse) data sets over Monte Carlo simulations. Standard errors are reported in parentheses.
Algorithm a b c d e
PSC - 10 vars 0.551 (0.135) 0.774 (0.116) 0.883 (0.069) 0.509 (0.098) 0.509 (0.085)
PSC - 100 vars 0.394 (0.221) 0.416 (0.321) 0.099 (0.122) 0.389 (0.108) 0.277 (0.086)
PSC - P vars 0.287 (0.277) 0.415 (0.318) 0.034 (0.099) 0.339 (0.174) 0.196 (0.073)
GPCA 0.096 (0.033) 0.099 (0.034) 0.132 (0.0033) 0.174 (0.077) 0.232 (0.043)
SKM - 10 vars 0.151 (0.163) 0.335 (0.275) 0.575 (0.292) 0.080 (0.057) 0.076 (0.052)
-subspaces 0.002 (0.009) 0.003 (0.009) 0.004 (0.009) 0.006 (0.014) 0.015 (0.013)
SCC 0.213 (0.156) 0.457 (0.182) 0.663 (0.134) 0.134 (0.071) 0.290 (0.091)
MCtFA 0.194 (0.229) 0.303 (0.311) 0.347 (0.320) 0.159 (0.135) 0.284 (0.159)
Table 4: Mean ARI for six competing clustering algorithms obtained using five simulated (sparse) data sets over Monte Carlo simulations. Standard errors are reported in parentheses.

Table 3 shows the mean clustering accuracy over Monte Carlo simulations. When the correct number of informative variables is used, PSC obtains the highest clustering accuracy of all methods in all scenarios. This is not surprising, though, given that the other algorithms do not have any built-in variable selection mechanism. Amongst the other subspace clustering algorithms, SCC achieves the best results in all scenarios. The unpenalised version of PSC and SCC are directly comparable in this context, and they achieve similar performance overall, with SCC performing very well for scenario (c). Clearly, when the incorrect number of variables is selected by PSC, its performance decreases. Importantly, PSC performs better than Sparse -means, which underlines the importance of performing variable selection in each subspace separately, since different variables may be important in each cluster. As we also expect, the performance of PSC degrades as the dimensionality of the subspaces increases. This is due to the constraint that the basis vectors of each subspace must be mutually orthonormal: if the incorrect sparsity pattern is estimated in the first loading, all subsequent loadings are also likely to be estimated incorrectly. However, since PSC still performs better than all other algorithms in settings (d) and (e) when some sparsity is imposed, this further highlights the benefit of estimating the underlying subspaces using only the truly important variables.

Algorithm a b c d e
SOD - only 0.86 0.66 0.90 0.45 0.35
PRESS - only 0.89 1.0 0.96 0.62 0.70
PRESS - both and 0.73 0.84 0.91 0.60 0.51
Table 5: Performance of PSC used jointly with SOD and the PRESS criterion for learning the number of clusters, . Reported here is the number of times the correct was selected across Monte Carlo simulations. The description of the five data sets is in the text.

The mean adjusted Rand index reported in Table 4 further highlights that this setting is particularly challenging for all the algorithms. Here the degradation in performance of PSC when the wrong number of variables is selected appears more clearly. Without sparsity, PSC performs better than other methods in scenarios (a) and (d) and competitively with SCC in (b) and (e). Scenario (c) is particularly challenging, and in this case both PSC without any sparsity and K-subspaces perform quite poorly. Here the discriminative subspaces consist of two planes which are corrupted by noise and have a high degree of overlap; moreover, they can only be identified through the 10 relevant variables. However, imposing sparsity in the solution helps PSC identify the correct variables, which in turn produces the best ARI. This scenario highlights one of the main limitations of the unpenalised PSC algorithm, which has been overcome by its penalised version.

Finally, we tested the ability of the PSC algorithm to perform model selection as described in Section 4.4, using the same sparse simulated data as described previously. Table 5 shows the number of times that the correct was selected, in each of the five scenarios, over Monte Carlo simulations. Both the PRESS and SOD methods were used to learn the number of clusters. As can be seen in the table, the PRESS achieves a good performance in the selection of in the first three scenarios, and its performance decreases when the data contain more clusters. SOD performs similarly in scenarios (a) and (c); however, in all other scenarios it performs relatively poorly. The bottom row in Table 5 shows the performance obtained in learning when the dimensionality is also learned using the PRESS. Although learning both the dimensions of all subspaces and the number of clusters is a more difficult problem, the PRESS still provides satisfactory results. Remarkably, in this more difficult setting, the PRESS is still competitive with or even superior to SOD in many scenarios. Some degradation in performance can be explained by the presence of noise causing the algorithm to mistakenly identify one-dimensional subspaces as two-dimensional. This is particularly important in setting (a), where the SOD method performs well.

6 Applications to genomic data sets

In order to test the proposed method on real data sets, we applied it to clustering biological samples in six publicly available gene expression datasets. DNA microarrays measure the expression levels of thousands of mRNAs. PCA is routinely used for the analysis and visualisation of these biological measurements because the expression levels of tens to hundreds of genes in samples drawn from the same underlying population are often observed to be highly correlated [23]. The individual datasets used for our experiments were obtained in a preprocessed form from the MIT Broad Institute. In each of the datasets, the different classes correspond to different tumour or tissue types relating to different cancers. Each data set is characterised by the number of clusters, ranging from to , the sample size, , and the number of genes, . A summary of these features is reported in Table 6.

Data set Clusters Samples Genes
1 Leukemia [11] 3 38 999
2 CNS tumors [20] 5 48 1000
3 Normal tissues [22] 13 99 1277
4 St. Jude leukemia [34] 6 248 985
5 Lung cancer [3] 4 197 1000
6 Novartis multi-tissue [26] 4 103 1000
Table 6: Six gene expression data sets.

We compared the clustering performance obtained by PSC against the other state-of-the-art subspace clustering algorithms that were tested on the simulated data sets. Of all the algorithms considered, both Mixture of Common t-Factor Analysers (MCtFA) and Sparse -means (SKM) have previously been applied to clustering gene expression data, and SKM in particular is able to perform gene selection. The PSC algorithm was run using up to latent factors and different levels of sparsity, where the number of selected genes is between and . Clearly, when genes are selected, no sparsity is imposed. The SKM algorithm is run using the same levels of sparsity and, again, selecting variables is equivalent to standard -means. For -subspaces and SKM we use random restarts and take the cluster configuration which minimises the within-cluster sum of squares. SCC is run times, each time using up to dimensions, and the mean result is taken. MCtFA is run with restarts, and the results which minimise the negative log-likelihood are taken as final.

In Tables 7 and 8 we report the clustering accuracy and the Adjusted Rand Index (ARI), respectively. Highlighted in each column is the best performance attained by each method. The PSC algorithm without sparsity and with two latent factors achieves a good clustering accuracy on all but one of the datasets, indicating that samples belonging to different clusters can be well approximated by a two-dimensional linear subspace; these results also confirm that PSC is able to estimate this subspace accurately in high dimensions. It can be seen that, in all cases, the maximum clustering accuracy is achieved when some degree of sparsity is imposed. When , the best solution is achieved when or fewer variables have been selected. For larger values of , the best solution occurs when or fewer variables have been selected. For all values of , the non-sparse PSC algorithm never outperforms the best sparse PSC, which indicates that a certain number of genes in these data sets make only a small contribution towards the determination of the various clusters.

It is important to observe that standard (non-penalised) -means performs almost equivalently to PSC on datasets and , indicating that these clusters are well separated geometrically. However, as the level of sparsity is increased, the accuracy of -means typically decreases. This is because the sparsity is estimated using all samples, and the selected genes are necessarily shared by all clusters. PSC achieves better performance because it selects the important genes within each cluster individually. Compared to -means, SCC achieves a greater clustering accuracy on all but datasets and . Increasing the dimensionality of the subspaces does not greatly affect the clustering accuracy. The large difference in clustering performance on dataset between SCC and PSC suggests that PSC may perform better in higher dimensions.

When a one-dimensional subspace is extracted, -subspaces achieves a good clustering accuracy and outperforms standard PSC on datasets , and . However, on these datasets, -subspaces is also extremely sensitive to the number of dimensions. In all cases, its clustering performance decreases monotonically as the number of dimensions is increased, which makes the problem of selecting the best model particularly difficult. GPCA displays similar performance to -subspaces, whereas MCtFA typically performs worse than PSC for all values of . However, for , MCtFA performs equivalently to PSC on datasets , and .

The effect of changing the number of latent factors in PSC, SCC, MCtFA and GPCA can also be compared. For SCC, the effect of changing on the clustering accuracy is typically small. However, for PSC and MCtFA, the effect of changing the subspace dimensionality generally varies between datasets and is non-monotonic. This non-monotonic behaviour was also observed by [1] for values of up to . As noted before, this makes model selection particularly challenging. It could be argued that, for some datasets, increasing the number of latent factors beyond three might be helpful for some algorithms; although this may be the case, increasing the number of latent factors in each cluster requires the estimation of a much larger set of parameters, and this seems unnecessary in light of the good performance of PSC, GPCA and -subspaces with only a few latent factors. As the number of latent factors increases, we also observe that the performance of GPCA decreases. The corresponding results using the ARI as a performance measure highlight a larger disparity between the competing methods and PSC, in support of the latter. In particular, the results suggest that these other methods may miss smaller clusters. This is particularly evident in dataset , which is especially challenging due to the presence of clusters of varying sizes.

We also attempted to learn the number of clusters in the six gene expression datasets using the PRESS-based method described in Section 4.4, by setting and selecting 10 variables. The results are reported in Table 9. It can be seen that the PRESS is able to correctly identify the true number of clusters in three of the datasets. In datasets 4 and 5 the estimated number of clusters is off by one. However, the PRESS is unable to correctly identify the true number of clusters in dataset 3. This is perhaps not surprising because this data set contains clusters of normal tissues, which are quite similar to each other, whereas the other datasets contain sub-types of cancerous tissues which show a much higher degree of separation.

Data Set
Algorithm 1 2 3 4 5 6
PSC () 10 0.959 0.932 0.910 0.969 0.922 0.954
50 0.963 0.929 0.938 0.950 0.888 0.944
100 0.936 0.887 0.926 0.946 0.855 0.903
500 0.963 0.866 0.908 0.873 0.707 0.849
P 0.862 0.830 0.915 0.875 0.695 0.833
PSC () 10 0.911 0.905 0.923 0.963 0.815 0.989
50 1.000 0.914 0.929 0.961 0.839 0.980
100 1.000 0.892 0.923 0.968 0.866 0.980
500 1.000 0.858 0.906 0.949 0.715 0.980
P 1.000 0.893 0.918 0.961 0.621 0.980
PSC () 10 0.959 0.894 0.871 0.962 0.858 0.990
50 1.000 0.861 0.911 0.969 0.845 0.990
100 0.959 0.877 0.927 0.980 0.865 0.990
500 0.963 0.850 0.879 0.972 0.816 0.990
P 0.923 0.854 0.876 0.953 0.740 0.990
Sparse -means 10 0.812 0.695 0.871 0.774 0.547 0.755
50 0.852 0.720 0.898 0.777 0.580 0.765
100 0.885 0.734 0.913 0.776 0.580 0.765
500 0.959 0.734 0.916 0.776 0.582 0.765
P 0.959 0.734 0.913 0.776 0.582 0.765
-subspaces () 0.918 0.784 0.932 0.935 0.660 0.819
-subspaces () 0.684 0.761 0.921 0.814 0.593 0.747
-subspaces () 0.558 0.718 0.877 0.771 0.563 0.715
SCC () 0.675 0.770 0.865 0.869 0.517 0.897
SCC () 0.671 0.772 0.864 0.868 0.594 0.898
SCC () 0.672 0.774 0.865 0.869 0.598 0.901
MCtFA () 0.652 0.717 0.630 0.820 0.645 0.696
MCtFA () 0.693 0.522 0.642 0.932 0.732 0.943
MCtFA () 0.879 0.689 0.725 0.862 0.922 0.963
GPCA () 0.852 0.836 0.912 0.905 0.580 0.803
GPCA () 0.822 0.799 0.903 0.759 0.562 0.708
GPCA () 0.590 0.706 0.866 0.755 0.558 0.746

Table 7: Mean clustering accuracy of competing clustering algorithms on the six gene expression datasets described in Table 6.
Data Set
Algorithm 1 2 3 4 5 6
PSC () 10 0.911 0.778 0.400 0.909 0.844 0.874
50 0.919 0.772 0.586 0.850 0.776 0.849
100 0.860 0.655 0.507 0.841 0.712 0.739
500 0.919 0.578 0.407 0.619 0.424 0.612
P 0.698 0.460 0.434 0.636 0.401 0.547
PSC () 10 0.911 0.697 0.495 0.889 0.634 0.973
50 1.000 0.732 0.542 0.881 0.681 0.946
100 1.000 0.667 0.466 0.906 0.735 0.946
500 1.000 0.577 0.440 0.849 0.441 0.946
P 1.000 0.673 0.493 0.885 0.639 0.946
PSC () 10 0.911 0.675 0.038 0.888 0.719 0.973
50 1.000 0.570 0.370 0.907 0.693 0.973
100 0.959 0.613 0.496 0.942 0.731 0.973
500 0.963 0.533 0.293 0.917 0.637 0.973
P 0.923 0.548 0.131 0.865 0.488 0.973
Sparse -means 10 0.605 0.128 0.236 0.342 0.115 0.426
50 0.685 0.203 0.379 0.345 0.176 0.421
100 0.749 0.212 0.435 0.355 0.176 0.421
500 0.911 0.212 0.457 0.348 0.181 0.421
P 0.911 0.212 0.455 0.348 0.181 0.421
-subspaces () 0.824 0.424 0.682 0.803 0.333 0.544
-subspaces () 0.320 0.269 0.490 0.431 0.205 0.356
-subspaces () 0.030 0.141 0.289 0.295 0.140 0.248
SCC () 0.291 0.302 0.173 0.579 0.215 0.717
SCC () 0.289 0.299 0.183 0.581 0.210 0.730
SCC () 0.310 0.316 0.193 0.571 0.203 0.705
MCtFA () 0.367 0.226 0.034 0.708 0.149 0.429
MCtFA () 0.830 0.317 0.055 0.809 0.598 0.921
MCtFA () 0.748 0.213 0.085 0.759 0.844 0.899

GPCA () 0.692 0.540 0.451 0.725 0.181 0.514
GPCA () 0.607 0.373 0.339 0.424 0.143 0.261
GPCA () 0.112 0.087 0.09 0.318 0.133 0.372
Table 8: Mean ARI of competing clustering algorithms on the six gene expression datasets described in Table 6.
Dataset Real Estimated
1 3 3
2 5 5
3 13 6
4 6 5
5 4 5
6 4 4
Table 9: Real vs learned number of data clusters obtained by PSC with the PRESS using only the first principal component on the six gene expression data sets.

7 Conclusion

In this work we have introduced an efficient approach to subspace clustering for high-dimensional data. Our algorithm relies on a new measure of influence for PCA, derived from an approximated PCA PRESS statistic, which can also be used in other applications unrelated to clustering to detect influential observations. Compared to our initial work in [16], here we have presented the relevant methodology in much greater detail, along with a number of extensions and additional empirical evaluations, including a proof of convergence of the PSC algorithm.

A penalised version of the algorithm has also been introduced that can perform simultaneous subspace clustering and variable selection. For high-dimensional data, penalising the PCA solution in this way aids in the interpretability of the resulting partitions by identifying which variables contribute to each latent factor, and therefore which variables are important for explaining the directions of maximal variability in the data. Although the problem of variable selection in clustering has been discussed before (see, for instance, [32]), we are not aware of other subspace clustering algorithms which estimate sparse subspace parameters. Furthermore, more structured penalties could be used instead of the simple penalty [10]. For instance, non-negative cluster-wise parameters may be more appropriate in some situations [31].

Extensive simulation experiments have been presented that compare a number of subspace clustering algorithms. For these experiments, we selected challenging scenarios in which the data within each cluster have low-dimensional representations that need to be identified, including the case where these representations are sparse. Our results indicate that PSC is particularly competitive in a wide range of situations. Apart from comparisons on artificially constructed data, we have also tested whether a subspace approach is beneficial in clustering biological samples for which a thousand gene expression levels have been measured. Our empirical evaluation using six different publicly available data sets suggests that, although the clusters in some data sets might be discovered using more traditional algorithms that exploit geometric structures, subspace clustering is very useful in many other datasets because gene expression levels within each cluster can be approximated well by a few principal components. In our experiments, PSC always appears to be very competitive and, in several situations, has also been shown to outperform the competing methods. Apart from gene expression data, the PSC algorithm has previously been shown to perform particularly well in clustering digital images of human faces collected under different lighting conditions, for which a cluster-wise PCA reconstruction is also appropriate [16].

The PSC algorithm bears some similarity to the -subspaces algorithm: as with -subspaces, we iteratively fit cluster-wise PCA models and reassign points to clusters until a certain optimality condition is met. However, rather than trying to minimise the residuals under the individual PCA models, we introduce an objective function that exploits the predictive nature of PCA in a way that makes it particularly robust against overfitting. Combined with the PRESS statistic, PSC is able to learn both the number of clusters and the dimensionality of each subspace, although this is a particularly difficult problem and more investigation is required. The difficult issue of selecting the correct level of sparsity within each subspace will also be explored in future work.

Appendix A Derivation of predictive influence

Using the chain rule, the gradient of the PRESS for a single latent factor is

For notational convenience we drop the superscript in the following. Using the quotient rule, the partial derivative of the leave-one-out error has the following form