Subspace Clustering via Optimal Direction Search

06/12/2017 ∙ Mostafa Rahmani, et al. ∙ University of Central Florida

This letter presents a new spectral-clustering-based approach to the subspace clustering problem. Underpinning the proposed method is a convex program for optimal direction search, which for each data point d finds an optimal direction in the span of the data that has minimum projection on the other data points and non-vanishing projection on d. The obtained directions are subsequently leveraged to identify a neighborhood set for each data point. An alternating direction method of multipliers framework is provided to efficiently solve for the optimal directions. The proposed method is shown to notably outperform the existing subspace clustering methods, particularly for unwieldy scenarios involving high levels of noise and close subspaces, and yields the state-of-the-art results for the problem of face clustering using subspace segmentation.


I Introduction

In many applications of signal processing and machine learning, the data can be well approximated with low-dimensional subspaces [1]. Subspace recovery methods have been instrumental in reducing dimensionality and recognizing intrinsic patterns in data. Principal Component Analysis (PCA) is a standard tool which approximates the data with a single low-dimensional subspace that has minimum distance from the data points [2, 3, 4]. However, in many applications the data admits clustering structures, wherefore a union of subspaces can better model the data [5].

In the subspace clustering problem, the data points lie in a union of an unknown number of unknown linear subspaces whose dimensions are also generally unknown. The role of a subspace segmentation algorithm then is to learn these low-dimensional subspaces and to cluster the data points to their respective subspaces. This data model has been widely applied to many modern signal processing and machine learning applications, including computer vision [5, 6], gene expression analysis [7, 8], and image processing [9].

Many different approaches to subspace clustering have been devised in prior work, including statistical approaches [10, 11, 12, 13], spectral clustering [14], the algebraic-geometric approach [15], the innovation pursuit approach [16], and iterative methods [17, 18]. We refer the reader to [5, 14, 16] for an overview of the topic. Much of the recent work has focused on spectral-clustering-based methods [20, 21, 14, 22, 23, 24, 25, 26, 27], which all share a common structure. Specifically, a neighborhood set for each data point is first identified to construct a similarity matrix. Subsequently, spectral clustering [19] is applied to the similarity matrix. Spectral-clustering-based methods differ mostly in the first step.

There exist several recent spectral-clustering-based methods with superior empirical performance. SSC is a popular spectral-clustering-based algorithm, which finds a sparse representation for each data point with respect to the rest of the data to construct the similarity matrix [14]. It was shown in [24] that, under certain conditions, SSC can yield exact clustering even for subspaces with intersections. A different algorithm, called Low-Rank Representation (LRR) [23], uses nuclear norm minimization to build the similarity matrix. In [22], the inner product between the data points is used as a measure of similarity to find a neighborhood set for each data point.
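To make the contrast between these similarity measures concrete, the following is a minimal sketch (not the authors' code) of the sparse self-representation step underlying SSC, written in Python with cvxpy; the regularized least-squares form and the parameter lam are our own simplifications of the formulation in [14].

```python
import numpy as np
import cvxpy as cp

def ssc_sparse_representation(D, i, lam=1e-2):
    """Sparse self-representation of column i of D with respect to the other columns.

    Minimal sketch of the SSC idea: express d_i as a sparse combination of the
    remaining data points. The actual SSC formulation in [14] handles noise and
    outliers differently; this is only illustrative.
    """
    M2 = D.shape[1]
    d_i = D[:, i]
    D_rest = np.delete(D, i, axis=1)          # all columns except d_i
    c = cp.Variable(M2 - 1)
    objective = cp.Minimize(cp.norm1(c) + lam * cp.sum_squares(D_rest @ c - d_i))
    cp.Problem(objective).solve()
    # Re-insert a zero at position i so the coefficient vector aligns with D.
    return np.insert(c.value, i, 0.0)
```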

I-A Summary of contributions

This paper presents a new spectral-clustering-based subspace segmentation method dubbed Direction search based Subspace Clustering (DSC). Underlying our approach is a direction search program that associates an optimal direction with each data point. For each data point, the algorithm finds an optimal direction in the column space of the data matrix that has minimum projection on the rest of the data and non-vanishing projection on that data point. An optimization framework is presented to find all the directions by solving one convex program. Subsequently, the similarity matrix is formed using the obtained directions. The presented numerical experiments demonstrate that DSC often outperforms existing spectral-clustering-based methods, and remarkably improves over the state-of-the-art result for the problem of face clustering using subspace segmentation. In addition, an iterative method to efficiently solve the proposed direction search optimization is provided.

I-B Notation and data model

Bold-face upper-case letters are used to denote matrices and bold-face lower-case letters are used to denote vectors. For a vector $\mathbf{a}$, $\|\mathbf{a}\|_p$ denotes its $\ell_p$-norm. Given two matrices $\mathbf{A}$ and $\mathbf{B}$ with an equal number of rows, the matrix $[\mathbf{A} \ \mathbf{B}]$ is the matrix formed from the concatenation of $\mathbf{A}$ and $\mathbf{B}$. Given a vector $\mathbf{a}$, $|\mathbf{a}|$ is the vector of absolute values of the elements of $\mathbf{a}$. Given a matrix $\mathbf{A}$, $\mathbf{a}_i$ denotes its $i$-th column, $\operatorname{col}(\mathbf{A})$ its column space, and $\operatorname{tr}(\mathbf{A})$ its trace. In addition, $\operatorname{diag}(\mathbf{A})$ returns a vector of the diagonal elements of $\mathbf{A}$. The symbol $\oplus$ denotes the direct sum operator.

In this paper, the data is assumed to follow the subspace clustering structure expressed in the following data model.

Data Model 1. The data matrix $\mathbf{D} \in \mathbb{R}^{M_1 \times M_2}$ can be represented as $\mathbf{D} = [\mathbf{D}_1, \dots, \mathbf{D}_N]\,\mathbf{T}$, where $\mathbf{T}$ is an arbitrary permutation matrix. The columns of $\mathbf{D}_i$ lie in $\mathcal{S}_i$, where $\mathcal{S}_i$ is an $r_i$-dimensional linear subspace, for $1 \leq i \leq N$, and $M_2 = \sum_{i=1}^{N} m_i$, where $m_i$ is the number of columns of $\mathbf{D}_i$.

We define $\mathbf{Q} \in \mathbb{R}^{M_1 \times r}$ as an orthonormal basis for $\operatorname{col}(\mathbf{D})$, where $r$ is the rank of $\mathbf{D}$. If the data is noisy, the matrix $\mathbf{Q}$ is formed using the $r$ dominant left singular vectors of $\mathbf{D}$. In addition, the projected data is defined as $\hat{\mathbf{D}} = \mathbf{Q}^T \mathbf{D}$.
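As a concrete illustration of this preprocessing step, the following numpy sketch (our own code, with hypothetical function and variable names) forms Q from the dominant left singular vectors and projects the data:

```python
import numpy as np

def project_data(D, r=None):
    """Form an orthonormal basis Q for the span of the data and the projected
    data D_hat = Q^T D.

    If r is None, it is set to the numerical rank of D; with noisy data, r
    should be set to the number of dominant singular values, as described in
    the text.
    """
    U, s, _ = np.linalg.svd(D, full_matrices=False)
    if r is None:
        r = int(np.sum(s > 1e-10 * s[0]))      # numerical rank estimate
    Q = U[:, :r]                               # dominant left singular vectors
    D_hat = Q.T @ D                            # projected (r x M2) data
    return Q, D_hat
```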

II Direction search clustering

The proposed approach consists of $M_2$ identical optimization problems, one per data point. The optimization problem

$\min_{\hat{\mathbf{c}}} \ \|\hat{\mathbf{c}}^T \hat{\mathbf{D}}\|_p \quad \text{subject to} \quad \hat{\mathbf{c}}^T \hat{\mathbf{d}}_i = 1 \qquad (1)$

corresponding to the $i$-th data point $\mathbf{d}_i$, searches for a direction $\hat{\mathbf{c}}_i$ in the column space of the projected data $\hat{\mathbf{D}}$ with non-zero projection on $\hat{\mathbf{d}}_i$ and minimum projection on the rest of the data. In this paper, we use $p = 1$ or $p = 2$ for the $\ell_p$-norm. The linear constraint enforces the optimal point of (1) to have strong coherence with $\hat{\mathbf{d}}_i$. In practice, the data points within a subspace are mutually coherent, wherefore the optimal point of (1) will have a large projection on the other data points in the subspace containing $\mathbf{d}_i$. Accordingly, if we sample a few of the columns of $\mathbf{D}$ corresponding to the elements of $|\hat{\mathbf{c}}_i^T \hat{\mathbf{D}}|$ with the largest values, they will all lie in the subspace containing $\mathbf{d}_i$. Thus, we exploit the obtained directions to construct a neighborhood set for each data point in order to build a similarity matrix, hence the name Direction search based Subspace Clustering (DSC). Algorithm 1 describes the proposed DSC method. The first step finds all the directions in one shot by solving a single convex optimization problem. The similarity matrix is formed in the second step, and in the final step the spectral clustering algorithm is applied to the similarity matrix. For more information about Steps 2.2 and 3, the reader is referred to [19, 5].
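A minimal prototype of the per-point direction search, assuming the reconstructed form of (1) given above, can be written with cvxpy as follows; this is an illustrative sketch rather than the solver used in the letter (Section II-B develops an ADMM solver instead).

```python
import numpy as np
import cvxpy as cp

def optimal_direction(D_hat, i, p=1):
    """Solve the direction search program (1) for the i-th data point (sketch).

    Minimizes the l_p norm of the projections of the direction on all data
    points, subject to unit projection on the i-th projected data point.
    """
    r = D_hat.shape[0]
    c = cp.Variable(r)
    objective = cp.Minimize(cp.norm(D_hat.T @ c, p))
    constraints = [D_hat[:, i] @ c == 1]
    cp.Problem(objective, constraints).solve()
    return c.value
```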

Fig. 1: Measures of similarity adopted by DSC (first two columns, for the two choices of $p$) and TSC (last column) to identify a neighborhood set for the first data point; each row corresponds to a different dimension of intersection between the subspaces.

Initialization: Set $n$ equal to the cardinality of a neighborhood set. Set the similarity matrix $\mathbf{W}$ equal to a zero matrix and set $p$ equal to 1 or 2.

Normalize the $\ell_2$-norm of the columns of $\mathbf{D}$ (i.e., set $\mathbf{d}_i$ equal to $\mathbf{d}_i / \|\mathbf{d}_i\|_2$). Form the matrix $\hat{\mathbf{D}} = \mathbf{Q}^T \mathbf{D}$.

1. Define $\{\hat{\mathbf{c}}_i\}_{i=1}^{M_2}$ as the optimal points of

$\min_{\hat{\mathbf{C}}} \ \sum_{i=1}^{M_2} \|\hat{\mathbf{c}}_i^T \hat{\mathbf{D}}\|_p \quad \text{subject to} \quad \operatorname{diag}(\hat{\mathbf{C}}^T \hat{\mathbf{D}}) = \mathbf{1}, \qquad (2)$

where $\hat{\mathbf{C}} = [\hat{\mathbf{c}}_1, \dots, \hat{\mathbf{c}}_{M_2}]$ and $\mathbf{1}$ is the vector of all ones.

2. For $i = 1$ to $M_2$
2.1 Set $\mathcal{I}_i$ equal to the index set of the $n$ largest elements of $|\hat{\mathbf{c}}_i^T \hat{\mathbf{D}}|$.
2.2 Set $[\mathbf{w}_i]_{\mathcal{I}_i} = \exp\!\left(-\operatorname{acos}\!\left(|\hat{\mathbf{c}}_i^T \hat{\mathbf{D}}_{\mathcal{I}_i}|\right)\right)$, where acos and exp are the element-wise inverse cosine and exponential functions, respectively, $\hat{\mathbf{D}}_{\mathcal{I}_i}$ the columns of $\hat{\mathbf{D}}$ indexed by $\mathcal{I}_i$, $\mathbf{w}_i$ the $i$-th row of $\mathbf{W}$, and $[\mathbf{w}_i]_{\mathcal{I}_i}$ the elements of $\mathbf{w}_i$ indexed by $\mathcal{I}_i$.
End For

3. Apply spectral clustering to the matrix $\mathbf{W} + \mathbf{W}^T$.

Algorithm 1 Direction search based Subspace Clustering (DSC)
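For illustration, the following sketch implements Steps 2 and 3 of Algorithm 1 in Python. The similarity weights use exp(-acos(|.|)), which is our reading of the partially garbled Step 2.2, and spectral clustering is delegated to scikit-learn; treat this as a prototype under those assumptions, not a faithful reproduction of the authors' implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def dsc_cluster(D_hat, C_hat, n_neighbors, n_clusters):
    """Steps 2 and 3 of Algorithm 1 (sketch).

    C_hat holds the optimal directions (one column per data point) obtained in
    Step 1. The kernel exp(-acos(|.|)) is our assumption about Step 2.2.
    """
    M2 = D_hat.shape[1]
    W = np.zeros((M2, M2))
    P = np.abs(C_hat.T @ D_hat)                 # |c_i^T d_j| for all pairs
    for i in range(M2):
        idx = np.argsort(P[i])[-n_neighbors:]   # neighborhood set I_i
        W[i, idx] = np.exp(-np.arccos(np.clip(P[i, idx], 0.0, 1.0)))
    W = W + W.T                                  # symmetrize before spectral clustering
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(W)
    return labels
```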

Sparse regularization: If the data matrix is low rank, each vector in $\operatorname{col}(\hat{\mathbf{D}})$ can be represented as a sparse combination of the columns of $\hat{\mathbf{D}}$. For such a setting, we can rewrite (2) as

$\min_{\mathbf{A}} \ \sum_{i=1}^{M_2} \|(\hat{\mathbf{D}}\mathbf{a}_i)^T \hat{\mathbf{D}}\|_p + \lambda \|\mathbf{A}\|_1 \quad \text{subject to} \quad \operatorname{diag}\big((\hat{\mathbf{D}}\mathbf{A})^T \hat{\mathbf{D}}\big) = \mathbf{1}, \qquad (3)$

where $\hat{\mathbf{C}} = \hat{\mathbf{D}}\mathbf{A}$, $\mathbf{a}_i$ is the $i$-th column of $\mathbf{A}$, and $\lambda$ is a regularization parameter. The sparse representation can further enhance the robustness of the proposed approach to noise. The singular vectors corresponding to the noise component do not admit sparse representations in the data; that is, they are normally obtained through linear combinations of a large number of data points. Thus, enforcing a sparse representation for the optimal direction averts a solution of (3) that lies in close proximity to the noise singular vectors.
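Since the exact form of (3) is not fully recoverable from the scraped text, the following cvxpy sketch encodes one plausible reading for a single data point: the direction is constrained to be a combination D_hat @ a of the data and the l1 norm of a is penalized.

```python
import cvxpy as cp

def sparse_direction(D_hat, i, lam=0.1, p=1):
    """One plausible reading of the sparse-regularized direction search (3),
    restricted to a single data point: the direction is D_hat @ a and the
    l1 norm of a is penalized. Illustrative only."""
    M2 = D_hat.shape[1]
    a = cp.Variable(M2)
    c = D_hat @ a                                  # direction as a combination of the data
    objective = cp.Minimize(cp.norm(D_hat.T @ c, p) + lam * cp.norm1(a))
    constraints = [D_hat[:, i] @ c == 1]
    cp.Problem(objective, constraints).solve()
    return D_hat @ a.value, a.value
```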

II-A Connection and contrast to TSC and iPursuit

We point out some similarities and fundamental differences between DSC and the most closely related approaches. DSC and TSC bear some resemblance from a structural standpoint, yet are conceptually very different in how data similarity is viewed and measured, and thus in how neighborhoods are constructed. Specifically, underlying DSC is the convex program (2), whereby optimal directions are obtained in Step 1 and used in Step 2.1 of Algorithm 1 to construct the similarity matrix. This is fundamentally different from the thresholding-based subspace clustering (TSC) algorithm [22], which uses the data points themselves as directions. Thus, in TSC the equivalent of the set $\mathcal{I}_i$ is formed from the indices of the $n$ largest elements of $|\mathbf{d}_i^T \mathbf{D}|$. Hence, the performance of TSC greatly declines when the subspaces are in close proximity.

As an example, suppose the columns of $\mathbf{D}$ lie in the union of 4 10-dimensional subspaces $\{\mathcal{S}_i\}_{i=1}^4$, each with 100 data points, where $\mathcal{S}_i = \mathcal{M} \oplus \mathcal{R}_i$. Here $\mathcal{M}$ is a random $y$-dimensional subspace common to all the subspaces, and $\{\mathcal{R}_i\}_{i=1}^4$ are random $(10-y)$-dimensional subspaces. Thus, the dimension of the intersection between the subspaces is equal to $y$ with high probability. We solve (1) for the first data point. The first two columns of Fig. 1 illustrate the values of $|\hat{\mathbf{c}}_1^T \hat{\mathbf{D}}|$ for $p = 1$ and $p = 2$, adopted by DSC as a measure of similarity to build the neighborhood set of the first data point, and the last column displays the values of $|\mathbf{d}_1^T \mathbf{D}|$ adopted by TSC. In the first row, $y = 0$, corresponding to independent subspaces that do not intersect. The second row corresponds to a larger $y$ (i.e., closer subspaces), and the last row to a larger $y$ still. As desired, the largest values of $|\hat{\mathbf{c}}_1^T \hat{\mathbf{D}}|$, which are used to form the set $\mathcal{I}_1$ in Step 2.1, consistently correspond to the first subspace. When $y = 0$, the subspaces are not very close to each other and TSC can build a correct neighborhood for $\mathbf{d}_1$, since the data columns corresponding to the largest elements of $|\mathbf{d}_1^T \mathbf{D}|$ all lie in the same subspace $\mathcal{S}_1$. However, in the second and third rows, where $y > 0$, TSC cannot form a proper neighborhood as the data points corresponding to the largest elements of $|\mathbf{d}_1^T \mathbf{D}|$ do not lie in the same cluster. Despite the close proximity of the subspaces, (1) finds a direction in the data span that is strongly coherent with the first subspace and has small projection on the other subspaces. This feature notably empowers DSC to distinguish the data clusters.
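The synthetic configuration described above can be reproduced, up to the unrecoverable values of y used in the figure, with a short numpy routine such as the following (our own parameter names):

```python
import numpy as np

def union_with_intersection(M1, n_subspaces, d, y, points_per_subspace, rng=None):
    """Generate data in a union of d-dimensional subspaces that share a random
    y-dimensional subspace M, as in the Fig. 1 example. A sketch with our own
    parameter names; the exact dimensions in the letter's figure are not all
    recoverable from the text."""
    rng = np.random.default_rng(rng)
    # Orthonormal basis of the common (intersection) subspace.
    M_basis = (np.linalg.qr(rng.standard_normal((M1, y)))[0]
               if y > 0 else np.zeros((M1, 0)))
    blocks, labels = [], []
    for i in range(n_subspaces):
        R_i = np.linalg.qr(rng.standard_normal((M1, d - y)))[0]   # innovation part
        S_i = np.linalg.qr(np.hstack([M_basis, R_i]))[0]          # basis for S_i
        G = rng.standard_normal((d, points_per_subspace))         # random coefficients
        blocks.append(S_i @ G)
        labels += [i] * points_per_subspace
    D = np.hstack(blocks)
    D /= np.linalg.norm(D, axis=0, keepdims=True)                 # unit-norm columns
    return D, np.array(labels)
```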

In [16], we developed an iterative subspace clustering approach termed iPursuit (short for innovation pursuit). Akin to DSC, iPursuit leverages a direction search module for subspace identification, although the overall approach is very different. To describe the connection to, and difference from, DSC we need the following definition.

Definition 1.

(Innovation subspace) Suppose $\mathbf{D}$ follows Data Model 1, $\mathbf{S}_i$ is an orthonormal basis for $\mathcal{S}_i$, $\mathbf{P}_i$ is an orthonormal basis for $\oplus_{j \neq i} \mathcal{S}_j$, and $\mathcal{S}_i \not\subseteq \oplus_{j \neq i} \mathcal{S}_j$ (i.e., $\mathcal{S}_i$ does not lie completely in the direct sum of the other subspaces). Then, the innovation subspace corresponding to $\mathcal{S}_i$, denoted $\mathcal{I}(\mathcal{S}_i)$, is defined as the linear subspace in $\operatorname{col}(\mathbf{D})$ spanned by the columns of $(\mathbf{I} - \mathbf{P}_i \mathbf{P}_i^T)\mathbf{S}_i$.

In [16], it was shown that if the constraint vector $\mathbf{q}$ lies in the span of the data and is sufficiently close to $\mathcal{I}(\mathcal{S}_i)$, then the optimal point of

$\min_{\hat{\mathbf{c}}} \ \|\hat{\mathbf{c}}^T \mathbf{D}\|_1 \quad \text{subject to} \quad \hat{\mathbf{c}}^T \mathbf{q} = 1 \qquad (5)$

lies in $\mathcal{I}(\mathcal{S}_i)$. Therefore, iPursuit exploits this result, combined with the fact that $\mathcal{I}(\mathcal{S}_i)$ is orthogonal to $\oplus_{j \neq i} \mathcal{S}_j$ per Definition 1, to directly separate out the different subspaces successively. In contrast, DSC is a spectral-clustering-based approach which uses the outcome of the direction search to build a similarity matrix. The main restriction of iPursuit is that it requires every subspace to carry innovation relative to the other subspaces. In other words, iPursuit requires that no subspace lie in the direct sum of the other subspaces. DSC has no such restriction. For illustration, the first row of Fig. 1 indeed shows the orthogonality of the optimal direction to the other subspaces when $y = 0$. However, in the setting of the last row of Fig. 1, the requirement of iPursuit is violated and iPursuit cannot yield correct clustering. On the other hand, DSC samples a few columns corresponding to the largest elements of $|\hat{\mathbf{c}}_1^T \hat{\mathbf{D}}|$, which all lie in the first cluster. Therefore, DSC can form a proper neighborhood set even if the subspaces do not have relative innovations.
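For intuition, a basis for the innovation subspace in Definition 1, as reconstructed above, can be computed by projecting a basis of S_i onto the orthogonal complement of the direct sum of the other subspaces; the following numpy sketch (our own helper, not from [16]) does exactly that:

```python
import numpy as np

def innovation_basis(S_i, P_i):
    """Orthonormal basis for the innovation subspace of S_i (sketch).

    S_i: orthonormal basis of the subspace of interest (columns).
    P_i: orthonormal basis of the direct sum of the remaining subspaces.
    Projects S_i onto the orthogonal complement of span(P_i) and
    re-orthonormalizes the result.
    """
    residual = S_i - P_i @ (P_i.T @ S_i)        # (I - P_i P_i^T) S_i
    Q, R = np.linalg.qr(residual)
    keep = np.abs(np.diag(R)) > 1e-10           # drop negligible directions
    return Q[:, keep]
```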

Fig. 2: Performance of the algorithms versus the dimension of intersection for different noise levels.

II-B Solving the proposed optimization problem

In this section, we use the Alternating Direction Method of Multipliers (ADMM) [28] to develop an efficient algorithm for solving (3), which is a generalized form of (2). The optimization problem (3) is equivalent to

(6)

The Lagrangian function of (6) can be written as

(7)

where $\mu$ is the regularization parameter. The ADMM approach is an iterative procedure in which the optimization variables and the Lagrange multipliers are updated alternately at each iteration. The $\ell_1$ terms are handled through an element-wise soft-thresholding function, and a column-wise shrinkage operator sets a column equal to zero if its $\ell_2$-norm falls below a threshold and shrinks it otherwise. Each iteration consists of the following steps:

(8)

These steps are repeated until the algorithm converges or the number of iterations exceeds a predefined threshold.
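The explicit update steps in (8) are not recoverable from the scraped text. As a hedged illustration of the ingredients named above, the following numpy snippet implements the two shrinkage operators such an ADMM solver would typically rely on: element-wise soft-thresholding for the l1 term and column-wise shrinkage for the sum of column l2-norms.

```python
import numpy as np

def soft_threshold(X, t):
    """Element-wise soft-thresholding: proximal operator of t * ||X||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def column_shrink(X, t):
    """Column-wise shrinkage: proximal operator of t * (sum of column l2-norms).
    Sets a column to zero if its norm is below t, shrinks it otherwise."""
    out = np.zeros_like(X)
    norms = np.linalg.norm(X, axis=0)
    keep = norms > t
    out[:, keep] = X[:, keep] * (1.0 - t / norms[keep])
    return out
```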

The complexity of the initialization step (obtaining the matrices and ) is roughly . The order of complexity of each iteration is also . Thus, the overall complexity is , where is the number of iterations and the second term corresponds to the complexity of calculating the matrix .

III Numerical Simulations

In this section, we study the performance of DSC with both synthetic and real data. In the experiments with synthetic data, the data lies in a union of $N$ subspaces $\{\mathcal{S}_i\}_{i=1}^N$, where $\mathcal{S}_i = \mathcal{M} \oplus \mathcal{R}_i$. The subspace $\mathcal{M}$ is a random $y$-dimensional subspace common to all the subspaces, and $\{\mathcal{R}_i\}_{i=1}^N$ are random $(d-y)$-dimensional subspaces, where $d$ is the dimension of each subspace. Hence, the dimension of the intersection between the subspaces is equal to $y$. The data points are distributed uniformly at random within the subspaces, i.e., a data point lying in a $d$-dimensional subspace $\mathcal{S}_i$ is generated as $\mathbf{S}_i \mathbf{g}$, where the elements of $\mathbf{g}$ are sampled independently from a standard normal distribution $\mathcal{N}(0,1)$ and $\mathbf{S}_i$ is an orthonormal basis for $\mathcal{S}_i$. If $n_e$ is the number of misclassified data points, the clustering error is defined as $100 \times n_e / M_2$. DSC is compared against SSC [14], LRR [23], TSC [22], SSC-OMP [20], and SCC [26]. In the simulations with synthetic data, the performance of DSC with $p = 1$ and $p = 2$ is similar; however, in the face clustering example, one of the two choices of $p$ yields better performance, and we report all the results with that setting. The same values of $n$ and $\lambda$ are used in all experiments.
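The clustering error defined above requires matching estimated cluster labels to the ground-truth labels; a standard way to compute it (our implementation, not the authors') uses the Hungarian algorithm over the confusion matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_error(true_labels, pred_labels):
    """Percentage of misclassified points under the best label matching."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    n_clusters = int(max(true_labels.max(), pred_labels.max())) + 1
    # confusion[i, j] = number of points with true label i and predicted label j.
    confusion = np.zeros((n_clusters, n_clusters), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        confusion[t, p] += 1
    # Maximize matched points, i.e., minimize the negative confusion counts.
    row, col = linear_sum_assignment(-confusion)
    n_correct = confusion[row, col].sum()
    return 100.0 * (1.0 - n_correct / true_labels.size)
```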

III-A Noisy data

In this section, we study the performance of DSC with noisy data. The data points lie in a union of 20 10-dimensional linear subspaces, constructed as described above. There are 100 data points in each cluster. The noisy data matrix is obtained by adding to a matrix $\mathbf{D}$ that follows Data Model 1 a noise matrix $\mathbf{E}$ whose elements are sampled from a standard normal distribution, where a parameter $\tau$ determines the relative power of the noise component. Fig. 2 shows the performance of the different algorithms versus the dimension of intersection for different values of $\tau$. It is worth noting that in this experiment not all subspaces have relative innovations, which excludes iPursuit as a feasible choice. As shown, the proposed approach notably outperforms the other spectral-clustering-based algorithms in all four scenarios.
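The letter does not spell out how the noise is normalized; one common convention, assumed here purely for illustration, scales the noise matrix so that tau is the ratio of the noise to signal Frobenius norms:

```python
import numpy as np

def add_noise(D, tau, rng=None):
    """Add Gaussian noise whose Frobenius norm is tau times that of D.

    This normalization is an assumption; the letter only states that tau
    determines the relative power of the noise component.
    """
    rng = np.random.default_rng(rng)
    E = rng.standard_normal(D.shape)
    E *= tau * np.linalg.norm(D) / np.linalg.norm(E)
    return D + E
```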

Fig. 3: Performance of the algorithms for different numbers of data clusters. The left and right plots correspond to a smaller and a larger dimension of intersection between the subspaces, respectively.

III-B Clustering error versus the number of clusters

Here, we investigate the performance of the algorithms when there is a large number of clusters. The data follows Data Model 1, the dimension of each subspace is equal to 6, and the subspaces are generated as in the previous experiments. There are 60 data points in each cluster. Fig. 3 shows the clustering error versus the number of subspaces. In the left plot, the dimension of intersection is small and all algorithms except for LRR yield accurate clustering. In the right plot, the dimension of intersection is larger, in which case the clustering error of all algorithms except for DSC notably increases with the number of subspaces.

III-C Face clustering

Face clustering is a challenging and practical application of subspace clustering [14]. We use the Extended Yale B dataset, which contains 64 images for each of 38 individuals in frontal view and different illumination conditions [29]. The faces corresponding to each subject can be approximated with a low-dimensional subspace. Thus, a data set containing face images from multiple subjects can be modeled as a union of subspaces.

We apply DSC to face clustering and present results for different numbers of clusters in Table I. The performance is also compared with SSC, SCC, and TSC. To date, SSC has yielded the best known results for this problem. For each number of clusters shown (except 38), we ran the algorithms over 50 different random combinations of subjects from the 38 clusters. To reduce the running time, we project the data on the span of the first 500 left singular vectors, which does not affect the performance of the algorithms (except SSC). For SSC, we report the results both without projection (SSC) and with projection (SSC-P). As shown, DSC yields accurate clustering and notably outperforms SSC.

Number of subjects | DSC  | SSC   | SSC-P | SCC   | TSC
5                  | 2.56 | 4.24  | 29.04 | 62.62 | 25.62
10                 | 4.88 | 9.53  | 32.76 | 74.13 | 40.46
15                 | 4.71 | 15.66 | 34.21 | 77.02 | 44.79
20                 | 6.45 | 19.95 | 33.67 | 78.50 | 45.30
25                 | 8.53 | 24.76 | 50.19 | 79.37 | 46.46
38                 | 8.84 | 27.47 | 50.37 | 88.86 | 47.12
TABLE I: Clustering error (%) of different algorithms on the Extended Yale B dataset.

References

  • [1] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
  • [2] M. Rahmani and G. Atia, “Randomized robust subspace recovery and outlier detection for high dimensional data matrices,” IEEE Transactions on Signal Processing, vol. 65, no. 6, March 2017.
  • [3] T. Zhang and G. Lerman, “A novel M-estimator for robust PCA,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 749–808, 2014.
  • [4] G. Lerman, M. B. McCoy, J. A. Tropp, and T. Zhang, “Robust computation of linear models by convex relaxation,” Foundations of Computational Mathematics, vol. 15, no. 2, pp. 363–410, 2015.
  • [5] R. Vidal, “Subspace clustering,” IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 52–68, 2011.
  • [6] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, 2003, pp. 1–11.
  • [7] B. McWilliams and G. Montana, “Subspace clustering of high-dimensional data: a predictive approach,” Data Mining and Knowledge Discovery, vol. 28, no. 3, pp. 736–772, 2014.
  • [8] H.-P. Kriegel, P. Kröger, and A. Zimek, “Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 3, no. 1, p. 1, 2009.
  • [9] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, “Unsupervised segmentation of natural images via lossy data compression,” Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.
  • [10] A. Y. Yang, S. R. Rao, and Y. Ma, “Robust statistical estimation and segmentation of multiple subspaces,” in Computer Vision and Pattern Recognition Workshop (CVPRW), 2006, pp. 99–99.
  • [11] M. E. Tipping and C. M. Bishop, “Mixtures of probabilistic principal component analyzers,” Neural computation, vol. 11, no. 2, pp. 443–482, 1999.
  • [12] Y. Sugaya and K. Kanatani, “Geometric structure of degeneracy for multi-body motion segmentation,” in Statistical Methods in Video Processing.   Springer, 2004, pp. 13–25.
  • [13] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
  • [14] E. Elhamifar and R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2765–2781, 2013.
  • [15] R. Vidal, Y. Ma, and S. Sastry, “Generalized principal component analysis (GPCA),” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1945–1959, 2005.
  • [16] M. Rahmani and G. K. Atia, “Innovation pursuit: A new approach to subspace clustering,” IEEE Transactions on Signal Processing, 2017.
  • [17] P. S. Bradley and O. L. Mangasarian, “k-plane clustering,” Journal of Global Optimization, vol. 16, no. 1, pp. 23–32, 2000.
  • [18] T. Zhang, A. Szlam, and G. Lerman, “Median k-flats for hybrid linear modeling with many outliers,” in IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), 2009, pp. 234–241.
  • [19] U. Von Luxburg, “A tutorial on spectral clustering,” Statistics and computing, vol. 17, no. 4, pp. 395–416, 2007.
  • [20] E. L. Dyer, A. C. Sankaranarayanan, and R. G. Baraniuk, “Greedy feature selection for subspace clustering,” The Journal of Machine Learning Research, vol. 14, no. 1, pp. 2487–2517, 2013.
  • [21] H. Gao, F. Nie, X. Li, and H. Huang, “Multi-view subspace clustering,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 4238–4246.
  • [22] R. Heckel and H. Bölcskei, “Robust subspace clustering via thresholding,” arXiv preprint arXiv:1307.4891, 2013.
  • [23] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
  • [24] M. Soltanolkotabi, E. J. Candes et al., “A geometric analysis of subspace clustering with outliers,” The Annals of Statistics, vol. 40, no. 4, pp. 2195–2238, 2012.
  • [25] Y.-X. Wang, H. Xu, and C. Leng, “Provable subspace clustering: When LRR meets SSC,” in Advances in Neural Information Processing Systems, 2013, pp. 64–72.
  • [26] G. Chen and G. Lerman, “Spectral curvature clustering (SCC),” International Journal of Computer Vision, vol. 81, no. 3, pp. 317–330, 2009.
  • [27] D. Park, C. Caramanis, and S. Sanghavi, “Greedy subspace clustering,” in Advances in Neural Information Processing Systems, 2014, pp. 2753–2761.
  • [28] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
  • [29] K.-C. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684–698, 2005.