Massive amounts of data collected by modern information systems give rise to new challenges in the fields of signal processing, machine learning, and data analysis. One such challenge is to develop fast and accurate algorithms for finding low-dimensional structures in large-scale high-dimensional data sets. The task of extracting such low-dimensional structures arises in many practical applications, including motion segmentation and face clustering in computer vision [1, 2], image representation and compression in image clustering [3, 4], and hybrid system identification in systems theory [5]. In these settings, the data can be thought of as a collection of points lying on a union of low-dimensional subspaces. The goal of subspace clustering is to organize the data points into several clusters so that each cluster contains only points from the same subspace.
Subspace clustering has drawn significant attention over the past decade [6]. Among various approaches to subspace clustering, methods that rely on spectral clustering [7] to analyze the similarity matrix representing the relations among data points have received much attention due to their simplicity, theoretical rigour, and superior performance. These methods assume that the data is self-expressive, i.e., each data point can be represented by a linear combination of the other points in the union of subspaces. This motivates the search for a so-called subspace preserving similarity matrix which establishes stronger connections among points originating from the same subspace. To form such a similarity matrix, the sparse subspace clustering (SSC) method in [8, 9] employs a sparse reconstruction algorithm referred to as basis pursuit (BP) that minimizes an $\ell_1$-norm objective by means of convex optimization approaches such as interior point methods [10] or the alternating direction method of multipliers (ADMM) [11]. In [12, 13], orthogonal matching pursuit (OMP) is used to greedily build the similarity matrix. Low rank subspace clustering approaches in [14, 15, 16, 17] rely on convex optimization techniques with $\ell_2$-norm and nuclear norm regularizations and find the singular value decomposition (SVD) of the data so as to build the similarity matrix. Finally, [18] presents an algorithm that constructs the similarity matrix by thresholding the correlations among the data points. The performance of self-expressiveness-based subspace clustering schemes has been analyzed in various settings. It was shown in [8, 9] that when the subspaces are disjoint (independent), the BP-based method is subspace preserving. [19, 20] take a geometric point of view to further study the performance of the BP-based SSC algorithm in the setting of intersecting subspaces and in the presence of outliers. These results are extended to the OMP-based SSC in [12, 13].
Sparse subspace clustering of large-scale data is computationally challenging. The computational complexity of the state-of-the-art BP-based method [8, 9] and of the low rank representation methods [14, 15, 16, 17] is often prohibitive in practical applications. On the other hand, current scalable SSC algorithms, e.g., [12, 13], may produce poor clustering solutions, especially in scenarios where the subspaces are not well separated. In this paper, we address these challenges by proposing a novel self-expressiveness-based algorithm for subspace clustering that exploits a fast variant of orthogonal least-squares (OLS) to efficiently form a similarity matrix by finding a sparse representation for each data point. We analyze the performance of the proposed scheme and show that when the subspaces are independent, the proposed algorithm always finds a solution that is subspace preserving. Simulation studies illustrate that our proposed SSC algorithm significantly outperforms the state-of-the-art BP-based method [8, 9] in terms of runtime while providing essentially the same or better clustering accuracy. The results further illustrate that, unlike the methods in [8, 12, 13], our proposed scheme finds a subspace preserving solution even when the subspaces are dependent.
The rest of the paper is organized as follows. Section 2 formally states the subspace clustering problem and reviews relevant concepts. In Section 3, we introduce the accelerated sparse subspace clustering algorithm and analyze its performance. Section 4 presents the simulation results, while concluding remarks are stated in Section 5. The MATLAB implementation of the proposed algorithm is available at https://github.com/realabolfazl/ASSC.
2 Problem Formulation
We first briefly summarize the notation used in the paper and then formally introduce the SSC problem.
Bold capital letters denote matrices while bold lowercase letters represent vectors. For a matrix $\mathbf{A}$, $A_{ij}$ denotes the $(i,j)$ entry of $\mathbf{A}$, and $\mathbf{a}_i$ is the $i^{\mathrm{th}}$ column of $\mathbf{A}$. Additionally, $\mathbf{A}_{\mathcal{S}}$ is the submatrix of $\mathbf{A}$ that contains the columns of $\mathbf{A}$ indexed by the set $\mathcal{S}$. $\mathcal{R}(\mathbf{A})$ denotes the subspace spanned by the columns of $\mathbf{A}$. $\mathbf{P}_{\mathbf{A}}^{\perp} = \mathbf{I} - \mathbf{A}\mathbf{A}^{\dagger}$ is the projection operator onto the orthogonal complement of $\mathcal{R}(\mathbf{A})$, where $\mathbf{A}^{\dagger}$ denotes the Moore-Penrose pseudo-inverse of $\mathbf{A}$ and $\mathbf{I}$ is the identity matrix. Further, let $\mathbf{1}_n$ be the $n$-dimensional vector of all ones, and let $\mathcal{U}(a, b)$ denote the uniform distribution on the interval $[a, b]$.
The SSC problem is detailed next. Let $\mathcal{Y} = \{\mathbf{y}_i\}_{i=1}^{N}$ be a collection of data points in $\mathbb{R}^{D}$ and let $\mathbf{Y} = [\mathbf{y}_1, \dots, \mathbf{y}_N] \in \mathbb{R}^{D \times N}$ be the data matrix representing the data points. The data points are drawn from a union of $n$ subspaces $\{\mathcal{S}_i\}_{i=1}^{n}$ with dimensions $\{d_i\}_{i=1}^{n}$. Without loss of generality, we assume that the columns of $\mathbf{Y}$, i.e., the data points, are normalized to unit $\ell_2$ norm. The goal of subspace clustering is to partition $\mathcal{Y}$ into $n$ groups so that the points belonging to the same subspace are assigned to the same cluster. In the sparse subspace clustering (SSC) framework [8, 9], one assumes that the data points satisfy the self-expressiveness property, formally stated below.
A collection of data points $\{\mathbf{y}_i\}_{i=1}^{N}$ satisfies the self-expressiveness property if each data point has a linear representation in terms of the other points in the collection, i.e., there exists a representation matrix $\mathbf{C} \in \mathbb{R}^{N \times N}$ such that
$$\mathbf{Y} = \mathbf{Y}\mathbf{C}, \qquad \mathrm{diag}(\mathbf{C}) = \mathbf{0}.$$
Notice that since each point in $\mathcal{S}_i$ can be written in terms of at most $d_i$ other points in $\mathcal{Y}$ lying in $\mathcal{S}_i$, SSC aims to find a sparse, subspace preserving $\mathbf{C}$, as formalized next.
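To make the self-expressiveness property concrete, the following numpy sketch (dimensions, seed, and point counts are arbitrary choices for illustration, not taken from the paper) samples unit-norm points from two low-dimensional subspaces and verifies that every point admits an exact linear representation in terms of the remaining points:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, n_pts = 20, 3, 15  # ambient dim, subspace dim, points per subspace

# Two 3-dimensional subspaces of R^20, each holding 15 unit-norm points.
blocks = []
for _ in range(2):
    basis = np.linalg.qr(rng.standard_normal((D, d)))[0]  # orthonormal basis
    pts = basis @ rng.standard_normal((d, n_pts))
    blocks.append(pts / np.linalg.norm(pts, axis=0))
Y = np.hstack(blocks)                                     # D x N data matrix

# Self-expressiveness: each y_i is a linear combination of the other points.
N = Y.shape[1]
max_residual = 0.0
for i in range(N):
    others = np.delete(np.arange(N), i)
    c, *_ = np.linalg.lstsq(Y[:, others], Y[:, i], rcond=None)
    max_residual = max(max_residual, np.linalg.norm(Y[:, others] @ c - Y[:, i]))
print(max_residual)  # ~0: every point is representable by the rest
```

Note that this generic least-squares representation is dense; the point of SSC is to find a *sparse* representation, which (as formalized next) ends up using only points from the same subspace.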
A representation matrix $\mathbf{C}$ is subspace preserving if for all $i, j$ and any subspace $\mathcal{S}_l$ it holds that $C_{ij} \neq 0$ only if $\mathbf{y}_i$ and $\mathbf{y}_j$ belong to the same subspace $\mathcal{S}_l$.
The task of finding a subspace preserving $\mathbf{C}$ leads to the optimization problem
$$\min_{\mathbf{c}_i} \|\mathbf{c}_i\|_0 \quad \text{subject to} \quad \mathbf{y}_i = \mathbf{Y}\mathbf{c}_i, \;\; C_{ii} = 0, \qquad i = 1, \dots, N,$$
where $\mathbf{c}_i$ is the $i^{\mathrm{th}}$ column of $\mathbf{C}$. Given a subspace preserving solution $\mathbf{C}$, one constructs a similarity matrix for the data points, e.g., $\mathbf{W} = |\mathbf{C}| + |\mathbf{C}|^{\top}$. The normalized graph Laplacian of the similarity matrix is then used as an input to a spectral clustering algorithm [7] which in turn produces the clustering assignments.
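The final two steps of this pipeline can be sketched as follows, assuming the common choice $\mathbf{W} = |\mathbf{C}| + |\mathbf{C}|^{\top}$ for the similarity matrix; a toy block-diagonal $\mathbf{C}$ stands in for a subspace preserving solution:

```python
import numpy as np

# Toy subspace-preserving representation matrix for N = 6 points in 2 clusters:
# nonzero coefficients connect only points from the same subspace.
C = np.zeros((6, 6))
C[np.ix_([0, 1, 2], [0, 1, 2])] = 0.5
C[np.ix_([3, 4, 5], [3, 4, 5])] = 0.5
np.fill_diagonal(C, 0.0)

W = np.abs(C) + np.abs(C).T                    # symmetric similarity matrix
deg = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(6) - D_inv_sqrt @ W @ D_inv_sqrt    # normalized graph Laplacian

# The multiplicity of the eigenvalue 0 equals the number of connected
# components of the similarity graph, i.e., the number of clusters that
# spectral clustering recovers from the Laplacian's eigenvectors.
eigvals = np.linalg.eigvalsh(L)
n_clusters = int(np.sum(eigvals < 1e-10))
print(n_clusters)  # 2
```

When $\mathbf{C}$ is subspace preserving, the similarity graph splits into one connected component per subspace, which is exactly what the spectral step exploits.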
3 Accelerated OLS for Subspace Clustering
In this section, we develop a novel self-expressiveness-based algorithm for the subspace clustering problem and analyze its performance. We propose to find an approximate solution to the problem
$$\min_{\mathbf{c}_i} \|\mathbf{c}_i\|_0 \quad \text{subject to} \quad \|\mathbf{y}_i - \mathbf{Y}\mathbf{c}_i\|_2 \le \epsilon, \;\; C_{ii} = 0, \qquad i = 1, \dots, N, \qquad (4)$$
by employing a low-complexity variant of the orthogonal least-squares (OLS) algorithm [21] to find a sparse representation for each data point and thus construct $\mathbf{C}$. Note that in (4), $\epsilon$ is a small predefined parameter that is used as the stopping criterion of the proposed algorithm.
OLS [21] is a greedy heuristic that iteratively reconstructs sparse signals by identifying one nonzero signal component at a time. The complexity of using classical OLS to find a subspace preserving $\mathbf{C}$ – although lower than that of the BP-based SSC method [8, 9] – might be prohibitive in applications involving large-scale data. To this end, we employ a fast variant of OLS referred to as accelerated OLS (AOLS) [27] that significantly improves both the running time and the accuracy of classical OLS. AOLS replaces the aforementioned single-component selection strategy by a procedure in which $L \geq 1$ indices are selected at each iteration, leading to significant improvements in both computational cost and accuracy. To enable significant gains in speed, AOLS efficiently builds a collection of orthogonal vectors $\{\mathbf{u}_j\}_{j=1}^{T}$ that represent the basis of the subspace containing the approximation of the sparse signal. ($T$ is the maximum number of iterations, which depends on the threshold parameter $\epsilon$.)
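The OLS selection rule that AOLS generalizes can be sketched as follows: each candidate column is scored by correlating the residual with the candidate's unit-norm component orthogonal to the span of the already-selected columns, and the $L$ best-scoring indices are kept per iteration. In this minimal numpy sketch the function name and dimensions are mine, and the explicit projector is for clarity only; AOLS maintains the orthogonal vectors recursively instead of recomputing a projection:

```python
import numpy as np

def ols_scores(Y, selected, r):
    """OLS criterion: score candidate column y_j by |<r, u_j>|, where u_j is
    the unit-norm component of y_j orthogonal to span(Y[:, selected]).
    The top-L scores give the L indices selected in one (A)OLS iteration."""
    A = Y[:, selected]
    P_orth = np.eye(Y.shape[0]) - A @ np.linalg.pinv(A)  # projector onto span(A)^perp
    U = P_orth @ Y
    norms = np.linalg.norm(U, axis=0)
    norms[norms < 1e-12] = np.inf          # already-spanned columns get score 0
    return np.abs(U.T @ r) / norms

rng = np.random.default_rng(3)
Y = rng.standard_normal((10, 6))
Y /= np.linalg.norm(Y, axis=0)             # unit-norm data points
r = rng.standard_normal(10)                # current residual
L = 2
picks = np.argsort(ols_scores(Y, [0], r))[-L:]  # L best new indices
print(sorted(picks.tolist()))
```

Note that the already-selected column (index 0 here) scores zero, so it is never re-selected.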
In order to use AOLS for the SSC problem, consider the task of finding a sparse representation for a data point $\mathbf{y}_i$. Let $\Lambda_i$ denote the set containing the indices of the data points with nonzero coefficients in the representation of $\mathbf{y}_i$; that is, $C_{ji} \neq 0$ for all $j \in \Lambda_i$. The proposed algorithm for sparse subspace clustering, referred to as accelerated sparse subspace clustering (ASSC), finds $\Lambda_i$ in an iterative fashion (see Algorithm 1). In particular, starting with $\Lambda_i^{(0)} = \emptyset$, in the $k^{\mathrm{th}}$ iteration we identify $L$ new data points for the representation of $\mathbf{y}_i$. Their indices correspond to the $L$ largest correlation terms between the candidate data points and the residual vector $\mathbf{r}_i^{(k-1)}$ of the preceding iteration, where $\mathbf{r}_i^{(0)} = \mathbf{y}_i$ and $\mathbf{r}_i^{(k)}$ is the difference between $\mathbf{y}_i$ and its projection onto the span of the orthogonal vectors $\{\mathbf{u}_1, \dots, \mathbf{u}_k\}$ constructed so far. Once the $L$ new indices are selected, we apply the AOLS orthogonalization update $L$ times to obtain the quantities required for subsequent iterations. This procedure is continued until $\|\mathbf{r}_i^{(k)}\|_2 \le \epsilon$ for some iteration $k$, or until the algorithm reaches the predefined maximum number of iterations $T$. Then the vector of coefficients used for representing $\mathbf{y}_i$ is computed as the least-squares solution $\mathbf{c}_{\Lambda_i} = \mathbf{Y}_{\Lambda_i}^{\dagger}\mathbf{y}_i$. Finally, having found the $\mathbf{c}_i$'s, we construct $\mathbf{C} = [\mathbf{c}_1, \dots, \mathbf{c}_N]$ and apply spectral clustering on the normalized Laplacian of the resulting similarity matrix to obtain the clustering solution.
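The per-point loop can be sketched as below. This is a simplified illustration rather than the exact AOLS recursion: it scores candidates by plain residual correlation and re-solves a small least-squares problem at each iteration, whereas AOLS uses the normalized selection criterion and fast orthogonal updates; all names and parameter values here are mine:

```python
import numpy as np

def sparse_self_representation(Y, i, L=2, T=5, eps=1e-6):
    """Greedy sparse representation of column y_i by the other columns of Y:
    each iteration adds the L columns most correlated with the residual and
    stops once the residual norm drops below eps (or after T iterations).
    Simplified sketch of ASSC's inner loop, not the exact AOLS update."""
    N = Y.shape[1]
    y = Y[:, i]
    support = []
    for _ in range(T):
        r = y if not support else y - Y[:, support] @ c_sub
        scores = np.abs(Y.T @ r)
        scores[i] = -np.inf                        # enforce C_ii = 0
        scores[support] = -np.inf                  # do not re-select indices
        support += list(np.argsort(scores)[-L:])   # add L new indices
        c_sub, *_ = np.linalg.lstsq(Y[:, support], y, rcond=None)
        if np.linalg.norm(y - Y[:, support] @ c_sub) <= eps:
            break
    c = np.zeros(N)
    c[support] = c_sub                             # least-squares coefficients
    return c

# Build C column by column for 8 unit-norm points on one 2-dim subspace.
rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.standard_normal((10, 2)))[0]
pts = basis @ rng.standard_normal((2, 8))
Y = pts / np.linalg.norm(pts, axis=0)
C = np.column_stack([sparse_self_representation(Y, i) for i in range(8)])
print(np.max(np.abs(Y - Y @ C)))                   # ~0: self-expressive fit
```

Each column of $\mathbf{C}$ here uses only $L = 2$ points, matching the intuition that a point in a $d$-dimensional subspace needs at most $d$ others.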
3.1 Performance Guarantee for ASSC
In this section, we analyze the performance of the ASSC algorithm in the scenario where the data points are noiseless and drawn from a union of independent subspaces, as defined next.
Let $\{\mathcal{S}_i\}_{i=1}^{n}$ be a collection of subspaces with dimensions $\{d_i\}_{i=1}^{n}$. Define $\mathcal{S} = \sum_{i=1}^{n} \mathcal{S}_i$. Then, $\{\mathcal{S}_i\}_{i=1}^{n}$ is called independent if and only if $\dim(\mathcal{S}) = \sum_{i=1}^{n} d_i$.
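The definition admits a quick numerical check: the dimension of the sum of subspaces is the rank of the stacked bases, and independence holds exactly when that rank equals the sum of the individual dimensions (dimensions below are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three generic 3-dimensional subspaces of R^20: with probability one the
# dimension of their sum, rank([B_1 B_2 B_3]), equals 3 + 3 + 3.
dims = [3, 3, 3]
bases = [np.linalg.qr(rng.standard_normal((20, d)))[0] for d in dims]
r_indep = np.linalg.matrix_rank(np.hstack(bases))
print(r_indep, sum(dims))   # 9 9 -> independent

# Force the third subspace inside the span of the first two: the sum now
# has dimension 6 < 9, so the collection is no longer independent.
bases[2] = np.hstack([bases[0][:, :2], bases[1][:, :1]])
r_dep = np.linalg.matrix_rank(np.hstack(bases))
print(r_dep, sum(dims))     # 6 9 -> dependent
```

Generic (e.g., uniformly random) low-dimensional subspaces in a high-dimensional ambient space are independent with probability one, which is why the random model in Section 4 is described as near-independent.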
Theorem 1 states our main theoretical result on the performance of the proposed ASSC algorithm.
Let $\mathcal{Y} = \{\mathbf{y}_i\}_{i=1}^{N}$ be a collection of noiseless data points drawn from a union of independent subspaces $\{\mathcal{S}_i\}_{i=1}^{n}$. Then, the representation matrix $\mathbf{C}$ returned by the ASSC algorithm is subspace preserving.
The proof of Theorem 1, omitted for brevity, relies on the observation that in order to select new representation points, ASSC finds data points that are highly correlated with the current residual vector. Since the subspaces are independent, if ASSC chooses a point drawn from a different subspace, its corresponding coefficient will be zero once ASSC meets a terminating criterion (e.g., the $\ell_2$-norm of the residual vector becomes smaller than $\epsilon$, or the maximum number of iterations $T$ is reached). Hence, only the points drawn from the same subspace have nonzero coefficients in the final sparse representation.
Remark: It has been shown in [8, 12, 13] that if the subspaces are independent, the SSC-BP and SSC-OMP schemes are also subspace preserving. However, as we illustrate in our simulation results, ASSC is very robust with respect to dependencies among the data points across different subspaces, while in those settings SSC-BP and SSC-OMP struggle to produce a subspace preserving matrix $\mathbf{C}$. Further theoretical analysis of this setting is left to future work.
4 Simulation Results
To evaluate the performance of the ASSC algorithm, we compare it to the BP-based [8, 9] and OMP-based [12, 13] SSC schemes, referred to as SSC-BP and SSC-OMP, respectively. For SSC-BP, two implementations based on ADMM and interior point methods are provided by the authors of [8, 9]. The interior point implementation of SSC-BP is more accurate than the ADMM implementation, which tends to produce sub-optimal solutions within a few iterations; however, the interior point implementation is very slow even for relatively small problems. Therefore, in our simulation studies we use the ADMM implementation of SSC-BP provided by the authors of [8, 9]. Our scheme is tested for two choices of the parameter $L$. We consider the following two scenarios: (1) a random model where the subspaces are with high probability near-independent; and (2) a setting where we use hybrid dictionaries to generate similar data points across different subspaces, which in turn implies the independence assumption no longer holds. In both scenarios, we randomly generate subspaces of equal dimension in a fixed ambient space. Each subspace contains the same number of sample points, which we vary; the total number of data points $N$ varies accordingly. The results are averaged over independent problem instances. For scenario (1), we generate data points by uniformly sampling from the unit sphere in each subspace. For the second scenario, after sampling a data point, we add a small random perturbation term so that data points across different subspaces become similar.
In addition to comparing the algorithms in terms of clustering accuracy and running time, we use the following metrics, defined in [8, 9], that quantify the subspace preserving property of the representation matrix $\mathbf{C}$ returned by each algorithm: the subspace preserving rate, defined as the fraction of points whose representations are subspace preserving, and the subspace preserving error, defined as the average fraction of the $\ell_1$-norm of the representation coefficients associated with points from other subspaces, i.e., $\frac{1}{N}\sum_{j=1}^{N} \sum_{i \in \mathcal{O}_j} |C_{ij}| / \|\mathbf{c}_j\|_1$, where $\mathcal{O}_j$ represents the set of indices of data points from subspaces other than the one containing $\mathbf{y}_j$.
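Both metrics can be computed directly from $\mathbf{C}$ and the ground-truth subspace labels; a sketch under the definitions above (the helper name and numerical tolerance are mine):

```python
import numpy as np

def subspace_preserving_metrics(C, labels):
    """Subspace preserving rate: fraction of columns of C whose nonzero
    entries all correspond to points with the column's own label.
    Subspace preserving error: average fraction of each column's l1 mass
    assigned to points from other subspaces."""
    labels = np.asarray(labels)
    N = C.shape[1]
    rate_hits, err_sum = 0, 0.0
    for j in range(N):
        c = np.abs(C[:, j])
        other = labels != labels[j]        # points from other subspaces
        if not np.any(c[other] > 1e-10):
            rate_hits += 1                 # column j is subspace preserving
        err_sum += c[other].sum() / max(c.sum(), 1e-12)
    return rate_hits / N, err_sum / N

# Toy check: a block-diagonal C is perfectly subspace preserving.
C = np.zeros((4, 4))
C[0, 1] = C[1, 0] = C[2, 3] = C[3, 2] = 1.0
rate, err = subspace_preserving_metrics(C, [0, 0, 1, 1])
print(rate, err)  # 1.0 0.0
```

A rate of 1 with zero error corresponds to the ideal case guaranteed by Theorem 1 for independent subspaces.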
The results for scenarios (1) and (2) are illustrated in Fig. 1 and Fig. 2, respectively. As seen in Fig. 1, ASSC is nearly as fast as SSC-OMP and orders of magnitude faster than SSC-BP, while achieving a better subspace preserving rate, subspace preserving error, and clustering accuracy than the competing schemes. In the second scenario, the performance of SSC-OMP deteriorates severely, while ASSC still outperforms both SSC-BP and SSC-OMP in terms of accuracy. Further, similar to the first scenario, the running time of ASSC is close to that of SSC-OMP while both methods are much faster than SSC-BP. Overall, as Fig. 1 and Fig. 2 illustrate, the ASSC algorithm is superior to the other schemes and is essentially as fast as the SSC-OMP method.
5 Conclusion
In this paper, we proposed a novel algorithm for clustering high-dimensional data lying on a union of subspaces. The proposed algorithm, referred to as accelerated sparse subspace clustering (ASSC), employs a computationally efficient variant of the orthogonal least-squares algorithm to construct a similarity matrix under the assumption that each data point can be written as a sparse linear combination of the other data points in the union of subspaces. ASSC then performs spectral clustering on the similarity matrix to find the clustering solution. We analyzed the performance of the proposed scheme and provided a theorem stating that if the subspaces are independent, the similarity matrix generated by ASSC is subspace preserving. In simulations, we demonstrated that the proposed algorithm is orders of magnitude faster than the BP-based SSC scheme [8, 9] while delivering essentially the same or better clustering solution. The results also show that ASSC outperforms the state-of-the-art OMP-based method [12, 13], especially in scenarios where the data points across different subspaces are similar.
As part of the future work, it would be of interest to extend our results and analyze performance of ASSC in the general setting where the subspaces are arbitrary and not necessarily independent. Moreover, it would be beneficial to develop distributed implementations for further acceleration of ASSC.
References
-  A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, “Unsupervised segmentation of natural images via lossy data compression,” Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.
-  R. Vidal, R. Tron, and R. Hartley, “Multiframe motion segmentation with missing data using PowerFactorization and GPCA,” International Journal of Computer Vision, vol. 79, no. 1, pp. 85–105, 2008.
-  J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. I–I, IEEE, 2003.
-  W. Hong, J. Wright, K. Huang, and Y. Ma, “Multiscale hybrid linear models for lossy image representation,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3655–3671, 2006.
-  R. Vidal, S. Soatto, Y. Ma, and S. Sastry, “An algebraic geometric approach to the identification of a class of linear hybrid systems,” in Decision and Control, 2003. Proceedings. 42nd IEEE Conference on, vol. 1, pp. 167–172, IEEE, 2003.
-  R. Vidal, “Subspace clustering,” IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 52–68, 2011.
-  A. Y. Ng, M. I. Jordan, Y. Weiss, et al., “On spectral clustering: Analysis and an algorithm,” in Proceedings of the Advances in Neural Information Processing Systems (NIPS), vol. 14, pp. 849–856, 2001.
-  E. Elhamifar and R. Vidal, “Sparse subspace clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2790–2797, IEEE, 2009.
-  E. Elhamifar and R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 11, pp. 2765–2781, 2013.
-  S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale $\ell_1$-regularized least squares,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 606–617, 2007.
-  S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, Jan. 2011.
-  E. L. Dyer, A. C. Sankaranarayanan, and R. G. Baraniuk, “Greedy feature selection for subspace clustering,” The Journal of Machine Learning Research, vol. 14, no. 1, pp. 2487–2517, 2013.
-  C. You, D. Robinson, and R. Vidal, “Scalable sparse subspace clustering by orthogonal matching pursuit,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3918–3927, 2016.
-  C.-Y. Lu, H. Min, Z.-Q. Zhao, L. Zhu, D.-S. Huang, and S. Yan, “Robust and efficient subspace segmentation via least squares regression,” Computer Vision–ECCV 2012, pp. 347–360, 2012.
-  G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
-  P. Favaro, R. Vidal, and A. Ravichandran, “A closed form solution to robust subspace estimation and clustering,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1801–1807, IEEE, 2011.
-  R. Vidal and P. Favaro, “Low rank subspace clustering (lrsc),” Pattern Recognition Letters, vol. 43, pp. 47–61, 2014.
-  R. Heckel and H. Bölcskei, “Robust subspace clustering via thresholding,” IEEE Transactions on Information Theory, vol. 61, no. 11, pp. 6320–6342, 2015.
-  M. Soltanolkotabi and E. J. Candes, “A geometric analysis of subspace clustering with outliers,” The Annals of Statistics, pp. 2195–2238, Aug. 2012.
-  M. Soltanolkotabi, E. Elhamifar, E. J. Candes, et al., “Robust subspace clustering,” The Annals of Statistics, vol. 42, no. 2, pp. 669–699, Apr. 2014.
-  S. Chen, S. A. Billings, and W. Luo, “Orthogonal least squares methods and their application to non-linear system identification,” International Journal of Control, vol. 50, no. 5, pp. 1873–1896, Nov. 1989.
-  A. Hashemi and H. Vikalo, “Recovery of sparse signals via branch and bound least-squares,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4760–4764, IEEE, 2017.
-  L. Rebollo-Neira and D. Lowe, “Optimized orthogonal matching pursuit approach,” IEEE Signal Processing Letters, vol. 9, no. 4, pp. 137–140, Apr. 2002.
-  A. Hashemi and H. Vikalo, “Sparse linear regression via generalized orthogonal least-squares,” in Proceedings of IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 1305–1309, IEEE, Dec. 2016.
-  C. Soussen, R. Gribonval, J. Idier, and C. Herzet, “Joint k-step analysis of orthogonal matching pursuit and orthogonal least squares,” IEEE Transactions on Information Theory, vol. 59, no. 5, pp. 3158–3174, May 2013.
-  C. Herzet, A. Drémeau, and C. Soussen, “Relaxed recovery conditions for omp/ols by exploiting both coherence and decay,” IEEE Transactions on Information Theory, vol. 62, no. 1, pp. 459–470, 2016.
-  A. Hashemi and H. Vikalo, “Sampling requirements and accelerated schemes for sparse linear regression with orthogonal least-squares,” arXiv preprint, 2016.