Accelerated Sparse Subspace Clustering

10/31/2017 ∙ by Abolfazl Hashemi, et al.

State-of-the-art algorithms for sparse subspace clustering perform spectral clustering on a similarity matrix typically obtained by representing each data point as a sparse combination of other points using either basis pursuit (BP) or orthogonal matching pursuit (OMP). BP-based methods are often prohibitive in practice while the performance of OMP-based schemes is unsatisfactory, especially in settings where data points are highly similar. In this paper, we propose a novel algorithm that exploits an accelerated variant of orthogonal least-squares to efficiently find the underlying subspaces. We show that under certain conditions the proposed algorithm returns a subspace-preserving solution. Simulation results illustrate that the proposed method compares favorably with BP-based methods in terms of running time while being significantly more accurate than OMP-based schemes.


1 Introduction

Massive amounts of data collected by recent information systems give rise to new challenges in the fields of signal processing, machine learning, and data analysis. One such challenge is to develop fast and accurate algorithms that find low-dimensional structures in large-scale high-dimensional data sets. The task of extracting such low-dimensional structures is encountered in many practical applications including motion segmentation and face clustering in computer vision [1, 2], image representation and compression in image clustering [3, 4], and hybrid system identification in systems theory [5]. In these settings, the data can be thought of as a collection of points lying on a union of low-dimensional subspaces. The goal of subspace clustering is to organize data points into several clusters so that each cluster contains only the points from the same subspace.

Subspace clustering has drawn significant attention over the past decade [6]. Among various approaches to subspace clustering, methods that rely on spectral clustering [7] to analyze a similarity matrix representing the relations among data points have received much attention due to their simplicity, theoretical rigour, and superior performance. These methods assume that the data is self-expressive [8], i.e., each data point can be represented by a linear combination of the other points in the union of subspaces. This motivates the search for a so-called subspace-preserving similarity matrix which establishes stronger connections among the points originating from the same subspace. To form such a similarity matrix, the sparse subspace clustering (SSC) method in [8, 9] employs a sparse reconstruction algorithm referred to as basis pursuit (BP) that minimizes an $\ell_1$-norm objective by means of convex optimization approaches such as interior point methods [10] or the alternating direction method of multipliers (ADMM) [11]. In [12, 13], orthogonal matching pursuit (OMP) is used to greedily build the similarity matrix. Low-rank subspace clustering approaches in [14, 15, 16, 17] rely on convex optimization techniques with $\ell_1$-norm and nuclear norm regularizations and find the singular value decomposition (SVD) of the data so as to build the similarity matrix. Finally, [18] presents an algorithm that constructs the similarity matrix by thresholding the correlations among the data points. The performance of self-expressiveness-based subspace clustering schemes has been analyzed in various settings. It was shown in [8, 9] that when the subspaces are disjoint (independent), the BP-based method is subspace preserving. A geometric point of view is taken in [19, 20] to further study the performance of the BP-based SSC algorithm in the setting of intersecting subspaces and in the presence of outliers. These results were extended to the OMP-based SSC in [12, 13].

Sparse subspace clustering of large-scale data is computationally challenging. The computational complexity of the state-of-the-art BP-based method in [8] and of the low-rank representation methods [14, 15, 16, 17] is often prohibitive in practical applications. On the other hand, current scalable SSC algorithms, e.g., [12, 13], may produce poor clustering solutions, especially in scenarios where the subspaces are not well separated. In this paper, we address these challenges by proposing a novel self-expressiveness-based algorithm for subspace clustering that exploits a fast variant of orthogonal least-squares (OLS) to efficiently form a similarity matrix by finding a sparse representation for each data point. We analyze the performance of the proposed scheme and show that in scenarios where the subspaces are independent, the proposed algorithm always finds a solution that is subspace-preserving. Simulation studies illustrate that our proposed SSC algorithm significantly outperforms the state-of-the-art method [8] in terms of runtime while providing essentially the same or better clustering accuracy. The results further illustrate that, unlike the methods in [8, 12, 13], the proposed scheme finds a subspace-preserving solution even when the subspaces are dependent.

The rest of the paper is organized as follows. Section 2 formally states the subspace clustering problem and reviews some relevant concepts. In Section 3, we introduce the accelerated sparse subspace clustering algorithm and analyze its performance. Section 4 presents the simulation results, while concluding remarks are stated in Section 5. (The MATLAB implementation of the proposed algorithm is available at https://github.com/realabolfazl/ASSC.)

2 Problem Formulation

First, we briefly summarize notation used in the paper and then formally introduce the SSC problem.

Bold capital letters denote matrices while bold lowercase letters represent vectors. For a matrix $\mathbf{A}$, $a_{ij}$ denotes the $(i,j)$ entry of $\mathbf{A}$, and $\mathbf{a}_i$ is the $i^{\text{th}}$ column of $\mathbf{A}$. Additionally, $\mathbf{A}_S$ is the submatrix of $\mathbf{A}$ that contains the columns of $\mathbf{A}$ indexed by the set $S$. $\mathcal{R}(\mathbf{A})$ denotes the subspace spanned by the columns of $\mathbf{A}$. $\mathbf{P}_{\mathbf{A}}^{\perp} = \mathbf{I} - \mathbf{A}\mathbf{A}^{\dagger}$ is the projection operator onto the orthogonal complement of $\mathcal{R}(\mathbf{A})$, where $\mathbf{A}^{\dagger}$ denotes the Moore-Penrose pseudo-inverse of $\mathbf{A}$ and $\mathbf{I}$ is the identity matrix. Further, let $[n] = \{1, \dots, n\}$, let $\mathbf{1}$ be the vector of all ones, and let $\mathcal{U}(a, b)$ denote the uniform distribution on $[a, b]$.

The SSC problem is detailed next. Let $\mathcal{X} = \{\mathbf{x}_i\}_{i=1}^{N}$ be a collection of data points in $\mathbb{R}^D$ and let $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_N] \in \mathbb{R}^{D \times N}$ be the data matrix representing the data points. The data points are drawn from a union of $n$ subspaces $\{\mathcal{S}_l\}_{l=1}^{n}$ with dimensions $\{d_l\}_{l=1}^{n}$. Without loss of generality, we assume that the columns of $\mathbf{X}$, i.e., the data points, are normalized vectors with unit norm. The goal of subspace clustering is to partition $\mathcal{X}$ into $n$ groups so that the points that belong to the same subspace are assigned to the same cluster. In the sparse subspace clustering (SSC) framework [8], one assumes that the data points satisfy the self-expressiveness property formally stated below.

Definition 1.

A collection of data points $\mathcal{X}$ satisfies the self-expressiveness property if each data point has a linear representation in terms of the other points in the collection, i.e., there exists a representation matrix $\mathbf{C} \in \mathbb{R}^{N \times N}$ such that

$$\mathbf{X} = \mathbf{X}\mathbf{C}, \qquad \operatorname{diag}(\mathbf{C}) = \mathbf{0}. \qquad\qquad (1)$$

Notice that since each point lying in a $d_l$-dimensional subspace $\mathcal{S}_l$ can be written in terms of at most $d_l$ other points of $\mathcal{X}$ from the same subspace, SSC aims to find a sparse, subspace-preserving $\mathbf{C}$, as formalized next.

Definition 2.

A representation matrix $\mathbf{C}$ is subspace preserving if for all $i, j \in [N]$ and every subspace $\mathcal{S}_l$ it holds that

$$c_{ji} \neq 0 \;\Longrightarrow\; \mathbf{x}_i \in \mathcal{S}_l \text{ and } \mathbf{x}_j \in \mathcal{S}_l. \qquad\qquad (2)$$

The task of finding a subspace-preserving $\mathbf{C}$ leads to the optimization problem [8]

$$\min_{\mathbf{c}_i} \|\mathbf{c}_i\|_1 \quad \text{subject to} \quad \mathbf{x}_i = \mathbf{X}\mathbf{c}_i, \; c_{ii} = 0, \qquad\qquad (3)$$

where $\mathbf{c}_i$ is the $i^{\text{th}}$ column of $\mathbf{C}$. Given a subspace-preserving solution $\mathbf{C}$, one constructs a similarity matrix $\mathbf{W}$ for the data points. The normalized graph Laplacian of the similarity matrix is then used as input to a spectral clustering algorithm [7] which in turn produces the clustering assignments.
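To make this pipeline concrete, the following Python sketch (ours, not from the paper) goes from a representation matrix $\mathbf{C}$ to cluster labels. The symmetrization $\mathbf{W} = |\mathbf{C}| + |\mathbf{C}|^{\top}$ is the usual SSC choice for the similarity matrix, and scikit-learn's SpectralClustering stands in for the spectral clustering step of [7]; both are assumptions of this sketch rather than details prescribed by the text.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def clusters_from_representation(C, n_clusters):
    """Turn a self-expressive representation matrix C into cluster labels.

    A minimal sketch: W = |C| + |C|^T is the usual SSC similarity matrix;
    SpectralClustering with a precomputed affinity internally relies on the
    (normalized) graph Laplacian of W, as in [7].
    """
    W = np.abs(C) + np.abs(C).T  # symmetric, nonnegative similarity matrix
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            assign_labels="kmeans", random_state=0)
    return sc.fit_predict(W)
```

Any of the sparse solvers discussed above (BP, OMP, or the AOLS-based scheme of Section 3) can be used to produce the matrix C passed to this routine.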

3 Accelerated OLS for Subspace Clustering

In this section, we develop a novel self-expressiveness-based algorithm for the subspace clustering problem and analyze its performance. We propose to find an approximate solution to the problem

$$\min_{\mathbf{c}_i} \|\mathbf{c}_i\|_0 \quad \text{subject to} \quad \|\mathbf{x}_i - \mathbf{X}\mathbf{c}_i\|_2 \leq \epsilon, \; c_{ii} = 0, \qquad\qquad (4)$$

by employing a low-complexity variant of the orthogonal least-squares (OLS) algorithm [21] to find a sparse representation of each data point and thus construct $\mathbf{C}$. Note that in (4), $\epsilon$ is a small predefined parameter that is used as the stopping criterion of the proposed algorithm.

The OLS algorithm, which has drawn much attention in recent years [22, 21, 23, 24, 25, 26], is a greedy heuristic that iteratively reconstructs sparse signals by identifying one nonzero signal component at a time. The complexity of using classical OLS [21] to find a subspace-preserving $\mathbf{C}$ – although lower than that of the BP-based SSC method [8] – might be prohibitive in applications involving large-scale data. To this end, we propose to rely on a fast variant of OLS referred to as accelerated OLS (AOLS) [27] that significantly improves both the running time and accuracy of classical OLS. AOLS replaces the single-component selection strategy of OLS by a procedure in which $L$ indices are selected in each iteration, leading to significant improvements in both computational cost and accuracy. To enable significant gains in speed, AOLS efficiently builds a collection of orthogonal vectors $\{\mathbf{u}_l\}$ that represent the basis of the subspace containing the approximation of the sparse signal. (Here, $T$ denotes the maximum number of iterations, which depends on the threshold parameter $\epsilon$.)
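For context, the single-component selection strategy of classical OLS can be restated as follows (a textbook form of the rule, see, e.g., [21]; notation as in Section 2). AOLS amortizes the cost of the projections appearing in this rule while selecting $L$ indices at a time.

```latex
% Classical OLS selection (one index per iteration); S is the current support,
% r the current residual, and P^{\perp} the projector defined in Section 2.
\[
  j^{*} \;=\; \arg\max_{j \notin S}\;
      \frac{\langle \mathbf{x}_j, \mathbf{r} \rangle^{2}}
           {\|\mathbf{P}_{\mathbf{X}_S}^{\perp}\,\mathbf{x}_j\|_2^{2}},
\]
% i.e., the column whose inclusion most reduces the norm of the residual.
```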

In order to use AOLS for the SSC problem, consider the task of finding a sparse representation for a data point $\mathbf{x}_i$. Let $\Lambda_i$ be the set containing the indices of data points with nonzero coefficients in the representation of $\mathbf{x}_i$; that is, $c_{ji} \neq 0$ for all $j \in \Lambda_i$. The proposed algorithm for sparse subspace clustering, referred to as accelerated sparse subspace clustering (ASSC), finds $\Lambda_i$ in an iterative fashion (see Algorithm 1). In particular, starting with $\Lambda_i = \emptyset$, in the $k^{\text{th}}$ iteration we identify $L$ data points to include in the representation of $\mathbf{x}_i$; their indices correspond to the $L$ largest terms of the AOLS selection criterion, which measures the correlation of each candidate point with the residual vector $\mathbf{r}_k$. Here $\mathbf{r}_k = \mathbf{x}_i - \mathbf{p}_k$ denotes the residual in the $k^{\text{th}}$ iteration (with $\mathbf{r}_0 = \mathbf{x}_i$), and

$$\mathbf{p}_k = \sum_{l} \frac{\langle \mathbf{x}_i, \mathbf{u}_l \rangle}{\|\mathbf{u}_l\|_2^2}\, \mathbf{u}_l \qquad\qquad (5)$$

is the projection of $\mathbf{x}_i$ onto the span of the orthogonal vectors $\{\mathbf{u}_l\}$ constructed so far. Once the $L$ indices are selected, we apply the AOLS orthogonalization and residual-update assignment $L$ times to obtain the new orthogonal vectors $\{\mathbf{u}_l\}$ and the residual $\mathbf{r}_k$ required in subsequent iterations. This procedure continues until $\|\mathbf{r}_k\|_2 \leq \epsilon$ for some iteration $k$, or until the algorithm reaches the predefined maximum number of iterations $T$. Then the vector of coefficients used for representing $\mathbf{x}_i$ is computed as the least-squares solution $\mathbf{c}_{i,\Lambda_i} = \mathbf{X}_{\Lambda_i}^{\dagger} \mathbf{x}_i$, with all other entries of $\mathbf{c}_i$ set to zero. Finally, having found the $\mathbf{c}_i$'s, we construct the similarity matrix $\mathbf{W} = |\mathbf{C}| + |\mathbf{C}|^{\top}$ and apply spectral clustering on its normalized Laplacian to obtain the clustering solution.

1:  Input: $\mathbf{X}$, $L$, $\epsilon$, $T$
2:  Output: clustering assignment vector
3:  for $i = 1, \dots, N$ do
4:     Initialize $k = 0$, $\Lambda_i = \emptyset$, $\mathbf{r}_0 = \mathbf{x}_i$, and the selection terms for all $j \neq i$
5:     while $\|\mathbf{r}_k\|_2 > \epsilon$ and $k < T$ do
6:        Select the indices $\{j_1, \dots, j_L\}$ corresponding to the $L$ largest selection terms
7:        $\Lambda_i \leftarrow \Lambda_i \cup \{j_1, \dots, j_L\}$
8:        $k \leftarrow k + 1$
9:        Perform the AOLS orthogonalization and residual update $L$ times to update $\{\mathbf{u}_l\}$ and $\mathbf{r}_k$
10:        Update the selection terms for all remaining candidate points
11:     end while
12:     $\mathbf{c}_{i,\Lambda_i} \leftarrow \mathbf{X}_{\Lambda_i}^{\dagger}\mathbf{x}_i$, with all other entries of $\mathbf{c}_i$ set to zero
13:  end for
14:  $\mathbf{C} \leftarrow [\mathbf{c}_1, \dots, \mathbf{c}_N]$, $\mathbf{W} \leftarrow |\mathbf{C}| + |\mathbf{C}|^{\top}$
15:  Apply spectral clustering on the normalized Laplacian of $\mathbf{W}$ to obtain the clustering assignment
Algorithm 1 Accelerated Sparse Subspace Clustering
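Since several symbols in the original listing did not survive extraction, the following Python sketch gives one concrete reading of the per-point loop. It is a minimal sketch, not the authors' implementation: the correlation-based selection rule and the Gram–Schmidt-style orthogonalization and residual deflation stand in for the exact AOLS recursion, and the function name `assc_representation` and the default cap on `T` are our own choices. The authors' MATLAB code at the GitHub link in Section 1 is the authoritative reference.

```python
import numpy as np

def assc_representation(X, L=2, eps=1e-6, T=None):
    """One reading of the ASSC representation step (Algorithm 1, lines 3-13).

    X   : D x N data matrix with unit-norm columns.
    L   : number of indices selected per iteration.
    eps : residual threshold used as the stopping criterion.
    T   : maximum number of iterations (the default cap is our assumption).

    Returns an N x N matrix C whose i-th column is a sparse representation of
    x_i in terms of the other columns of X. The selection and update steps
    below stand in for the exact AOLS recursion.
    """
    D, N = X.shape
    if T is None:
        T = max(1, D // L)
    C = np.zeros((N, N))
    for i in range(N):
        x = X[:, i]
        r = x.copy()                      # residual, r_0 = x_i
        support, U = [], []               # selected indices, orthogonal vectors
        available = np.ones(N, dtype=bool)
        available[i] = False              # a point never represents itself
        k = 0
        while np.linalg.norm(r) > eps and k < T:
            scores = np.abs(X.T @ r)      # correlation with current residual
            scores[~available] = -np.inf
            picks = [j for j in np.argsort(scores)[-L:] if np.isfinite(scores[j])]
            for j in picks:
                available[j] = False
                support.append(j)
                u = X[:, j].copy()
                for u_prev in U:          # orthogonalize against earlier u's
                    u -= (u_prev @ u) / (u_prev @ u_prev) * u_prev
                if np.linalg.norm(u) > 1e-12:
                    r = r - (r @ u) / (u @ u) * u   # deflate the residual
                    U.append(u)
            k += 1
        if support:                       # least-squares coefficients (line 12)
            coef, *_ = np.linalg.lstsq(X[:, support], x, rcond=None)
            C[support, i] = coef
    return C
```

Combined with the sketch in Section 2, `clusters_from_representation(assc_representation(X), n_clusters)` completes the pipeline.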

3.1 Performance Guarantee for ASSC

In this section, we analyze the performance of the ASSC algorithm in the scenario where the data points are noiseless and drawn from a union of independent subspaces, as defined next.

Definition 3.

Let $\{\mathcal{S}_l\}_{l=1}^{n}$ be a collection of subspaces with dimensions $\{d_l\}_{l=1}^{n}$. Define $\mathcal{S} = \sum_{l=1}^{n} \mathcal{S}_l$. Then, $\{\mathcal{S}_l\}_{l=1}^{n}$ is called independent if and only if $\dim(\mathcal{S}) = \sum_{l=1}^{n} d_l$.
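Definition 3 can be verified numerically: stack bases of the subspaces and check whether the rank of the stacked matrix equals the sum of the subspace dimensions. The small example below is hypothetical, with arbitrary dimensions chosen only for illustration.

```python
import numpy as np

def are_independent(bases, tol=1e-10):
    """bases: list of D x d_i matrices whose columns are bases of the subspaces.
    By Definition 3, the collection is independent iff the dimension of the sum
    of the subspaces equals sum_i d_i, i.e., the stacked basis matrix has full
    column rank."""
    stacked = np.hstack(bases)
    return np.linalg.matrix_rank(stacked, tol=tol) == stacked.shape[1]

# Two random 2-dimensional subspaces of R^5 are independent with probability
# one; a subspace paired with itself is not.
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((5, 2)), rng.standard_normal((5, 2))
print(are_independent([B1, B2]))   # True (generically)
print(are_independent([B1, B1]))   # False
```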

Theorem 1 states our main theoretical results about the performance of the proposed ASSC algorithm.

Theorem 1.

Let $\mathcal{X} = \{\mathbf{x}_i\}_{i=1}^{N}$ be a collection of noiseless data points drawn from a union of independent subspaces $\{\mathcal{S}_l\}_{l=1}^{n}$. Then, the representation matrix $\mathbf{C}$ returned by the ASSC algorithm is subspace preserving.

The proof of Theorem 1, omitted for brevity, relies on the observation that in order to select new representation points, ASSC finds data points that are highly correlated with the current residual vector. Since the subspaces are independent, if ASSC chooses a point drawn from a different subspace, its corresponding coefficient will be zero once ASSC meets a terminating criterion (e.g., the $\ell_2$-norm of the residual vector falls below $\epsilon$, or the number of iterations reaches $T$). Hence, only the points that are drawn from the same subspace will have nonzero coefficients in the final sparse representation.

Remark: It has been shown in [8, 12, 13] that if subspaces are independent, SSC-BP and SSC-OMP schemes are also subspace preserving. However, as we illustrate in our simulation results, ASSC is very robust with respect to dependencies among the data points across different subspaces, while in those settings SSC-BP and SSC-OMP struggle to produce a subspace-preserving matrix $\mathbf{C}$. Further theoretical analysis of this setting is left to future work.

4 Simulation Results

Fig. 1: Performance comparison of ASSC, SSC-OMP [12, 13], and SSC-BP [8, 9] on synthetic data with no perturbation: (a) subspace preserving rate, (b) subspace preserving error, (c) clustering accuracy, (d) running time (sec). The points are drawn from random subspaces of equal dimension in a fixed ambient dimension; each subspace contains the same number of points, and the overall number of points is varied.
Fig. 2: Performance comparison of ASSC, SSC-OMP [12, 13], and SSC-BP [8, 9] on synthetic data with perturbation terms added (scenario (2)): (a) subspace preserving rate, (b) subspace preserving error, (c) clustering accuracy, (d) running time (sec). The points are drawn from random subspaces of equal dimension in a fixed ambient dimension; each subspace contains the same number of points, and the overall number of points is varied.

To evaluate the performance of the ASSC algorithm, we compare it to that of the BP-based [8, 9] and OMP-based [12, 13] SSC schemes, referred to as SSC-BP and SSC-OMP, respectively. For SSC-BP, two implementations, based on ADMM and on interior point methods, are provided by the authors of [8, 9]. The interior point implementation of SSC-BP is more accurate than the ADMM implementation, which tends to produce suboptimal solutions within a few iterations; however, the interior point implementation is very slow even for relatively small problems. Therefore, in our simulation studies we use the ADMM implementation of SSC-BP provided by the authors of [8, 9]. Our scheme is tested for two values of the parameter $L$. We consider the following two scenarios: (1) a random model where the subspaces are, with high probability, near-independent; and (2) a setting where we use hybrid dictionaries [25] to generate similar data points across different subspaces, so that the independence assumption no longer holds. In both scenarios, we randomly generate $n$ subspaces, each of the same dimension, in an ambient space of fixed dimension. Each subspace contains the same number of sample points, and the total number of data points $N$ is varied. The results are averaged over independent instances. For scenario (1), we generate data points by uniformly sampling from the unit sphere. For the second scenario, after sampling a data point we add a small random perturbation term so that data points across different subspaces become similar.
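The sketch below shows one way to generate such synthetic data in Python. The number of subspaces, their dimension, the ambient dimension, and the perturbation model are placeholders (the paper's exact values did not survive extraction), so all defaults are illustrative only.

```python
import numpy as np

def sample_union_of_subspaces(n=3, d=2, D=10, points_per_subspace=50,
                              sigma=0.0, rng=None):
    """Generate unit-norm points from n random d-dimensional subspaces of R^D.

    sigma > 0 adds a small Gaussian perturbation before re-normalization,
    loosely mimicking scenario (2). Returns the D x N data matrix and the
    ground-truth labels.
    """
    rng = rng or np.random.default_rng(0)
    blocks, labels = [], []
    for s in range(n):
        basis, _ = np.linalg.qr(rng.standard_normal((D, d)))   # orthonormal basis
        coeffs = rng.standard_normal((d, points_per_subspace))
        pts = basis @ coeffs                                    # points on the subspace
        if sigma > 0:
            pts = pts + sigma * rng.standard_normal(pts.shape)
        pts = pts / np.linalg.norm(pts, axis=0, keepdims=True)  # unit-norm columns
        blocks.append(pts)
        labels += [s] * points_per_subspace
    return np.hstack(blocks), np.array(labels)
```

Normalizing Gaussian coefficients in this way yields points distributed uniformly on the unit sphere of each subspace, which matches the sampling described for scenario (1).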

In addition to comparing the algorithms in terms of their clustering accuracy and running time, we use the following metrics, defined in [8, 9], to quantify the subspace preserving property of the representation matrix $\mathbf{C}$ returned by each algorithm: (i) the subspace preserving rate, defined as the fraction of points whose representations are subspace-preserving; and (ii) the subspace preserving error, defined as the average fraction of the $\ell_1$ norm of the representation coefficients that is associated with points from other subspaces.
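Both metrics can be computed directly from $\mathbf{C}$ and the ground-truth labels. The sketch below follows the verbal definitions above; measuring the error as the average fraction of each column's $\ell_1$ norm carried by coefficients of points from other subspaces is our reading of the definition in [8, 9].

```python
import numpy as np

def subspace_preserving_metrics(C, labels, tol=1e-10):
    """Subspace preserving rate and error of a representation matrix C.

    rate : fraction of points whose representation uses only points from the
           same subspace (up to numerical tolerance).
    error: average fraction of each column's l1 norm carried by coefficients
           of points from other subspaces (our reading of [8, 9]).
    """
    labels = np.asarray(labels)
    N = C.shape[1]
    preserved, errors = 0, []
    for j in range(N):
        c = np.abs(C[:, j])
        other = labels != labels[j]        # points from other subspaces
        l1 = c.sum()
        errors.append(c[other].sum() / l1 if l1 > 0 else 0.0)
        if not np.any(c[other] > tol):     # no leakage across subspaces
            preserved += 1
    return preserved / N, float(np.mean(errors))
```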

The results for scenarios (1) and (2) are illustrated in Fig. 1 and Fig. 2, respectively. As we see in Fig. 1, ASSC is nearly as fast as SSC-OMP and orders of magnitude faster than SSC-BP, while achieving a better subspace preserving rate, subspace preserving error, and clustering accuracy than the competing schemes. In the second scenario, the performance of SSC-OMP deteriorates severely while ASSC still outperforms both SSC-BP and SSC-OMP in terms of accuracy. Further, similar to the first scenario, the running time of ASSC is comparable to that of SSC-OMP while both methods are much faster than SSC-BP. Overall, as Fig. 1 and Fig. 2 illustrate, the ASSC algorithm, especially for the larger of the tested values of $L$, is superior to the other schemes and is essentially as fast as the SSC-OMP method.

5 Conclusion

In this paper, we proposed a novel algorithm for clustering high-dimensional data lying on a union of subspaces. The proposed algorithm, referred to as accelerated sparse subspace clustering (ASSC), employs a computationally efficient variant of the orthogonal least-squares algorithm to construct a similarity matrix under the assumption that each data point can be written as a sparse linear combination of other data points in the subspaces. ASSC then performs spectral clustering on the similarity matrix to find the clustering solution. We analyzed the performance of the proposed scheme and provided a theorem stating that if the subspaces are independent, the similarity matrix generated by ASSC is subspace-preserving. In simulations, we demonstrated that the proposed algorithm is orders of magnitude faster than the BP-based SSC scheme [8, 9] and essentially delivers the same or better clustering solution. The results also show that ASSC outperforms the state-of-the-art OMP-based method [12, 13], especially in scenarios where the data points across different subspaces are similar.

As part of future work, it would be of interest to extend our results and analyze the performance of ASSC in the general setting where the subspaces are arbitrary and not necessarily independent. Moreover, it would be beneficial to develop distributed implementations to further accelerate ASSC.

References

  • [1] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, “Unsupervised segmentation of natural images via lossy data compression,” Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.
  • [2] R. Vidal, R. Tron, and R. Hartley, “Multiframe motion segmentation with missing data using PowerFactorization and GPCA,” International Journal of Computer Vision, vol. 79, no. 1, pp. 85–105, 2008.
  • [3] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. I–I, IEEE, 2003.
  • [4] W. Hong, J. Wright, K. Huang, and Y. Ma, “Multiscale hybrid linear models for lossy image representation,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3655–3671, 2006.
  • [5] R. Vidal, S. Soatto, Y. Ma, and S. Sastry, “An algebraic geometric approach to the identification of a class of linear hybrid systems,” in Proceedings of the 42nd IEEE Conference on Decision and Control (CDC), vol. 1, pp. 167–172, IEEE, 2003.
  • [6] R. Vidal, “Subspace clustering,” IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 52–68, 2011.
  • [7] A. Y. Ng, M. I. Jordan, and Y. Weiss, “On spectral clustering: Analysis and an algorithm,” in Proceedings of Advances in Neural Information Processing Systems (NIPS), vol. 14, pp. 849–856, 2001.
  • [8] E. Elhamifar and R. Vidal, “Sparse subspace clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2790–2797, IEEE, 2009.
  • [9] E. Elhamifar and R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2765–2781, 2013.
  • [10] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale $\ell_1$-regularized least squares,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 606–617, 2007.
  • [11] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, Jan. 2011.
  • [12] E. L. Dyer, A. C. Sankaranarayanan, and R. G. Baraniuk, “Greedy feature selection for subspace clustering,” The Journal of Machine Learning Research, vol. 14, no. 1, pp. 2487–2517, 2013.
  • [13] C. You, D. Robinson, and R. Vidal, “Scalable sparse subspace clustering by orthogonal matching pursuit,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3918–3927, 2016.
  • [14] C.-Y. Lu, H. Min, Z.-Q. Zhao, L. Zhu, D.-S. Huang, and S. Yan, “Robust and efficient subspace segmentation via least squares regression,” Computer Vision–ECCV 2012, pp. 347–360, 2012.
  • [15] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
  • [16] P. Favaro, R. Vidal, and A. Ravichandran, “A closed form solution to robust subspace estimation and clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1801–1807, IEEE, 2011.
  • [17] R. Vidal and P. Favaro, “Low rank subspace clustering (lrsc),” Pattern Recognition Letters, vol. 43, pp. 47–61, 2014.
  • [18] R. Heckel and H. Bölcskei, “Robust subspace clustering via thresholding,” IEEE Transactions on Information Theory, vol. 61, no. 11, pp. 6320–6342, 2015.
  • [19] M. Soltanolkotabi and E. J. Candes, “A geometric analysis of subspace clustering with outliers,” The Annals of Statistics, pp. 2195–2238, Aug. 2012.
  • [20] M. Soltanolkotabi, E. Elhamifar, E. J. Candes, et al., “Robust subspace clustering,” The Annals of Statistics, vol. 42, no. 2, pp. 669–699, Apr. 2014.
  • [21] S. Chen, S. A. Billings, and W. Luo, “Orthogonal least squares methods and their application to non-linear system identification,” International Journal of Control, vol. 50, no. 5, pp. 1873–1896, Nov. 1989.
  • [22] A. Hashemi and H. Vikalo, “Recovery of sparse signals via branch and bound least-squares,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4760–4764, IEEE, 2017.
  • [23] L. Rebollo-Neira and D. Lowe, “Optimized orthogonal matching pursuit approach,” IEEE Signal Processing Letters, vol. 9, no. 4, pp. 137–140, Apr. 2002.
  • [24] A. Hashemi and H. Vikalo, “Sparse linear regression via generalized orthogonal least-squares,” in Proceedings of the IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 1305–1309, IEEE, Dec. 2016.
  • [25] C. Soussen, R. Gribonval, J. Idier, and C. Herzet, “Joint k-step analysis of orthogonal matching pursuit and orthogonal least squares,” IEEE Transactions on Information Theory, vol. 59, no. 5, pp. 3158–3174, May 2013.
  • [26] C. Herzet, A. Drémeau, and C. Soussen, “Relaxed recovery conditions for omp/ols by exploiting both coherence and decay,” IEEE Transactions on Information Theory, vol. 62, no. 1, pp. 459–470, 2016.
  • [27] A. Hashemi and H. Vikalo, “Sampling requirements and accelerated schemes for sparse linear regression with orthogonal least-squares,” arXiv preprint, 2016.