Robust Subspace Clustering via Tighter Rank Approximation

10/30/2015 · Zhao Kang et al. · Southern Illinois University

The matrix rank minimization problem is in general NP-hard, and the nuclear norm is used to substitute for the rank function in many recent studies. Nevertheless, the nuclear norm adds all singular values together, so its approximation error may depend heavily on the magnitudes of the singular values. This may limit its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function, and we apply it to the challenging subspace clustering problem. For the resulting nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.


1 Introduction

Matrix rank minimization arises in control, machine learning, signal processing and other areas [43]. It is difficult to solve due to the discontinuity and nonconvexity of the rank function. Existing algorithms are largely based on the nuclear norm heuristic, i.e., replacing the rank with the nuclear norm [11]. The nuclear norm of a matrix X, denoted by ||X||_*, is the sum of all its singular values, i.e., ||X||_* = Σ_i σ_i(X). Under some conditions, the solution to the nuclear norm heuristic coincides with the minimum rank solution [31, 32]. However, since the nuclear norm is the convex envelope of rank(X) only over the unit ball of the spectral norm, it may deviate from the rank of X in many circumstances [4, 3]. The rank function counts the number of nonvanishing singular values, while the nuclear norm sums their amplitudes. As a result, the nuclear norm may be dominated by a few very large singular values. Variations of the standard nuclear norm have been shown to be promising in some recent research [14, 2, 29]. A number of nonconvex surrogate functions have been proposed to better approximate the rank function, such as the logarithm determinant [11, 16], the Schatten-p norm [26], the truncated nuclear norm [14], and others [24]. In general, they solve the following low-rank minimization problem:

    min_X  Σ_i f(σ_i(X)) + ℓ(X)    (1)

where σ_i(X) denotes the i-th singular value of X, f is a potentially nonconvex, nonsmooth penalty function, and ℓ is a loss function. By choosing f(x) = x, the first term in (1) reduces to the nuclear norm ||X||_*, and problem (1) becomes the well-known convex relaxation of the rank minimization problem:

    min_X  ||X||_* + ℓ(X)    (2)

In this paper, we will propose a new nonconvex rank approximation and consider subspace clustering as a specific application.
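To make the dominance issue concrete, the following small numpy sketch (illustrative only, not part of the paper; the spectra and matrix sizes are made up) builds two matrices of the same rank and compares the exact rank with the nuclear norm:

```python
import numpy as np

def matrix_with_spectrum(singular_values, d=50, n=40, seed=0):
    """Build a d x n matrix whose nonzero singular values are prescribed."""
    rng = np.random.default_rng(seed)
    r = len(singular_values)
    U, _ = np.linalg.qr(rng.standard_normal((d, r)))   # random orthonormal columns
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    return U @ np.diag(singular_values) @ V.T

for sv in ([1.0, 1.0, 1.0], [100.0, 1.0, 1.0]):
    A = matrix_with_spectrum(sv)
    s = np.linalg.svd(A, compute_uv=False)
    rank = int(np.sum(s > 1e-8))      # rank counts the nonvanishing singular values
    nuclear = s.sum()                 # the nuclear norm sums their amplitudes
    print(f"spectrum={sv}: rank={rank}, nuclear norm={nuclear:.1f}")
# Both matrices have rank 3, but the nuclear norm jumps from 3.0 to 102.0:
# it is dominated by the single large singular value.
```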

1.1 Previous Work on Subspace Clustering

In many real-world applications, high-dimensional data reside in a union of multiple low-dimensional subspaces rather than one single low-dimensional subspace [7]. Subspace clustering deals with exactly this structure by clustering data points according to their underlying subspaces. It has numerous applications in computer vision [30] and image processing [25]. Therefore, subspace clustering has drawn significant attention in recent years [36]. In practice, the underlying subspace structure is often corrupted by noise and outliers, so the data may deviate from the original subspaces. It is therefore necessary to develop robust estimation techniques.

A number of approaches to subspace clustering have been proposed in the past two decades. According to the survey in [36], they can be roughly divided into four categories: 1) algebraic methods; 2) iterative methods; 3) statistical methods; and 4) spectral clustering-based methods. Among them, spectral clustering-based methods have obtained state-of-the-art results, including sparse subspace clustering (SSC) [9] and low rank representation (LRR) [21]. They perform subspace clustering in two steps: first, learning an affinity matrix that encodes the subspace membership information, and then applying a spectral clustering algorithm [33, 28] to the learned affinity matrix to obtain the final clustering result. Their main difference lies in how the affinity matrix is obtained.

SSC assumes that each data point can be represented as a sparse linear combination of the other points. The popular ℓ1-norm heuristic is used to capture the sparsity. It enjoys great performance on face clustering and motion segmentation data, and there is now a good theoretical understanding of SSC. For instance, [8] shows that disjoint subspaces can be exactly recovered under certain conditions, and the geometric analysis of SSC in [34] significantly broadens its scope to intersecting subspaces. However, the data points are assumed to lie exactly in the subspaces, an assumption that may be violated in the presence of corrupted data. [38] extends SSC to handle adversarial or random noise. Moreover, SSC's solution might be too sparse, so the affinity graph within a single subspace may not be fully connected [27]. To address this issue, an additional regularization term was introduced to promote connectivity of the graph [9].
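As a rough illustration of the SSC idea just described (a schematic sketch, not the authors' implementation; the lasso weight alpha and the synthetic data are arbitrary illustrative choices), each point is regressed on the remaining points with an ℓ1 penalty and the coefficients are symmetrized into an affinity matrix:

```python
import numpy as np
from sklearn.linear_model import Lasso

def ssc_affinity(X, alpha=0.01):
    """Sparse self-representation: regress each column x_i on all other columns."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])
        C[others, i] = lasso.coef_          # no self-representation: C_ii stays 0
    return np.abs(C) + np.abs(C).T          # symmetrized affinity for spectral clustering

# Toy example: 60 points drawn from two random 3-dimensional subspaces of R^20.
rng = np.random.default_rng(1)
X = np.hstack([rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))
               for _ in range(2)])
W = ssc_affinity(X)
print(W.shape)                              # (60, 60)
```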

LRR also represents each data point as a linear combination of the other points. It seeks the lowest rank representation of all data points jointly, where the nuclear norm is used as a common surrogate of the rank function. In the presence of noise or outliers, LRR solves the following problem:

    min_{Z,E}  ||Z||_* + λ ||E||_ℓ    s.t.  X = XZ + E    (3)

where λ > 0 balances the effects of the low rank representation and the errors, X = [x_1, x_2, …, x_n] ∈ R^{d×n} is a set of d-dimensional data vectors drawn from a union of subspaces, and ||·||_ℓ characterizes certain corruptions E. For example, when E represents Gaussian noise, the squared Frobenius norm is used; when E denotes random corruptions, the ℓ1 norm is appropriate; when E indicates sample-specific corruptions, the ℓ2,1 norm is adopted, where ||E||_{2,1} = Σ_j ||E(:, j)||_2. The low rank as well as sparsity requirement may help counteract corruptions. A variant of LRR works even in the presence of some arbitrarily large outliers [23]. However, LRR has only been shown to succeed under the strong "independent subspaces" condition [15].
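The three corruption models above correspond to different norms of E; a short numpy sketch of how each is evaluated (the variable names are ours, for illustration only):

```python
import numpy as np

def error_norms(E):
    """The three error measures used in (3) for different corruption types."""
    frob_sq = np.sum(E ** 2)                    # Gaussian noise: squared Frobenius norm
    l1 = np.sum(np.abs(E))                      # random sparse corruptions: l1 norm
    l21 = np.sum(np.linalg.norm(E, axis=0))     # sample-specific corruptions: sum of column l2 norms
    return frob_sq, l1, l21

E = np.zeros((5, 4))
E[:, 2] = 3.0                                   # one fully corrupted sample (column)
print(error_norms(E))                           # (45.0, 15.0, ~6.71)
```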

In view of the issues with the nuclear norm mentioned in the beginning, we propose the use of an arctangent function instead in this work. We demonstrate the enhanced performance of the proposed algorithm on benchmark data sets.

1.2 Our Contributions

In summary, the main contributions of this paper are threefold:

  • A more accurate rank approximation is proposed to obtain the low-rank representation of high-dimensional data.

  • An efficient optimization procedure is developed for the arctangent rank minimization (ARM) problem. Theoretical analysis shows that our algorithm converges to a stationary point.

  • The superiority of the proposed method over various state-of-the-art subspace clustering algorithms is verified by the significantly and consistently lower error rates of ARM on popular datasets.

Figure 1: Comparison of rank approximations for a rank-2 matrix.

2 Subspace clustering by ARM

In this work, we demonstrate the application of F(Z) = Σ_i arctan(σ_i(Z)), where σ_i(Z) denotes the i-th singular value of Z, as a rank approximation of a matrix Z in the subspace clustering setting. There are three advantages of this approximation function. First, it approximates rank(Z) much better than the nuclear norm does: as a singular value grows, its contribution arctan(σ_i(Z)) saturates at a constant, whereas its contribution to the nuclear norm grows without bound. Figure 1 shows the rank approximation values of the two approaches for a rank-2 matrix. We can clearly see that the arctangent reflects the true rank well over a broad range of singular value magnitudes. Second, arctan(x) is differentiable, concave, and monotonically increasing on [0, ∞). Third, F is unitarily invariant, and the underlying scalar function is absolutely symmetric, i.e., invariant under arbitrary permutations and sign changes of the components of its argument. Based on these properties, we have the following theorem, which is proved in Appendix A.
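The behavior shown in Figure 1 can be checked numerically. The sketch below (illustrative only; it assumes the unscaled form Σ_i arctan(σ_i(Z)) described above) scales a rank-2 spectrum and reports the true rank, the nuclear norm, and the arctangent sum:

```python
import numpy as np

def rank_approximations(singular_values):
    s = np.asarray(singular_values, dtype=float)
    true_rank = int(np.sum(s > 1e-8))
    nuclear = s.sum()                    # nuclear norm: sum of singular values
    arctan_sum = np.arctan(s).sum()      # arctangent approximation: saturates per singular value
    return true_rank, nuclear, arctan_sum

for scale in (0.5, 1.0, 10.0, 100.0):
    sv = scale * np.array([1.0, 1.0])    # a rank-2 spectrum at different magnitudes
    r, nuc, atan = rank_approximations(sv)
    print(f"scale={scale:6.1f}  rank={r}  nuclear={nuc:8.1f}  sum arctan={atan:5.2f}")
# The nuclear norm grows linearly with the magnitude of the spectrum, while the
# arctangent sum stays close to a constant for large singular values.
```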

Theorem 2.1

For β > 0 and a given matrix A ∈ R^{d×n}, the following problem

    min_X  Σ_i arctan(σ_i(X)) + (β/2) ||X − A||_F^2    (4)

is solved by the vector minimization

    min_{δ ≥ 0}  Σ_i arctan(δ_i) + (β/2) ||δ − σ(A)||_2^2    (5)

so that X* = U diag(δ*) V^T, where δ* is the minimizer of (5) and the SVD of A is A = U diag(σ(A)) V^T.
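As a numerical illustration of how Theorem 2.1 is used (assuming the reconstructed form of (4)-(5); the scalar solver, bounds, and test matrix are our own illustrative choices), the matrix problem reduces to one scalar minimization per singular value of A:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def arctan_prox(A, beta):
    """Approximately solve min_X sum_i arctan(sigma_i(X)) + beta/2 * ||X - A||_F^2
    via Theorem 2.1: one scalar minimization per singular value of A."""
    U, sigma_A, Vt = np.linalg.svd(A, full_matrices=False)
    delta = np.empty_like(sigma_A)
    for i, s in enumerate(sigma_A):
        obj = lambda t: np.arctan(t) + 0.5 * beta * (t - s) ** 2
        res = minimize_scalar(obj, bounds=(0.0, s + 1.0), method="bounded")
        delta[i] = res.x                 # optimal shrunken singular value
    return U @ np.diag(delta) @ Vt

A = np.random.default_rng(2).standard_normal((6, 5))
X_star = arctan_prox(A, beta=2.0)
print(np.linalg.svd(X_star, compute_uv=False))   # shrunken singular values of A
```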

2.1 Arctangent Rank Minimization

To demonstrate the effectiveness of the arctangent rank approximation, we consider its application in the challenging subspace clustering problem. We propose the following arctangent rank minimization (ARM) problem:

    min_{Z,E}  Σ_i arctan(σ_i(Z)) + λ ||E||_ℓ    s.t.  X = XZ + E    (6)

It is difficult to solve (6) directly because the objective function is neither convex nor concave. By introducing an auxiliary variable J, we convert it to the following equivalent problem:

    min_{Z,E,J}  Σ_i arctan(σ_i(J)) + λ ||E||_ℓ    s.t.  X = XZ + E,  Z = J    (7)

Now we resort to a type of augmented Lagrange multipliers (ALM) [20] method to solve (7). For simplicity of notation, we denote the arctangent penalty by F(J) = Σ_i arctan(σ_i(J)) and write the two Lagrange multipliers as Y_1 and Y_2. The corresponding augmented Lagrangian function is:

    L(Z, E, J, Y_1, Y_2) = F(J) + λ ||E||_ℓ + ⟨Y_1, X − XZ − E⟩ + ⟨Y_2, Z − J⟩ + (μ/2) ( ||X − XZ − E||_F^2 + ||Z − J||_F^2 )    (8)

where μ > 0 is a penalty parameter and Y_1, Y_2 are the Lagrange multipliers. The variables Z, J, and E can be updated alternately, one at each step, while keeping the other two fixed. For the (k+1)-th iteration, the iterative scheme is given as follows.

For Z, by fixing E, J, and the multipliers, we have:

    Z^{k+1} = argmin_Z  ⟨Y_1^k, X − XZ − E^k⟩ + ⟨Y_2^k, Z − J^k⟩ + (μ^k/2) ( ||X − XZ − E^k||_F^2 + ||Z − J^k||_F^2 )    (9)

It is evident that the objective function of (9) is a strongly convex quadratic function, which can be solved directly. Setting its first derivative to zero, we have:

    Z^{k+1} = (I + X^T X)^{-1} [ X^T (X − E^k) + J^k + (X^T Y_1^k − Y_2^k) / μ^k ]    (10)

where I is the identity matrix.
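A minimal numpy sketch of the Z-update (10), under the form reconstructed above (variable names are ours, not from the released implementation):

```python
import numpy as np

def update_Z(X, E, J, Y1, Y2, mu):
    """Closed-form solution (10) of the strongly convex quadratic subproblem (9)."""
    n = X.shape[1]
    rhs = X.T @ (X - E) + J + (X.T @ Y1 - Y2) / mu
    return np.linalg.solve(np.eye(n) + X.T @ X, rhs)
```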

For J, we have:

    J^{k+1} = argmin_J  F(J) + (μ^k/2) || J − ( Z^{k+1} + Y_2^k / μ^k ) ||_F^2    (11)

By Theorem 2.1, this can be converted to the vector problem (5) with A = Z^{k+1} + Y_2^k/μ^k and β = μ^k. The first term in (5) is concave while the second term is convex in δ, so we can apply the difference of convex (DC) [13] (vector) optimization method. A linear approximation of the concave term is used at each iteration of the DC program. At inner iteration t,

    δ^{t+1} = argmin_{δ ≥ 0}  ⟨w^t, δ⟩ + (β/2) ||δ − σ(A)||_2^2    (12)

whose closed-form solution is

    δ_i^{t+1} = max( σ_i(A) − w_i^t / β, 0 )    (13)

where w^t is the gradient of Σ_i arctan(δ_i) at δ^t, i.e., w_i^t = 1 / (1 + (δ_i^t)^2), and the SVD of A is A = U diag(σ(A)) V^T. The inner iteration converges to a local optimal point δ*. Then J^{k+1} = U diag(δ*) V^T.
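A sketch of the DC inner loop for the J-subproblem, assuming the reconstructed formulas (11)-(13); the fixed inner iteration count and tolerance are illustrative choices:

```python
import numpy as np

def update_J(A, beta, inner_iters=20, tol=1e-8):
    """Approximate argmin_J sum_i arctan(sigma_i(J)) + beta/2 * ||J - A||_F^2
    by DC programming on the singular values: linearize arctan, then threshold."""
    U, sigma_A, Vt = np.linalg.svd(A, full_matrices=False)
    delta = sigma_A.copy()                          # start from the singular values of A
    for _ in range(inner_iters):
        w = 1.0 / (1.0 + delta ** 2)                # gradient of arctan at the current point
        new_delta = np.maximum(sigma_A - w / beta, 0.0)   # closed form (13) of the linearized step
        if np.linalg.norm(new_delta - delta) < tol:
            delta = new_delta
            break
        delta = new_delta
    return U @ np.diag(delta) @ Vt
```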

For E, we have the following subproblem:

    E^{k+1} = argmin_E  λ ||E||_ℓ + (μ^k/2) || E − ( X − XZ^{k+1} + Y_1^k / μ^k ) ||_F^2    (14)

Depending on the regularization strategy, we have different closed-form solutions. For the squared Frobenius norm, it is again a quadratic problem, with solution

    E^{k+1} = μ^k / (2λ + μ^k) · ( X − XZ^{k+1} + Y_1^k / μ^k )    (15)

For the ℓ1 and ℓ2,1 norms, we use the lemmas in Appendix B. Let D = X − XZ^{k+1} + Y_1^k / μ^k; then for the ℓ1 norm we can solve element-wise as below:

    E_{ij}^{k+1} = sign(D_{ij}) · max( |D_{ij}| − λ/μ^k, 0 )    (16)

In the case of the ℓ2,1 norm, we have, column-wise,

    E^{k+1}(:, j) = max( 1 − λ / ( μ^k ||D(:, j)||_2 ), 0 ) · D(:, j)    (17)
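The E-update reduces to standard proximal (shrinkage) operators; the following sketch gathers the three cases of (15)-(17) (hedged: the exact scalings follow our reconstruction above, not the authors' code):

```python
import numpy as np

def update_E(D, lam, mu, norm="l21"):
    """Proximal step for the E-subproblem (14), with D = X - X Z + Y_1 / mu."""
    if norm == "fro":                    # squared Frobenius norm, cf. (15)
        return mu / (2.0 * lam + mu) * D
    if norm == "l1":                     # element-wise soft-thresholding, cf. (16)
        return np.sign(D) * np.maximum(np.abs(D) - lam / mu, 0.0)
    if norm == "l21":                    # column-wise shrinkage, cf. (17)
        col_norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
        return D * np.maximum(1.0 - lam / (mu * col_norms), 0.0)
    raise ValueError("unknown norm")
```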

The updates of the Lagrange multipliers are:

    Y_1^{k+1} = Y_1^k + μ^k ( X − XZ^{k+1} − E^{k+1} )    (18)
    Y_2^{k+1} = Y_2^k + μ^k ( Z^{k+1} − J^{k+1} )    (19)

The procedure is outlined in Algorithm 1.

Input: data matrix X, parameters λ, μ^0, and ρ.
Initialize: Z = J = 0, E = 0, Y_1 = Y_2 = 0.
REPEAT

1:  Update Z by (10).
2:  Solve J by (11).
3:  Solve E by (15), (16), or (17), according to the choice of ||·||_ℓ.
4:  Update Y_1 by (18) and Y_2 by (19).
5:  Update μ by μ^{k+1} = ρ μ^k.

UNTIL stopping criterion is met.

Algorithm 1 Arctan Rank Minimization
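Putting the pieces together, the following self-contained numpy sketch mirrors Algorithm 1 under the reconstructed updates above. The default parameter values, the zero initialization, and the stopping tolerance are illustrative assumptions, not those of the released implementation:

```python
import numpy as np

def arm(X, lam=0.5, mu=1e-2, rho=1.1, max_iter=150, tol=1e-6, norm="l21"):
    """ALM-style loop for problem (7): X = X Z + E, Z = J."""
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    XtX = X.T @ X
    for _ in range(max_iter):
        # Step 1: Z-update, closed-form quadratic solve (10).
        Z = np.linalg.solve(np.eye(n) + XtX,
                            X.T @ (X - E) + J + (X.T @ Y1 - Y2) / mu)
        # Step 2: J-update via DC programming on the singular values, (11)-(13).
        A = Z + Y2 / mu
        U, sA, Vt = np.linalg.svd(A, full_matrices=False)
        delta = sA.copy()
        for _ in range(20):
            delta = np.maximum(sA - (1.0 / (1.0 + delta ** 2)) / mu, 0.0)
        J = U @ np.diag(delta) @ Vt
        # Step 3: E-update by the proximal operator of the chosen norm, (15)-(17).
        D = X - X @ Z + Y1 / mu
        if norm == "l1":
            E = np.sign(D) * np.maximum(np.abs(D) - lam / mu, 0.0)
        else:  # "l21": column-wise shrinkage
            cn = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
            E = D * np.maximum(1.0 - lam / (mu * cn), 0.0)
        # Step 4: multiplier updates (18)-(19); Step 5: penalty growth.
        R1 = X - X @ Z - E
        R2 = Z - J
        Y1 = Y1 + mu * R1
        Y2 = Y2 + mu * R2
        mu = rho * mu
        # Stop when both constraints are approximately satisfied.
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E
```

A call such as Z, E = arm(X) would then be followed by the affinity construction of Section 2.2; in practice the parameters should be tuned as discussed in Section 4 rather than left at these illustrative defaults.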

Figure 2: Sample face images in Extended Yale B.

2.2 Affinity Graph Construction

After obtaining the optimal Z*, we can build the similarity graph matrix W. As argued in [9], some postprocessing of the coefficient matrix can improve the clustering performance. Following the angular-information-based technique of [21], we define M = (Σ*)^{1/2} (U*)^T, where U* and Σ* are from the skinny SVD Z* = U* Σ* (V*)^T. Inspired by [17], we define W as follows:

    [W]_{ij} = ( m_i^T m_j / (||m_i||_2 ||m_j||_2) )^{2α}    (20)

where m_i and m_j denote the i-th and j-th columns of M, and α controls the sharpness of the affinity between two points. Increasing the power enhances the separation ability in the presence of noise; however, an excessively large power would break affinities between points of the same group. In order to compare with LRR (as we confirmed with an author of [21], the power 2 in its equation (12) is a typo and should be 4), we use 2α = 4 in our experiments, so that we have the same postprocessing procedure as LRR. After obtaining W, we directly apply the spectral clustering algorithm NCuts [33] to W to cluster the samples. Algorithm 2 summarizes the complete subspace clustering procedure of the proposed method.

Input: data matrix X, number of subspaces k.
Do

1:  Obtain the optimal Z* by solving (7).
2:  Compute the skinny SVD Z* = U* Σ* (V*)^T.
3:  Calculate M = (Σ*)^{1/2} (U*)^T.
4:  Construct the affinity graph matrix W by (20).
5:  Perform NCuts on W.
Algorithm 2 Subspace Clustering by ARM
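A sketch of the postprocessing and affinity construction of Algorithm 2 (hedged: the skinny-SVD factor used for M and the exponent 2α = 4 follow our reading of (20) and [21], and sklearn's normalized spectral clustering stands in for NCuts):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def arm_affinity(Z, alpha=2):
    """Affinity matrix of (20), built from the skinny SVD of the learned Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    keep = s > 1e-8                                   # skinny SVD: keep nonzero singular values
    M = np.sqrt(s[keep])[:, None] * U[:, keep].T      # one column of M per data point
    M = M / np.maximum(np.linalg.norm(M, axis=0, keepdims=True), 1e-12)
    return np.abs(M.T @ M) ** (2 * alpha)             # sharpened cosine similarity between columns

def arm_clustering(Z, n_clusters):
    W = arm_affinity(Z)                               # steps 2-4 of Algorithm 2
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            assign_labels="discretize", random_state=0)
    return sc.fit_predict(W)                          # stand-in for NCuts (step 5)
```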
Figure 3: Affinity graph matrix with five and ten subjects.

3 Convergence analysis

Since the objective function (6) is nonconvex, it is not easy to prove convergence in theory. In this paper, we mathematically prove that our optimization algorithm has at least one convergent subsequence, which converges to an accumulation point, and moreover, that any accumulation point of our algorithm is a stationary point. Although the final solution might be a local optimum, our results are superior to the globally optimal solutions of convex approaches. Some previous work reports similar observations [39, 12, 42].

We present the proof for a representative choice of the error norm ||·||_ℓ. Let us first reformulate our objective function:

(21)
(22)
Lemma 3.1

The sequences {Y_1^k} and {Y_2^k} are bounded.

The iterate J^{k+1} satisfies the first-order necessary local optimality condition,

(23)

Define the relevant derivative value at 0 to be 1 (elsewhere it is given by the usual formula). According to (41) in Appendix B,

(24)

and the entries of this gradient are bounded. From (23), we conclude that {Y_2^{k+1}} is bounded.

Similarly, for E^{k+1} we have

(25)

Here ∂ denotes the subgradient operator [6]. Because the error norm is nonsmooth only at 0, we fix a bounded element of its subdifferential there. The subgradients of a norm are bounded, so the right-hand side of (25) is bounded. Therefore, {Y_1^{k+1}} is bounded.

Lemma 3.2

The sequences {Z^k}, {E^k}, and {J^k} are bounded, provided that {μ^k} satisfies a summability condition and the relevant coefficient matrix is invertible.

(26)
(27)

Iterating the inequality (27) gives that

(28)

Under the given conditions on {μ^k}, both terms on the right-hand side of the above inequality are bounded; thus the left-hand side of (28) is bounded. In addition,

(29)

The left-hand side of the above equation is bounded and each term on the right-hand side is nonnegative, so each term is bounded. The boundedness of the individual terms, together with the last term on the right-hand side of (29), implies (after multiplying by a constant matrix, and invoking the invertibility condition for a second multiplication by a constant matrix) that each of the variable sequences is bounded. Therefore, {Z^k}, {E^k}, and {J^k} are bounded.

Theorem 3.1

The sequence {(Z^k, E^k, J^k)} generated by Algorithm 1 has at least one accumulation point, under the conditions of Lemma 3.2. Any accumulation point (Z*, E*, J*) is a stationary point of the optimization problem (21), under two additional conditions on {μ^k} and the successive differences of the iterates.

Under the stated conditions on the penalty parameter sequence {μ^k}, Algorithm 1 generates a bounded sequence by Lemmas 3.1 and 3.2. By the Bolzano–Weierstrass theorem, at least one accumulation point exists, say (Z*, E*, J*). Without loss of generality, we assume that the sequence itself converges to (Z*, E*, J*). As shown below, (Z*, E*, J*) is a stationary point of problem (21) under the additional conditions.

Since Y_1^{k+1} = Y_1^k + μ^k (X − XZ^{k+1} − E^{k+1}), with {Y_1^k} bounded and μ^k → ∞, we have X − XZ^{k+1} − E^{k+1} → 0. Therefore, X = XZ* + E*.

Similarly, by the update (19) of Y_2, we have Z* = J*.

For J, the first-order optimality condition at J^{k+1} is as given in (23).

If the two additional conditions hold, we can pass to the limit in this condition and obtain the corresponding stationarity relation at J*. The remaining relations are easy to verify. Therefore, (Z*, E*, J*) satisfies the KKT conditions of (21) and thus is a stationary point of (21).

4 Experiments

This section presents experiments with the proposed algorithm on the Extended Yale B (EYaleB) [18] and Hopkins 155 [35] databases, which are standard benchmarks for robust subspace clustering algorithms. As shown in [9], the challenge in the Hopkins 155 dataset is due to the small principal angles between subspaces. For EYaleB, the challenge lies both in the small principal angles and in the fact that data points from different subspaces are close to one another. Our results are compared with several state-of-the-art subspace clustering algorithms, including LRR [21], SSC [9], LRSC [10, 37], spectral curvature clustering (SCC) [5], and local subspace affinity (LSA) [40], in terms of misclassification rate (the implementation of our algorithm is available at https://github.com/sckangz/arctangent). For a fair comparison, we follow the experimental setup in [9].

As other methods do, we tune our parameters to obtain the best results. In general, the value of λ depends on prior knowledge of the noise level of the data: if the noise is heavy, a small λ should be adopted. The penalty parameter μ and its growth factor ρ affect the convergence speed: the larger their values, the fewer iterations are required for the algorithm to converge, but we may lose some precision in the final objective function value. In the literature, the value of ρ is often chosen between 1 and 1.1. The iteration stops when the relative normed difference between two successive iterations falls below a preset tolerance, or after a maximum of 150 iterations.

4.1 Face Clustering

Face clustering refers to partitioning a set of face images from multiple individuals into groups according to the identity of each individual. The face images are heavily contaminated by sparse gross errors due to varying lighting conditions, as shown in Figure 2; therefore, the ℓ1 norm is used to model the errors in this experiment. The EYaleB database contains cropped face images of 38 individuals taken under 64 different illumination conditions. The 38 subjects are divided into four groups: subjects 1 to 10, 11 to 20, 21 to 30, and 31 to 38. All choices of n ∈ {2, 3, 5, 8, 10} subjects are considered for each of the first three groups, and all choices of n ∈ {2, 3, 5, 8} are considered for the last group, yielding a number of subject combinations for each n. Each image is downsampled to 48×42 pixels and vectorized to a 2016-dimensional vector.


Figure 4: Recovery results of two face images. The three columns, from left to right, are the original image, the error component, and the recovered image, respectively.

Algorithm      LRR     SSC     LSA    LRSC     SCC     ARM
2 Subjects
  Mean        2.54    1.86   32.80    5.32   16.62    1.51
  Median      0.78    0.00   47.66    4.69    7.82    0.78
3 Subjects
  Mean        4.21    3.10   52.29    8.47   38.16    2.26
  Median      2.60    1.04   50.00    7.81   39.06    1.56
5 Subjects
  Mean        6.90    4.31   58.02   12.24   58.90    3.06
  Median      5.63    2.50   56.87   11.25   59.38    2.50
8 Subjects
  Mean       14.34    5.85   59.19   23.72   66.11    3.70
  Median     10.06    4.49   58.59   28.03   64.65    3.32
10 Subjects
  Mean       22.92   10.94   60.42   30.36   73.02    3.85
  Median     23.59    5.63   57.50   28.75   75.78    2.97

Table 1: Clustering error rates (%) on the EYaleB database.

Table 1 reports the best performance of each method. As shown in the table, our proposed method has the lowest mean clustering error rate in all five settings. In particular, in the most challenging case of 10 subjects, the mean clustering error rate is as low as 3.85%. The improvement is significant compared with the other low rank representation based subspace clustering methods, i.e., LRR and LRSC. For example, improvements of about 19 and 11 percentage points over LRR can be observed in the cases of 10 and 8 subjects, respectively. This demonstrates the importance of accurate rank approximation. In addition, the error of LSA is large, possibly because LSA is based on the MSE; since the MSE is quite sensitive to outliers, LSA fails to deal with large outliers.


Figure 5: Convergence curve of the objective function value in (6).

Figure 6: Average computational time (sec) of the algorithms on the EYaleB database as a function of the number of subjects.

Figure 3 shows the obtained affinity graph matrices for the five- and ten-subject scenarios. We can see a distinct block-diagonal structure, which means that each cluster is highly compact and different subjects are well separated.

Figure 7: Example frames from two video sequences of the Hopkins 155 database with traced feature points.

In Figure 4, we present the recovery results of some sample faces from the 10-subject clustering case. We can see that the proposed algorithm is able to remove the corruptions in the data.

Figure 5 plots the objective function value of (6) over the iterations. We observe that the objective value decreases monotonically as the iterations proceed, which empirically verifies the convergence of our optimization method.

We compare the average computational time of LRR, SSC, and ARM as a function of the number of subjects in Figure 6. All experiments are conducted and timed on the same machine with an Intel Xeon E3-1240 3.40 GHz CPU (4 cores) and 8 GB memory, running Ubuntu and Matlab R2014a. We observe that the computational time of SSC is higher than that of LRR and ARM, while ARM is slightly slower than LRR in most cases.

4.2 Motion Segmentation


Algorithm      LRR     SSC     LSA    LRSC     SCC     ARM
2 Motions
  Mean        2.13    1.52    4.23    3.69    2.89    1.48
  Median      0.00    0.00    0.56    0.29    0.00    0.00
3 Motions
  Mean        4.03    4.40    7.02    7.69    8.25    1.49
  Median      1.43    0.56    1.45    3.80    0.24    0.84
All
  Mean        2.56    2.18    4.86    4.59    4.10    1.48
  Median      0.00    0.00    0.89    0.60    0.00    0.00

Table 2: Segmentation error rates (%) on the Hopkins 155 dataset.

Motion segmentation involves segmenting a video sequence of multiple moving objects into multiple spatiotemporal regions corresponding to the different motions. These motion sequences can be divided into three main categories: checkerboard, traffic, and articulated or non-rigid motion sequences. The Hopkins 155 dataset includes 155 video sequences of 2 or 3 motions, corresponding to 2 or 3 low-dimensional subspaces of the ambient space. Each sequence is a separate data set, so there are 155 motion segmentation problems in total. Several example frames are shown in Figure 7. The trajectories are extracted automatically by a tracker, so they are only slightly corrupted by noise. As in [21, 22], the ℓ2,1 norm is adopted in the model.

We use the original 2F-dimensional feature trajectories in our experiment. Table 2 shows the clustering error rates of the different algorithms. ARM outperforms the other algorithms in mean error rate; in particular, all of its mean error rates are around 1.5%. This again demonstrates the effectiveness of using the arctangent as a rank approximation.


Figure 8: The influence of the parameter λ of ARM on the clustering error for the Hopkins 155 database.

Figure 8 shows the clustering error rate of ARM for different values of λ over all 155 sequences. When λ is between 1 and 3, the clustering error rate varies only within a narrow range. This demonstrates that ARM performs well for a fairly wide range of values of λ, which is another advantage of ARM over LRR [21].

5 Conclusion

In this work, we propose to use the arctangent as a concave rank approximation function. It has several nice properties compared with the standard nuclear norm. We apply this function to the low rank representation based subspace clustering problem and develop an iterative algorithm to optimize the associated objective function. Extensive experimental results demonstrate that, compared with many state-of-the-art algorithms, the proposed algorithm gives the lowest clustering error rates on benchmark datasets, which demonstrates the significance of accurate rank approximation. Interesting future work includes other applications of the arctangent rank approximation, for example, matrix completion. In addition, since LRR is only guaranteed to be valid for independent subspace segmentation, it is worthwhile to investigate the clustering of dependent yet possibly disjoint subspaces.

6 Acknowledgments

This work is supported by US National Science Foundation Grant IIS-1218712. The corresponding author is Qiang Cheng.

References

  • [1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
  • [2] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
  • [3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717–772, 2009.
  • [4] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. Information Theory, IEEE Transactions on, 56(5):2053–2080, 2010.
  • [5] G. Chen and G. Lerman. Spectral curvature clustering (scc). International Journal of Computer Vision, 81(3):317–330, 2009.
  • [6] F. H. Clarke. Optimization and nonsmooth analysis, volume 5. Siam, 1990.
  • [7] E. Elhamifar and R. Vidal. Sparse subspace clustering. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 2790–2797. IEEE, 2009.
  • [8] E. Elhamifar and R. Vidal. Clustering disjoint subspaces via sparse representation. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 1926–1929. IEEE, 2010.
  • [9] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(11):2765–2781, 2013.
  • [10] P. Favaro, R. Vidal, and A. Ravichandran. A closed form solution to robust subspace estimation and clustering. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1801–1807. IEEE, 2011.
  • [11] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
  • [12] P. Gong, J. Ye, and C.-s. Zhang. Multi-stage multi-task feature learning. In Advances in Neural Information Processing Systems, pages 1988–1996, 2012.
  • [13] R. Horst and N. V. Thoai. Dc programming: overview. Journal of Optimization Theory and Applications, 103(1):1–43, 1999.
  • [14] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He. Fast and accurate matrix completion via truncated nuclear norm regularization. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(9):2117–2130, 2013.
  • [15] K. Kanatani. Motion segmentation by subspace separation and model selection. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 586–591, 2001.
  • [16] Z. Kang, C. Peng, J. Cheng, and Q. Cheng. Logdet rank minimization with application to subspace clustering. Computational Intelligence and Neuroscience, 2015, 2015.
  • [17] F. Lauer and C. Schnorr. Spectral clustering of linear subspaces for motion segmentation. In Computer Vision, 2009 IEEE 12th International Conference on, pages 678–685. IEEE, 2009.
  • [18] K.-C. Lee, J. Ho, and D. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 27(5):684–698, 2005.
  • [19] A. S. Lewis and H. S. Sendov. Nonsmooth analysis of singular values. part i: Theory. Set-Valued Analysis, 13(3):213–241, 2005.
  • [20] Z. Lin, R. Liu, and Z. Su. Linearized alternating direction method with adaptive penalty for low-rank representation. In Advances in neural information processing systems, pages 612–620, 2011.
  • [21] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust recovery of subspace structures by low-rank representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(1):171–184, 2013.
  • [22] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 663–670, 2010.
  • [23] G. Liu, H. Xu, and S. Yan. Exact subspace segmentation and outlier detection by low-rank representation. In AISTATS, pages 703–711, 2012.
  • [24] C. Lu, J. Tang, S. Yan, and Z. Lin. Generalized nonconvex nonsmooth low-rank minimization. In IEEE International Conference on Computer Vision and Pattern Recognition. IEEE, 2014.
  • [25] Y. Ma, H. Derksen, W. Hong, and J. Wright. Segmentation of multivariate mixed data via lossy data coding and compression. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(9):1546–1562, 2007.
  • [26] K. Mohan and M. Fazel. Iterative reweighted algorithms for matrix rank minimization. The Journal of Machine Learning Research, 13(1):3441–3473, 2012.
  • [27] B. Nasihatkon and R. Hartley. Graph connectivity in sparse subspace clustering. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2137–2144. IEEE, 2011.
  • [28] A. Y. Ng, M. I. Jordan, Y. Weiss, et al. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
  • [29] F. Nie, H. Huang, and C. H. Ding. Low-rank matrix recovery via efficient schatten p-norm minimization. In AAAI, 2012.
  • [30] S. Rao, R. Tron, R. Vidal, and Y. Ma. Motion segmentation in the presence of outlying, incomplete, or corrupted trajectories. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(10):1832–1845, 2010.
  • [31] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM review, 52(3):471–501, 2010.
  • [32] B. Recht, W. Xu, and B. Hassibi. Null space conditions and thresholds for rank minimization. Mathematical programming, 127(1):175–202, 2011.
  • [33] J. Shi and J. Malik. Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(8):888–905, 2000.
  • [34] M. Soltanolkotabi, E. J. Candes, et al. A geometric analysis of subspace clustering with outliers. The Annals of Statistics, 40(4):2195–2238, 2012.
  • [35] R. Tron and R. Vidal. A benchmark for the comparison of 3-d motion segmentation algorithms. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–8. IEEE, 2007.
  • [36] R. Vidal. A tutorial on subspace clustering. IEEE Signal Processing Magazine, 28(2):52–68, 2010.
  • [37] R. Vidal and P. Favaro. Low rank subspace clustering (lrsc). Pattern Recognition Letters, 43:47–61, 2014.
  • [38] Y.-X. Wang and H. Xu. Noisy sparse subspace clustering. In Proceedings of The 30th International Conference on Machine Learning, pages 89–97, 2013.
  • [39] S. Xiang, X. Tong, and J. Ye. Efficient sparse group feature selection via nonconvex optimization. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 284–292, 2013.
  • [40] J. Yan and M. Pollefeys. A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In Computer Vision–ECCV 2006, pages 94–106. Springer, 2006.
  • [41] J. Yang, W. Yin, Y. Zhang, and Y. Wang. A fast algorithm for edge-preserving variational multichannel image restoration. SIAM Journal on Imaging Sciences, 2(2):569–592, 2009.
  • [42] Z. Zhang and B. Tu. Nonconvex penalization using laplace exponents and concave conjugates. In Advances in Neural Information Processing Systems, pages 611–619, 2012.
  • [43] Y.-B. Zhao. An approximation theory of matrix rank minimization and its application to quadratic equations. Linear Algebra and its Applications, 437(1):77–93, 2012.

Appendix A Proof

Theorem A.1

For β > 0 and a given matrix A ∈ R^{d×n}, the following problem

    min_X  Σ_i arctan(σ_i(X)) + (β/2) ||X − A||_F^2    (30)

is solved by the vector minimization

    min_{δ ≥ 0}  Σ_i arctan(δ_i) + (β/2) ||δ − σ(A)||_2^2    (31)

so that X* = U diag(δ*) V^T, where δ* is the minimizer of (31) and the SVD of A is A = U diag(σ(A)) V^T.

Let A = U diag(σ(A)) V^T be the SVD of A. Denote B = U^T X V, which has exactly the same singular values as X; then we have

(32)
(33)
(34)
(35)
(36)
(37)
(38)
(39)
(40)

In the above, (33) holds because the Frobenius norm is unitarily invariant; (34) holds because Σ_i arctan(σ_i(·)) is unitarily invariant; (36) is true by von Neumann's trace inequality; and (38) holds because of the definition of σ(A). Therefore, (38) is a lower bound of (32). Note that the equality in (36) is attained if X shares its singular vectors with A; because of this, the SVD of the minimizer has the form U diag(δ) V^T. By minimizing (39), we get δ*. Therefore, we eventually obtain X* = U diag(δ*) V^T, which is the minimizer of problem (30).

Appendix B Theorem and Lemmas

Theorem B.1

[19] Suppose F(X) is represented as F(X) = f(σ(X)), where f is absolutely symmetric and differentiable and X ∈ R^{d×n} has the SVD X = U diag(σ(X)) V^T. Then the gradient of F at X is

    ∂F(X)/∂X = U diag(θ) V^T    (41)

where θ_i = ∂f(σ(X))/∂σ_i(X).

Lemma B.1

[1] For λ > 0 and y ∈ R^n, the solution of the problem

    min_x  λ ||x||_1 + (1/2) ||x − y||_2^2

is given by x* = S_λ(y), which is defined component-wise by [S_λ(y)]_i = sign(y_i) max(|y_i| − λ, 0).