Robust PCA via Nonconvex Rank Approximation

11/17/2015 ∙ by Zhao Kang, et al. ∙ Southern Illinois University

Numerous applications in data mining and machine learning require recovering a matrix of minimal rank. Robust principal component analysis (RPCA) is a general framework for handling this kind of problem. The nuclear norm based convex surrogate of the rank function in RPCA has been widely investigated. Under certain assumptions, it can recover the underlying true low-rank matrix with high probability. However, those assumptions may not hold in real-world applications. Since the nuclear norm approximates the rank by adding all singular values together, which is essentially the ℓ_1-norm of the singular values, the approximation error is not trivial and the resulting matrix estimator can be significantly biased. To seek a closer approximation and to alleviate the above-mentioned limitations of the nuclear norm, we propose a nonconvex rank approximation. This approximation to the matrix rank is tighter than the nuclear norm. To solve the associated nonconvex minimization problem, we develop an efficient augmented Lagrange multiplier based optimization algorithm. Experimental results demonstrate that our method outperforms current state-of-the-art algorithms in both accuracy and efficiency.

I Introduction

In many machine learning and data mining applications, the dimensionality of data is very high, such as digital images, video sequences, text documents, genomic data, social networks, and financial time series. Data mining on such data sets is challenging due to the curse of dimensionality. Dimensionality reduction techniques, which project the original high-dimensional feature space to a low-dimensional space, have been extensively explored. Among them, principal component analysis (PCA) [1], which finds a small number of orthogonal basis vectors that characterize most of the variability of the data set, is well established and commonly used. However, PCA may fail spectacularly even when a single grossly corrupted entry exists in the data. To enhance its robustness to outliers or corrupted observations, early attempts at robust PCA (RPCA) were made [2], [3], [4], [5], [6]. Nevertheless, none of these algorithms yields a polynomial-time solution with strong performance guarantees under broad conditions.

Due to the seminal work of [7], [8], a more recent formulation of RPCA has become popular. The idea is to decompose the highly corrupted observations X into a low-rank matrix L and a sparse matrix S, i.e., X = L + S, and to recover L. Entries in the sparse component S can have arbitrarily large magnitude. This has numerous applications ranging from recommender system design to anomaly detection in dynamic networks. For example, for videos and face images under varying illumination, the background and the underlying clean face image are regarded as the low-rank component while the moving objects and shadows represent the sparse part [8]; common words in a collection of text documents can be captured by a low-rank matrix while the few words that distinguish each document from others can be represented by a sparse matrix [9].

Mathematically, this kind of problem can be modeled as

(1)   min_{L,S} rank(L) + λ‖S‖_0,   s.t.   X = L + S,

where λ > 0 is a weight parameter. Unfortunately, (1) is in general an NP-hard problem. By relaxing the nonconvex rank function and the ℓ_0-norm into the nuclear norm and the ℓ_1-norm respectively, a convex formulation is obtained:

(2)   min_{L,S} ‖L‖_* + λ‖S‖_1,   s.t.   X = L + S,

where ‖L‖_* = Σ_i σ_i(L); i.e., the nuclear norm of L is the sum of its singular values, and ‖S‖_1 = Σ_{ij} |S_{ij}|. Under incoherence assumptions, both the low-rank and the sparse components can be recovered exactly with overwhelming probability [8].

Despite its convex formulation and ease of optimization, RPCA in (2) has two major limitations. First, the underlying matrix may have no incoherence guarantee [8] in practical scenarios, and the data may be grossly corrupted. Under these circumstances, the resulting globally optimal solution to (2) may deviate significantly from the truth. Second, RPCA shrinks all the singular values equally. The nuclear norm is essentially an ℓ_1-norm of the singular values, and it is well known that the ℓ_1-norm has a shrinkage effect and leads to a biased estimator [10], [11]. This implies that the nuclear norm over-penalizes large singular values, and consequently it may only find a heavily biased solution. Nonconvex alternatives to the ℓ_1-norm, such as the smoothly clipped absolute deviation penalty [10], the minimax concave penalty [11], capped-ℓ_1 regularization [12], and the truncated ℓ_1 function [13], have been shown to provide better estimation accuracy and variable selection consistency [14]. Recently, nonconvex relaxations of the nuclear norm have received increasing attention [15]. Variations of the nuclear norm, e.g., the weighted nuclear norm [16], [17], singular value thresholding [18], and the truncated nuclear norm [19], have been proposed and outperform the standard nuclear norm. However, their applications are still quite limited and they are often designed for specific applications.
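To make the shrinkage effect concrete, the following small NumPy sketch (illustrative only; the matrix and threshold are arbitrary choices, not taken from this paper) applies the nuclear-norm proximal step, which soft-thresholds every singular value by the same amount and therefore under-estimates the large ones:

    import numpy as np

    # Singular-value soft-thresholding: the proximal operator of tau*||.||_*.
    A = np.diag([10.0, 5.0, 1.0])              # a matrix with singular values 10, 5, 1
    U, s, Vt = np.linalg.svd(A)
    tau = 0.8                                  # illustrative threshold
    s_shrunk = np.maximum(s - tau, 0.0)        # every singular value is reduced by tau
    print(s, "->", s_shrunk)                   # [10. 5. 1.] -> [9.2 4.2 0.2]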

In this paper, we propose a novel nonconvex function to directly approximate the rank, which provides a tighter approximation than the nuclear norm does. This is crucial to reveal the rank in low-rank matrix estimation. To solve this nonconvex model, we devise an Augmented Lagrange Multiplier (ALM) based optimization algorithm. Theoretical convergence analysis shows that our iterative optimization at least converges to a stationary point. Extensive experiments on three representative applications confirm the advantages of our approach.

II Related Work

The convex approach to RPCA in (2) has been studied thoroughly. It has been proved that when the locations of the nonzero entries of S are uniformly distributed, and when the rank of L and the sparsity of S satisfy some mild conditions, L and S can be exactly recovered with high probability [8]. In the literature, numerous algorithms have been developed to solve (2), e.g., SVT [18], APGL [20], FISTA [21], and ALM [22]. Among them, the ALM based approach is the most popular. Although the theory is elegant, the convex technique is still computationally quite expensive and has a poor convergence rate [23]. Furthermore, (2) breaks down when large errors concentrate only on a number of columns of X [24], [25].

To incorporate the spatial connection information of the sparse elements, the ℓ_{2,1}-norm is introduced in outlier pursuit [26], [27]:

(3)   min_{L,S} ‖L‖_* + λ‖S‖_{2,1},   s.t.   X = L + S,

where ‖S‖_{2,1} = Σ_i ‖S_{:,i}‖_2. Here, the ℓ_{2,1}-norm can detect outliers with column-wise sparsity, while the ℓ_1-norm treats each entry independently. Theoretical analysis of this model is difficult. In this model, only the column space of L and the column support of S can be exactly recovered [28], [24]. When the rank of the intrinsic matrix L is comparable to the number of samples, the working range of outlier pursuit is limited.

To alleviate the deficiency of convex relaxations, a capped norm based nonconvex RPCA (CNorm) has been proposed, which solves the following problem [29]:

(4)   min_{L,S} (1/θ_1) Σ_i min(σ_i(L), θ_1) + (λ/θ_2) Σ_{ij} min(|S_{ij}|, θ_2),   s.t.   ‖X − L − S‖_F ≤ ε,

for some small parameters θ_1, θ_2 > 0, where ε denotes the level of Gaussian noise. If all singular values of L are greater than θ_1 and all absolute values of the elements of S are greater than θ_2, then the objective function in (4) falls back to (1). However, it is hard to provide any convergence guarantee for this nonconvex method. More importantly, as we will show in the experimental part, it cannot deal with large scale data well.

By combining the simplicity of PCA and the elegant theory of convex RPCA, a recent paper has proposed a new nonconvex RPCA, AltProj [23]. The idea is to alternately project the residuals onto the set of low-rank matrices and the set of sparse matrices. Specifically, it proceeds in r (the desired rank of L) stages, and computes a rank-k projection in each stage, with k increasing from 1 to r. During this process, sparse errors are suppressed by discarding matrix elements with large approximation errors. This method enjoys several nice properties, including low complexity, a global convergence guarantee, a fast convergence rate, and a theoretical guarantee for exact recovery of the low-rank matrix. However, it needs knowledge of three parameters: the sparsity of S, the incoherence of L, and the rank of L. Such knowledge is not always readily available.

III Proposed algorithm

In this section, we present a novel matrix rank approximation, and propose a nonconvex RPCA algorithm.

III-A Problem formulation

Consider the general framework for RPCA

(5)   min_{L,S} ‖L‖_γ + λ‖S‖_ℓ,   s.t.   X = L + S,

where ‖L‖_γ denotes a rank approximation which we term the γ-norm, and ‖S‖_ℓ represents a proper norm (e.g., the ℓ_1- or ℓ_{2,1}-norm) of the noise and outliers.

Fig. 1: The contribution of different functions to the rank with respect to a varying singular value. The true rank is 1 for any nonzero singular value.

We define the γ-norm of a matrix X as

(6)   ‖X‖_γ = Σ_i (1 + γ) σ_i(X) / (γ + σ_i(X)),   γ > 0.

It can be observed that ‖X‖_γ → rank(X) as γ → 0, ‖X‖_γ → ‖X‖_* as γ → ∞, and it coincides with the true rank when the singular values are all 0s and 1s. Furthermore, ‖X‖_γ is unitarily invariant, that is, ‖UXV‖_γ = ‖X‖_γ for any orthonormal U and V. Certainly it is not a real norm. Figure 1 plots several rank relaxations in the literature. Among them, a log-determinant function, log det((XᵀX)^{1/2} + εI), where ε is a very small constant, has been well studied [30]. As we can see, our formulation (with the same γ value as used in our experiments) closely matches the true rank, while the nuclear norm deviates considerably when the singular values depart from 1. As a result, the proposed γ-norm overcomes the imbalanced penalization of different singular values in the convex nuclear norm. On the other hand, (5) is a nonconvex formulation, which is usually difficult to optimize. In the next section, we design an effective algorithm to solve it.
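As a quick numerical check of (6), the following NumPy sketch (the paper's experiments use Matlab; the matrix sizes, noise-free construction, and γ = 0.01 here are illustrative choices only) compares the γ-norm with the rank and the nuclear norm on a synthetic rank-5 matrix:

    import numpy as np

    def gamma_norm(X, gamma):
        # gamma-norm of (6): sum_i (1+gamma)*sigma_i / (gamma + sigma_i)
        sigma = np.linalg.svd(X, compute_uv=False)
        return np.sum((1.0 + gamma) * sigma / (gamma + sigma))

    rng = np.random.default_rng(0)
    L = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))   # rank-5 matrix
    print(np.linalg.matrix_rank(L))                        # 5
    print(np.linalg.svd(L, compute_uv=False).sum())        # nuclear norm: grows with the magnitude of L
    print(gamma_norm(L, 0.01))                             # close to 5
    print(gamma_norm(2 * L, 0.01))                         # still close to 5, nearly scale-invariant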

III-B Optimization

For problem (5), by introducing a Lagrange multiplier and a quadratic penalty term, we can remove the equality constraint and construct the augmented Lagrangian function:

(7)   L(L, S, Y, μ) = ‖L‖_γ + λ‖S‖_ℓ + ⟨Y, X − L − S⟩ + (μ/2) ‖X − L − S‖_F²,

where ⟨·,·⟩ is the inner product of two matrices, that is, ⟨A, B⟩ = Tr(AᵀB), and μ is a positive penalty parameter. An alternating approach is applied to update L, S, and Y iteratively. At the (k+1)-th step, we update L by solving the following subproblem:

(8)   L^{k+1} = argmin_L ‖L‖_γ + (μ^k/2) ‖L − (X − S^k + Y^k/μ^k)‖_F².

To solve (8), we first develop the following theorem and provide the proof in Appendix A.

Theorem 1.

Let A = U_A Σ_A V_Aᵀ be the SVD of A and σ_A = diag(Σ_A). Let F(Z) = f∘σ(Z) be a unitarily invariant function and μ > 0. Then an optimal solution to the following problem

(9)   min_Z F(Z) + (μ/2) ‖Z − A‖_F²

is Z* = U_A Σ_{Z*} V_Aᵀ, where Σ_{Z*} = diag(σ*) and σ* = prox_{f,μ}(σ_A). Here prox_{f,μ} is the Moreau-Yosida operator, defined as

(10)   prox_{f,μ}(σ_A) := argmin_{σ ≥ 0} f(σ) + (μ/2) ‖σ − σ_A‖_2².

In our case, the objective function in (10) is a combination of a concave term f and a convex quadratic term. This intrinsic structure motivates us to use difference of convex (DC) programming [31]. The DC algorithm decomposes a nonconvex function as the difference of two convex functions and iteratively optimizes it by linearizing the concave term at each iteration. At the t-th inner iteration,

(11)   σ^{t+1} = argmin_{σ ≥ 0} (μ^k/2) ‖σ − σ_A‖_2² + ⟨w^t, σ⟩,

which admits a closed-form solution

(12)   σ^{t+1} = max(σ_A − w^t/μ^k, 0),

where w^t is the gradient of f at σ^t and U_A Σ_A V_Aᵀ is the SVD of X − S^k + Y^k/μ^k. After a number of iterations, it converges to a locally optimal point σ*. Then L^{k+1} = U_A diag(σ*) V_Aᵀ.
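A compact sketch of this L-update (a NumPy rendition, assuming a fixed number of inner DC iterations; not the authors' implementation) is:

    import numpy as np

    def grad_f(sigma, gamma):
        # gradient of f(sigma) = sum_i (1+gamma)*sigma_i/(gamma+sigma_i)
        return (1.0 + gamma) * gamma / (gamma + sigma) ** 2

    def update_L(A, mu, gamma, inner_iters=10):
        # Solve (8): min_L ||L||_gamma + mu/2 * ||L - A||_F^2, with A = X - S^k + Y^k/mu^k,
        # via Theorem 1 and the DC iterations (11)-(12) on the singular values.
        U, sigma_A, Vt = np.linalg.svd(A, full_matrices=False)
        sigma = sigma_A.copy()
        for _ in range(inner_iters):
            w = grad_f(sigma, gamma)                    # linearize the concave term at sigma^t
            sigma = np.maximum(sigma_A - w / mu, 0.0)   # closed-form update (12)
        return (U * sigma) @ Vt                         # L^{k+1} = U diag(sigma*) V^T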

For the S optimization,

(13)   S^{k+1} = argmin_S λ‖S‖_ℓ + (μ^k/2) ‖S − (X − L^{k+1} + Y^k/μ^k)‖_F².

Depending on the choice of ‖·‖_ℓ, we obtain different closed-form solutions to the above subproblem. According to the result in [32], which is also given as Lemma 3 in Appendix B, for the ℓ_{2,1}-norm,

(14)   [S^{k+1}]_{:,i} = max(1 − λ/(μ^k ‖Q_{:,i}‖_2), 0) · Q_{:,i},

where Q = X − L^{k+1} + Y^k/μ^k and Q_{:,i} is the i-th column of Q.

When S is modeled by the ℓ_1-norm, based on Lemma 4, we have

(15)   S^{k+1} = sign(Q) ⊙ max(|Q| − λ/μ^k, 0)   (element-wise).
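A minimal NumPy sketch of the two S-updates follows (here Q = X − L^{k+1} + Y^k/μ^k and tau = λ/μ^k; the small constant guarding against division by zero is an added safeguard, not part of the paper):

    import numpy as np

    def shrink_l21(Q, tau):
        # column-wise shrinkage (14): the proximal operator of tau*||.||_{2,1}
        norms = np.linalg.norm(Q, axis=0)
        scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
        return Q * scale

    def shrink_l1(Q, tau):
        # element-wise soft-thresholding (15): the proximal operator of tau*||.||_1
        return np.sign(Q) * np.maximum(np.abs(Q) - tau, 0.0)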

The updates of Y and μ are standard:

(16)   Y^{k+1} = Y^k + μ^k (X − L^{k+1} − S^{k+1}),
(17)   μ^{k+1} = ρ μ^k,

where ρ > 1. The complete procedure is outlined in Algorithm 1.

Input: data matrix X, parameters λ, γ, and ρ.
Initialize: S^0, Y^0, and μ^0.
REPEAT

1:  Update L^{k+1} by (8).
2:  Solve S^{k+1} by either (14) or (15), according to the choice of ‖·‖_ℓ.
3:  Update Y^{k+1} and μ^{k+1} by (16) and (17), respectively.

UNTIL convergence.

Algorithm 1 Solving problem (5)
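For illustration, a compact end-to-end sketch of Algorithm 1 in NumPy is given below; it assumes the helper functions update_L, shrink_l1, and shrink_l21 sketched earlier, and the zero initialization, default ρ, μ^0, and stopping tolerance are illustrative choices rather than the settings used in our experiments:

    import numpy as np

    def noncvx_rpca(X, lam, gamma, rho=1.1, mu=1e-3, tol=1e-7, max_iter=500, use_l21=False):
        S = np.zeros_like(X)
        Y = np.zeros_like(X)
        for _ in range(max_iter):
            L = update_L(X - S + Y / mu, mu, gamma)                              # step 1: (8)
            Q = X - L + Y / mu
            S = shrink_l21(Q, lam / mu) if use_l21 else shrink_l1(Q, lam / mu)   # step 2: (14)/(15)
            residual = X - L - S
            Y = Y + mu * residual                                                # step 3: (16)
            mu = rho * mu                                                        #         (17)
            if np.linalg.norm(residual) / np.linalg.norm(X) < tol:               # relative error check
                break
        return L, S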

IV Convergence analysis

Convergence analysis of nonconvex optimization problems is usually difficult. In this section, we will show that our algorithm has at least a convergent subsequence which tends to a stationary point. While the final solution might not be a globally optimal one, all our experiments show that our algorithm converges to a solution that produces promising results.

For convenience, we rewrite the augmented Lagrangian function in (7) as

(18)
Lemma 1.

The sequence {Y^k} is bounded.

Proof.

S^{k+1} satisfies the first-order necessary local optimality condition,

(19)

For the ℓ_1-norm case, since ‖S‖_1 is nonsmooth at entries equal to zero, we specify a particular subgradient value at those entries. The subgradient of the ℓ_1-norm is then bounded, and by (19) so is Y^{k+1}. Similarly, it can be shown that the subgradient is also bounded in the ℓ_{2,1}-norm case. Thus {Y^k} is bounded. ∎

Lemma 2.

The sequences {L^k} and {S^k} are bounded if {μ^k} is chosen as in (17).

Proof.

With some algebra, we have the following equality

(20)

Then,

(21)

Iterating the inequality chain (21), we obtain

(22)

Since {Y^k} is bounded, all terms on the right-hand side of the above inequality are bounded; thus the augmented Lagrangian sequence is upper bounded.

Again,

(23)

Since each term on the right-hand side is bounded, the left-hand side is bounded. By the last term on the right-hand side of (23), the remaining component is also bounded. Therefore, {L^k} and {S^k} are both bounded. ∎

Theorem 2.

Let {(L^k, S^k, Y^k)} be the sequence generated by Algorithm 1 and (L*, S*, Y*) be an accumulation point. Then (L*, S*) is a stationary point of optimization problem (5) under mild conditions on {μ^k} and the iterates.

Proof.

The sequence {(L^k, S^k, Y^k)} generated by Algorithm 1 is bounded, as shown in Lemmas 1 and 2. By the Bolzano-Weierstrass theorem, the sequence must have at least one accumulation point, say (L*, S*, Y*). Without loss of generality, we assume that {(L^k, S^k, Y^k)} itself converges to (L*, S*, Y*).

Since {Y^k} is bounded and μ^k → ∞, we have X − L^{k+1} − S^{k+1} = (Y^{k+1} − Y^k)/μ^k → 0. Then X = L* + S*. Thus the primal feasibility condition is satisfied.

For L^{k+1}, it is true that

(24)

If the singular value decomposition of L^{k+1} is U Σ Vᵀ, then, according to Theorem 3 in Appendix B,

(25)

where each diagonal entry depends on whether the corresponding singular value is nonzero. Since the gradient of f is finite, the gradient in (25) is bounded; since {Y^k} is bounded, so is the corresponding term in (24). Under the assumption in [33], the stationarity condition therefore holds in the limit.

Hence, (L*, S*, Y*) satisfies the KKT conditions of (7). Thus (L*, S*) is a stationary point of (5). ∎

V Experiments

In this section, we evaluate our algorithm by deploying it in three important practical applications: foreground-background separation on surveillance video, shadow removal from face images, and anomaly detection. All applications involve the recovery of intrinsically low-dimensional data from gross corruption. We compare our algorithm with other state-of-the-art methods, including convex RPCA [8], CNorm [29], and AltProj [23]. All three of these methods call the PROPACK package [34]. In addition, for convex RPCA [7], we use the state-of-the-art solver, viz., the inexact augmented Lagrange multiplier (IALM) method [8], [22]. For CNorm, we use the fast alternating algorithm and the convex relaxation solutions from NSA [35] as its initial conditions. We perform all experiments with Matlab in Windows 7 on an Intel Xeon 2.33GHz CPU with 4GB RAM. The code is available at https://github.com/sckangz/noncvx-PRCA.

V-A Parameter setting

There are three parameters in our model: λ, γ, and ρ. If λ is too large, the trivial solution S = 0 is obtained, which generates an L with high rank; on the other hand, a small λ leads to L = 0. Similar to [8], λ can be selected from the neighborhood of 1/√(max(m, n)), where m × n is the size of X. Experiments show that our results are insensitive to γ over a pretty broad range, so we fix it to a single value throughout all our experiments. As for ρ, a large value will lead to fast convergence, while a small value will result in a more accurate solution. In the literature, an often used value is 1.1. As discussed in [36], ρ can also affect the rank of L obtained by IALM: if ρ is too large, L will have a rank larger than the desired low rank. This provides a way to manipulate the desired rank, and by this principle we tune the value of ρ per experiment (a value of 4 is used for one of the four experiments below). In practice, these parameters can be chosen by cross validation. For fair comparison, we follow the experimental setting in AltProj [23] and stop the program when the relative error falls below a fixed small threshold. For other methods, we follow the parameter settings used in the corresponding papers.
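As a small illustration of these defaults (assuming the 1/√(max(m, n)) rule from [8]; the γ and ρ values here are placeholders, not the tuned values used in our experiments):

    import numpy as np

    def default_params(X, gamma=0.01, rho=1.1):
        m, n = X.shape
        lam = 1.0 / np.sqrt(max(m, n))   # lambda chosen in the neighborhood of 1/sqrt(max(m, n))
        return lam, gamma, rho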

Fig. 2: Foreground-background separation in the escalator video. Columns (a)-(d): AltProj, Our, IALM, CNorm. The three rows from top to bottom are the original image frame, the static background, and the dynamic foreground, respectively.

V-B Video background subtraction

Background subtraction from video sequences is a popular approach to detecting interesting activities in a scene. Surveillance videos from a fixed camera are naturally handled by our model due to their relatively static background and sparse foreground.

V-B1 First experiment scenario

In this experiment, we use the benchmark data set escalator [37], which contains 3,417 frames of size 160 × 130. The data matrix X is formed by vectorizing each frame and concatenating the vectors column-wise. As a result, the size of X is 20,800 × 3,417. For this data set, the background appears to be completely static, thus the ideal rank of the background matrix L is one.
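A brief sketch of this construction (assuming the frames are already loaded as a NumPy array of shape (3417, 160, 130); the loading itself is omitted):

    import numpy as np

    def frames_to_matrix(frames):
        # vectorize each frame into a column and stack column-wise:
        # (3417, 160, 130) -> X of shape (20800, 3417)
        num_frames = frames.shape[0]
        return frames.reshape(num_frames, -1).T.astype(float)

    # After X ~= L + S, column j of L (reshaped to 160 x 130) is the static background
    # of frame j, and column j of S holds the moving foreground.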

Algorithm    Rank(L)    ‖X−L−S‖_F/‖X‖_F    Time (s)
AltProj      1          6.35e-4            537
CNorm        131        1.00e-1            1015
IALM         2011       9.50e-8            11315
Our          1          5.45e-4            208

TABLE I: Recovery results on the escalator video

For escalator, unfortunately, CNorm cannot successfully separate the foreground from the background under our stopping criterion, so we relax its terminating relative error to 1e-1. For AltProj, we set the desired rank of L to one. The low-rank ground truth is not available for these videos, so we present a visual comparison of the extraction results of the different algorithms in Figure 2. As we can see, CNorm suffers from noticeable artifacts (shadows of people), which is due to overfitting in the low-rank component; its foreground is largely missing since the sparse component is absorbed by the large relative error. Blur exists in the IALM recovered image, which has also been observed in other work [23], [38]. In contrast, both AltProj and our method obtain a clean background. Moreover, the steps of the escalator are also removed by these two methods, since they are moving and are supposed to be part of the dynamic foreground.

Table I gives the quantitative comparison results. In terms of computing time, our method is more than twice as fast as AltProj, and 54 times faster than IALM (the experiments in [23] were conducted on a machine with dual 8-core Xeon E5-2650 2.0GHz CPUs and 192GB RAM [39]). One intuitive interpretation is that fewer iterations are required for our algorithm to converge. Therefore our method is efficient even when the matrix size is large. Furthermore, both AltProj and our algorithm obtain the desired rank-one matrix L. The nuclear norm based IALM results in an L with rank 2011, which implies that L contains many blurred images. If we increase the size of the data matrix X, IALM performs even worse since it may incur larger errors from its rank approximation. These results emphasize the significance of a good rank approximation.

Fig. 3: Foreground-background separation in the lobby video. Columns (a)-(c): AltProj, IALM, Our. The three rows from top to bottom are the original image frame, the background, and the dynamic foreground, respectively.

V-B2 Second experiment scenario

Algorithm    Rank(L)    ‖X−L−S‖_F/‖X‖_F    Time (s)
AltProj      2          1.88e-4            203
IALM         259        9.59e-4            1187
Our          2          1.95e-5            46

TABLE II: Recovery results on the lobby video

The purpose of this experiment is to demonstrate the effectiveness of our algorithm on dynamic backgrounds. In some cases, the background changes over time due to, e.g., illumination variation or weather change; the background can then have a higher rank. Here we use sequences captured in a lobby, and the data matrix X is formed in the same way as before. On this occasion, background changes are mainly caused by lights being switched on and off. Therefore, we expect the rank of L to be 2, and we again set the desired rank in AltProj to 2. Two examples are shown for each method in Figure 3. The first example shows the scene before the other three lights are turned on. In this example, except for some shadows of pants in one image recovered by IALM, all the recovered background images appear satisfactory.

We list the numerical results in Table II. For this experiment, our algorithm is almost five times faster than AltProj and 26 times faster than IALM. It is also noted that the rank obtained from IALM is still high.

Fig. 4: Shadow removal from face images. Columns (a)-(e): original images, AltProj, Our, IALM, CNorm. Column 1 displays sample images 17 and 30 from Subject05 of the Yale B database; columns 2 to 5 show their low-rank approximations obtained by the different algorithms. Rows 2 and 4 are the corresponding sparse components.

V-C Face image shadow removal

Removing shadows, specularities, and saturations from face images is another important application of RPCA [8]. Face images taken under different lighting conditions often introduce errors into face recognition [40]. These errors might be large in magnitude, but are supposed to be sparse in the spatial domain. Given enough face images of the same person, it is possible to reconstruct the true face image.

We use images from the Extended Yale B database [41]. There are 38 subjects and each subject has 64 images of size 192 × 168 taken under varying illuminations. Images of each subject are heavily corrupted due to the different illumination conditions. All images are converted to 32,256-dimensional column vectors, hence X is of size 32,256 × 64 for each subject. Since the images are well aligned, L should have a rank of 1.

Algorithm    Rank(L)    ‖X−L−S‖_F/‖X‖_F    Time (s)
AltProj      1          4.88e-4            22
CNorm        26         1.00e-3            138
IALM         32         6.40e-4            9
Our          1          3.07e-5            0.5

TABLE III: Extended Yale B face image recovery results
Fig. 5: ℓ_2 norm of each of the 200 columns of S.

Figure 4 illustrates the results of the different algorithms on two images. The proposed algorithm removes the specularities and shadows well, while some artifacts are left by IALM and CNorm. Although the visual quality is similar for AltProj and our method, the numerical measurements in Table III demonstrate that our algorithm is 22 times faster than AltProj. Similar to the results in [29], IALM and CNorm result in an L of high rank.

V-D Anomaly Detection

It is widely known that images of the same subject reside in a low-dimensional subspace. If we inject some images of different subjects into a dominant number of images of the same subject, they will stand out as outliers or anomalies. To test this, we use images from the USPS data set [42]. We choose digits '1' and '7', since they share some similarities, and we treat each 16×16 image as a 256-dimensional column vector. The data matrix X is then constructed from the first 190 samples of digit '1' and the last 10 samples of '7', so its size is 256 × 200. Our goal is to identify all anomalies, including all '7's, in an unsupervised way. We apply our model to estimate L and S, which are expected to capture the '1's and the '7's, respectively. The ℓ_2 norm of each column of S is used to identify anomalies; ideally, the '7's should give larger values than the '1's.
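A short sketch of this detection step (assuming S comes from the decomposition above; the function name and the use of a simple threshold on the column norms, as described below, are illustrative):

    import numpy as np

    def detect_anomalies(S, threshold):
        col_norms = np.linalg.norm(S, axis=0)        # l2 norm of each of the 200 columns of S
        return np.nonzero(col_norms > threshold)[0]  # column indices flagged as anomalies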

Fig. 6: USPS anomaly detection results. The first row gives some typical ’1’s and ’7’s. The second row plots the four abnormal ’1’s identified in Figure 5.

Figure 5 shows the ℓ_2 norm of the columns of S. For visual clarity, we apply a threshold of 4 to remove small values. We can see that all '7's are found. Besides, four '1's, at columns 16, 22, 49 and 130, also appear. As shown in Figure 6, these four '1's are written in a style different from the rest of the '1's.

VI Conclusion

This paper investigates a nonconvex approach to the robust principal component analysis (RPCA) problem. In particular, we provide a novel matrix rank approximation, which is more robust and less biased than the nuclear norm. We devise an augmented Lagrange multiplier framework to solve this nonconvex optimization problem. Extensive experimental results demonstrate that our proposed approach outperforms previous algorithms. Our algorithm can be used as a powerful tool to efficiently separate low-dimensional and sparse structures in high-dimensional data. It would be interesting to establish more theoretical properties of the proposed nonconvex approach, for example, theoretical guarantees for the estimator to be consistent.

VII Acknowledgments

This work is supported by US National Science Foundation Grant IIS-1218712. The corresponding author is Qiang Cheng.

Appendix A Proof of Theorem 1

Theorem 1.

[43], [44] Let A = U_A Σ_A V_Aᵀ be the SVD of A and σ_A = diag(Σ_A). Let F(Z) = f∘σ(Z) be a unitarily invariant function and μ > 0. Then an optimal solution to the following problem

(26)   min_Z F(Z) + (μ/2) ‖Z − A‖_F²

is Z* = U_A Σ_{Z*} V_Aᵀ, where Σ_{Z*} = diag(σ*) and σ* = prox_{f,μ}(σ_A). Here prox_{f,μ} is the Moreau-Yosida operator, defined as

(27)   prox_{f,μ}(σ_A) := argmin_{σ ≥ 0} f(σ) + (μ/2) ‖σ − σ_A‖_2².
Proof.

Since F is unitarily invariant, F(Z) = F(U_AᵀZV_A). Denoting W = U_AᵀZV_A, which has exactly the same singular values as Z, we have

(28)
(29)
(30)
(31)
(32)
(33)

Note that (29) holds since the Frobenius norm is unitarily invariant; (30) is due to the Hoffman-Wielandt inequality; and (31) holds as W has the same singular values as Z. Thus, (31) is a lower bound of (28). Because the lower bound depends only on the singular values, the SVD of the minimizer Z* is U_A Σ_{Z*} V_Aᵀ. By minimizing (32), we get σ* = prox_{f,μ}(σ_A). Hence Z* = U_A diag(σ*) V_Aᵀ, which is the optimal solution of problem (26). ∎

Appendix B

Lemma 3.

[32] Let Q be a given matrix. If the optimal solution to

min_S α ‖S‖_{2,1} + (1/2) ‖S − Q‖_F²

is S*, then the i-th column of S* is [S*]_{:,i} = max(1 − α/‖Q_{:,i}‖_2, 0) · Q_{:,i}.

Lemma 4.

[21] The shrinkage-thresholding operator is defined as

shrink_τ(x) := sign(x) max(|x| − τ, 0),

where x and τ > 0 are scalars; it is applied element-wise to matrices.

Theorem 3.

[45] Suppose F is represented as F(X) = f∘σ(X), where X has the SVD X = U diag(σ(X)) Vᵀ and f is differentiable. The gradient of F at X is

(34)   ∂F(X)/∂X = U diag(θ) Vᵀ,

where θ = ∇f(σ(X)).

References

  • [1] I. Jolliffe, Principal component analysis.   Wiley Online Library, 2002.
  • [2] L. Xu and A. L. Yuille, “Robust principal component analysis by self-organizing rules based on statistical physics approach,” Neural Networks, IEEE Transactions on, vol. 6, no. 1, pp. 131–143, 1995.
  • [3] C. Croux and G. Haesbroeck, “Principal component analysis based on robust estimators of the covariance or correlation matrix: influence functions and efficiencies,” Biometrika, vol. 87, no. 3, pp. 603–618, 2000.
  • [4] F. De la Torre and M. J. Black, “Robust principal component analysis for computer vision,” in Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, vol. 1. IEEE, 2001, pp. 362–369.
  • [5] F. De La Torre and M. J. Black, “A framework for robust subspace learning,” International Journal of Computer Vision, vol. 54, no. 1-3, pp. 117–142, 2003.
  • [6] C. Croux and A. Ruiz-Gazen, “High breakdown estimators for principal components: the projection-pursuit approach revisited,” Journal of Multivariate Analysis, vol. 95, no. 1, pp. 206–226, 2005.
  • [7] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma, “Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization,” in Advances in neural information processing systems, 2009, pp. 2080–2088.
  • [8] E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM (JACM), vol. 58, no. 3, p. 11, 2011.
  • [9] K. Min, Z. Zhang, J. Wright, and Y. Ma, “Decomposing background topics from keywords by principal component pursuit,” in Proceedings of the 19th ACM international conference on Information and knowledge management.   ACM, 2010, pp. 269–278.
  • [10] J. Fan and R. Li, “Variable selection via nonconcave penalized likelihood and its oracle properties,” Journal of the American statistical Association, vol. 96, no. 456, pp. 1348–1360, 2001.
  • [11] C.-H. Zhang, “Nearly unbiased variable selection under minimax concave penalty,” The Annals of Statistics, pp. 894–942, 2010.
  • [12] P. Gong, J. Ye, and C.-s. Zhang, “Multi-stage multi-task feature learning,” in Advances in Neural Information Processing Systems, 2012, pp. 1988–1996.
  • [13] S. Xiang, X. Tong, and J. Ye, “Efficient sparse group feature selection via nonconvex optimization,” in Proceedings of the 30th International Conference on Machine Learning (ICML-13), 2013, pp. 284–292.
  • [14] Z. Wang, H. Liu, and T. Zhang, “Optimal computational and statistical rates of convergence for sparse nonconvex learning problems,” Annals of statistics, vol. 42, no. 6, p. 2164, 2014.
  • [15] Z. Kang, C. Peng, and Q. Cheng, “Robust subspace clustering via tighter rank approximation,” in Proceedings of the 24th ACM International Conference on Conference on Information and Knowledge Management.   ACM, 2015.
  • [16] S. Gu, L. Zhang, W. Zuo, and X. Feng, “Weighted nuclear norm minimization with application to image denoising,” in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE, 2014, pp. 2862–2869.
  • [17] X. Zhong, L. Xu, Y. Li, Z. Liu, and E. Chen, “A nonconvex relaxation approach for rank minimization problems,” 2015.
  • [18] J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
  • [19] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, “Fast and accurate matrix completion via truncated nuclear norm regularization,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 9, pp. 2117–2130, 2013.
  • [20] K.-C. Toh and S. Yun, “An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems,” Pacific Journal of Optimization, vol. 6, no. 615-640, p. 15, 2010.
  • [21] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
  • [22] Z. Lin, M. Chen, and Y. Ma, “The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices,” arXiv preprint arXiv:1009.5055, 2010.
  • [23] P. Netrapalli, U. Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain, “Non-convex robust pca,” in Advances in Neural Information Processing Systems, 2014, pp. 1107–1115.
  • [24] H. Xu, C. Caramanis, and S. Sanghavi, “Robust pca via outlier pursuit,” Information Theory, IEEE Transactions on, vol. 58, no. 5, pp. 3047–3064, 2012.
  • [25] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 1, pp. 171–184, 2013.
  • [26] H. Xu, C. Caramanis, and S. Sanghavi, “Robust pca via outlier pursuit,” in Advances in Neural Information Processing Systems, 2010, pp. 2496–2504.
  • [27] M. McCoy, J. A. Tropp et al., “Two proposals for robust pca using semidefinite programming,” Electronic Journal of Statistics, vol. 5, pp. 1123–1160, 2011.
  • [28] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi, “Robust matrix completion and corrupted columns,” in Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp. 873–880.
  • [29] Q. Sun, S. Xiang, and J. Ye, “Robust principal component analysis via capped norms,” in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining.   ACM, 2013, pp. 311–319.
  • [30] M. Fazel, H. Hindi, and S. P. Boyd, “Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices,” in American Control Conference, 2003. Proceedings of the 2003, vol. 3. IEEE, 2003, pp. 2156–2162.
  • [31] P. D. Tao and L. T. H. An, “Convex analysis approach to dc programming: Theory, algorithms and applications,” Acta Mathematica Vietnamica, vol. 22, no. 1, pp. 289–355, 1997.
  • [32] M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 1, pp. 49–67, 2006.
  • [33] F. Nie, Y. Huang, X. Wang, and H. Huang, “New primal svm solver with linear computational cost for big data classifications,” in Proc. ICML, 2014, pp. 1–9.
  • [34] X. Ding, L. He, and L. Carin, “Bayesian robust principal component analysis,” Image Processing, IEEE Transactions on, vol. 20, no. 12, pp. 3419–3430, 2011.
  • [35] N. S. Aybat, D. Goldfarb, and G. Iyengar, “Fast first-order methods for stable principal component pursuit,” arXiv preprint arXiv:1105.2126, 2011.
  • [36] W. K. Leow, Y. Cheng, L. Zhang, T. Sim, and L. Foo, “Background recovery by fixed-rank robust principal component analysis,” in Computer Analysis of Images and Patterns.   Springer, 2013, pp. 54–61.
  • [37] L. Li, W. Huang, I.-H. Gu, and Q. Tian, “Statistical modeling of complex backgrounds for foreground object detection,” Image Processing, IEEE Transactions on, vol. 13, no. 11, pp. 1459–1472, 2004.
  • [38] S. D. Babacan, M. Luessi, R. Molina, and A. K. Katsaggelos, “Sparse bayesian methods for low-rank matrix estimation,” Signal Processing, IEEE Transactions on, vol. 60, no. 8, pp. 3964–3977, 2012.
  • [39] P. Netrapalli, U. Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain, “Non-convex robust pca,” arXiv preprint arXiv:1410.7660, 2014.
  • [40] R. Basri and D. W. Jacobs, “Lambertian reflectance and linear subspaces,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 25, no. 2, pp. 218–233, 2003.
  • [41] K.-C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, no. 5, pp. 684–698, 2005.
  • [42] C. E. Rasmussen and C. K. Williams, Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press, 2006.
  • [43] Z. Kang, C. Peng, and Q. Cheng, “Robust subspace clustering via smoothed rank approximation,” Signal Processing Letters, IEEE, vol. 22, no. 11, pp. 2088–2092, 2015.
  • [44] C. Peng, Z. Kang, H. Li, and Q. Cheng, “Subspace clustering using log-determinant rank approximation,” in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.   ACM, 2015, pp. 925–934.
  • [45] A. S. Lewis and H. S. Sendov, “Nonsmooth analysis of singular values. part i: Theory,” Set-Valued Analysis, vol. 13, no. 3, pp. 213–241, 2005.