Robust and Scalable Column/Row Sampling from Corrupted Big Data

11/18/2016 ∙ by Mostafa Rahmani, et al. ∙ University of Central Florida

Conventional sampling techniques fall short of drawing descriptive sketches of the data when the data is grossly corrupted, as such corruptions break the low rank structure required for them to perform satisfactorily. In this paper, we present new sampling algorithms which can locate the informative columns in the presence of severe data corruption. In addition, we develop new scalable randomized designs of the proposed algorithms. The proposed approach is simultaneously robust to sparse corruption and outliers and substantially outperforms the state-of-the-art robust sampling algorithms, as demonstrated by experiments conducted using both real and synthetic data.


I Introduction

Finding an informative or explanatory subset of a large number of data points is an important task in numerous machine learning and data analysis applications, including problems arising in computer vision [10], image processing [13], bioinformatics [2], and recommender systems [17]. The compact representation provided by the informative data points helps summarize the data, understand the underlying interactions, save memory and enable remarkable computation speedups [14]. Most existing sampling algorithms assume that the data points can be well approximated with low-dimensional subspaces. However, much of contemporary data comes with substantial corruption, outliers and missing values, wherefore a low-dimensional subspace (or a union of them) may not fit the data well. This fact calls for robust sampling algorithms which can identify the informative data points in the presence of all such imperfections. In this paper, we present a new column sampling approach which can identify the representative columns when the data is grossly corrupted and fraught with outliers.

I-A Summary of contributions

We study the problem of informative column sampling in the presence of sparse corruption and outliers. The key technical contributions of this paper are summarized next: I. We present a new convex algorithm which locates the informative columns in the presence of sparse corruption with arbitrary magnitudes. II. We develop a set of randomized algorithms which provide scalable implementations of the proposed method for big data applications. We propose and implement a scalable column/row subspace pursuit algorithm that enables sampling in a particularly challenging scenario in which the data is highly structured. III. We develop a new sampling algorithm that is robust to the simultaneous presence of sparse corruption and outlying data points. The proposed method is shown to outperform the state-of-the-art robust (to outliers) sampling algorithms. IV. We propose an iterative solver for the proposed convex optimization problems.

I-B Notations and data model

Given a matrix $\mathbf{A}$, $\|\mathbf{A}\|$ denotes its spectral norm, $\|\mathbf{A}\|_1$ its $\ell_1$-norm given by $\|\mathbf{A}\|_1 = \sum_{i,j} |\mathbf{A}(i,j)|$, and $\|\mathbf{A}\|_{1,2}$ its $\ell_{1,2}$-norm defined as $\|\mathbf{A}\|_{1,2} = \sum_i \|\mathbf{a}_i\|_2$, where $\|\mathbf{a}_i\|_2$ is the $\ell_2$-norm of the $i^{\text{th}}$ column of $\mathbf{A}$. In an $N$-dimensional space, $\mathbf{e}_i$ is the $i^{\text{th}}$ vector of the standard basis. For a given vector $\mathbf{a}$, $\|\mathbf{a}\|_p$ denotes its $\ell_p$-norm. For a given matrix $\mathbf{A}$, $\mathbf{a}_i$ and $\mathbf{a}^i$ are defined as the $i^{\text{th}}$ column and the $i^{\text{th}}$ row of $\mathbf{A}$, respectively. In this paper, $\mathbf{L}$ represents the low rank (LR) matrix (the clean data) with compact SVD $\mathbf{L} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T$, where $\mathbf{U} \in \mathbb{R}^{N_1 \times r}$, $\boldsymbol{\Sigma} \in \mathbb{R}^{r \times r}$, $\mathbf{V} \in \mathbb{R}^{N_2 \times r}$, and $r$ is the rank of $\mathbf{L}$. Two linear subspaces are independent if the dimension of their intersection is equal to zero. The incoherence condition for the row space of $\mathbf{L}$ with parameter $\mu_v$ states that $\max_i \|\mathbf{V}^T \mathbf{e}_i\|_2^2 \leq \mu_v \, r / N_2$ [6]. In this paper (except for Section V), it is assumed that the given data follows the following data model.

Data Model 1.

The given data matrix $\mathbf{D}$ can be expressed as $\mathbf{D} = \mathbf{L} + \mathbf{S}$. The matrix $\mathbf{S}$ is an element-wise sparse matrix with arbitrary support. Each element of $\mathbf{S}$ is non-zero with a small probability $\rho$.
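As a concrete illustration of Data Model 1, the following Python snippet generates a small synthetic instance: a low rank matrix plus an element-wise sparse matrix with Bernoulli support. It is a minimal sketch for illustration only; the dimensions, rank, and corruption probability rho are hypothetical choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 100, 200, 5        # hypothetical dimensions and rank
rho = 0.02                     # probability that an entry of S is non-zero

# Low rank component L (rank r)
L = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Element-wise sparse component S: Bernoulli support, arbitrary magnitudes
support = rng.random((n1, n2)) < rho
S = support * (10.0 * rng.standard_normal((n1, n2)))

D = L + S                      # observed data, as in Data Model 1
```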

II Related Work

The vast majority of existing column sampling algorithms presume that the data lies in a low-dimensional subspace and look for a few data points that span the subspace of the dominant left singular vectors [16, 34]. The column sampling methods based on the low rankness of the data can be generally categorized into randomized [7, 9, 8] and deterministic methods [3, 19, 13, 11, 15]. In randomized methods, the columns are sampled based on a carefully chosen probability distribution. For instance, [8] uses the $\ell_2$-norms of the columns, and in [9] the sampling probabilities are proportional to the norms of the rows of the top right singular vectors of the data. There are different types of deterministic sampling algorithms, including the rank revealing QR algorithm [15] and clustering-based algorithms [3]. In [19, 13, 21, 11], sparse coding is used to leverage the self-expressiveness property of the columns of low rank matrices to sample informative columns.

The low rankness of the data is a crucial requirement for these algorithms. For instance, [9] assumes that the span of a few top right singular vectors approximates the row space of the data, and [11] presumes that the data columns admit sparse representations in the rest of the data, i.e., can be obtained through linear combinations of a few columns. However, contemporary data comes with gross corruption and outliers. [21] focused on column sampling in the presence of outliers. While the approach in [21] exhibits more robustness to the presence of outliers than older methods, it still ends up sampling from the outliers. In addition, it is not robust to other types of data corruption, especially element-wise sparse corruption. Element-wise sparse corruption can completely destroy the linear dependence between the columns, which is crucial for column sampling algorithms, including [21], to perform satisfactorily.

III Shortcoming of Random Column Sampling

As mentioned earlier, there is a need for sampling algorithms capable of extracting important features and patterns in the data when the data available is grossly corrupted and/or contains outliers. Existing sampling algorithms are not robust to data corruptions; hence, uniform random sampling is utilized to sample from corrupted data. In this section, we discuss and study some of the shortcomings of random sampling in the context of two important machine learning problems, namely, data clustering and robust PCA.

Data clustering: Informative column sampling is an effective tool for data clustering [19]. The representative columns are used as cluster centers and the data is clustered with respect to them. However, the columns sampled through random sampling may not be suitable for data clustering. The first problem stems from the non-uniform distribution of the data points. For instance, if the population in one cluster is notably larger than in the other clusters, random sampling may not acquire data points from the less populated clusters. The second problem is that random sampling is data independent. Hence, even if a data point is sampled from a given cluster through random sampling, the sampled point may not necessarily be an important descriptive data point of that cluster.

Robust PCA: There are many important applications in which the data follows Data model 1 [6, 5, 37, 35, 25, 26, 18, 38]. In [6], it was shown that the optimization problem

$$\min_{\dot{\mathbf{L}},\, \dot{\mathbf{S}}} \ \|\dot{\mathbf{L}}\|_* + \lambda \|\dot{\mathbf{S}}\|_1 \quad \text{subject to} \quad \dot{\mathbf{L}} + \dot{\mathbf{S}} = \mathbf{D} \qquad (1)$$

is guaranteed to yield the exact decomposition of $\mathbf{D}$ into its LR and sparse components if the column space (CS) and the row space (RS) of $\mathbf{L}$ are sufficiently incoherent with the standard basis. However, the decomposition algorithms that directly solve (1) are not scalable, as they need to store the entire data in working memory and their per-iteration complexity grows with the full size of the data.

An effective idea for developing scalable decomposition algorithms is to exploit the low dimensional structure of the LR matrix [23, 28, 33, 30, 24, 22]. The idea is to form a data sketch by sampling a subset of the columns of $\mathbf{D}$ whose LR component can span the CS of $\mathbf{L}$. This sketch is decomposed using (1) to learn the CS of $\mathbf{L}$. Similarly, the RS of $\mathbf{L}$ is obtained by decomposing a subset of the rows. Finally, the LR matrix is recovered using the learned CS and RS. Thus, in lieu of decomposing the full scale data, one decomposes small sketches constructed from subsets of the data columns and rows.
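The sketch below illustrates this decomposition-by-sketching idea under stated assumptions. It uses uniform random sampling for the sketches and a crude alternating heuristic (simple_rpca) as a stand-in for a solver of (1); the function names, the heuristic, and the least-squares recovery step are illustrative choices, not the procedure of the cited works.

```python
import numpy as np

def simple_rpca(X, rank, lam=0.1, n_iter=50):
    """Crude stand-in for a solver of (1): alternate rank truncation and soft-thresholding."""
    L = np.zeros_like(X)
    for _ in range(n_iter):
        S = np.sign(X - L) * np.maximum(np.abs(X - L) - lam, 0.0)   # sparse part
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]                    # low rank part
    return L, S

def sketch_decompose(D, rank, n_cols, n_rows, seed=0):
    rng = np.random.default_rng(seed)
    # Column sketch: learn the column space from a random subset of columns.
    cols = rng.choice(D.shape[1], size=n_cols, replace=False)
    Lc, _ = simple_rpca(D[:, cols], rank)
    U = np.linalg.svd(Lc, full_matrices=False)[0][:, :rank]
    # Row sketch: learn the row space from a random subset of rows.
    rows = rng.choice(D.shape[0], size=n_rows, replace=False)
    Lr, _ = simple_rpca(D[rows, :], rank)
    V = np.linalg.svd(Lr, full_matrices=False)[2][:rank].T
    # Recover the LR matrix from the learned column/row spaces via a small least-squares fit.
    M = np.linalg.lstsq(U[rows, :], Lr @ V, rcond=None)[0]
    return U @ M @ V.T

# Usage: L_hat = sketch_decompose(D, rank=5, n_cols=40, n_rows=40)
```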

Since existing sampling algorithms are not robust to sparse corruption, the scalable decomposition algorithms rely on uniform random sampling for column/row sampling. However, if the distribution of the columns/rows of $\mathbf{L}$ is highly non-uniform, random sampling cannot yield concise and descriptive sketches of the data. For instance, suppose the columns of $\mathbf{L}$ admit a subspace clustering structure [29, 12] as per the following assumption.

Assumption 1.

The matrix can be represented as . The CS of are random -dimensional subspaces in . The RS of are random -dimensional subspaces in , respectively, , and .

The following two lemmas show that the number of randomly sampled columns sufficient to capture the CS can be quite large depending on the distribution of the columns of $\mathbf{L}$.

Lemma 1.

Suppose columns are sampled uniformly at random with replacement from the matrix with rank . If then the selected columns of the matrix span the CS of with probability at least .

Lemma 2.

If   follows Assumption 1, the rank of is equal to , and , then

According to Lemma 1 and Lemma 2, the RS coherency parameter, and hence the number of columns we need to sample to capture the CS, can be quite large if the distribution of the columns is highly non-uniform. As an example, consider a matrix $\mathbf{L}$ that follows Assumption 1 and whose columns lie in a union of 60 one-dimensional subspaces with highly unequal populations. Fig. 1 shows the rank of randomly sampled columns of $\mathbf{L}$ versus the number of sampled columns. Evidently, we need to sample more than half of the data to span the CS. As such, we cannot evade high dimensionality with uniform random column/row sampling.
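The following short experiment illustrates the point numerically: it builds a low rank matrix whose columns lie in a union of many one-dimensional subspaces with highly non-uniform populations (the sizes below are hypothetical) and reports the rank of a growing set of randomly sampled columns, mirroring the behavior of Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n_clusters = 500, 60

# One-dimensional subspaces with highly non-uniform populations (hypothetical sizes):
# one heavily populated cluster and 59 clusters with a single column each.
sizes = [941] + [1] * (n_clusters - 1)                 # 1000 columns in total
blocks = [rng.standard_normal((n1, 1)) @ rng.standard_normal((1, s)) for s in sizes]
L = np.concatenate(blocks, axis=1)

for m in (60, 200, 500, 900):
    cols = rng.choice(L.shape[1], size=m, replace=False)
    print(m, np.linalg.matrix_rank(L[:, cols]))        # the rank grows very slowly with m
```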

Fig. 1: The rank of randomly sampled columns.

IV Column Sampling from Sparsely Corrupted Data

In this section, the proposed robust sampling algorithm is presented. It is assumed that the data follows Data model 1. Consider the following optimization problem

$$\min_{\mathbf{z}} \ \|\mathbf{d}_i - \mathbf{D}_{-i}\,\mathbf{z}\|_1 \qquad (2)$$

where $\mathbf{d}_i$ is the $i^{\text{th}}$ column of $\mathbf{D}$ and $\mathbf{D}_{-i}$ is equal to $\mathbf{D}$ with the $i^{\text{th}}$ column removed. If the CS of $\mathbf{L}$ does not contain sparse vectors, the optimal point of (2) is equivalent to the optimal point of

$$\min_{\mathbf{z}} \ \|\mathbf{s}_i - \mathbf{S}_{-i}\,\mathbf{z}\|_1 \quad \text{subject to} \quad \mathbf{l}_i = \mathbf{L}_{-i}\,\mathbf{z} \qquad (3)$$

where $\mathbf{l}_i$ is the LR component of $\mathbf{d}_i$ and $\mathbf{L}_{-i}$ is the LR component of $\mathbf{D}_{-i}$ (similarly, $\mathbf{s}_i$ and $\mathbf{S}_{-i}$ are the corresponding sparse components). To clarify, (2) samples columns of $\mathbf{D}_{-i}$ whose LR component cancels out the LR component of $\mathbf{d}_i$ such that the resulting linear combination is as sparse as possible.

This idea can be extended by searching for a set of columns whose LR component can cancel out the LR component of all the columns. Thus, we modify (2) as

(4)

where the constraint is on the number of non-zero rows of the representation matrix, forcing (4) to sample a limited number of columns. Both the objective function and the constraint in (4) are non-convex. We propose the following convex relaxation

(5)

where $\lambda$ is a regularization parameter. Define the vector whose $i^{\text{th}}$ entry is the $\ell_2$-norm of the $i^{\text{th}}$ row of the optimal point of (5). The non-zero elements of this vector identify the representative columns. For instance, suppose $\mathbf{D}$ follows Data model 1 and $\mathbf{L}$ is the concatenation of four cluster matrices whose ranks are equal to 5, 1, 5, and 1, respectively. Fig. 2 shows the output of the proposed method and of the algorithm presented in [11]. As shown, the proposed method samples a sufficient number of columns from each cluster. Since the algorithm presented in [11] requires strong linear dependence between the columns of $\mathbf{D}$, the presence of the sparse corruption matrix $\mathbf{S}$ seriously degrades its performance.
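Since the display of (5) did not survive extraction, the snippet below encodes one natural reading of the surrounding text: an element-wise l1 self-representation residual plus a row-sparsity penalty, with the representative columns recovered from the row norms of the solution. The exact objective and parameterization in the paper may differ; treat this as a hedged sketch (it also covers the reduced problem (8) when a column subset is passed as the dictionary).

```python
import cvxpy as cp
import numpy as np

def representative_columns(D, D_s=None, lam=1.0):
    """Hedged reading of (5)/(8): min_Z  sum|D - D_s Z| + lam * sum_i ||row_i(Z)||_2."""
    D_s = D if D_s is None else D_s
    Z = cp.Variable((D_s.shape[1], D.shape[1]))
    obj = cp.sum(cp.abs(D - D_s @ Z)) + lam * cp.sum(cp.norm(Z, 2, axis=1))
    cp.Problem(cp.Minimize(obj)).solve()
    return Z.value

# Usage: the vector of row norms plays the role of the vector described above.
# Z_opt = representative_columns(D, lam=1.0)
# p = np.linalg.norm(Z_opt, axis=1)
# selected = np.where(p > 1e-3 * p.max())[0]   # indices of representative columns
```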

Fig. 2: The elements of the vector. The left plot corresponds to (5) and the right plot corresponds to [11].

IV-A Robust column sampling from big data

The complexity of solving (5) grows rapidly with the dimensions of the data. In this section, we present a randomized design which yields a scalable implementation of the proposed method for high dimensional data with substantially reduced complexity. We further assume that the RS of $\mathbf{L}$ can be captured using a small random subset of the rows of $\mathbf{D}$, an assumption that will be relaxed in Section IV-A1. The following lemma shows that the RS can be captured using few randomly sampled rows even if the distribution of the columns is highly non-uniform.

Lemma 3.

Suppose follows Assumption 1 and rows of are sampled uniformly at random with replacement. If the rank of is equal to and

(6)

then the sampled rows span the row space of   with probability at least , where .

The sufficient number of sampled rows for the setup of Lemma 3 is thus independent of the distribution of the columns. Consider the matrix of randomly sampled rows of $\mathbf{D}$ and its LR component, and suppose the rank of this LR component is equal to $r$. Since its RS is equal to the RS of $\mathbf{L}$, if a set of its columns spans its CS, the corresponding columns of $\mathbf{L}$ will span the CS of $\mathbf{L}$. Accordingly, we rewrite (5) as

(7)

Note that (7) is still a high dimensional optimization problem, and the complexity of solving it directly remains significant. In this section, we propose an iterative randomized method which solves (7) with substantially reduced complexity. Algorithm 1 presents the proposed solver. It starts with a few randomly sampled columns and refines the set of sampled columns in each iteration.

Remark 1.

In both (7) and (8), we use the same symbol to designate the optimization variable. However, the variable in (8) has far fewer rows, since (8) represents the data using only the currently sampled columns, whose number is of the order of the rank.

Here we provide a brief explanation of the different steps of Algorithm 1:

Steps 2.1 and 2.2: The matrix is the sampled columns of . In steps 2.1 and 2.2, the redundant columns of are removed.

Steps 2.3 and 2.4: Define as the LR component of . Steps 2.3 and 2.4 aim at finding the columns of which do not lie in the CS of . Define as the column of . For a given , if (the LR component of ) lies in the CS of , the column of will be a sparse vector. Thus, if we remove a small portion of the elements of the column of with the largest magnitudes, the column of will approach the zero vector. Thus, by removing a small portion of the elements with largest magnitudes of each column of , step 2.3 aims to locate the columns of that do not lie in the CS of , namely, the columns of corresponding to the columns of with the largest -norms. Therefore, in step 2.5, these columns are added to the matrix of sampled columns .
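A compact numpy rendering of steps 2.3 and 2.4, under the same hedged reading: form the residual of representing the row sketch with the currently sampled columns, zero out a small fraction of the largest-magnitude entries in each column, and flag the columns that keep the largest remaining norms as new informative columns. The variable names, trimming fraction, and number of new columns are illustrative.

```python
import numpy as np

def locate_new_columns(D_w, D_s, Z, trim_frac=0.05, n_new=5):
    # Residual of representing the row sketch D_w by the sampled columns D_s.
    F = D_w - D_s @ Z
    # Step 2.3: in each column, zero out the trim_frac largest-magnitude entries.
    k = max(1, int(trim_frac * F.shape[0]))
    idx = np.argsort(np.abs(F), axis=0)[-k:, :]
    F_trimmed = F.copy()
    np.put_along_axis(F_trimmed, idx, 0.0, axis=0)
    # Step 2.4: columns whose trimmed residuals keep large norms are not represented
    # by the sampled columns; return them as new informative columns.
    scores = np.linalg.norm(F_trimmed, axis=0)
    return np.argsort(scores)[-n_new:]
```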

As an example, suppose follows Data model 1 where the CS of is independent of the CS of . In addition, assume and that all the columns of happen to be sampled from the first 180 columns of , i.e., all sampled columns belong to . Thus, the LR component of the last 20 columns of do not lie in the CS of . Fig. 3 shows . One can observe that the algorithm will automatically sample the columns corresponding to because if few elements (with the largest absolute values) of each column are eliminated, only the last 20 columns will be non-zero.

Remark 2.

The approach proposed in Algorithm 1 is not limited to the proposed method. The same idea can be used to enable more scalable versions of existing algorithms. For instance, the complexities of [11] and [21] can be reduced substantially, which is a remarkable speedup for high dimensional data.

Fig. 3: Matrix in step 2.3 of Algorithm 1.

1. Initialization
1.1 Set , , and equal to integers greater than 0. Set equal to a positive integer less than 50. is a known upper bound on .

1.2 Form by sampling rows of randomly.

1.3 Form by sampling columns of randomly.

2. For from 1 to
2.1 Locate informative columns: Define as the optimal point of

(8)

2.2 Remove redundant columns: Remove the zero rows of (or the rows with -norms close to 0) and remove the columns of corresponding to these zero rows.

2.3 Remove sparse residuals: Define . For each column of , remove the percent elements with largest absolute values.

2.4 Locate new informative columns: Define as the columns of which are corresponding to the columns of with maximum -norms.

2.5 Update sampled columns: .

End For

Output: Construct as the columns of corresponding to the sampled columns from (which form ). The columns of are the sampled columns.

Algorithm 1 Scalable Randomized Solver for (7)
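Putting the pieces together, here is a hedged end-to-end reading of Algorithm 1 that reuses the representative_columns and locate_new_columns helpers sketched above. The initialization sizes, thresholds, and number of iterations are illustrative placeholders, since the exact values in the paper were lost in extraction.

```python
import numpy as np

def algorithm1(D, n_rows=100, n_init=10, n_new=5, n_iter=3, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Steps 1.2-1.3: random row sketch and random initial columns.
    D_w = D[rng.choice(D.shape[0], size=n_rows, replace=False), :]
    cols = list(rng.choice(D.shape[1], size=n_init, replace=False))
    for _ in range(n_iter):
        # Step 2.1: locate informative columns within the current sample.
        Z = representative_columns(D_w, D_w[:, cols], lam)
        # Step 2.2: drop redundant columns (rows of Z with negligible norm).
        row_norms = np.linalg.norm(Z, axis=1)
        keep = row_norms > 1e-3 * row_norms.max()
        cols = [c for c, k in zip(cols, keep) if k]
        Z = Z[keep, :]
        # Steps 2.3-2.5: add columns that the current sample cannot represent.
        new = locate_new_columns(D_w, D_w[:, cols], Z, n_new=n_new)
        cols = sorted({int(c) for c in cols} | {int(c) for c in new})
    return cols   # indices of the sampled (representative) columns of D
```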

IV-A1 Sampling from highly structured big data

Algorithm 1 presumes that the rows of $\mathbf{L}$ are well distributed, such that a small number of randomly sampled rows of $\mathbf{D}$ span its RS. This may be true in many settings where the clustering structure is only along one direction (either the rows or the columns), in which case Algorithm 1 can successfully locate the informative columns. If, however, both the columns and the rows exhibit clustering structures and their distribution is highly non-uniform, neither the CS nor the RS can be captured concisely using random sampling. As such, in this section we address the scenario in which the rank of the LR component of the randomly sampled rows may not be equal to the rank of $\mathbf{L}$. We present an iterative CS-RS pursuit approach which converges to the CS and RS of $\mathbf{L}$ in a few iterations.

Initialization: Set equal to randomly sampled rows of . Set equal to randomly sampled columns of and set equal to an integer greater than 0.

For from 1 to do

1. Column Sampling
1.1 Locating informative columns: Apply Algorithm 1 without the initialization step to as follows: set equal to , set equal to , and set .
1.2 Update sub-matrix : Set sub-matrix equal to , the output of Step 2.5 of Algorithm 1.
1.3 Sample the columns: Form matrix using the columns of corresponding to the columns of which were used to form .

2. Row Sampling
2.1 Locating informative rows: Apply Algorithm 1 without the initialization step to as follows: set equal to , set equal to , and set .

2.2 Update sub-matrix : Set sub-matrix equal to , the output of step 2.5 of Algorithm 1.

2.3 Sample the rows: Form matrix using the rows of corresponding to the rows of which were used to form .

End For

Output: The matrices and are the sampled columns and rows, respectively.

Algorithm 2 Column/Row Subspace Pursuit Algorithm

The table of Algorithm 2, Fig. 4 and its caption provide the details of the proposed sampling approach along with the definitions of the matrices used. We start the cycle from the position marked I in Fig. 4, with the matrix of informative columns sampled from the current row sketch. The rank of its LR component equals the rank of the LR component of the row sketch, and its rows are a subset of the rows of the LR component of the corresponding columns of the full data. If the rows exhibit a clustering structure, this subset is unlikely to span the full row space, so the LR component of the sampled columns of the full data has higher rank than that of the current row sketch. We continue one cycle of the algorithm by going through steps II and 1 of Fig. 4 to update the sampled rows. Using a similar argument, the rank of the updated row sketch will be greater than before. Thus, if we run more cycles of the algorithm, each time updating the sampled columns and rows, their ranks increase. While there is no guarantee that the rank of the sampled columns converges to the rank of $\mathbf{L}$ (it can converge to a smaller value), our investigations have shown that Algorithm 2 performs quite well and that the RS and CS of the sketches converge to the RS and CS of $\mathbf{L}$ in very few iterations.
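A minimal rendering of the CS-RS pursuit cycle, assuming the representative_columns sketch above as the inner sampler (applied to the transpose for the row-sampling half). For clarity it solves the full convex sampler on each sketch rather than the randomized inner loop of Algorithm 1; the thresholds and number of cycles are illustrative.

```python
import numpy as np

def algorithm2(D, n_rows=50, n_cols=50, n_cycles=3, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    rows = rng.choice(D.shape[0], size=n_rows, replace=False)   # initial random rows
    cols = rng.choice(D.shape[1], size=n_cols, replace=False)   # initial random columns
    for _ in range(n_cycles):
        # Column sampling: informative columns of the current row sketch D[rows, :].
        Z = representative_columns(D[rows, :], lam=lam)
        p = np.linalg.norm(Z, axis=1)
        cols = np.where(p > 1e-3 * p.max())[0]
        # Row sampling: informative rows of the current column sketch D[:, cols],
        # obtained by applying the same sampler to its transpose.
        Zt = representative_columns(D[:, cols].T, lam=lam)
        q = np.linalg.norm(Zt, axis=1)
        rows = np.where(q > 1e-3 * q.max())[0]
    return cols, rows   # indices of the sampled columns and rows
```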

Fig. 4: Visualization of Algorithm 2. I: Matrix is obtained as the columns of corresponding to the columns which form . II: Algorithm 1 is applied to the matrix to update (the sampled rows of ). 1: Matrix is obtained as the rows of corresponding to the rows which form . 2: Algorithm 1 is applied to the matrix to update (the sampled columns of ).

IV-B Solving the convex optimization problems

The proposed methods are based on convex optimization problems, which can be solved using generic convex solvers. However, generic solvers do not scale well to high dimensional data. In this section, we use the Alternating Direction Method of Multipliers (ADMM) [4] to develop an efficient algorithm for solving (8). The optimization problem (5) can be solved using this algorithm as well; we would just need to substitute the sketch matrices with the full data matrix $\mathbf{D}$.

The optimization problem (8) can be rewritten as

(9)

which is equivalent to

(10)

where is the tuning parameter. The Lagrangian function of (10) can be written as

where and are the Lagrange multipliers and tr denotes the trace of a given matrix.

The ADMM approach then consists of an iterative procedure. Define as the optimization variables and as the Lagrange multipliers at the iteration. Define and define the element-wise function as . In addition, define a column-wise thresholding operator as: set equal to zero if , otherwise set , where and are the columns of and , respectively. Each iteration consists of the following steps:
1. Obtain by minimizing the Lagrangian function with respect to while the other variables are held constant. The optimal is obtained as

2. Similarly, is updated as

3. Update as

4. Update the Lagrange multipliers as follows

These four steps are repeated until the algorithm converges or the number of iterations exceeds a predefined threshold. In our numerical experiments, we initialize the optimization variables and the Lagrange multipliers with zero matrices.
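Because the update formulas did not survive extraction, the following is a from-scratch ADMM sketch for the assumed objective min_Z ||D - D Z||_1 + lam * (sum of row norms of Z), obtained by splitting the residual and the regularized variable into auxiliary blocks. It mirrors the structure described above (a least-squares update, element-wise soft-thresholding, a row/column-wise shrinkage, and zero initialization), but the paper's exact updates and parameters may differ.

```python
import numpy as np

def soft(X, t):
    """Element-wise soft-thresholding (prox of the l1 term)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def row_shrink(X, t):
    """Row-wise shrinkage: zero a row if its norm is below t, otherwise shrink it."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def admm_sampler(D, lam=1.0, mu=1.0, n_iter=200):
    """ADMM for min_Z sum|D - D Z| + lam * sum_i ||row_i(Z)||_2 (assumed form)."""
    n1, n2 = D.shape
    Z = np.zeros((n2, n2)); Q = np.zeros((n2, n2))        # Q carries the row-sparsity penalty
    E = np.zeros((n1, n2))                                # E carries the l1 residual D - D Z
    Y1 = np.zeros((n1, n2)); Y2 = np.zeros((n2, n2))      # Lagrange multipliers
    G = np.linalg.inv(D.T @ D + np.eye(n2))               # factor reused by every Z-update
    for _ in range(n_iter):
        # Least-squares Z-update coupling the constraints E = D - D Z and Q = Z.
        Z = G @ (D.T @ (D - E + Y1 / mu) + Q - Y2 / mu)
        # Element-wise soft-thresholding for the residual block.
        E = soft(D - D @ Z + Y1 / mu, 1.0 / mu)
        # Row-wise shrinkage for the regularized block.
        Q = row_shrink(Z + Y2 / mu, lam / mu)
        # Dual ascent on the multipliers.
        Y1 += mu * (D - D @ Z - E)
        Y2 += mu * (Z - Q)
    return Q   # the row norms of Q identify the representative columns
```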

V Robustness to Outlying Data Points

In many applications, the data contains outlying data points [31, 36]. In this section, we extend the proposed sampling algorithm (5) to make it robust to both sparse corruption and outlying data points. Suppose the given data can be expressed as

$$\mathbf{D} = \mathbf{L} + \mathbf{C} + \mathbf{S}, \qquad (11)$$

where $\mathbf{L}$ and $\mathbf{S}$ follow Data model 1. The matrix $\mathbf{C}$ has some non-zero columns modeling the outliers. The outlying columns do not lie in the column space of $\mathbf{L}$ and they cannot be decomposed into columns of $\mathbf{L}$ plus sparse corruption. Below, we provide two scenarios motivating the model (11).

I. Facial images with different illuminations were shown to lie in a low dimensional subspace [1]. Now suppose we have a dataset consisting of some sparsely corrupted face images along with a few images of random objects (e.g., buildings, cars, cities, …). The images of random objects cannot be modeled as face images with sparse corruption. We seek a sampling algorithm which can find informative face images, while ignoring the random images, in order to identify the different human subjects in the dataset.

II. A user rating matrix in recommender systems can be modeled as a LR matrix owing to the similarity between people's preferences for different products. To account for natural variability in user profiles, the LR plus sparse matrix model can better describe the data. However, profile-injection attacks, captured by the matrix $\mathbf{C}$, may introduce outliers in the user rating databases to promote or suppress certain products. The model (11) captures both element-wise and column-wise abnormal ratings.

The objective is to develop a column sampling algorithm which is simultaneously robust to sparse corruption and outlying columns. To this end, we propose the following optimization problem extending (5)

(12)

An auxiliary matrix variable in (12) cancels out the effect of the outlying columns in the residual matrix. Thus, the regularization term corresponding to this variable uses an $\ell_{1,2}$-norm, which promotes column sparsity. Since the outlying columns do not follow low dimensional structures, an outlier cannot be obtained as a linear combination of few data columns. The constraint in (12) plays an important role, as it prevents the scenario where an outlying column is sampled to cancel itself.
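Again hedging on the missing display, one formulation consistent with the description of (12) augments the self-representation with a column-sparse matrix that absorbs the outlying columns; the constraint is taken here to be a zero diagonal on the representation matrix, which is one plausible way of preventing a column from being sampled to cancel itself. These are assumptions, not the paper's exact program.

```python
import cvxpy as cp
import numpy as np

def robust_sample_columns(D, lam=1.0, gamma=1.0):
    """Hedged reading of (12): row-sparse Z selects columns, column-sparse C absorbs outliers."""
    n1, n2 = D.shape
    Z = cp.Variable((n2, n2))
    C = cp.Variable((n1, n2))
    obj = (cp.sum(cp.abs(D - D @ Z - C))
           + lam * cp.sum(cp.norm(Z, 2, axis=1))     # row sparsity: column selection
           + gamma * cp.sum(cp.norm(C, 2, axis=0)))  # column sparsity: outlying columns
    constraints = [cp.diag(Z) == 0]                  # assumed form of the self-cancellation constraint
    cp.Problem(cp.Minimize(obj), constraints).solve()
    p = np.linalg.norm(Z.value, axis=1)
    return np.where(p > 1e-3 * p.max())[0]           # indices of representative (inlier) columns
```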

The sampling algorithm (12) can locate the representative columns in the presence of sparse corruption and outlying columns. For instance, suppose the LR component consists of two column clusters whose ranks are equal to 5 and 2, respectively, and whose column spaces are independent. The matrix $\mathbf{S}$ follows Data model 1, and the last 50 columns of $\mathbf{C}$ are non-zero, with their elements sampled independently from a zero mean normal distribution. Fig. 5 compares the output of (12) with the state-of-the-art robust sampling algorithms in [21, 27] under the two settings shown in its first and second rows. Interestingly, the robust column sampling algorithm (12) substantially outperforms [21, 27] in both settings. As shown, the proposed method samples correctly from each cluster (at least 5 columns from the first cluster and at least 2 columns from the second), and unlike [21, 27], does not sample from the outliers.

Remark 3.

The sampling algorithm (12) can be used as a robust PCA algorithm: the sampling algorithm is applied to the data, and the decomposition algorithm (1) is applied to the sampled columns to learn the CS of $\mathbf{L}$. In [32], we investigate this problem in more detail.

Fig. 5: Comparing the performance of (12) with the algorithms in [21, 27]. The first and second rows correspond to different settings. The last 50 columns of $\mathbf{D}$ are the outliers.
Fig. 6: Left: Subspace recovery error versus the number of randomly sampled columns. Right: The rank of sampled columns through the iterations of Algorithm 1.
Fig. 7: A few frames of the video file. The frames in color are sampled by the robust column sampling algorithm.

VI Experimental Results

In this section, we apply the proposed sampling methods to both real and synthetic data and compare their performance to the state of the art.

VI-A Shortcomings of random sampling

In the first experiment, it is shown that random sampling requires too many columns to correctly learn the CS of $\mathbf{L}$. Suppose the given data follows Data model 1, where the LR component is generated such that, with high probability, the columns of $\mathbf{L}$ lie in a union of 60 independent 1-dimensional linear subspaces.

We apply Algorithm 1 to $\mathbf{D}$. The row sketch is formed using 100 randomly sampled rows of $\mathbf{D}$. Since the rows of $\mathbf{L}$ do not follow a clustering structure, the rank of the LR component of the row sketch is equal to 60 with overwhelming probability. The right plot of Fig. 6 shows the rank of the LR component of the sampled columns after each iteration of Algorithm 1. Each point is obtained by averaging over 20 independent runs. One can observe that 2 iterations suffice to locate the descriptive columns. Algorithm 1 samples 255 columns on average.

We apply the decomposition algorithm to the sampled columns. Define the recovery error as the relative error in recovering $\mathbf{L}$ from the CS basis learned by decomposing the sampled columns. If we learn the CS using the columns sampled by Algorithm 1, the recovery error is 0.02 on average. On the other hand, the left plot of Fig. 6 shows the recovery error based on columns sampled using uniform random sampling. As shown, even if we sample 2000 columns randomly, we cannot match the performance enabled by the informative column sampling algorithm (which here samples 255 columns on average).

VI-B Face sampling and video summarization with corrupted data

In this experiment, we use the face images in the Yale Face Database B [20]. This dataset consists of face images from 38 human subjects. For each subject, there are 64 images with different illuminations. We construct a data matrix with images from 6 human subjects (384 images in total, with the columns containing the vectorized images). The left panel of Fig. 8 shows the selected subjects. It has been observed that the images in the Yale dataset follow the LR plus sparse matrix model [5]. In addition, we randomly replace 2 percent of the pixels of each image with random pixel values, i.e., we add a synthetic sparse matrix to the images to increase the corruption. The sampling algorithm (5) is then applied to the resulting data. The right panel of Fig. 8 displays the images corresponding to the sampled columns. Clearly, the algorithm chooses at least one image from each subject.
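The corruption step described here, replacing a small fraction of pixels with random values, can be reproduced with a few lines of numpy; the 8-bit pixel range below is an assumption.

```python
import numpy as np

def corrupt_images(X, frac=0.02, seed=0):
    """Replace a fraction of the entries of each vectorized image (column of X) with random values."""
    rng = np.random.default_rng(seed)
    X = X.astype(float).copy()
    mask = rng.random(X.shape) < frac
    X[mask] = rng.uniform(0, 255, size=int(mask.sum()))   # assumed 8-bit pixel range
    return X
```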

Informative column sampling algorithms can be utilized for video summarization [10, 11]. In this experiment, we cut 1500 consecutive frames of a cartoon movie. The data matrix is formed by adding a sparse corruption matrix to the vectorized frames. The sampling algorithm (7) is then applied to find the representative columns (frames). The row sketch is constructed using 5000 randomly sampled rows of the data. The algorithm samples 52 frames, which represent almost all the important instances of the video. Fig. 7 shows some of the sampled frames (designated in color) along with neighboring frames in the video. The algorithm judiciously samples only one frame from each set of highly similar frames.

Fig. 8: Left: 6 images from the 6 human subjects forming the dataset. Right: 8 images corresponding to the columns sampled from the face data matrix.

VI-C Sampling from highly structured data

Two matrices are generated independently, similar to the way $\mathbf{L}$ was constructed in Section VI-A. The column space basis is set equal to the first 60 right singular vectors of the first matrix, the row space basis equal to the first 60 right singular vectors of the second matrix, and $\mathbf{L}$ is formed from these bases. Thus, the columns/rows of $\mathbf{L}$ lie in a union of 60 1-dimensional subspaces and their distribution is highly non-uniform. Table I shows the ranks of the sampled columns and rows through the iterations of Algorithm 2. The values are obtained as the average of 10 independent runs, with each average value rounded to the nearest integer. Initially, the rank of the randomly sampled rows is equal to 34. Thus, Algorithm 1 is not applicable in this case since the rows of the initial row sketch do not span the RS of $\mathbf{L}$. According to Table I, the rank of the sampled columns/rows increases through the iterations and converges to the rank of $\mathbf{L}$ in 3 iterations.

Iteration Number           0     1     2     3
Rank of sampled columns    -     45    55    60
Rank of sampled rows       34    50    58    60
TABLE I: Rank of columns/rows sampled by Algorithm 2

VI-D Robustness to outliers

In this experiment, the performance of the sampling algorithm (12) is compared to the state-of-the-art robust sampling algorithms presented in [21, 27]. Since the existing algorithms are not robust to sparse corruption, in this experiment the matrix $\mathbf{S}$ is set equal to zero, so the data can be represented as $\mathbf{D} = \mathbf{L} + \mathbf{C}$. The matrix $\mathbf{L}$ contains the inliers; its rank is equal to 5 and its columns are distributed randomly within its CS. The columns of $\mathbf{C}$ are the outliers, and the elements of $\mathbf{C}$ are sampled from a zero mean normal distribution. We perform 10 independent runs and average the resulting vectors over the runs. Fig. 9 compares the output of the algorithms for different numbers of outliers. One can observe that the existing algorithms fail to avoid sampling from the outliers. Interestingly, even with a large number of outliers, the proposed method does not sample from the outliers. However, if we increase the number of outliers to 600, the proposed method starts to sample from the outliers.

Fig. 9: Comparing the performance of (12) with [21, 27]. The blue points are sampled from the inliers and the black points are sampled from the outliers.

References

  • [1] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218–233, 2003.
  • [2] J. Bien and R. Tibshirani. Prototype selection for interpretable classification. The Annals of Applied Statistics, pages 2403–2424, 2011.
  • [3] C. Boutsidis, J. Sun, and N. Anerousis. Clustered subset selection and its applications on it service metrics. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 599–608. ACM, 2008.
  • [4] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
  • [5] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011.
  • [6] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
  • [7] A. Deshpande, L. Rademacher, S. Vempala, and G. Wang. Matrix approximation and projective clustering via volume sampling. In Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, pages 1117–1126. Society for Industrial and Applied Mathematics, 2006.
  • [8] P. Drineas, A. Frieze, R. Kannan, S. Vempala, and V. Vinay. Clustering large graphs via the singular value decomposition. Machine Learning, 56(1-3):9–33, 2004.
  • [9] P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Subspace sampling and relative-error matrix approximation: Column-based methods. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 316–326. Springer, 2006.
  • [10] E. Elhamifar, G. Sapiro, and S. Sastry. Dissimilarity-based sparse subset selection. IEEE Transactions on Software Engineering, 38(11), 2014.
  • [11] E. Elhamifar, G. Sapiro, and R. Vidal. See all by looking at a few: Sparse modeling for finding representative objects. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1600–1607, 2012.
  • [12] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. IEEE transactions on pattern analysis and machine intelligence, 35(11):2765–2781, 2013.
  • [13] E. Esser, M. Moller, S. Osher, G. Sapiro, and J. Xin. A convex model for nonnegative matrix factorization and dimensionality reduction on physical space. IEEE Transactions on Image Processing, 21(7):3239–3252, 2012.
  • [14] S. Garcia, J. Derrac, J. Cano, and F. Herrera. Prototype selection for nearest neighbor classification: Taxonomy and empirical study. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):417–435, 2012.
  • [15] M. Gu and S. C. Eisenstat. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM Journal on Scientific Computing, 17(4):848–869, 1996.
  • [16] N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217–288, 2011.
  • [17] J. Hartline, V. Mirrokni, and M. Sundararajan. Optimal marketing strategies over social networks. In Proceedings of the 17th international conference on World Wide Web, pages 189–198. ACM, 2008.
  • [18] Q. Ke and T. Kanade. Robust l 1 norm factorization in the presence of outliers and missing data by alternative convex programming. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 739–746, 2005.
  • [19] D. Lashkari and P. Golland. Convex clustering with exemplar-based models. Advances in neural information processing systems, 20, 2007.
  • [20] K.-C. Lee, J. Ho, and D. J. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5):684–698, 2005.
  • [21] H. Liu, Y. Liu, and F. Sun. Robust exemplar extraction using structured sparse coding. IEEE Transactions on Neural Networks and Learning Systems, 26(8):1816–1821, 2015.
  • [22] R. Liu, Z. Lin, S. Wei, and Z. Su. Solving principal component pursuit in linear time via filtering. arXiv preprint arXiv:1108.5359, 2011.
  • [23] L. Mackey, A. Talwalkar, and M. I. Jordan. Distributed matrix completion and robust factorization. arXiv preprint arXiv:1107.0789, 2011.
  • [24] L. W. Mackey, M. I. Jordan, and A. Talwalkar. Divide-and-conquer matrix factorization. In Advances in Neural Information Processing Systems (NIPS), pages 1134–1142, 2011.
  • [25] S. Minaee and Y. Wang. Screen content image segmentation using robust regression and sparse decomposition. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2016.
  • [26] S. Minaee and Y. Wang. Screen content image segmentation using sparse decomposition and total variation minimization. arXiv preprint arXiv:1602.02434, 2016.
  • [27] F. Nie, H. Huang, X. Cai, and C. H. Ding. Efficient and robust feature selection via joint ℓ2,1-norms minimization. In Advances in Neural Information Processing Systems, pages 1813–1821, 2010.
  • [28] M. Rahmani and G. Atia. High dimensional low rank plus sparse matrix decomposition. arXiv preprint arXiv:1502.00182, 2015.
  • [29] M. Rahmani and G. Atia. Innovation pursuit: A new approach to subspace clustering. arXiv preprint arXiv:1512.00907, 2015.
  • [30] M. Rahmani and G. Atia. Randomized robust subspace recovery for high dimensional data matrices. Preprint arXiv:1505.05901, 2015.
  • [31] M. Rahmani and G. Atia. Coherence pursuit: Fast, simple, and robust principal component analysis. arXiv preprint arXiv:1609.04789, 2016.
  • [32] M. Rahmani and G. Atia. Pca with robustness to both sparse corruption and outliers. arXiv preprint arXiv, 2016.
  • [33] M. Rahmani and G. Atia. A subspace learning approach for high dimensional matrix decomposition with efficient column/row sampling. In Proceedings of The 33rd International Conference on Machine Learning (ICML), pages 1206–1214, 2016.
  • [34] J. A. Tropp. Column subset selection, matrix factorization, and eigenvalue optimization. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 978–986. Society for Industrial and Applied Mathematics, 2009.
  • [35] J. Wright, A. Ganesh, K. Min, and Y. Ma. Compressive principal component pursuit. Information and Inference, 2(1):32–68, 2013.
  • [36] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. In Advances in Neural Information Processing Systems (NIPS), pages 2496–2504, 2010.
  • [37] T. Zhou and D. Tao. Godec: Randomized low-rank & sparse matrix decomposition in noisy case. In International Conference on Machine Learning (ICML), 2011.
  • [38] Z. Zhou, X. Li, J. Wright, E. Candès, and Y. Ma. Stable principal component pursuit. In IEEE International Symposium on Information Theory Proceedings (ISIT), pages 1518–1522. IEEE, 2010.