1 Introduction
In many applications in computer vision, including motion estimation and segmentation [19] and face recognition [2], high-dimensional datasets can be well approximated by a union of low-dimensional subspaces. Such applications have motivated a lot of research on the problems of learning one or more subspaces from data, a.k.a. subspace learning and subspace clustering, respectively. In practice, datasets are often contaminated by points that do not lie in the subspaces, i.e. outliers. In such situations, it is often essential to detect and reject these outliers before any subsequent processing/analysis is performed.

Prior work. We address the problem of outlier detection in the setting where the inlier data are assumed to lie close to a union of unknown low-dimensional subspaces (low relative to the dimension of the ambient space). A traditional method for solving this problem is RANSAC [13], which is based on randomly selecting a subset of points, fitting a subspace to them, and counting the number of points that are well fit by this subspace; this process is repeated for sufficiently many trials and the best fit is chosen. RANSAC is intrinsically combinatorial, and the number of trials needed to find a good estimate of the subspace grows exponentially with the subspace dimension. Consequently, the methods of choice have been to robustly learn the subspaces by penalizing the sum of unsquared distances (in lieu of the squared distances used in classical methods such as PCA) from points to the closest subspace [9, 22, 62, 61]. Such a penalty is robust to outliers because it reduces the contributions of large residuals arising from outliers. However, the resulting optimization problem is usually nonconvex, and a good initialization is extremely important for finding the optimal solution.
The groundbreaking work of Wright et al. [54] and Candès et al. [4] on using convex optimization techniques to solve the PCA problem with robustness to corrupted entries has led to many recent methods for PCA with robustness to outliers [55, 32, 25, 60, 21]. For example, Outlier Pursuit [55] uses the nuclear norm to seek low-rank solutions by solving the problem

min_{L, E} ||L||_* + λ ||E||_{2,1}   s.t.   X = L + E

for some λ > 0, where the nonzero columns of E indicate the outliers. A prominent advantage of convex optimization techniques is that they are guaranteed to correctly identify outliers under certain conditions. Very recently, several nonconvex outlier detection methods have also been developed with guaranteed correctness [20, 6]. Nonetheless, these methods typically model a unique inlier subspace, e.g., by a low-rank matrix in Outlier Pursuit, and therefore cannot deal with multiple inlier subspaces, since the union of multiple subspaces could be high-dimensional.
Another class of methods with theoretical guarantees for correctness exploits the fact that outliers are expected to have low similarity with other data points. In [5, 1], a multi-way similarity defined from the polar curvature is introduced, which has the advantage of exploiting the subspace structure. However, the number of point combinations in the multi-way similarity can be prohibitively large. Some recent works have explored using inner products between data points for outlier detection [17, 40]. Although computationally very efficient, these methods require the inliers to be well distributed and densely sampled within the subspaces.
Overview of our method and contributions. In this work, we address the problem of outlier detection by using data self-representation. The proposed approach builds on the self-expressiveness property of data in a union of low-dimensional subspaces, originally introduced in [11], which states that a point in a subspace can always be expressed as a linear combination of other points in the subspace. In particular, if the columns of X = [x_1, ..., x_N] lie in multiple subspaces, then for all j there exists a vector c_j such that x_j = X c_j with c_{jj} = 0, and the nonzero entries of c_j correspond to points in the same subspace as x_j. If the subspace dimensions are small, c_j can be taken to be sparse and can be computed by solving the minimization problem
min_{c_j} ||c_j||_1 + γ/2 ||x_j − X c_j||_2^2   s.t.   c_{jj} = 0    (1)
for some γ > 0. In [11], an undirected graph is constructed from C = [c_1, ..., c_N], in which each vertex corresponds to a data point, and the vertices corresponding to x_i and x_j are connected if either c_{ij} or c_{ji} is nonzero. Such a graph can be used to segment the data into their respective subspaces by applying spectral clustering [48] to the graph's Laplacian.

Consider now the case where X contains outliers to the subspaces. Figure 1 illustrates an example representation matrix computed from (1) for data drawn from a single subspace (face images from one individual) plus outliers (other images). In this case, the representation is such that inliers express themselves as linear combinations of a few other inliers, while outliers express themselves as linear combinations of both inliers and outliers. Motivated by this observation, we use a directed graph to model data relations: a directed edge from x_j to x_i indicates that x_j uses x_i in its representation (i.e. c_{ij} ≠ 0). Then a random walk on the representation graph initialized at an outlier will eventually leave the set of outliers and never return: once the random walk reaches an inlier, it cannot move back to an outlier. Therefore, we design a random walk process and identify outliers as those points whose probabilities tend to zero. Our work makes the following contributions with respect to the state of the art:


Our method can detect outliers using the probability distribution of a random walk on a graph constructed from data self-representation.
Our data self-representation allows our method to handle multiple inlier subspaces. Knowledge of the number of subspaces and their dimensions is not required, and the subspaces may have a nontrivial intersection.

Our method can exploit contextual information by using a random walk, i.e., the “outlierness” of a particular point depends on the “outlierness” of its neighbors.

Our analysis shows that our method correctly identifies outliers under suitable assumptions on the data distribution and connectivity of the representation graph.

Experiments on real image databases illustrate the effectiveness of our method.
2 Related work
Outlier detection by self-representation. Prior work has explored using data self-representation as a tool for outlier detection in a union of subspaces. Specifically, motivated by the observation that outliers do not have sparse representations, [43, 8] declare a point x_j to be an outlier if the norm of its representation vector c_j is above a threshold. However, this thresholding strategy is not robust to outliers that are close to each other, since their representation vectors may have small norms. LRR [28] solves for a low-rank self-representation matrix in lieu of a sparse representation and penalizes the sum of unsquared self-representation errors ||x_j − X c_j||_2, which makes it more robust to outliers. However, LRR requires the subspaces to be independent and the union of the subspaces to be low-dimensional [29].
Outlier detection by maximum consensus. In a diverse range of contexts such as maximum consensus [63, 7] and robust linear regression [33, 49], people have studied problems of the form

min_b Σ_{j=1}^N 1(|⟨x_j, b⟩ − y_j| > ε),    (2)

in which 1(·) is the indicator function. Note that if we set y_j = 1 for all j, then (2) can be interpreted as detecting outliers in data where the inliers lie close to an affine hyperplane. A problem closely related to (2) is

min_b Σ_{j=1}^N |⟨x_j, b⟩|   s.t.   ||b||_2 = 1,    (3)

which appears in many applications (e.g. see [39]). In particular, (3) can be used to learn a linear hyperplane from data corrupted by outliers. To detect outliers in a general low-dimensional subspace, one can apply (2) and (3) recursively to find a basis for the orthogonal complement of the subspace [46]. However, such an approach is limited because there can be only one inlier subspace and the dimension of that subspace must be known in advance.
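Problems of the form (3) are often attacked with iteratively reweighted least squares (IRLS). The sketch below is an illustrative solver under a smallest-principal-direction initialization; it is not necessarily the algorithm used in [39] or [46], and the data construction is hypothetical.

```python
import numpy as np

def l1_hyperplane_normal(X, n_iter=50, eps=1e-8):
    """Heuristic IRLS sketch for min_b sum_j |<x_j, b>| s.t. ||b||_2 = 1.

    X is D x N with data points as columns. Each reweighted step solves
    a quadratic problem whose minimizer is the smallest eigenvector of
    X diag(w) X^T. This is an illustrative solver, not a certified one.
    """
    U, _, _ = np.linalg.svd(X)
    b = U[:, -1]  # initialize with the smallest principal direction
    for _ in range(n_iter):
        r = X.T @ b                           # residuals <x_j, b>
        w = 1.0 / np.maximum(np.abs(r), eps)  # IRLS weights
        M = (X * w) @ X.T                     # = X diag(w) X^T
        _, vecs = np.linalg.eigh(M)
        b = vecs[:, 0]                        # smallest eigenvector
    return b

# Inliers on the plane z = 0 in R^3, plus a few off-plane outliers.
rng = np.random.default_rng(0)
inliers = np.vstack([rng.standard_normal((2, 200)), np.zeros((1, 200))])
outliers = rng.standard_normal((3, 5))
b = l1_hyperplane_normal(np.hstack([inliers, outliers]))
# Up to sign, b should be close to the plane normal (0, 0, 1).
```

Because the ℓ1 penalty discounts the few large outlier residuals, the recovered normal is far less sensitive to the outliers than a least-squares fit would be.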
Outlier detection by random walk. Perhaps the most well-known random walk based algorithm is PageRank [3]. Originally introduced to determine the authority of website pages from web graphs, PageRank and its variants have been used in different contexts for ranking the centrality of the vertices of a graph. In particular, [34, 35] propose OutRank, which ranks the “outlierness” of points in a dataset by applying PageRank to an undirected graph in which the weight of an edge is the cosine similarity or RBF similarity between the two connected data points. Points that have low centrality are then regarded as outliers. The outliers returned by OutRank are those that have low similarity to other data points; therefore, OutRank does not work if points in a subspace are not densely sampled.
3 Outlier detection by self-representation
In this section, we present our data self-representation based outlier detection method. We first describe the data self-representation and its associated properties for inliers and outliers. We then design a random walk algorithm on the representation graph whose limiting behavior allows us to identify the sets of inliers and outliers.
3.1 Data self-representation
Given an unlabeled dataset X = [x_1, ..., x_N] containing inliers and outliers, the first step of our algorithm is to construct the data self-representation matrix, denoted by C. As briefly discussed in the introduction (see also Figure 1), a self-representation matrix computed from (1) is observed to have different properties for inliers and outliers. Specifically, inliers usually use only other inliers for self-representation, i.e. for an inlier x_j, the representation c_j is such that c_{ij} ≠ 0 only if x_i is also an inlier, where c_{ij} is the i-th entry of c_j. This property is expected to hold if the inliers lie in a union of low-dimensional subspaces, as evidenced by the works [12, 43, 59, 52, 50]. As an intuitive explanation, if the inliers lie in a low-dimensional subspace, then any inlier has a sparse representation in terms of the other points in this subspace, and such a representation can be found by using the sparsity-inducing regularization in (1). In contrast, outliers are generally randomly distributed in the ambient space, so their self-representations usually involve both inliers and outliers.
Since the representation computed from (1) is sparse, there are potential connectivity issues in the representation graph, i.e. an inlier that is not well-connected to other inliers may be detected as an outlier, and an outlier that is not well-connected may be detected as an inlier. To address the connectivity issue, we compute the data self-representation matrix by solving the elastic net problem [64, 56]:
min_{c_j} λ ||c_j||_1 + (1 − λ)/2 ||c_j||_2^2 + γ/2 ||x_j − X c_j||_2^2   s.t.   c_{jj} = 0,    (4)
in which λ ∈ [0, 1] controls the balance between sparsity (via the ℓ1 regularization) and connectivity (via the ℓ2 regularization). Specifically, if λ is chosen close to 1, we can still expect that the computed representation for an inlier will only use inliers. The ℓ2 regularization is introduced to promote more connections between data points, i.e. if λ < 1, then one expects more nonzero entries in c_j. A detailed discussion of the representation computed from (4) and the connectivity issue is provided in Section 4.
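An elastic net representation of this kind can be computed one column at a time. The following is a minimal proximal-gradient (ISTA) sketch; the parameter values and the toy data are illustrative assumptions, not the tuned settings used in the experiments, and the solver in [18] is more sophisticated.

```python
import numpy as np

def elastic_net_coeffs(X, lam=0.9, gamma=50.0, n_iter=300):
    """Proximal-gradient (ISTA) sketch for an elastic net self-representation.

    For each column x_j of X, approximately minimizes
        lam*||c||_1 + (1-lam)/2*||c||_2^2 + gamma/2*||x_j - X c||_2^2
    subject to c_jj = 0 (enforced by deleting column j).
    """
    D, N = X.shape
    C = np.zeros((N, N))
    for j in range(N):
        x = X[:, j]
        Xj = np.delete(X, j, axis=1)
        # Step size 1/L, where L is the gradient Lipschitz constant.
        eta = 1.0 / (gamma * np.linalg.norm(Xj, 2) ** 2)
        c = np.zeros(N - 1)
        for _ in range(n_iter):
            v = c - eta * gamma * (Xj.T @ (Xj @ c - x))
            # Prox of lam*||.||_1 + (1-lam)/2*||.||_2^2: shrink, then scale.
            c = np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)
            c /= 1.0 + eta * (1.0 - lam)
        C[:, j] = np.insert(c, j, 0.0)
    return C

# Toy data: 20 unit-norm points in a 2-dimensional subspace of R^5.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 20))
X /= np.linalg.norm(X, axis=0)
C = elastic_net_coeffs(X)
```

With the step size set to the inverse Lipschitz constant, each proximal-gradient iteration is guaranteed not to increase the objective, so the computed columns are at least as good as the zero representation.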
3.2 Representation graph and random walk
We use a directed graph G = (V, E), which we call a representation graph, to capture the behavior of inliers and outliers from the representation matrix C. The vertices of G correspond to the data points {x_j}, and the edges are given by the (weighted) adjacency matrix R := |C|^T, with the absolute value taken elementwise, i.e., the weight of the edge from x_j to x_i is given by |c_{ij}|. In the representation graph, we expect that vertices corresponding to inliers will have edges that lead only to inliers, while vertices that are outliers will have edges that lead to both inliers and outliers. In other words, we do not expect any edges that lead from an inlier to an outlier.
Using the previous paragraph as motivation, we design a random walk procedure to identify the outliers. A random walk on the representation graph is a discrete time Markov chain, for which the transition probability from x_i at a given time to x_j at the next time is given by p_{ij}, with Σ_j p_{ij} = 1 for every i. By this definition, if the starting point of a random walk is an inlier, then it will never escape the set of inliers, as there is no edge going from any inlier to any outlier. In contrast, a random walk starting from an outlier will likely end up in an inlier state, since once it enters any inlier state it will never return to an outlier state. Thus, by using different data points to initialize random walks, outliers can be identified by observing the final probability distribution of the state of the random walks (see Figure 2).
If P ∈ R^{N×N} is the transition matrix with entries p_{ij}, then P is related to the representation matrix C by

p_{ij} = |c_{ji}| / Σ_k |c_{ki}|,   i.e.,   P = D^{−1} R,  where R = |C|^T and D = diag(R 1).    (5)
We define π^(t) ∈ R^{1×N} to be the state probability distribution at time t; the state transition is then given by π^(t) = π^(t−1) P. Thus, the t-step transition is π^(t) = π^(0) P^t, with π^(0) the chosen initial state probability distribution.
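The construction of P from C and the t-step update can be sketched in a few lines of numpy; the small matrix C below is a hypothetical example (two inliers that express each other, one outlier that uses both inliers).

```python
import numpy as np

def transition_matrix(C):
    """Row-stochastic transition matrix P from a representation matrix C.

    Column c_j of C represents x_j, so the walk leaving x_j follows the
    points that x_j uses: the weight of edge j -> k is |c_kj|.
    """
    R = np.abs(C).T
    row_sums = R.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # guard against points with empty representation
    return R / row_sums

def t_step_distribution(P, t, pi0=None):
    """pi^(t) = pi^(0) P^t, with the uniform distribution as default pi^(0)."""
    N = P.shape[0]
    pi = np.full(N, 1.0 / N) if pi0 is None else np.asarray(pi0, dtype=float)
    for _ in range(t):
        pi = pi @ P
    return pi

# Two inliers that express each other, one outlier that uses both inliers.
C = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
P = transition_matrix(C)
pi = t_step_distribution(P, 3)
```

Note that after a single step the probability mass on the outlier state is already zero, since no point uses the outlier in its representation.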
3.3 Main algorithm: Outlier detection by R-graph
We propose to perform outlier detection by using random walks on the representation graph G. We set the initial probability distribution to be uniform, i.e. π^(0) = (1/N) 1^T, and then compute the t-step distribution π^(t) = π^(0) P^t. This can be interpreted as initializing a random walk from each of the data points, and then averaging the probability distributions of all random walks after t steps. It is expected that all random walks (starting from either an inlier or an outlier) will eventually have high probabilities for the inlier states and low probabilities for the outlier states.
We note that π^(t) as defined above need not converge as t grows, as shown by the 2-dimensional example P = [0 1; 1 0], for which π^(t) oscillates whenever π^(0) ≠ (1/2, 1/2). Instead, we use the T-step Cesàro mean, given by
π̄^(T) = (1/T) Σ_{t=1}^{T} π^(0) P^t,    (6)
which is the average of the first T probability distributions (see Figure 2). The sequence {π̄^(T)} has the benefit that it always converges as T → ∞, and its limit coincides with the limit of π^(t) whenever the latter exists. In the next section, we give a more detailed discussion of this choice, its properties for outlier detection, and its convergence behavior.
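The two-state example above can be checked directly: the walk distribution oscillates forever, while its running average settles at (1/2, 1/2).

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # the walk deterministically swaps the two states
pi = np.array([1.0, 0.0])   # start in state 1
avg = np.zeros(2)
T = 1000
for _ in range(T):
    pi = pi @ P             # pi^(t) alternates between (0, 1) and (1, 0)
    avg += pi
avg /= T                    # Cesaro mean of pi^(1), ..., pi^(T)
# avg equals (0.5, 0.5) for even T, although pi itself never converges.
```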
Our complete algorithm is stated as Algorithm 1.
4 Theoretical guarantees for correctness
Let us first formally define the problem of outlier detection when data is drawn from a union of subspaces.
Problem 4.1 (Outlier detection in a union of subspaces).
Given data X = [x_1, ..., x_N] whose columns contain inliers drawn from an unknown number n of unknown subspaces {S_ℓ}_{ℓ=1}^n, as well as outliers that lie outside of ∪_{ℓ=1}^n S_ℓ, the goal is to identify the set of outliers.
Recall that the motivation for our method is that, ideally, there is no edge going from an inlier to an outlier in the representation graph. This motivates us to assume that a random walk starting at any inlier will eventually return to that inlier, i.e. inliers are essential states of the Markov chain, while outliers have a chance of never returning to themselves, i.e. outliers are inessential states. Formally, we work with a (time homogeneous) Markov chain with state space {1, ..., N}, in which each state j corresponds to the data point x_j, and the transition probability is given by (5). Given two states i and j, we say that j is accessible from i, denoted i → j, if there exists some t ≥ 0 such that the (i, j)-th entry of P^t is positive. Intuitively, i → j if a random walk can move from i to j in finitely many steps.
Definition 4.1 (Essential and inessential state [23]).
A state i is essential if for all j such that i → j it is also true that j → i. A state is inessential if it is not essential.
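On a finite chain, this definition can be checked mechanically: compute the accessibility relation as the transitive closure of the edge set and test whether every successor of a state can access it back. A sketch on a hypothetical three-state chain (two inliers forming a closed class, one outlier leaking into it):

```python
import numpy as np

def essential_states(P):
    """Boolean mask of the essential states of a finite Markov chain.

    State i is essential iff every state j accessible from i can access
    i back. Accessibility is the transitive closure of the directed
    graph with an edge i -> j whenever P[i, j] > 0.
    """
    N = P.shape[0]
    reach = (P > 0) | np.eye(N, dtype=bool)
    for k in range(N):  # Floyd-Warshall style transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    # i essential  <=>  for every j: reach[i, j] implies reach[j, i]
    return np.all(~reach | reach.T, axis=1)

# States 0 and 1 form a closed class; state 2 leaks into it and is inessential.
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
mask = essential_states(P)
```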
Our aim in this section is to establish that if inliers connect only to themselves, i.e. the representation is subspace-preserving (Section 4.1), and the representation satisfies certain connectivity conditions (Section 4.2), then inliers are essential states of the Markov chain and outliers are inessential states. Subsequently, in Section 4.3 we show that the Cesàro mean (6) identifies essential and inessential states, thus establishing the correctness of Algorithm 1 for outlier detection.
4.1 Subspace-preserving representation
We first establish that inliers express themselves using only other inliers when they lie in a union of low-dimensional subspaces. This property is well-studied in the subspace clustering literature; we borrow terminology and results from prior work and adapt them to our task of outlier detection.
Definition 4.2 (Subspace-preserving representation [47]).
If x_j ∈ S_ℓ is an inlier, then the representation c_j is called subspace-preserving if its nonzero entries correspond to points in S_ℓ, i.e. c_{ij} ≠ 0 only if x_i ∈ S_ℓ. The representation matrix C is called subspace-preserving if c_j is subspace-preserving for every inlier x_j.
A representation matrix is thus subspace-preserving if each inlier uses only points in its own subspace for its representation. A subspace-preserving representation can be obtained by solving (4) when certain geometric conditions hold. The following result is adapted from [56]; it assumes that the columns of X are normalized to have unit ℓ2 norm.
Theorem 4.1.
Let x_j ∈ S_ℓ be an inlier. Define the oracle point of x_j to be δ_j := γ (x_j − X^ℓ_{−j} c̄_j), where X^ℓ_{−j} is the matrix containing all points in S_ℓ except x_j and

c̄_j := argmin_c  λ ||c||_1 + (1 − λ)/2 ||c||_2^2 + γ/2 ||x_j − X^ℓ_{−j} c||_2^2.

The solution to (4) is subspace-preserving if

⟨x_j, δ̄_j⟩ > max_{k : x_k ∉ S_ℓ} |⟨x_k, δ̄_j⟩| + 2(1 − λ)/λ,    (7)

where δ̄_j := δ_j / ||δ_j||_2.
An outline of the proof is given in the appendix. Note that the oracle point δ_j lies in S_ℓ and that its definition depends only on points in S_ℓ. The first term in condition (7) captures the distribution of points of S_ℓ near x_j, and is expected to be large if the neighborhood of x_j is well covered by points from S_ℓ. The second term characterizes the similarity between the oracle point and all other data points, which include the outliers and the inliers from other subspaces. The condition requires the former to be larger than the latter by a margin that is close to zero when λ is close to 1. Overall, condition (7) requires that points in S_ℓ are dense around the oracle point, which itself lies in S_ℓ, and that outliers and inliers from other subspaces do not lie close to it.
Even if (7) holds for all inliers, so that the representation is subspace-preserving, we cannot automatically establish an equivalence between inliers/outliers and essential/inessential states because of potential complications related to the graph’s connectivity. This is addressed next.
4.2 Connectivity considerations
In the context of sparse subspace clustering, the well-known connectivity issue [36, 53, 30, 56, 51] refers to the problem that points in the same subspace may not be well-connected in the representation graph, which may cause over-segmentation of the true clusters. Thus, one has to assume that each true cluster is connected in order to guarantee correct clustering. For the outlier detection problem, it may happen that an inlier is inessential, and thus classified as an outlier, when the inliers are not well-connected; similarly, an outlier may be essential, and thus classified as an inlier, if it is not connected to at least one inlier. In fact, the situation is even more involved, since the representation graph is directed and inliers and outliers behave differently.

Suppose, as a first example, that there exists an inlier that is never used in the representation of any other inlier. This is equivalent to saying that there is no edge going into this point from any other inlier. Note that the subspace-preserving property can still hold if this inlier expresses itself using other inliers. Yet, since a random walk leaving this point would never return, it cannot be identified as an inlier. To avoid such cases, we need the following assumption.
Assumption 4.1.
For each inlier subspace S_ℓ, the vertices of the representation graph corresponding to points in S_ℓ are strongly connected, i.e. there is a directed path in each direction between each pair of such vertices.
Assumption 4.1 requires good connectivity between points from the same inlier subspace. We also need good connectivity between outliers and inliers. Consider the example where there is a subset of outliers whose outgoing edges all lead to points within that same subset. In this case, these points cannot be detected as outliers, since their representation pattern is the same as that of the inliers. The next assumption rules out this case.
Assumption 4.2.
For each subset of outliers there exists an edge in the representation graph that goes from a point in this subset to an inlier or to an outlier outside this subset.
4.3 Main theorem: guaranteed outlier detection
We can now establish the correctness of our representation graph based outlier detection method, stated as Algorithm 1.
Theorem 4.2.
Suppose the representation matrix C is subspace-preserving and that Assumptions 4.1 and 4.2 hold. Then a state of the Markov chain given by (5) is essential if and only if it corresponds to an inlier, and the limit π* := lim_{T→∞} π̄^(T) of the Cesàro mean (6) satisfies π*_j = 0 if and only if x_j is an outlier.
Theorem 4.2 is a direct consequence of the following two facts whose proofs are provided in the appendix.
Lemma 4.1.
If the representation matrix C is subspace-preserving and Assumptions 4.1 and 4.2 hold, then every inlier is an essential state of the Markov chain and every outlier is an inessential state.
Lemma 4.2.
For any probability transition matrix P, the averaged probability distribution π̄^(T) in (6) satisfies lim_{T→∞} π̄^(T) = π*, where π* is a stationary distribution of P such that π*_j = 0 if and only if state j is inessential.
Theorem 4.2 shows that Problem 4.1 is solved by Algorithm 1 if the data satisfies the geometric conditions in (7) and the representation graph satisfies the required connectivity assumptions.
We note that the Cesàro-mean random walk adopted here is different from the popular random walk with restart adopted by, e.g., PageRank. The benefit of PageRank is that the random walk converges to a unique stationary distribution. However, it is not clear whether this stationary distribution identifies the outliers; in fact, all states in the PageRank random walk are essential, so the probabilities of the outlier states do not converge to zero. In contrast, the random walk in our method need not have a unique stationary distribution, but the Cesàro mean does converge to one of the stationary distributions, which we have shown can be used to identify outliers. A detailed discussion is in the appendix.
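The difference can be seen numerically on a hypothetical three-point graph (two mutually connected inliers, one outlier pointing into them): the Cesàro mean of the plain walk drives the outlier's mass to zero, while teleportation keeps it strictly positive. The damping value 0.85 is the conventional PageRank choice, used here only for illustration.

```python
import numpy as np

# Two mutually connected inliers; one outlier pointing into them.
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
N = P.shape[0]
pi0 = np.full(N, 1.0 / N)

# Cesaro-averaged plain walk (our scheme): the outlier's mass vanishes.
pi, avg, T = pi0.copy(), np.zeros(N), 2000
for _ in range(T):
    pi = pi @ P
    avg += pi
avg /= T

# PageRank-style walk with restart: teleportation makes every state
# essential, so the outlier keeps positive stationary probability.
alpha = 0.85  # conventional damping value, for illustration only
pr = pi0.copy()
for _ in range(2000):
    pr = alpha * (pr @ P) + (1 - alpha) * pi0
```

Here the averaged plain walk assigns the outlier exactly zero probability, whereas the restarted walk leaves it (1 − α)/3 of the mass, so thresholding the PageRank vector would be less clear-cut.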
5 Experiments
We use several image databases (see Figure 3) to evaluate our outlier detection method (Algorithm 1). For computing the representation in (4), we use the solver in [18]. The parameter λ is fixed and γ is tuned to each dataset; in particular, the solution to (4) is nonzero if and only if γ is larger than a data-dependent threshold. The number of random walk steps T is fixed across all experiments.
OutRank  CoP  REAPER  Outlier Pursuit  LRR  DPCP  thresholding  R-graph (ours)  
Inliers: all images from one subject.  Outliers: images taken from other subjects  
AUC  0.536  0.556  0.964  0.972  0.857  0.952  0.844  0.986 
F1  0.552  0.563  0.911  0.918  0.797  0.885  0.763  0.951 
Inliers: all images from three subjects.  Outliers: images taken from other subjects  
AUC  0.519  0.529  0.932  0.968  0.807  0.888  0.848  0.985 
F1  0.288  0.292  0.758  0.856  0.509  0.653  0.545  0.878 
5.1 Experimental setup
Databases. We construct outlier detection tasks from three publicly available databases. The Extended Yale B [15] dataset contains frontal face images of 38 individuals, each under 64 different illumination conditions; we downsample the face images before use. The Caltech256 [16] database contains images from 256 object categories, each containing at least 80 images. There is also an additional “clutter” category in this database that contains images of a wide variety of content, which are used as outliers. The Coil100 dataset [37] contains images of 100 different objects; each object has 72 images taken at pose intervals of 5 degrees. For the Extended Yale B and Coil100 datasets we use raw pixel intensities as the feature representation. Images in Caltech256 are represented by a 4096-dimensional feature vector extracted from the last fully connected layer of the 16-layer VGG network [42].
Baselines. We compare with other representative methods that are designed for detecting outliers in one or multiple subspaces: CoP [40], OutlierPursuit [55], REAPER [21], DPCP [46], LRR [28] and thresholding [43]. We also compare with a graph based method: OutRank [34, 35]. We implement the inexact ALM [26] for solving the optimization in OutlierPursuit. For LRR, we use the code available online at https://sites.google.com/site/guangcanliu/. For DPCP, we use the code provided by the authors. All other methods are implemented according to the description in their respective papers.
Evaluation metric. Each outlier detection method generates a numerical value for each data point that indicates its “outlierness”, and a threshold value is required for separating inliers from outliers. A Receiver Operating Characteristic (ROC) curve plots the true positive rate against the false positive rate for all threshold values. We use the area under this curve (AUC) as a metric of performance. The AUC always lies between 0 and 1, with a perfect model having an AUC of 1 and a model that guesses randomly having an AUC of approximately 0.5.
As a second metric, we use the F1-score, which is the harmonic mean of precision and recall. The F1-score depends on the threshold, and we report the largest F1-score across all thresholds. An F1-score of 1 means that there exists a threshold giving both precision and recall equal to 1, i.e. a perfect separation of inliers and outliers. The reported numbers for all experiments discussed in this section are averages over 50 trials.
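Both metrics are straightforward to compute from the per-point outlierness scores; the following is a self-contained numpy sketch (a rank-based AUC and a threshold sweep for the best F1), with a small made-up example at the end.

```python
import numpy as np

def auc_score(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney) statistic.

    labels: 1 for outliers (positives), 0 for inliers; scores: larger
    means more outlier-like. Assumes no tied scores, for brevity.
    """
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(labels))
    ranks[order] = np.arange(1, len(labels) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def best_f1(labels, scores):
    """Largest F1-score over all thresholds on the outlierness scores."""
    labels = np.asarray(labels)
    best = 0.0
    for thr in np.unique(scores):
        pred = scores >= thr
        tp = np.sum(pred & (labels == 1))
        if tp == 0:
            continue
        precision = tp / pred.sum()
        recall = tp / labels.sum()
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
auc = auc_score(labels, scores)
f1 = best_f1(labels, scores)
```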
5.2 Outliers in face images
OutRank  CoP  REAPER  Outlier Pursuit  LRR  DPCP  thresholding  R-graph (ours)  
Inliers: images from one category.  Outliers: images from the “clutter” category  
AUC  0.897  0.905  0.816  0.837  0.907  0.783  0.772  0.948 
F1  0.866  0.880  0.808  0.823  0.893  0.785  0.772  0.914 
Inliers: images from three categories.  Outliers: images from the “clutter” category  
AUC  0.574  0.676  0.796  0.788  0.479  0.798  0.810  0.929 
F1  0.682  0.718  0.784  0.779  0.671  0.777  0.782  0.880 
Inliers: images from five categories.  Outliers: images from the “clutter” category  
AUC  0.407  0.487  0.657  0.629  0.337  0.676  0.774  0.913 
F1  0.667  0.672  0.716  0.711  0.667  0.715  0.762  0.858 
OutRank  CoP  REAPER  Outlier Pursuit  LRR  DPCP  thresholding  R-graph (ours)  
Inliers: all images from one category.  Outliers: images from other categories, at most one per category  
AUC  0.836  0.843  0.900  0.908  0.847  0.900  0.991  0.997 
F1  0.862  0.866  0.892  0.902  0.872  0.882  0.978  0.990 
Inliers: all images from four categories.  Outliers: images from other categories, at most one per category  
AUC  0.613  0.628  0.877  0.837  0.687  0.859  0.992  0.996 
F1  0.491  0.500  0.703  0.686  0.541  0.684  0.941  0.970 
Inliers: all images from seven categories.  Outliers: images from other categories, at most one per category  
AUC  0.570  0.580  0.824  0.822  0.628  0.804  0.991  0.996 
F1  0.342  0.346  0.541  0.528  0.366  0.511  0.897  0.955 
Suppose we are given a set of face images of one or more individuals, but the dataset is also corrupted by face images of a variety of other individuals. The task is to detect and remove these outlying face images. It is known that images of a face under different lighting conditions lie approximately in a low-dimensional subspace. Thus, this task can be modeled as the problem of outlier detection in one subspace or in a union of subspaces.
We use the Extended Yale B database. In the first experiment, we randomly choose a single individual from the 38 subjects and use all 64 images of this subject as inliers. We then choose images from the remaining 37 subjects as outliers, taking at most one image from each subject. The average AUC and F1 measures over 50 trials are reported in Table 1. For a fair comparison, we fine-tuned the parameters of all methods.
Comparison to the state of the art. We see that our representation graph based method, R-graph, outperforms the other methods. Besides our method, the REAPER, Outlier Pursuit and DPCP algorithms all perform well. These three methods learn a single subspace and treat the points that do not fit that subspace as outliers, making them well suited for this data (the images of one individual can be well approximated by a single low-dimensional subspace).
The LRR and thresholding methods use data self-representation, as does our method. However, LRR does not give good outlier detection results, probably because its algorithm for solving the LRR model is not guaranteed to converge to a global optimum. The thresholding method also does not give good results, showing that the magnitude of the representation vector is not a robust measure for classifying outliers. By considering the connection patterns in the representation graph, our method achieves significantly better results.
The performance of OutRank and CoP is significantly worse than that of the other methods. This poor performance can be explained by their use of a coherence-based distance, which fails to capture similarity between data points when the data lie in subspaces. For example, it can be argued that the coherence between images of two different faces under the same illumination condition can be higher than that between two images of the same face under different illumination conditions.
Dealing with multiple inlier groups. To test the ability of the methods to deal with multiple inlier groups, we designed a second experiment in which the inliers are taken to be the images of three randomly chosen subjects, and the outliers are randomly drawn from the other subjects as before. For all methods, we use the same parameters as in the previous experiment to test the robustness to parameter tuning. The results of this experiment are also reported in Table 1.
We can see that Outlier Pursuit and our R-graph are the two best methods. Although Outlier Pursuit only models a single low-dimensional subspace, it can still deal with this data, since the union of the three subspaces corresponding to the three subjects in the inlier set is still low-dimensional and can be treated as a single low-dimensional subspace. However, we postulate that Outlier Pursuit will eventually fail as the number of inlier groups increases, since the union of the low-dimensional subspaces will no longer be low-rank. Our method does not have this limitation.
Similar to Outlier Pursuit, both REAPER and DPCP can, in principle, handle multiple inlier groups by fitting a single subspace to their union. However, REAPER and DPCP require as input the dimension of the union of the inlier subspaces, which can be hard to estimate in practice. Indeed, in Table 1 we observe that the performance of REAPER and DPCP is less competitive in comparison with Outlier Pursuit and our R-graph in the three-subspace case.
5.3 Outliers in images of objects
We test the ability of the methods to identify one or several object categories that appear frequently in a set of images, amidst outliers consisting of objects that occur rarely. For Caltech256, images in one, three, or five randomly chosen categories are used as inliers in three different experiments, and we cap the number of images taken from each category. We then randomly pick images from the “clutter” category as outliers, such that each experiment contains a fixed fraction of outliers. For Coil100, we randomly pick one, four, or seven categories as inliers and pick at most one image from each of the remaining categories as outliers.
The results are reported in Table 2 and Table 3. We see that our R-graph method achieves the best performance. The two geometric distance based methods, OutRank and CoP, achieve good results when there is one inlier category, but deteriorate as the number of inlier categories increases. The performances of REAPER, Outlier Pursuit and DPCP are similar to each other and worse than that of our method. This may be because they all try to fit a linear subspace to the data, while the data in these two databases may be better modeled by nonlinear manifolds. The thresholding method and our representation graph method are both based on data self-expression, which appears to be more powerful for this data.
6 Conclusion
We presented an outlier detection method that combines data self-representation and random walks on a representation graph. Unlike many prior methods for robust PCA, our method is able to deal with multiple subspaces and does not require the number of subspaces or their dimensions to be known. Our analysis showed that the method is guaranteed to identify outliers when certain geometric conditions are satisfied and two connectivity assumptions hold. In our experiments on face image and object image databases, our method achieves state-of-the-art performance.
Acknowledgment
This work was supported by NSF BIGDATA grant 1447822. The authors also thank Manolis Tsakiris, Conner Lane and Chun-Guang Li for helpful comments.
The appendix is organized as follows. In Section A we discuss subspace-preserving representations and give an outline of the proof of Theorem 4.1. Section B contains relevant background on Markov chain theory, which is then used in Section C to prove Lemma 4.1 and Lemma 4.2, as well as to provide an in-depth discussion of the Cesàro mean used for outlier detection. In Section D we provide additional results for the experiments on the Extended Yale B database that give further insight into the behavior of the methods.
Appendix A Subspace-preserving representation and proof of Theorem 4.1
The idea of a subspace-preserving representation has been extensively studied in the subspace clustering literature to guarantee the correctness of clustering [12, 43, 44, 31, 27, 10, 38, 59, 17, 58, 56, 50, 52, 24]. Concretely, the data in a subspace clustering task are assumed to lie in a union of low-dimensional subspaces, without any outliers lying outside of the subspaces. A data self-representation matrix is called subspace-preserving if each point uses only points from its own subspace in its representation.
Theoretical results in subspace clustering can be adapted to study subspace-preserving representations in the presence of outliers. Here, we use the analysis and result from [56], which studied the elastic net representation (4) for subspace clustering, to prove a subspace-preserving representation result in the presence of outliers, i.e. Theorem 4.1. We also present a corollary of Theorem 4.1 that allows us to compare our result with other subspace clustering results.
A.1 Proof of Theorem 4.1
The proof of Theorem 4.1 follows mostly from [56]. We provide an outline of the proof for completeness.
Consider the vector , which is the solution of the problem in the statement of Theorem 4.1. Notice that the entries of correspond to columns of the data matrix . One can subsequently construct a representation vector by padding additional zeros to at entries corresponding to points in that are not in . This vector is trivially subspace-preserving by construction, and the idea of the proof is to show that it is a solution to the optimization problem (4) (and that no other vector is). A sufficient condition for this to hold is that , which is computed from , has low correlation with all points . More precisely, we have the following lemma.
Lemma A.1 ([56, Lemma 3.1]).
The vector is subspace-preserving if for all .
Lemma A.1 can be proved by using the optimality conditions of the optimization problem in (4). Equivalently, it suggests that is subspace-preserving if
(A.1) 
To get more meaningful results, we need an upper bound on . This is provided by the following lemma.
Lemma A.2 ([57, Lemma C.2]).
Let be the maximum coherence between the oracle point and the columns of , i.e. . Then
(A.2) 
A.2 Discussion
Another commonly used geometric quantity for characterizing when representations will be subspace-preserving is the inradius of sets of points [43, 44, 59, 58, 53, 52, 50]. In order to understand the relationship to the results in these works, we present a corollary of Theorem 4.1.
Definition A.1 (inradius).
The (relative) inradius of a convex body , denoted as , is the radius of the largest ball in the span of that can be inscribed in .
Corollary A.1.
The inradius captures the distribution of the columns of , i.e. it is large if the points are well spread out in . Thus, the condition in (A.5) is easier to satisfy if the set of points in is dense and covers the entire subspace well. Note that this requirement is stronger than that of Theorem 4.1, which only requires points in to be dense around the oracle point (i.e. it requires to be large). In fact, it is established in [56] that , so the condition in (A.5) is a stronger requirement than that of (7) in Theorem 4.1.
Appendix B Background on Markov chain theory
We present background material on Markov chain theory that will help us understand the Cesàro mean (6) used for outlier detection in our method. The following material is compiled from the textbooks [41, 14, 45, 23] and the website http://www.math.uah.edu/stat.
We consider a Markov chain on a finite state space with transition probabilities for . The step transition probabilities are defined to be .
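As a small numerical illustration (the transition matrix below is an arbitrary toy example, not from the paper), the multi-step transition probabilities are simply the entries of the corresponding matrix power, and they satisfy the Chapman–Kolmogorov relation:

```python
import numpy as np

# Toy 3-state transition matrix (rows sum to 1); the values are illustrative only.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

def n_step(P, n):
    """n-step transition probabilities: the (i, j) entry of P^n."""
    return np.linalg.matrix_power(P, n)

# Chapman-Kolmogorov: the (m+n)-step probabilities factor through any
# intermediate time, i.e. P^(m+n) = P^m @ P^n.
assert np.allclose(n_step(P, 5), n_step(P, 2) @ n_step(P, 3))
```

Each `n_step(P, n)` is again row-stochastic, as a product of row-stochastic matrices.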
b.1 Decomposition of the state space
A Markov chain can be decomposed into more basic and manageable parts.
Definition B.1.
State is accessible from state , denoted as , if for some . We say that the states and communicate with each other, denoted by , if and .
Since it can be shown that is an equivalence relation, it induces a partition of the state space into disjoint equivalence classes known as communicating classes. We are interested in each of the closed communicating classes.
Definition B.2.
A nonempty set is called a closed set if for and .
Note that states in a closed communicating class are essential while states in other communicating classes are inessential [23].
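Computationally, the communicating classes of a finite chain are the strongly connected components of the directed graph that has an edge i → j whenever the transition probability from i to j is positive, and a class is closed when no probability mass leaves it. A small sketch with a made-up matrix:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# States 0 and 1 form a closed communicating class; state 2 can reach
# them but cannot be reached back, so it is inessential.
P = np.array([[0.3, 0.7, 0.0],
              [0.6, 0.4, 0.0],
              [0.5, 0.0, 0.5]])

# Communicating classes = strongly connected components of the support graph.
n_classes, labels = connected_components(P > 0, directed=True,
                                         connection='strong')

def is_closed(P, members):
    """A communicating class is closed iff no transition leaves it."""
    outside = [j for j in range(len(P)) if j not in members]
    return P[np.ix_(members, outside)].sum() == 0
```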
Theorem B.1 ([41]).
The state space has the unique decomposition , where is the set of inessential states, and are closed communicating classes containing essential states.
By Theorem B.1, the state space of any Markov chain is composed of the essential states and inessential states, and the essential states can be further decomposed into a union of communicating classes. Therefore, the probability transition matrix can be written in the following form (up to permutation of the states):
(B.1) 
B.2 Stationary distribution
A nonnegative row vector is called a stationary distribution for the Markov chain if it satisfies .
Theorem B.2 ([23, Proposition 1.14, Corollary 1.17]).
A Markov chain consisting of one closed communicating class has a unique stationary distribution. Moreover, each entry of the stationary distribution is positive.
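For a chain consisting of one closed communicating class, the stationary distribution can be computed as the left eigenvector of the transition matrix for eigenvalue 1, normalized to sum to one. A sketch on a made-up two-state chain:

```python
import numpy as np

# A single closed communicating class: the chain is irreducible.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def stationary(P):
    """Solve pi P = pi with sum(pi) = 1 via the left eigenvector of P
    associated with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

pi = stationary(P)
```

As Theorem B.2 asserts, every entry of the computed distribution is strictly positive.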
B.3 Convergence of the Cesàro mean
Let be the probability that the chain starting at enters for the first time at the th step. The hitting probability is the probability that the random walk ever makes a transition to state when started at , i.e.
(B.4) 
The mean return time is the expected time for a random walk starting from state to return to state . A general convergence result is stated as follows.
Theorem B.3 ([45, Theorem 3.3.1]).
For any ,
(B.5) 
This result can be simplified by using the decomposition in Theorem B.1, which leads to the following lemma.
Lemma B.1.
If are in the same closed communicating class, then . Also, if is an inessential state and is a closed communicating class, then for all , where is the hitting probability from state to class .
The following result relates the mean return time with the stationary distribution.
Lemma B.2.
For every closed communicating class , it holds that (entrywise division), where is the vector of mean return times of states in . If is an inessential state, then .
By combining Theorem B.3 with Lemma B.1 and Lemma B.2, the Cesàro limit of a probability transition matrix of the form in (B.1) can be written as
(B.6) 
in which is a column vector of hitting probabilities from each state in to class .
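To make Theorem B.3 and the limit (B.6) concrete, here is a small numerical check on a toy two-state example: the powers of a periodic transition matrix oscillate, while their Cesàro mean converges to the matrix with all entries 1/2, matching the uniform stationary distribution.

```python
import numpy as np

P = np.array([[0., 1.],
              [1., 0.]])      # period-2 chain: P^t alternates between P and I

def cesaro_mean(P, t):
    """(1/t) * sum_{s=1}^{t} P^s, the Cesaro average of the powers of P."""
    acc, Ps = np.zeros_like(P), np.eye(len(P))
    for _ in range(t):
        Ps = Ps @ P
        acc += Ps
    return acc / t

# The powers themselves oscillate and never converge...
assert np.allclose(np.linalg.matrix_power(P, 7), P)
assert np.allclose(np.linalg.matrix_power(P, 8), np.eye(2))
```

...while `cesaro_mean(P, t)` approaches the constant matrix with all entries 1/2 as t grows.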
We note that while the Cesàro mean converges, the step transition probability does not necessarily converge. Consider, for example, the probability transition matrix . In this case, when is odd and when is even, i.e. is oscillating and never converges. In general, converges if and only if each of the closed communicating classes for is aperiodic.
Appendix C Guaranteed outlier detection
Our outlier detection method based on the representation graph is guaranteed to correctly identify outliers in a union of subspaces when the representation is subspace-preserving and the connectivity assumptions are satisfied. In this section, we first prove that the inliers and outliers in the data correspond to essential and inessential states, respectively, of the Markov chain associated with the representation graph (Lemma 4.1). Then, we show that the average of the first t-step probability distributions identifies essential and inessential states (Lemma 4.2), thus establishing the correctness of our method.
C.1 Proof of Lemma 4.1
Recall that we work with a Markov chain with state space , in which each state corresponds to the point in the data matrix .
First, we show that any inlier point corresponds to an essential state of the Markov chain. Let be any point such that . Since the representation matrix is subspace-preserving, we know that and lie in the same subspace. Furthermore, by Assumption 4.1, all points in the same subspace are strongly connected, which implies that . Thus, is an essential state.
Second, we show that any outlier point corresponds to an inessential state of the Markov chain. Consider the set , i.e. the set of points that are accessible from . By Assumption 4.2, the set cannot contain only outliers. Thus, there exists such that and is an inlier. However, since the representation is subspace-preserving, we know that . Therefore, is not an essential state, i.e. it is an inessential state.
C.2 Proof of Lemma 4.2
According to Theorem B.1, the state space of the Markov chain can be decomposed into , in which contains the inessential states and each is a closed communicating class containing essential states. Assume, without loss of generality, that the transition probability matrix has the form of (B.1). By using (B.6), the Cesàro mean in (6) has the following limiting behavior:
(C.1) 
where for is the number of states in class , each is a vector of hitting probabilities for each state in to class , and is a positive vector of the stationary distributions of states in . Therefore, is zero if and only if is an inessential state. This finishes the proof.
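The argument of Lemmas 4.1 and 4.2 can be illustrated end-to-end on a toy representation graph (the matrix below is hypothetical, not from the paper's data): the inliers express one another and form a closed communicating class, the outlier is used in no one's representation, and the Cesàro average of the state distributions vanishes exactly on the outlier.

```python
import numpy as np

# Hypothetical |C|: states 0-2 are inliers that express each other
# (subspace-preserving and strongly connected); state 3 is an outlier
# that expresses inliers but is never used in any representation.
C = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 0.],
              [1., 1., 0., 0.]])
P = C / C.sum(axis=1, keepdims=True)    # row-stochastic transition matrix

pi = np.full(4, 0.25)                    # uniform initial distribution
T = 1000
avg = np.zeros(4)                        # Cesaro mean of pi_1, ..., pi_T
for _ in range(T):
    pi = pi @ P
    avg += pi / T
```

Since no transition ever enters state 3, its probability is zero after the first step, so its Cesàro average is zero, while the inlier states keep strictly positive averages.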
Table C.1: Average running time (sec.)
OutRank | CoP | REAPER | Outlier Pursuit | LRR | DPCP | thresholding | R-graph (ours)
0.019 | 0.003 | 0.079 | 1.186 | 3.502 | 0.182 | 0.312 | 0.272
C.3 Discussion
In this section, we provide additional comments on using the Cesàro mean in (6) for outlier detection.
Stationary distributions. By (C.1), the vector that converges to is a stationary distribution of the Markov chain (see (B.3)). In fact, any convex combination of the stationary distribution of each closed communicating class is a stationary distribution of the Markov chain, and the particular stationary distribution that converges to depends on the choice of the initial state distribution .
The t-step probability distribution and PageRank. Traditionally, PageRank and many other spectral ranking algorithms use the limit of the step probability distribution rather than as adopted in our method. However, the sequence converges if and only if each closed communicating class of the Markov chain is aperiodic, which need not hold in general. To address this, PageRank adopts a random walk with restart. It can be interpreted as a random walk on a transformed Markov chain that adds to the original transition probabilities a small probability of transitioning from each state to every other state. By doing so, the transformed Markov chain contains a single communicating class that is aperiodic. Therefore, the stationary distribution is necessarily unique, and the sequence for the transformed Markov chain converges to the unique stationary distribution regardless of the initial state distribution.
Despite these advantages of the random walk used by PageRank, all states of the transformed Markov chain are essential, so the probabilities associated with outliers do not converge to zero. Therefore, it is less clear whether the stationary distribution that the algorithm converges to can effectively identify outliers.
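A minimal sketch of the PageRank-style transform (the damping factor and the matrix below are illustrative, not the paper's): mixing in a uniform restart makes the chain irreducible and aperiodic, so the step distribution itself converges, but every state keeps a strictly positive probability.

```python
import numpy as np

def with_restart(P, alpha=0.85):
    """Mix the chain with a uniform restart: with probability 1 - alpha,
    jump to a state chosen uniformly at random. The result has a single
    aperiodic communicating class, so pi_t converges from any start."""
    n = len(P)
    return alpha * P + (1 - alpha) * np.ones((n, n)) / n

P = np.array([[0., 1.],
              [1., 0.]])       # periodic: pi_t would oscillate forever
G = with_restart(P)

pi = np.array([1.0, 0.0])      # deterministic start; the limit is the same
for _ in range(200):
    pi = pi @ G
```

Because every entry of the transformed matrix is positive, no state's stationary probability is zero, which is exactly why this scheme does not drive outlier probabilities to zero.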
Appendix D Additional experimental results
D.1 Computational time comparison
Table C.1 reports the average running time of the experiment on the Extended Yale B database with three inlier groups and outliers (226 images in total). From the table we observe that the running times of OutRank and CoP are much smaller than those of the other methods. This is because OutRank and CoP are based on computing pairwise inner products of the data, which is efficient for small-scale data. In contrast, the other methods solve optimization problems. In particular, REAPER, Outlier Pursuit and LRR require computing an eigendecomposition of a matrix of size ( is the ambient dimension) in each iteration, which is time consuming when is large. In our experiments we observe that REAPER converges much faster than Outlier Pursuit and LRR, so the running time of REAPER is typically much smaller. The thresholding method and the R-graph method (our algorithm) both compute the representation matrix by solving an optimization problem for each data point with all other data points as the dictionary. Subsequently, thresholding rejects outliers simply by computing the norms of the representations, while R-graph requires a random walk on the graph defined by the representation. We note that the random walk for R-graph is computationally efficient because of the sparsity of the representation matrix: in each step of the random walk, the computational complexity is on the order of , where is the number of data points and is the average number of nonzero entries in the representation vectors .
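The cost of one random-walk step on a sparse representation graph can be illustrated with a sparse matrix-vector product (the sizes and sparsity pattern below are arbitrary placeholders):

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
n, k = 1000, 10                            # n points, ~k nonzeros per row

# A hypothetical sparse (absolute-valued) representation matrix with k
# random nonzero entries per row.
rows = np.repeat(np.arange(n), k)
cols = rng.integers(0, n, size=n * k)
C = csr_matrix((rng.random(n * k), (rows, cols)), shape=(n, n))

# Row-normalize to obtain the transition matrix of the random walk.
P = C.multiply(1.0 / C.sum(axis=1)).tocsr()

# One step of the walk, pi_{t+1} = pi_t P, is a sparse matvec: O(nk) work.
pi = np.full(n, 1.0 / n)
pi = P.T @ pi
```

The sparse matvec touches only the stored nonzeros, so the per-step cost scales with the total number of nonzeros rather than with n squared.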
D.2 Influence of the algorithm parameters
The first step of our method is to compute the data self-representation matrix using the optimization problem (4). In this section, we illustrate the effect that the parameter in (4) has on the performance of our method. Recall that for our numerical experiments we set , and that the solution to (4) is nonzero if and only if . We run experiments on the Extended Yale B database with inlier groups and outliers while varying in the range ; the results are shown in Figure 1(a). We can see that R-graph performs well over a wide range of the parameter . For comparison, Figure 1(a) also plots the performance of the other methods on the same dataset.
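As a sketch of the self-representation step, one can use scikit-learn's ElasticNet as a stand-in for solving problem (4) column by column; note that its `alpha` and `l1_ratio` parameters are only loose analogues of the paper's regularization parameters, and the data below is synthetic:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 30))        # columns are data points

def self_representation(X, alpha=0.05, l1_ratio=0.9):
    """Express each column of X as an elastic-net combination of the
    other columns; returns the coefficient matrix C with zero diagonal."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        others = np.delete(np.arange(n), j)       # exclude x_j itself
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=5000)
        model.fit(X[:, others], X[:, j])
        C[others, j] = model.coef_
    return C

C = self_representation(X)
```

Excluding the point itself from its dictionary enforces a zero diagonal, so no point trivially represents itself.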
D.3 Influence of the percentage of outliers
In this experiment, we fix the number of inlier groups to be and vary the percentage of outliers from to . The performances of the different methods are reported in Figure 1(b). Note that the parameters for all methods are fixed across the different percentages of outliers. We see that the performance of our method is stable with respect to the percentage of outliers. Moreover, our method also achieves the best performance among all methods.
D.4 Visualization of the outliers
To supplement the AUC and F1 measures reported previously, and to better understand the outliers returned by our outlier detection method, we conducted additional experiments that display the top outliers detected in each experiment. The set of inliers is taken to be the images of the first subject of the Extended Yale B database, and the set of outliers consists of images randomly chosen from the remaining subjects (see Figure D.2). The top outliers returned by the different methods are reported in Figure D.3. Images with red boxes are outliers (i.e. true positives) and images with green boxes are inliers (i.e. false positives).
False positives for all methods are mostly images taken under extreme illumination conditions. Such images have large shadows, which effectively removes them from the underlying subspace associated with the individual, thus making them more likely to be detected as outliers. The results show that REAPER, Outlier Pursuit, DPCP and R-graph are relatively robust. In particular, R-graph is significantly better than thresholding even though both are sparse-representation based methods. This shows that while the magnitude of the representation vector used by thresholding can be sensitive to corruptions, the connectivity behavior exploited by R-graph is more robust.
References
 [1] E. Arias-Castro, G. Chen, and G. Lerman. Spectral clustering based on local linear approximations. Electron. J. Statist., 5:1537–1587, 2011.
 [2] R. Basri and D. Jacobs. Lambertian reflection and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218–233, 2003.
 [3] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30:107–117, 1998.

 [4] E. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis. Journal of the ACM, 58, 2011.
 [5] G. Chen and G. Lerman. Spectral curvature clustering (SCC). International Journal of Computer Vision, 81(3):317–330, 2009.
 [6] Y. Cherapanamjeri, P. Jain, and P. Netrapalli. Thresholding based efficient outlier robust pca. arXiv preprint arXiv:1702.05571, 2017.

 [7] T.-J. Chin, Y. Heng Kee, A. Eriksson, and F. Neumann. Guaranteed outlier removal with mixed integer linear programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5858–5866, 2016.
 [8] Y. Cong, J. Yuan, and J. Liu. Sparse reconstruction cost for abnormal event detection. In The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011, pages 3449–3456, 2011.

 [9] C. Ding, D. Zhou, X. He, and H. Zha. R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd International Conference on Machine Learning, pages 281–288. ACM, 2006.
 [10] E. L. Dyer, A. C. Sankaranarayanan, and R. G. Baraniuk. Greedy feature selection for subspace clustering. Journal of Machine Learning Research, 14(1):2487–2517, 2013.
 [11] E. Elhamifar and R. Vidal. Sparse subspace clustering. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2790–2797, 2009.
 [12] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2765–2781, 2013.
 [13] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 26:381–395, 1981.
 [14] R. G. Gallager. Stochastic processes: theory for applications. Cambridge University Press, 2013.
 [15] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643–660, 2001.
 [16] G. Griffin, A. Holub, and P. Perona. Caltech256 object category dataset. 2007.
 [17] R. Heckel and H. Bölcskei. Robust subspace clustering via thresholding. IEEE Transactions on Information Theory, 61(11):6320–6342, 2015.
 [18] B. Jin, D. Lorenz, and S. Schiffler. Elastic-net regularization: error estimates and active set methods. Inverse Problems, 25(11), 2009.
 [19] K. Kanatani. Motion segmentation by subspace separation and model selection. In IEEE International Conference on Computer Vision, volume 2, pages 586–591, 2001.
 [20] G. Lerman and T. Maunu. Fast, robust and nonconvex subspace recovery. arXiv preprint arXiv:1406.6145, 2014.
 [21] G. Lerman, M. B. McCoy, J. A. Tropp, and T. Zhang. Robust computation of linear models by convex relaxation. Foundations of Computational Mathematics, 15(2):363–410, 2015.
 [22] G. Lerman and T. Zhang. Robust recovery of multiple subspaces by geometric minimization. Annals of Statistics, 39(5):2686–2715, 2011.
 [23] D. A. Levin, Y. Peres, and E. L. Wilmer. Markov chains and mixing times. American Mathematical Soc., 2009.

 [24] J. Li, Y. Kong, and Y. Fu. Sparse subspace clustering by learning approximation codes. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 2189–2195, 2017.
 [25] X. Li and J. Haupt. Identifying outliers in large matrices via randomized adaptive compressive sampling. IEEE Transactions on Signal Processing, 63(7):1792–1807, 2015.
 [26] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv:1009.5055v2, 2011.
 [27] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
 [28] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In International Conference on Machine Learning, pages 663–670, 2010.
 [29] G. Liu, H. Xu, and S. Yan. Exact subspace segmentation and outlier detection by low-rank representation. In AISTATS, pages 703–711, 2012.
 [30] C. Lu, Z. Lin, and S. Yan. Correlation adaptive subspace segmentation by trace lasso. In IEEE International Conference on Computer Vision, pages 1345–1352, 2013.
 [31] C.Y. Lu, H. Min, Z.Q. Zhao, L. Zhu, D.S. Huang, and S. Yan. Robust and efficient subspace segmentation via least squares regression. In European Conference on Computer Vision, pages 347–360, 2012.
 [32] M. McCoy, J. A. Tropp, et al. Two proposals for robust pca using semidefinite programming. Electronic Journal of Statistics, 5:1123–1160, 2011.
 [33] K. Mitra, A. Veeraraghavan, and R. Chellappa. Analysis of sparse regularization based robust regression approaches. IEEE Transactions on Signal Processing, 61(5):1249–1257, 2013.
 [34] H. Moonesignhe and P.N. Tan. Outlier detection using random walks. In 2006 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’06), pages 532–539. IEEE, 2006.
 [35] H. Moonesinghe and P.N. Tan. Outrank: a graphbased outlier detection framework using random walk. International Journal on Artificial Intelligence Tools, 17(01):19–36, 2008.
 [36] B. Nasihatkon and R. Hartley. Graph connectivity in sparse subspace clustering. In IEEE Conference on Computer Vision and Pattern Recognition, 2011.
 [37] S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL100). Technical Report CUCS00696, 1996.
 [38] D. Park, C. Caramanis, and S. Sanghavi. Greedy subspace clustering. In Neural Information Processing Systems, 2014.
 [39] Q. Qu, J. Sun, and J. Wright. Finding a sparse vector in a subspace: Linear sparsity using alternating directions. In Advances in Neural Information Processing Systems, pages 3401–3409, 2014.
 [40] M. Rahmani and G. Atia. Coherence pursuit: Fast, simple, and robust principal component analysis. arXiv preprint arXiv:1609.04789, 2016.
 [41] R. Serfozo. Basics of applied stochastic processes. Springer Science & Business Media, 2009.
 [42] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. CoRR, abs/1409.1556, 2014.
 [43] M. Soltanolkotabi and E. J. Candès. A geometric analysis of subspace clustering with outliers. Annals of Statistics, 40(4):2195–2238, 2012.
 [44] M. Soltanolkotabi, E. Elhamifar, and E. J. Candès. Robust subspace clustering. Annals of Statistics, 42(2):669–699, 2014.
 [45] H. C. Tijms. A first course in stochastic models. John Wiley and Sons, 2003.
 [46] M. Tsakiris and R. Vidal. Dual principal component pursuit. In ICCV Workshop on Robust Subspace Learning and Computer Vision, pages 10–18, 2015.
 [47] R. Vidal, Y. Ma, and S. Sastry. Generalized Principal Component Analysis. Springer Verlag, 2016.
 [48] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
 [49] Y. Wang, C. Dicle, M. Sznaier, and O. Camps. Self scaled regularized robust regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3261–3269, 2015.
 [50] Y. Wang, Y. Wang, and A. Singh. A deterministic analysis of noisy sparse subspace clustering for dimensionalityreduced data. In International Conference on Machine Learning, pages 1422–1431, 2015.
 [51] Y. Wang, Y.X. Wang, and A. Singh. Graph connectivity in noisy sparse subspace clustering. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 538–546, 2016.
 [52] Y.X. Wang and H. Xu. Noisy sparse subspace clustering. Journal of Machine Learning Research, 17(12):1–41, 2016.
 [53] Y.X. Wang, H. Xu, and C. Leng. Provable subspace clustering: When LRR meets SSC. In Neural Information Processing Systems, 2013.
 [54] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In NIPS, 2009.
 [55] H. Xu, C. Caramanis, and S. Sanghavi. Robust pca via outlier pursuit. In Advances in Neural Information Processing Systems, pages 2496–2504, 2010.
 [56] C. You, C.G. Li, D. Robinson, and R. Vidal. Oracle based active set algorithm for scalable elastic net subspace clustering. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3928–3937, 2016.
 [57] C. You, C.G. Li, D. Robinson, and R. Vidal. Oracle based active set algorithm for scalable elastic net subspace clustering. Arxiv, 2016.
 [58] C. You, D. Robinson, and R. Vidal. Scalable sparse subspace clustering by orthogonal matching pursuit. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3918–3927, 2016.
 [59] C. You and R. Vidal. Geometric conditions for subspacesparse recovery. In International Conference on Machine learning, pages 1585–1593, 2015.
 [60] T. Zhang and G. Lerman. A novel M-estimator for robust PCA. The Journal of Machine Learning Research, 15(1):749–808, 2014.
 [61] T. Zhang, A. Szlam, and G. Lerman. Median flats for hybrid linear modeling with many outliers. In Workshop on Subspace Methods, pages 234–241, 2009.
 [62] T. Zhang, A. Szlam, Y. Wang, and G. Lerman. Hybrid linear modeling via local best-fit flats. International Journal of Computer Vision, 100(3):217–240, 2012.
 [63] Y. Zheng, S. Sugimoto, and M. Okutomi. Deterministically maximizing feasible subsystem for robust model fitting with unit norm constraint. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1825–1832. IEEE, 2011.
 [64] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.