I. Introduction
We are now in an era of big and high-dimensional data. Unfortunately, due to storage difficulties and computational obstacles, we can often measure only a few entries of the data matrix. Restoring all of the information that the data carry from these partial measurements is therefore of great interest in data analysis. This challenging problem is known as the Matrix Completion (MC) problem, which is closely related to the so-called recommendation system, where one tries to predict unrevealed users' preferences from incomplete rating feedback. Admittedly, this inverse problem is ill-posed, as there are infinitely many feasible solutions. Fortunately, most data are structured, e.g., face [1], texture [2], and motion [3, 4, 5] data. They typically lie around low-dimensional subspaces. Because the rank of a data matrix corresponds to the dimensionality of the underlying subspace, recent work [6, 7, 8, 9] in convex optimization demonstrates a remarkable fact: it is possible to exactly complete an m×n matrix of rank r, provided that the number of randomly selected matrix entries is sufficiently large (on the order of the rank times the dimension, up to polylog factors).

Yet it is well known that the traditional MC model suffers from a robustness issue. It is sensitive even to minor corruptions, which commonly occur due to sensor failures and uncontrolled environments. In the recommendation system, for instance, malicious manipulation of even a single rater might drive the output of the MC algorithm far from the ground truth. To resolve the issue, several efforts have been devoted to robustifying the MC model, among which robust MC [10] is the one with solid theoretical analysis. Chen et al. [10] proved that robust MC is able to exactly recover the ground truth subspace and detect the column corruptions (i.e., some entire columns are corrupted by noise), if the dimensionality of the subspace is not too high and the corrupted columns are sparse compared with the input size. Most importantly, the observed expansion coefficients should be sufficient w.r.t. the standard matrix basis (please refer to Table I for an explanation of the notation).
However, recent advances in theoretical physics measure quantum-state entries by tomography w.r.t. the Pauli basis, which is rather different from the standard matrix basis [8]. So it is not straightforward to apply the existing theory on robust MC to such a special case. This paper resolves that problem. More generally, we demonstrate the exact recoverability of an extended robust MC model in the presence of only a few coefficients w.r.t. a set of general bases, even though some columns of the intrinsic matrix may be arbitrarily corrupted. By applying our filtering algorithm, which has theoretical guarantees, we are able to speed up solving the model numerically. Our results have various applications.
I-A. Practical Applications
In numerical analysis, instead of the standard monomial basis, the Legendre polynomials are widely used to represent smooth functions due to their orthogonality. Such expansions, however, are typically sensitive to perturbations: a small perturbation of the function might drive the fitting result arbitrarily far from the original. Moreover, to reduce storage and computational costs, sometimes we can record only a few expansion coefficients. This paper justifies the possibility of completing the missing coefficients while removing the outliers.
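The two phenomena above can be illustrated numerically: a short Legendre expansion fits a smooth function to high accuracy, yet a single corrupted sample visibly degrades the least-squares fit. This is only a sketch; the test function and the corruption magnitude are assumptions for illustration.

```python
import numpy as np
from numpy.polynomial import legendre as leg

x = np.linspace(-1.0, 1.0, 200)
f = np.exp(x) * np.cos(2.0 * x)          # a smooth test function (an assumption)

# A degree-10 Legendre expansion already fits f very accurately.
coeffs = leg.legfit(x, f, deg=10)
err_clean = np.max(np.abs(leg.legval(x, coeffs) - f))

# Corrupt a single sample: the least-squares fit deviates noticeably.
f_bad = f.copy()
f_bad[100] += 5.0
coeffs_bad = leg.legfit(x, f_bad, deg=10)
err_bad = np.max(np.abs(leg.legval(x, coeffs_bad) - f))

print(err_clean, err_bad)   # err_bad is many orders of magnitude larger
```

The contrast between `err_clean` and `err_bad` is exactly the sensitivity to perturbation discussed above.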
In digital signal processing, one usually samples signals, e.g., voices and feature vectors, at random w.r.t. the Fourier basis. However, due to sensor failures, a group of captured signals may be rather unreliable. To recover the intrinsic information that the signals carry and remove the outliers simultaneously, our theoretical analysis guarantees the success of robust MC w.r.t. the Fourier basis.
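Random sampling of expansion coefficients w.r.t. the Fourier basis can be sketched as follows; the two-tone test signal and the sampling ratio are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
# A toy two-tone signal (an assumption).
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

coeffs = np.fft.fft(signal)                        # coefficients w.r.t. Fourier basis
omega = rng.choice(n, size=n // 4, replace=False)  # randomly observed index set

observed = np.zeros(n, dtype=complex)
observed[omega] = coeffs[omega]                    # keep only the sampled coefficients
```

Here `observed` plays the role of the partial measurements from which robust MC would recover the signal.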
In quantum information theory, to obtain a maximum likelihood estimate of a quantum state of 8 ions, one typically requires hundreds of thousands of measurements w.r.t. the Pauli basis, which are unaffordable because of high experimental costs. To overcome the difficulty, Gross [8] compressed the number of observations w.r.t. any basis by an MC model. However, the model is fragile to severe corruptions, which commonly occur because of measurement errors. To robustify the model, this paper justifies the exact recoverability of robust MC w.r.t. a general basis, even if the datasets are wildly corrupted.

In subspace clustering, one tries to segment the data points according to the subspaces they lie in, which is widely applied to motion segmentation [3, 4, 5, 11, 12], face classification [1, 13, 14, 15], system identification [16, 17, 18], and image segmentation [19, 20]
. Recently, there has been great interest in clustering the subspaces while the observations w.r.t. some coordinates are missing. To resolve the issue, as an application in this paper, our theorem relates robust MC to a certain subspace clustering model, the so-called extended robust Low-Rank Representation (LRR). Thus one can hope to correctly recover the structure of multiple subspaces, provided that robust MC is able to complete the unavailable values and remove the outlier samples with an overwhelming probability. This is guaranteed by our paper.
I-B. Related Work
Suppose that M is an m×n data matrix of rank r whose columns are sample points, and whose entries are partially observed on the index set Ω. The MC problem aims at exactly recovering M, or the range space of M, from the measured elements. Probably the best-known MC model was proposed by Candès et al. [6]. Choosing the lowest-rank matrix that fits the observed entries, the original model is formulated as
(1)   min_X rank(X),   s.t.   X_{ij} = M_{ij}, (i, j) ∈ Ω.
This model, however, is intractable because problem (1) is NP-hard. Inspired by recent work in compressive sensing, Candès et al. replaced the rank in the objective function with the nuclear norm, which is the sum of singular values and is the convex envelope of the rank on the unit ball of the matrix operator norm. Namely,
(2)   min_X ||X||_*,   s.t.   X_{ij} = M_{ij}, (i, j) ∈ Ω.
It is worth noting that model (2) is only w.r.t. the standard matrix basis. To extend the model to any basis {w_i}, Gross [8] proposed a more general MC model:
(3)   min_X ||X||_*,   s.t.   ⟨w_i, X⟩ = ⟨w_i, M⟩, i ∈ Ω.
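As a concrete instance of measurements w.r.t. a non-standard orthonormal basis, the sketch below expands a 2×2 matrix in the normalized Pauli basis, which is orthonormal under the trace inner product ⟨A, B⟩ = trace(A* B); the toy "state" matrix is an assumption for illustration.

```python
import numpy as np

# The four Pauli matrices, normalized by 1/sqrt(2) so that trace(w_i^* w_j) = delta_ij.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [P / np.sqrt(2) for P in (I2, X, Y, Z)]

M = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])   # a toy Hermitian matrix

coeffs = [np.trace(W.conj().T @ M) for W in basis]     # coefficients <w_i, M>
M_rec = sum(c * W for c, W in zip(coeffs, basis))      # resynthesize from coefficients

print(np.allclose(M, M_rec))   # True: the basis is complete and orthonormal
```

Model (3) observes only a subset of such coefficients and asks for the lowest-nuclear-norm matrix consistent with them.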
Models (2) and (3) both have solid theoretical guarantees: recent work [6, 7, 8, 9] showed that the models exactly recover the ground truth with an overwhelming probability, provided that Ω is uniformly distributed among all sets of sufficiently large cardinality. Unfortunately, these traditional MC models suffer from a robustness issue: they are sensitive even to minor corruptions, which commonly occur due to sensor failures, uncontrolled environments, etc.

A parallel study to the MC problem is so-called matrix recovery, namely, recovering the underlying data matrix L_0, or the range space of L_0, from the corrupted data matrix M = L_0 + S_0, where S_0 is the noise. Probably the most widely used method is Principal Component Analysis (PCA). However, PCA is fragile to outliers: even a single severe corruption may wildly degrade its performance. To resolve the issue, much work has been devoted to robustifying PCA [21, 22, 23, 24, 25, 26, 27, 28, 29], among which a simple yet successful model for removing column corruptions is robust PCA via Outlier Pursuit:

(4)   min_{L,S} rank(L) + λ ||S||_{2,0},   s.t.   M = L + S,
and its convex relaxation

(5)   min_{L,S} ||L||_* + λ ||S||_{2,1},   s.t.   M = L + S.
Outlier Pursuit has theoretical guarantees: Xu et al. [30] and our previous work [31] proved that when the dimensionality of the ground truth subspace is not too high and the column-wise corruptions are sparse compared with the sample size, Outlier Pursuit is able to recover the range space of L_0 and detect the nonzero columns of S_0 with an overwhelming probability. Nowadays, Outlier Pursuit has been widely applied to subspace clustering [32], image alignment [33], texture representation [34], etc. Unfortunately, the model cannot handle missing values, which significantly limits its working range in practice.
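First-order solvers for convex problems like (5) typically rely on column-wise shrinkage, the proximal operator of λ||·||_{2,1}: columns with small Euclidean norm are zeroed (declared clean), large columns are shrunk. The sketch below implements this operator under assumed toy inputs; it is a building block, not the full Outlier Pursuit algorithm.

```python
import numpy as np

def prox_l21(S, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each column's Euclidean length."""
    out = np.zeros_like(S)
    for j in range(S.shape[1]):
        norm = np.linalg.norm(S[:, j])
        if norm > tau:
            out[:, j] = (1.0 - tau / norm) * S[:, j]   # shrink the column toward zero
    return out

S = np.array([[3.0, 0.1],
              [4.0, 0.1]])        # column 0 is large (an outlier), column 1 is small
S_shrunk = prox_l21(S, 1.0)
print(S_shrunk)
# column 1 (norm ~0.14 < 1) is zeroed; column 0 (norm 5) is scaled by 4/5
```

The zero/nonzero pattern of the output is exactly how such algorithms detect the column support of the outliers.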
It is worth noting that the pros and cons of the above-mentioned MC and Outlier Pursuit models are mutually complementary. To remedy both of their limitations, recent work [10] suggested combining the two models, resulting in robust MC, a model that can complete the missing values and detect the column corruptions simultaneously. Specifically, it is formulated as
(6)   min_{L,S} rank(L) + λ ||S||_{2,0},   s.t.   (L + S)_{ij} = M_{ij}, (i, j) ∈ Ω.
Correspondingly, the relaxed form is
(7)   min_{L,S} ||L||_* + λ ||S||_{2,1},   s.t.   (L + S)_{ij} = M_{ij}, (i, j) ∈ Ω.
Chen et al. [10] demonstrated the recoverability of model (7): if the range space of L_0 is low-dimensional, the observed entries are sufficient, and the column corruptions are sparse compared with the input size, one can hope to exactly recover the range space of L_0 and detect the corrupted samples by robust MC with an overwhelming probability. Robust MC has been widely applied to recommendation systems and medical research [10]. However, the specific basis in problem (7) limits its extension to more challenging tasks, such as those discussed in Section I-A.
I-C. Our Contributions
In this paper, we extend robust MC to more general cases, namely, cases where the expansion coefficients are observed w.r.t. a set of general bases. We are particularly interested in the exact recoverability of this extended model. Our contributions are as follows:

We demonstrate that the extended robust MC model succeeds with an overwhelming probability. This result broadens the working range of traditional robust MC in three aspects: 1. the choice of basis in our model is no longer limited to the standard one; 2. with slightly stronger yet reasonable incoherence (ambiguity) conditions, our result allows the rank of the intrinsic matrix to be rather high even when the numbers of corruptions and observations are both constant fractions of the total input size; in comparison with the existing result, our analysis significantly extends the succeeding range of the robust MC model; 3. we suggest a closed-form choice of the regularization parameter, which is universal.

We propose a so-called filtering algorithm to reduce the computational complexity of our model. Furthermore, we establish theoretical guarantees for our algorithm, which are closely related to the incoherence of the low-rank component.

As an application, we relate the extended robust MC model to a certain subspace clustering model, namely extended robust LRR. Thus both our theory and our algorithm for the extended robust MC can be applied to the subspace clustering problem, provided that the extended robust MC exactly recovers the data structure.
I-C.1 Novelty of Our Analysis Technique
In the analysis of the exact recoverability of the model, we divide the proof of Theorem 1 into two parts: the exact recoverability of the column support and the exact recoverability of the column space. We are able to attack the two problems separately thanks to the idea of expanding the objective function at well-designed points, one for the recovery of the column support and one for the recovery of the column space (see Sections IV-B.1 and IV-C.1 for details). This technique enables us to decouple the randomness of the observation set and the outlier support, and thus construct the dual variables easily by standard tools such as least squares and the golfing scheme. We note that our framework is general: it can be applied to the proofs of simpler models such as Outlier Pursuit [31] (though we sacrifice a small polylog factor in the allowed probability of outliers), and can also hopefully simplify the proofs of models with more complicated formulations. That is roughly the high-level intuition for why we can handle a general basis in this paper.
In the analysis of our filtering algorithm, we take advantage of the low-rank property: we first recover a small seed matrix and then use linear representation to obtain the whole desired matrix. Our analysis employs tools from the recent matrix concentration literature [35] to bound the size of the seed matrix, which elegantly relates to the incoherence of the underlying matrix. This is consistent with the fact that, for a matrix with high incoherence, we typically need to sample more columns in order to fully observe a maximal linearly independent group (see Algorithm 1 for the procedure).
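The seed-matrix idea can be sketched in a few lines: for a rank-r matrix, once the sampled columns happen to span the column space, every remaining column is a linear combination of the seed. This sketch ignores corruptions and missing entries (which the actual filtering algorithm handles); the sizes and the oversampling factor are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 60, 100, 3
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-3 ground truth

seed_idx = rng.choice(n, size=10, replace=False)   # oversample columns relative to r
seed = L[:, seed_idx]
# With generic data, 10 random columns of a rank-3 matrix span its column space.
assert np.linalg.matrix_rank(seed) == r

# Represent every column of L linearly in the seed via least squares.
coeffs, *_ = np.linalg.lstsq(seed, L, rcond=None)
L_rec = seed @ coeffs
print(np.allclose(L_rec, L))   # True: the whole matrix follows from the small seed
```

More incoherent matrices need more sampled columns before the seed spans the column space, which is exactly the dependence on incoherence described above.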
The remainder of this paper is organized as follows. Section II describes the problem setup. Section III presents our theoretical results, i.e., the exact recoverability of our model. In Section IV, we give the detailed proofs of our main results. Section V proposes a novel filtering algorithm for the extended robust MC model and establishes theoretical guarantees for the algorithm. We show an application of our analysis to the subspace clustering problem and demonstrate the validity of our theory by experiments in Section VI. Finally, Section VII concludes the paper.
II. Problem Setup
Suppose that L_0 is an m×n data matrix of rank r whose columns are sample points, and that S_0 is a noise matrix whose column support is sparse compared with the input size. Let M = L_0 + S_0. Its expansion coefficients w.r.t. a set of general bases {w_i} are partially observed. This paper considers the exact recovery problem defined below.
Definition 1 (Exact Recovery Problem).
The exact recovery problem investigates whether the range space of L_0 and the column support of S_0 can be exactly recovered from randomly selected coefficients of M w.r.t. a general basis, provided that some columns of M are arbitrarily corrupted.
A similar problem was proposed in [36, 37], which recovers the whole matrices L_0 and S_0 themselves when S_0 has element-wise support. However, it is worth noting that in Definition 1 one can only hope to recover the range space of L_0 and the column support of S_0, because a corrupted column can be the sum of any vector in the range space of L_0 and another appropriate vector [30, 10, 31]. Moreover, as existing work mostly concentrates on recovering a low-rank matrix from a sampling of matrix elements, our exact recovery problem covers that situation as a special case.
II-A. Model Formulations
As defined by our exact recovery problem, we study an extended robust MC model w.r.t. a set of general bases. To choose the solution with the lowest rank, the original model is formulated as
(8)   min_{L,S} rank(L) + λ ||S||_{2,0},   s.t.   ⟨w_i, L + S⟩ = ⟨w_i, M⟩, i ∈ Ω,
where Ω is the set of observed indices and {w_i} is a set of orthonormal bases such that
(9)   ⟨w_i, w_j⟩ = δ_{ij},   ∀ i, j.
Unfortunately, problem (8) is NP-hard because the rank function is discrete. So we replace the rank in the objective function with the nuclear norm and the ℓ2,0 pseudo-norm with the ℓ2,1 norm, resulting in the relaxed formulation:
(10)   min_{L,S} ||L||_* + λ ||S||_{2,1},   s.t.   ⟨w_i, L + S⟩ = ⟨w_i, M⟩, i ∈ Ω.
For brevity, we also rewrite it as
(11)   min_{L,S} ||L||_* + λ ||S||_{2,1},   s.t.   P_Ω(L + S) = P_Ω(M),
where P_Ω is the operator that projects a matrix onto the space spanned by {w_i : i ∈ Ω}, i.e., P_Ω(M) = Σ_{i∈Ω} ⟨w_i, M⟩ w_i.
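The projection P_Ω can be sketched directly from its definition; here the orthonormal basis of 4×4 matrices is built by orthonormalizing random vectors, which is an assumption for illustration (any orthonormal basis would do).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 4
# Orthonormal basis of R^{4x4}: QR on a random (mn)x(mn) matrix, columns reshaped.
Q, _ = np.linalg.qr(rng.standard_normal((m * n, m * n)))
basis = [Q[:, i].reshape(m, n) for i in range(m * n)]

def proj_omega(A, omega):
    """P_Omega(A) = sum over observed i of <w_i, A> w_i (Frobenius inner product)."""
    return sum(np.sum(basis[i] * A) * basis[i] for i in omega)

M = rng.standard_normal((m, n))
full = proj_omega(M, range(m * n))     # observing all coefficients recovers M
print(np.allclose(full, M))            # True
```

As expected of an orthogonal projection, applying `proj_omega` twice with the same index set changes nothing.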
In this paper, we show that problem (10), or equivalently problem (11), exactly recovers the range space of L_0 and the column support of S_0, if the rank of L_0 is not too high and the numbers of corruptions and observations are (nearly) constant fractions of the total input size. In other words, the original problem (8) is well approximated by the relaxed problem (10).
II-B. Assumptions
At first sight, it seems not always possible to successfully separate M into a low-rank term plus a column-sparse one, because there may not be sufficient information to avoid identifiability issues. These issues are reflected in two aspects: the true low-rank term might be sparse, and the true sparse component might be low-rank; in either case we cannot hope to identify the ground truth correctly. So we require several assumptions to exclude such unidentifiable cases.
II-B.1 Incoherence Conditions on the Low-Rank Term
As an extreme example, suppose that the low-rank term has only one nonzero entry, e.g., e_1 e_1^T. This matrix has a one in the top left corner and zeros elsewhere, thus being both low-rank and sparse. So it is impossible to correctly identify this matrix as the low-rank term. Moreover, we cannot expect to recover the range space of this matrix from a sampling of its entries unless we observe nearly all of the elements.
To resolve the issue, Gross [8] introduced incoherence conditions on the low-rank term in problem (3) w.r.t. the general basis {w_i}:
(12a)  
(12b)  
(12c) 
where U Σ V^T is the skinny SVD of the low-rank term. Intuitively, as discussed in [8, 37], conditions (12a), (12b), and (12c) assert that the singular vectors reasonably spread out for a small incoherence parameter. Because problem (3), which is a noiseless version of problem (10), requires conditions (12a), (12b), and (12c) in its theoretical guarantees [8], we adopt the same incoherence conditions to analyze our model (10) as well. We argue that, beyond (12a), conditions (12b) and (12c) are indispensable for the exact recovery of the target matrix in our setting. As an example, let a few entries in the first row of a matrix be nonzero while all other elements are zero. This matrix satisfies condition (12a) but not (12b) or (12c). In this scenario the probability of recovering its column space is not very high, since we cannot guarantee sampling the uncorrupted nonzero entries when there is a large amount of noise.
So we assume that the low-rank part satisfies conditions (12a), (12b), and (12c), and that the other low-rank component satisfies condition (12a), as in [31] (please refer to Table I for an explanation of the notation). Though it is more natural to assume incoherence on L_0, the following example shows that the incoherence of L_0 alone does not suffice to guarantee the success of model (10) when the rank is relatively high:
Example 1.
Compute L_0 as a product of i.i.d. random matrices. The column support of S_0 is sampled by a Bernoulli distribution. Let the first entry of each nonzero column of S_0 be nonzero and all other entries be zeros. Also set the observation set Ω by i.i.d. sampling. We adopt parameter settings such that there are around a constant number of corrupted samples in this example. Note that, here, L_0 is incoherent, fulfilling conditions (12a), (12b), and (12c), while the other two matrices concerned are not. However, the output of the algorithm falsely identifies all of the corrupted samples as clean data. So the incoherence of L_0 alone cannot guarantee the exact recoverability of our model.

Imposing incoherence conditions on the expansion-point matrices is not so surprising: there might be multiple solutions to the optimization model, and the low-rankness/sparseness decompositions of M are non-unique (depending on which solution we consider). Since the two decompositions are related to a fixed optimal solution pair, it is natural to impose incoherence on them. Specifically, we first assume incoherence conditions (12a), (12b), and (12c) on the first expansion point. Note that these conditions guarantee that the matrix cannot be sparse, so we can resolve the identifiability issue for the decomposition and hope to recover the outlier index set. After that, the ambiguity between low-rankness and row-sparseness is no longer an issue, i.e., even for a row-sparse underlying matrix we can still expect to recover its column space. Here is an example to illustrate this: suppose the low-rank matrix has ones in its first few rows and zeros elsewhere, and we know that some of the columns are corrupted by noise. Remove the outlier columns. Even if we cannot fully observe the remaining entries, we can still expect to recover the column space, since the information about the range space is sufficient. Therefore, we only need to impose condition (12a) on the second expansion point, which asserts that the matrix cannot be column-sparse.
II-B.2 Ambiguity Conditions on the Column-Sparse Term
Analogously, the column-sparse term suffers from an identifiability issue as well. Suppose that S_0 is a rank-1 matrix in which a constant fraction of the columns are zero. This matrix is both low-rank and column-sparse, so it cannot be correctly identified. To exclude this case, one needs the isotropic assumption [38], or the following ambiguity condition on the column-sparse term S_0, introduced by [31]:
(13) 
where the bound can be any numerical constant. Here the isotropic assumption asserts that the covariance of the noise matrix is the identity. In fact, many noise models satisfy this assumption, e.g., i.i.d. Gaussian noise. The normalized noise vectors then distribute uniformly on the surface of a unit sphere centered at the origin, so they cannot lie in a low-dimensional subspace; in other words, the noise is not low-rank. The ambiguity condition was proposed for the same purpose [31]. Geometrically, the spectral norm stands for the length of the first principal direction (the normalization operator removes the scaling factor). So condition (13) asserts that the energy of each principal direction does not differ too much, namely, the data distribute around a ball (see Figure 1), and (13) holds once the directions of the nonzero columns of S_0 scatter sufficiently randomly. Note that the isotropic assumption implies our ambiguity condition: if the columns of S_0 are isotropic, the spectral norm of the normalized matrix is bounded by a constant even when the number of nonzero columns of S_0 is comparable to n. Thus our ambiguity condition (13) is feasible: no matter how many nonzero columns S_0 has, the assumption guarantees that the matrix is not low-rank.
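The geometric picture above can be checked numerically, in the spirit of condition (13): after normalizing each column (the role of the normalization operator), isotropic noise keeps a small spectral norm, while identical (rank-1) columns attain the worst case sqrt(k). The sizes and random seed are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
m, k = 200, 50

# Isotropic noise: columns scatter over all directions of the sphere.
H_iso = rng.standard_normal((m, k))
H_iso /= np.linalg.norm(H_iso, axis=0)         # normalize each column

# Degenerate noise: k copies of one unit direction (rank-1, column-sparse worst case).
v = rng.standard_normal(m); v /= np.linalg.norm(v)
H_rank1 = np.tile(v[:, None], (1, k))

print(np.linalg.norm(H_iso, 2),     # small spectral norm: energy spread over directions
      np.linalg.norm(H_rank1, 2),   # equals sqrt(k): all energy on one direction
      np.sqrt(k))
```

The rank-1 case is exactly the unidentifiable configuration the ambiguity condition rules out.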
II-B.3 Probability Model
Our main results assume that the column support of S_0 and the entry support of the measured set Ω obey i.i.d. Bernoulli distributions with their respective parameters. Such assumptions are mild because we have no further information on the positions of the outliers and the measurements. More specifically, throughout our proof we factor the noise so that one factor determines the outlier positions and the other determines the outlier values. If an event holds with probability at least 1 − O(n^{−c}) for some positive constant c, we say that the event happens with an overwhelming probability.
II-B.4 Other Assumptions
Obviously, to guarantee exact recovery, the noiseless samples should span the same space as the range of L_0; otherwise, only a subspace of it can be recovered, because the noises may be arbitrarily severe. So, without loss of generality, we make this assumption, as [30, 10] did. Moreover, the noises should be identifiable, namely, they cannot lie in the ground truth subspace.
II-C. Summary of Main Notations
In this paper, matrices are denoted by capital symbols. For a matrix A, A_i or A_{:,i} represents the i-th column of A, and A_{ij} denotes the entry at the i-th row and j-th column. For matrix operators, A^* and A^† represent the conjugate transpose and the Moore-Penrose pseudo-inverse of A, respectively, and |A| stands for the matrix whose (i, j)-th entry is |A_{ij}|.
Several norms appear in this paper, both for vectors and for matrices. The only vector norm we use is the Euclidean (ℓ2) norm. For matrices, the nuclear norm ||·||_* is the sum of singular values. The matrix analogue of the vector ℓ2 norm is the Frobenius norm ||·||_F. The pseudo-norms ||·||_0 and ||·||_{2,0} denote the numbers of nonzero entries and nonzero columns of a matrix, respectively; they are not real norms because absolute homogeneity does not hold. Their convex surrogates are the ℓ1 and ℓ2,1 norms, defined as the sum of absolute values of all entries and the sum of ℓ2 norms of the columns, respectively. The dual norms of the ℓ1 and ℓ2,1 norms are the ℓ∞ and ℓ2,∞ norms. We also write the operator (spectral) norm of a matrix or operator as ||·||.
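The matrix norms above are all one-liners in numpy; the sketch below evaluates them on an assumed toy matrix whose column norms are easy to check by hand.

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [4.0, 0.0, 1.0]])   # column Euclidean norms: 5, 0, 1

nuclear   = np.sum(np.linalg.svd(A, compute_uv=False))   # ||A||_*  (sum of singular values)
frobenius = np.linalg.norm(A)                            # ||A||_F
l21       = np.sum(np.linalg.norm(A, axis=0))            # ||A||_{2,1} (sum of column norms)
l2inf     = np.max(np.linalg.norm(A, axis=0))            # ||A||_{2,inf} (max column norm)
operator  = np.linalg.norm(A, 2)                         # spectral norm
col_card  = np.count_nonzero(np.linalg.norm(A, axis=0))  # ||A||_{2,0} (nonzero columns)

print(nuclear, frobenius, l21, l2inf, operator, col_card)
```

Note how the ℓ2,1 norm (6) sums the column norms while its dual, the ℓ2,∞ norm (5), takes their maximum.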
Our analysis involves linear spaces as well. For example, we write the column support of a matrix, which without confusion also denotes the corresponding linear subspace (similar conventions apply to the following notations; we will not restate them). We use Ω to represent the element support of a matrix, as well as the corresponding linear subspace. The column space of a matrix is written in script Range notation, while the row space is written in script Row notation. For any space, the superscript ⊥ stands for its orthogonal complement.
We also discuss some special matrices and spaces in our analysis. For example, L_0 denotes the ground truth, and the optimal solutions of our model form a feasible pair. We are especially interested in expanding the objective function at some particular points: one point for the exact recovery of the column support, and another for the exact recovery of the column space. Another matrix we are interested in consists of the normalized nonzero columns of the sparse term and belongs to the subdifferential of the ℓ2,1 norm. Similarly, the space T is highly related to the subgradient of the nuclear norm: the subgradient can be written in closed form as a term in T plus a term in T^⊥. The projection operator onto T is denoted by P_T.
Table I summarizes the main notations used in this paper.
Notations  Meanings  Notations  Meanings 

Column Support.  Element Support.  
,  Size of the data matrix .  ,  , . 
Grows in the same order of .  Grows equal to or less than the order of .  
Tensor product.  Vector whose th entry is and others are s.  
Capital  A matrix.  , ,  The identity matrix, allzero matrix, and allone vector. 
or  The th column of matrix .  The entry at the th row and th column of .  
Conjugate transpose of matrix .  MoorePenrose pseudoinverse of matrix .  
Matrix whose th entry is .  norm for vector, .  
Nuclear norm, the sum of singular values.  norm, number of nonzero entries.  
norm, number of nonzero columns.  norm, .  
norm, .  norm, .  
Frobenius norm, .  Infinity norm, .  
(Matrix) operator norm.  Ground truth.  
Optimal solutions, .  , .  
,  ,  Left and right singular vectors of .  
, ,  Column space of , , .  , ,  Row space of , , . 
Space .  Orthogonal complement of the space .  
,  , .  .  
, ,  Index of outliers of , , .  Outliers number of .  
The column support of is a subset of .  }.  
Obeys Bernoulli distribution with parameter .  Gaussian distribution (mean and variance ). 

Row  Row space of matrix .  Supp  Column support of matrix . 
The th singular value of matrix .  The th eigenvalue of matrix . 

General basis.  . 
III. Exact Recoverability of the Model
Our main results in this paper show that, surprisingly, model (11) is able to exactly recover the range space of L_0 and identify the column support of S_0 with a closed-form regularization parameter, even when only a small number of expansion coefficients are measured w.r.t. a general basis and a constant fraction of the columns are arbitrarily corrupted. Our theorem is as follows:
Theorem 1 (Exact Recoverability Under Bernoulli Sampling).
Any solution to the extended robust MC model (11) with the closed-form regularization parameter exactly recovers the column space of L_0 and the column support of S_0 with an overwhelming probability, if the column support of S_0 is sampled from an i.i.d. Bernoulli model, the support Ω is sampled from an i.i.d. Bernoulli model, and
(14) 
where the four thresholds are constants independent of each other, and μ is the incoherence parameter in (12a), (12b), and (12c).
Remark 1.
According to [37], a recovery result under the Bernoulli sampling model automatically implies a corresponding result for the uniform sampling model with an overwhelming probability. So conditions (14) are equivalent to
(15) 
where the column support of S_0 is uniformly distributed among all sets of a given cardinality, the support Ω is uniformly distributed among all sets of a given cardinality, and the remaining parameters are numerical constants.
III-A. Comparison to Previous Results
In the traditional low-rank MC problem, one seeks to complete a low-rank matrix from only a few uncorrupted measurements. Recently, it has been shown that a constant fraction of the entries are allowed to be missing even if the rank of the intrinsic matrix is as high as the order of its dimension up to a polylog factor. Compared with that result, our bound in Theorem 1 is tight up to a polylog factor. Note that the polylog gap comes from the consideration of arbitrary corruptions in our analysis. When S_0 = 0, our theorem partially recovers the results of [8].
In the traditional low-rank matrix recovery problem, one tries to recover a low-rank matrix, or the range space of a matrix, from fully observed corrupted data. To this end, our previous work [31] demonstrated that a constant fraction of the columns can be corrupted even if the rank of the intrinsic matrix is as high as the order of its dimension up to a polylog factor. Compared with that result, our bound in Theorem 1 is tight up to a polylog factor as well, where the polylog gap comes from the consideration of missing values in our analysis. When all entries are observed, our theorem partially recovers the results of [31].
Probably the only low-rank model that can simultaneously complete the missing values, recover the ground truth subspace, and detect the corrupted samples is robust MC [10]. As a corollary, Chen et al. [10] showed that a constant fraction of columns and entries can be corrupted and missing, respectively, if the rank of L_0 is of a sufficiently low order. Compared with this, though under stronger incoherence (ambiguity) conditions, our work extends the working range of the robust MC model to a higher order of rank. Moreover, our results consider a more general basis; when the basis is the standard matrix basis, our theorem partially recovers the results of [10].
Wright et al. [39] produced a certificate of optimality for the Compressive Principal Component Pursuit, given that the pair in question is the optimal solution of Principal Component Pursuit. There are significant differences between their work and ours: 1. their analysis assumes that certain entries are corrupted by noise, while our paper assumes that some whole columns are noisy; in some sense, theoretical analysis of column noise is more difficult than that of Principal Component Pursuit [31], and the most distinct difference is that we cannot expect our model to exactly recover L_0 and S_0 themselves; rather, only the column space of L_0 and the column support of S_0 can be exactly recovered [10, 30]. 2. Wright et al.'s analysis is based on the assumption that the ground truth can be recovered by Principal Component Pursuit, while our analysis is independent of this requirement.
IV. Complete Proofs of Theorem 1
Theorem 1 shows the exact recoverability of our extended robust MC model w.r.t. a general basis. This section is devoted to proving this result.
IV-A. Proof Sketch
We argue that it is not straightforward to apply the existing proofs for Robust PCA/Matrix Completion to the case of a general basis, since these proofs essentially require the observed entries and the outliers to be represented under the same basis [37]. To resolve the issue, generally speaking, we divide the proof of Theorem 1 into two parts: the exact recoverability of the column support and the exact recoverability of the column space. We are able to attack the two problems separately thanks to the idea of expanding the objective function at well-designed points, one for the recovery of the column support and one for the recovery of the column space (see Sections IV-B.1 and IV-C.1 for details). This technique enables us to decouple the randomness of the observation set and the outlier support, and thus construct the dual variables easily by standard tools such as least squares and the golfing scheme. We note that our framework is general: it can be applied to the proofs of simpler models such as Outlier Pursuit [31] (though we sacrifice a small polylog factor in the allowed probability of outliers), and can also hopefully simplify the proofs of models with more complicated formulations, e.g., decomposing the data matrix into more than two structural components [39]. That is roughly the high-level intuition for why we can handle a general basis and improve over the previous work in this paper.
Specifically, for the exact recoverability of the column support, we expand the objective function at the first well-designed point to establish our first class of dual conditions. Though it is standard to construct dual variables by the golfing scheme, many lemmas need to be generalized from the standard setting because of the existence of both the observation set and the outlier support. All the preliminary work is done in Appendix A. In the degenerate special cases, our lemmas reduce to the ones in [37, 10], thus being more general. The idea behind the proofs is to first fix the outlier support and use a randomized argument over the observation set to obtain a one-step result, and then allow the outlier support to be randomized to get the desired lemmas.
For the exact recoverability of the column space, similarly, we expand the objective function at the second well-designed point to establish our second class of dual conditions. We construct the dual variables by least squares and prove the correctness of our construction using the generalized lemmas as well. To this end, we also utilize the ambiguity condition, which guarantees that the outlier matrix cannot be low-rank. This enables us to improve the upper bound on the rank of the ground truth matrix over the previous result.
In summary, our proof proceeds in two parallel lines. The steps are as follows.

We first prove the exact recoverability of the column support.

We then prove the exact recoverability of the column space.
IV-B. Exact Recovery of Column Support
IV-B.1 Dual Conditions
We first establish dual conditions for the exact recovery of the column support. The following lemma shows that once we can construct dual variables satisfying certain conditions (a.k.a. dual conditions), the column support of the outliers can be exactly recovered with a high probability by solving our robust MC model (11). Basically, the proof is to find conditions which imply that the constructed dual variable belongs to the subdifferential of the objective function at the desired low-rank and column-sparse solution.
Lemma 1.
Let the pair be any solution to the extended robust MC model (11), with the auxiliary quantities defined as above. Assume that the dual conditions below hold. Then the solution exactly recovers the column support of S_0.
Proof.
We first recall that the subgradients of the nuclear norm and the ℓ2,1 norm are as follows:
According to Lemma 7 and the feasibility of , . Let . Thus the pair is feasible to problem (11). Then we have
Now adopt dual certificates such that the required equalities hold,¹ and note the orthogonality relation. So we have

¹By the duality between the nuclear norm and the operator norm, there exists a matrix achieving the equality, which we take accordingly. A similar construction holds for the ℓ2,1 norm.
Notice that
So we have
Also, note that
That is