# On Deterministic Sampling Patterns for Robust Low-Rank Matrix Completion

In this letter, we study deterministic sampling patterns for the completion of a low-rank matrix corrupted with sparse noise, a problem also known as robust matrix completion. We extend recent results on deterministic sampling patterns in the absence of noise, which are based on a geometric analysis on the Grassmannian manifold. A special case where each column has a certain number of noisy entries is considered, for which our probabilistic analysis is particularly efficient. Furthermore, assuming that the rank of the original matrix is not given, we provide an analysis that determines, by verifying certain conditions, whether the rank of a valid completion is indeed the actual rank of the data corrupted with sparse noise.


## I Introduction

This letter considers the problem of recovering a low-rank matrix corrupted with sparse noise, known as robust matrix completion. This problem has been studied widely, see for instance [1, 2, 3, 4, 5, 6, 7], where probabilistic guarantees for either convex-relaxation-based or alternating-minimization-based approaches are provided, and strong assumptions on the values of the entries (such as incoherence conditions) are required. In this letter, we instead consider deterministic sampling patterns under which the data can be completed in the presence of sparse noise, and we provide both deterministic and probabilistic guarantees for finite and unique completability.

The study of deterministic sampling patterns is motivated by the results in [8], where the authors studied the problem of low-rank matrix completion. Necessary and sufficient conditions on the sampling pattern for finite completability are provided in [8]. Moreover, the sampling probability that ensures finite completability is characterized using the deterministic analysis of the sampling pattern. In this work, we extend these results and the analyses on the Grassmannian manifold to the case where the sampled data is corrupted by sparse noise.

We further consider the case where each column has a certain number of noisy entries and provide bounds on the number of samples required in each column. This result resolves the open question in [3], where the authors asked whether a given number of measurements per column is enough when a fixed fraction of the entries in each column are noisy. We answer the question in the affirmative, and further decrease the required number of measurements in each column. The main idea is to consider all possibilities for the noise support and make use of the existing fundamental conditions on the sampling pattern for the noiseless scenario.

In many situations, the rank of the sampled matrix is unknown, and depending on the data and the sampled entries, there may be low-rank matrices that agree with the observed entries even when the data itself does not have that rank. Thus, guaranteeing that, whenever a valid low-rank completion of the data exists, the rank of the original data is indeed that rank has been studied in [9, 10]. In this letter, we generalize this approach and these results to estimate the rank of a sampled matrix corrupted with sparse noise.

The rest of the letter is organized as follows. Section II describes the model of robust low-rank matrix completion. Section III gives deterministic conditions on the sampling patterns under which the data has infinitely many, finitely many, or a unique completion in the presence of sparse noise. These results are then specialized in Section IV to the case where each column of the matrix has at most $g$ noisy entries. Further, the result is extended to give probabilistic guarantees, solving the open problem in [3]. Section V gives conditions to determine whether the rank of the matrix is indeed a given value whenever there exists a valid completion of that rank (one that mismatches the observed entries on at most the given support). Numerical results are provided in Section VI. Finally, Section VII concludes the letter.

## II Model and Notation

Suppose we have a data matrix $X$ of rank $r$. Suppose the data has an added noise $S$, so that we observe $X+S$, where $\|S\|_0 = s$ denotes the number of non-zero entries in $S$. Let $\Omega$ be a binary matrix which indicates the positions where the data is observed. For given matrices $A$ and $\Omega$ (where $\Omega$ is binary), let $A_\Omega$ be the matrix whose elements equal those of $A$ on the entries where $\Omega$ has entry 1 and are zero otherwise. The problem of robust matrix completion is to find the rank-$r$ matrix $X$ given $\Omega$ and $(X+S)_\Omega$.
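To make the observation model concrete, here is a minimal Python sketch that generates a synthetic instance of it. The function name and parameter choices are ours, not from the letter.

```python
import numpy as np

def make_instance(n_rows, n_cols, r, s, p, rng=None):
    """Generate a synthetic robust-completion instance: a rank-r data matrix,
    a sparse noise matrix with exactly s non-zero entries, and a Bernoulli(p)
    sampling pattern. Returns (X, S, Omega, Y) with Y the masked noisy data."""
    rng = np.random.default_rng(rng)
    # Rank-r matrix as a product of two generic Gaussian factors.
    X = rng.standard_normal((n_rows, r)) @ rng.standard_normal((r, n_cols))
    # Sparse noise supported on s entries chosen uniformly at random.
    S = np.zeros((n_rows, n_cols))
    noisy = rng.choice(n_rows * n_cols, size=s, replace=False)
    S.flat[noisy] = rng.standard_normal(s)
    # Binary sampling pattern Omega and the observations (X + S) masked by it.
    Omega = (rng.random((n_rows, n_cols)) < p).astype(int)
    Y = Omega * (X + S)
    return X, S, Omega, Y
```

The recovery task is then to reconstruct $X$ from `Omega` and `Y` alone.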

Let $d$ denote the number of rows and $N$ the number of columns of the data matrix. For a binary matrix $\Omega'$, let $m(\Omega')$ denote the number of rows of $\Omega'$ that contain at least one non-zero entry and $n(\Omega')$ denote the number of columns of $\Omega'$. Further, let $\breve{\Omega}$ be the matrix obtained from the binary matrix $\Omega$ as described below.

Consider the $i$-th column of $\Omega$ with $l_i$ sampled entries. We construct $l_i - r$ columns (corresponding to the $i$-th column of $\Omega$) with binary entries such that each column has exactly $r+1$ entries equal to one. Specifically, assume that $i_1, \ldots, i_{l_i}$ are the row indices of all observed entries in this column. Let $\breve{\Omega}_i$ be the matrix corresponding to this column, defined such that for any $k \in \{1, \ldots, l_i - r\}$, its $k$-th column has the value $1$ in rows $i_1, \ldots, i_r, i_{r+k}$ and zeros elsewhere. Finally, define the binary matrix $\breve{\Omega} = [\breve{\Omega}_1, \ldots, \breve{\Omega}_N]$.
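The construction above can be sketched in code. This is our reconstruction of the construction in [8] (each emitted column carries the first $r$ observed rows of the column plus one additional observed row), and the helper name and the increasing ordering of the observed indices are assumptions.

```python
import numpy as np

def breve(Omega, r):
    """Expand a binary sampling pattern column-wise: for the i-th column of
    Omega with observed row indices i_1 < ... < i_{l_i}, emit l_i - r binary
    columns, the k-th having ones in rows i_1, ..., i_r and i_{r+k}."""
    n_rows = Omega.shape[0]
    cols = []
    for j in range(Omega.shape[1]):
        support = np.flatnonzero(Omega[:, j])  # observed rows, increasing order
        for k in range(len(support) - r):
            col = np.zeros(n_rows, dtype=int)
            col[support[:r]] = 1     # the first r observed rows
            col[support[r + k]] = 1  # plus one additional observed row
            cols.append(col)
    return np.column_stack(cols) if cols else np.zeros((n_rows, 0), dtype=int)
```

Every column of the result has exactly $r+1$ ones, and a column of $\Omega$ with $l_i$ samples contributes $l_i - r$ columns.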

We next define the notion of a proper submatrix of $\breve{\Omega}$.

###### Definition 1.

A submatrix of $\breve{\Omega}$ is called a proper submatrix if its columns correspond to different columns of the sampling pattern $\Omega$.

In Section IV, we also consider the case where each column has at most $g$ noisy entries. In other words, each column of $S$ has $\ell_0$-norm less than or equal to $g$.

## III Sampling Conditions for Matrix Completion with Noisy Entries

In this section, we provide deterministic conditions on the sampling patterns that determine finite or unique completability when the observed data is corrupted by sparse noise. The following lemma is Theorem 1 in [8]; it provides the necessary and sufficient combinatorial condition on the sampling pattern for finite completability of the matrix when it is noiseless, i.e., $S = 0$.

###### Lemma 1.

Suppose that the matrix is noiseless, i.e., $S = 0$. Assume that each column of $\Omega$ has at least $r+1$ entries equal to $1$. For almost every $X$, there exist at most finitely many rank-$r$ completions of the observed entries if and only if the following holds. There exists a proper submatrix of $\breve{\Omega}$ such that every matrix $\breve{\Omega}'$ formed with a subset of its columns satisfies

$$m(\breve{\Omega}') \geq n(\breve{\Omega}')/r + r. \tag{1}$$
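For intuition, condition (1) can be verified by brute force on small patterns. The sketch below assumes $m(\cdot)$ counts rows with at least one non-zero entry and $n(\cdot)$ counts columns, and it enumerates every non-empty column subset, so it is exponential and only meant for toy examples.

```python
from itertools import combinations
import numpy as np

def satisfies_condition(B, r):
    """Check whether every matrix formed with a non-empty subset of the
    columns of the binary matrix B satisfies m >= n/r + r, where n is the
    number of selected columns and m the number of non-zero rows."""
    n_cols = B.shape[1]
    for k in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            sub = B[:, list(cols)]
            m = int(np.count_nonzero(sub.sum(axis=1)))
            if m < k / r + r:
                return False
    return True
```

For example, with $r = 1$ the condition requires every single column to touch at least two rows and every pair of columns to jointly touch at least three.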

The following theorem characterizes conditions on the sampling pattern that result in finite completability for arbitrary sparse noise $S$ with $\|S\|_0 \leq s$. The main idea is to consider all possibilities for the noise support and make use of the existing fundamental conditions on the sampling pattern for the noiseless scenario.

###### Theorem 1 (Deterministic Finite Completions).

Assume that each column of $\Omega$ has at least $r+1$ entries equal to $1$. For almost every $X$ and generic noise $S$, there exist at most finitely many rank-$r$ completions of the observed entries if the following holds. For each binary matrix $\tilde{\Omega}$ obtained from $\Omega$ by setting $s$ of its non-zero entries to zero (so that every entry of $\tilde{\Omega}$ is zero whenever the corresponding entry of $\Omega$ is zero), there exists a proper submatrix of $\breve{\tilde{\Omega}}$ such that every matrix $\Omega'$ formed with a subset of its columns satisfies

$$m(\Omega') \geq n(\Omega')/r + r. \tag{2}$$
###### Proof.

Recall that in our model there are at most $s$ noisy entries among all sampled entries (the non-zero entries of $\Omega$). Hence, there exists a pattern $\tilde{\Omega}$ as above such that all sampled entries corresponding to the non-zero entries of $\tilde{\Omega}$ are noiseless. Moreover, according to the assumption of the theorem, there exists a matrix formed with columns of $\breve{\tilde{\Omega}}$ such that every matrix formed with a subset of its columns satisfies (2). Then, according to Lemma 1, the data restricted to $\tilde{\Omega}$ is finitely completable with probability one. ∎
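The family of patterns $\tilde{\Omega}$ used in the proof can be enumerated directly; the following hedged sketch does so (the generator name is ours, and the enumeration is combinatorial, so it is only feasible for small patterns and small $s$).

```python
from itertools import combinations
import numpy as np

def candidate_patterns(Omega, s):
    """Yield every pattern obtained from the binary matrix Omega by setting
    s of its non-zero entries to zero -- the family over which the condition
    of Theorem 1 must hold. One member of this family removes exactly the
    (unknown) noisy entries, leaving a noiseless sub-sampling."""
    observed = list(zip(*np.nonzero(Omega)))
    for removed in combinations(observed, s):
        Om = Omega.copy()
        for (i, j) in removed:
            Om[i, j] = 0
        yield Om
```

With $t$ observed entries in total, there are $\binom{t}{s}$ such patterns, which is why the result is stated over all of them.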

###### Remark 1.

The converse statement of Theorem 1 holds probabilistically, and not necessarily deterministically (with probability one). The reason is that, given that (2) fails for some $\tilde{\Omega}$, it is only with some probability that the noise lies exactly on the entries where $\Omega$ is $1$ and $\tilde{\Omega}$ is $0$. This probability depends on the location of the non-zero entries of $S$, and in general there are finitely many possibilities for the location of the noisy entries. So, the converse statement holds true with some probability between $0$ and $1$, depending on the location of the noisy entries.

The following lemma is Theorem 2 in [8]; it provides a sufficient combinatorial condition on the sampling pattern for unique completability of the matrix when it is noiseless, i.e., $S = 0$.

###### Lemma 2.

Suppose that the matrix is noiseless, i.e., $S = 0$. Assume that each column of $\Omega$ has at least $r+1$ entries equal to $1$. For almost every $X$, there exists a unique rank-$r$ completion of the observed entries if the following holds. There exist two disjoint proper submatrices of $\breve{\Omega}$ such that every matrix $\breve{\Omega}'$ formed with a subset of columns of the first satisfies

$$m(\breve{\Omega}') \geq n(\breve{\Omega}')/r + r, \tag{3}$$

and every matrix $\breve{\Omega}'_1$ formed with a subset of columns of the second satisfies

$$m(\breve{\Omega}'_1) \geq n(\breve{\Omega}'_1) + r. \tag{4}$$

We next show that if more entries are removed from the sampling pattern than the number of noisy entries and the above guarantees still hold, then the support of $S$ (or a superset of it, if the support of $S$ is smaller than $s$) can be obtained. Having identified the support of $S$, we obtain the conditions for unique completion as follows.

###### Theorem 2 (Deterministic Unique Completion).

Assume that each column of $\Omega$ has at least $r+1$ entries equal to $1$. Suppose that, for each binary matrix $\tilde{\Omega}$ obtained from $\Omega$ by setting $s$ of its non-zero entries to zero (so that every entry of $\tilde{\Omega}$ is zero whenever the corresponding entry of $\Omega$ is zero), $\breve{\tilde{\Omega}}$ contains two disjoint proper submatrices such that

(i) every matrix $\Omega'$ formed with a subset of columns of the first satisfies

$$m(\Omega') \geq n(\Omega')/r + r, \tag{5}$$

and (ii) every matrix $\Omega'$ formed with a subset of columns of the second satisfies

$$m(\Omega') \geq n(\Omega') + r. \tag{6}$$

Then almost every rank-$r$ matrix $X$ can be recovered from the noisy observations, where the non-zero entries of $S$ are generically chosen.

###### Proof.

As the first step of the proof, given condition (5), we provide a simple procedure that identifies the support of the noisy entries of $S$. The completion procedure is the same as before, applied to every $\tilde{\Omega}$ that has $s$ fewer non-zero entries than $\Omega$. To see this, first assume that $\|S\|_0 = s$. If the chosen $\tilde{\Omega}$ equals $\Omega$ with exactly the noisy entries removed, there are finitely many completions by [8]. If this is not the case, the inherent rank of the data restricted to the entries in $\tilde{\Omega}$ is greater than $r$, since some entries of a rank-$r$ matrix were corrupted by generic values. Note that any completion has to match the entries observed on $\tilde{\Omega}$, and if we remove one of the noisy entries (an entry in $\tilde{\Omega}$ but outside the noiseless support) from $\tilde{\Omega}$, there are at most a finite number of completions fitting the remaining entries.

Similarly, we can show that if $\|S\|_0 < s$, there are finitely many completions, since the set of removals in some $\tilde{\Omega}$ contains all non-zero entries of $S$. Hence, for each such $\tilde{\Omega}$, there exists exactly one possible support of $S$, which we have identified. Note that a finite sum of finite numbers is also finite, and therefore we have shown finite completability. With probability 1, none of this finite set of completions matches the observations at the noisy entries. Thus, there cannot be any rank-$r$ completion that matches all observed entries including the noisy ones. Hence, we can identify the support of the noise $S$, and condition (6) in the statement of the theorem then guarantees unique completability following Lemma 2. ∎
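To illustrate the support-identification idea in the simplest possible setting, the toy sketch below locates a single generically corrupted entry of a fully observed rank-one matrix via its $2\times 2$ minors: all minors of a rank-one matrix vanish, so the corrupted entry is the one contained in the largest number of non-vanishing minors. Full observation, $r = 1$, and a single noisy entry are simplifying assumptions; the letter's argument handles partial observations and general sparse supports.

```python
import numpy as np

def locate_single_corruption(A, tol=1e-9):
    """Return the (row, col) index of the single corrupted entry of an
    otherwise rank-one matrix A (at least 3x3), found by counting how many
    2x2 minors containing each entry fail to vanish."""
    m, n = A.shape
    score = np.zeros((m, n), dtype=int)
    for i1 in range(m):
        for i2 in range(i1 + 1, m):
            for j1 in range(n):
                for j2 in range(j1 + 1, n):
                    minor = A[i1, j1] * A[i2, j2] - A[i1, j2] * A[i2, j1]
                    if abs(minor) > tol:
                        # Credit all four entries of this non-vanishing minor.
                        for (i, j) in ((i1, j1), (i1, j2), (i2, j1), (i2, j2)):
                            score[i, j] += 1
    return tuple(np.unravel_index(np.argmax(score), (m, n)))
```

Generically, the corrupted entry appears in $(m-1)(n-1)$ non-vanishing minors while every other entry appears in far fewer, so the argmax is unique.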

We now restate Theorem 3 in [8] as the following lemma.

###### Lemma 3.

Suppose that the matrix is noiseless, i.e., $S = 0$. Assume that each column of the sampled matrix is observed in at least $l$ entries, sampled uniformly at random and independently across entries, where

$$l > \max\left\{ 12\log\left(\frac{d}{\epsilon}\right) + 12,\; 2r \right\}. \tag{7}$$

Then, with probability at least $1-\epsilon$, the assumption on the sampling pattern given in Lemma 1 holds, i.e., the data is finitely completable. Moreover, a corresponding condition on the number of columns ensures that, with probability at least $1-\epsilon$, the data is uniquely completable.

The uniform sampling result can be described as follows.

###### Theorem 3 (Probabilistic Finite and Unique Completion).

Suppose that each column includes at least $l$ observed entries, where

$$l - 12(r+s+1)\log\left(\frac{l}{r+s+1}\right) > \max\left\{ 12\left(\log\left(\frac{d}{\epsilon}\right) + r+s+1\right),\; 2r,\; 2r+s+1 \right\}. \tag{8}$$

Then, with probability at least $1-\epsilon$, almost every matrix will be finitely completable, and uniquely completable, under the respective conditions of Lemma 3.

###### Proof.

This is a simple extension of Lemma 3 obtained by taking a union bound over all possible choices of removed entries in each column, since the property has to hold for all such choices of removals in each of the columns. ∎
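The left-hand side of (8) is eventually increasing in $l$, so the smallest admissible $l$ can be found by a direct scan. The sketch below follows our reading of the displayed inequality (with $\log(d/\epsilon)$ inside the maximum); the function name is ours.

```python
import math

def min_samples_per_column(d, r, s, eps):
    """Smallest integer l satisfying the bound (8):
        l - 12(r+s+1) log(l/(r+s+1))
            > max{ 12(log(d/eps) + r + s + 1), 2r, 2r + s + 1 }.
    Brute-force scan; the left-hand side dips and then increases, so the
    first l that satisfies the inequality is the threshold."""
    k = r + s + 1
    rhs = max(12 * (math.log(d / eps) + k), 2 * r, 2 * r + s + 1)
    l = k  # start where the log argument equals 1
    while l - 12 * k * math.log(l / k) <= rhs:
        l += 1
    return l
```

For instance, `min_samples_per_column(1000, 2, 3, 0.1)` returns the per-column sample count implied by the bound for a matrix with $d = 1000$, rank $2$, at most $3$ noisy entries, and failure probability $0.1$.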

## IV Sampling Conditions for Completion with Noisy Entries in Each Column

Allowing the noisy entries to lie anywhere in the data forces each column to carry a large number of samples for a large-scale matrix. We next consider a structure where each column has at most $g$ noisy entries; in other words, each column of $S$ has $\ell_0$-norm less than or equal to $g$. Theorems 1 and 2 can then be easily extended to this noise pattern, and the modified versions are as follows.

###### Theorem 4 (Deterministic Finite Completion for Column-wise Sparse Noise).

Assume that each column of $\Omega$ has at least $r+1$ entries equal to $1$. Suppose that, for each binary matrix $\tilde{\Omega}$ such that each column of $\tilde{\Omega}$ has $g$ fewer ones than the corresponding column of $\Omega$, and every entry of $\tilde{\Omega}$ is zero whenever the corresponding entry of $\Omega$ is zero, $\breve{\tilde{\Omega}}$ contains a proper submatrix such that

(i) every matrix $\Omega'$ formed with a subset of its columns satisfies

$$m(\Omega') \geq n(\Omega')/r + r. \tag{9}$$

Then, for almost every rank-$r$ matrix $X$, there exist finitely many rank-$r$ completions, where the non-zero entries of $S$ are generically chosen.

###### Theorem 5 (Deterministic Unique Completion for Column-wise Sparse Noise).

Assume that each column of $\Omega$ has at least $r+1$ entries equal to $1$. Suppose that, for each binary matrix $\tilde{\Omega}$ such that each column of $\tilde{\Omega}$ has $g$ fewer ones than the corresponding column of $\Omega$, and every entry of $\tilde{\Omega}$ is zero whenever the corresponding entry of $\Omega$ is zero, $\breve{\tilde{\Omega}}$ contains two disjoint proper submatrices such that

(i) every matrix $\Omega'$ formed with a subset of columns of the first satisfies

$$m(\Omega') \geq n(\Omega')/r + r, \tag{10}$$

and (ii) every matrix $\Omega'$ formed with a subset of columns of the second satisfies

$$m(\Omega') \geq n(\Omega') + r. \tag{11}$$

Then almost every rank-$r$ matrix $X$ can be recovered from the noisy observations, where the non-zero entries of $S$ are generically chosen.

Having identified the deterministic sampling conditions for robust data completion, we now determine uniform random sampling conditions.

###### Theorem 6 (Probabilistic Finite and Unique Completion for Column-wise Sparse Noise).

Suppose that each column includes at least $l$ observed entries, where

$$l - 12(g+1)\log\left(\frac{l}{g+1}\right) > \max\left\{ 12\left(\log\left(\frac{d}{\epsilon}\right) + g+1\right),\; 2r,\; r+g+1 \right\}. \tag{12}$$

Then, with probability at least $1-\epsilon$, almost every matrix will be finitely completable, and uniquely completable, under the respective conditions of Lemma 3.

###### Proof.

This is a simple extension of Lemma 3 obtained by taking a union bound over all choices of the $g$ removed entries in each column, since the property has to hold for all such choices of removals in each of the columns. ∎

We note that when each column has at most $g$ noisy entries and $l$ entries are observed, the number of samples needed in each column is governed by (12). This setting was posed as an open problem in [3], where the authors asked whether a certain number of observations per column suffices when a fraction of the entries in each column are noisy. In this paper, we answer this question in the affirmative, further reducing the number of observations needed per column. Moreover, the result does not require all of the assumptions made in [3].

## V Sampling Conditions for Rank Estimation with Noisy Entries

So far, we have assumed that the rank of the matrix is known. In this section, we assume that the value of the rank, $r$, is not given, and we are interested in estimating it. The following lemma is a restatement of Corollary 1 in [10].

###### Lemma 4.

Suppose that the matrix is noiseless, i.e., $S = 0$. Define $r^*$ as the maximum number such that the assumption on the sampling pattern given in Lemma 1 holds true, i.e., $r^*$ is the maximum number such that there are finitely many rank-$r^*$ completions of the observed entries, and let $r' \leq r^*$. Then, with probability one, exactly one of the following holds:

(i) ;

(ii) For any arbitrary completion of the matrix of rank , we have .

The following theorem extends the above lemma to the case where sparse noise may be present anywhere in the data.

###### Theorem 7 (Deterministic Conditions for Rank Estimation for Robust Completion).

Define $r^*$ as the maximum number such that the assumption on the sampling pattern given in Theorem 1 holds true, and let $r' \leq r^*$. Then, with probability one, exactly one of the following holds:

(i) ;

(ii) For any arbitrary completion of the matrix of rank , we have .

###### Proof.

According to Theorem 1, for any rank up to $r^*$, there exist finitely many completions of the observed entries of that rank. The rest of the proof follows from Lemma 4. ∎

###### Theorem 8 (Probabilistic Conditions for Rank Estimation for Robust Completion).

Suppose that each column includes at least $l$ observed entries where, for a given $r'$,

$$l - 12(g+1)\log\left(\frac{l}{g+1}\right) > \max\left\{ 12\left(\log\left(\frac{d}{\epsilon}\right) + g+1\right),\; 2r',\; r'+g+1 \right\}. \tag{13}$$

Then, with probability one, exactly one of the following holds:

(i) ;

(ii) For any arbitrary completion of the matrix of rank , we have .

###### Proof.

Define $r^*$ as the maximum number such that the assumption on the sampling pattern given in Theorem 1 holds true. According to Theorem 3, there exist finitely many completions of the observed entries. Hence $r' \leq r^*$, and the rest of the proof follows from Theorem 7. ∎

###### Remark 2.

Theorems 7 and 8 can be directly restated for the case of at most $g$ noisy entries in each column, where the assumptions on the sampling pattern given in Theorems 1 and 3 are replaced by the assumptions on the sampling pattern given in Theorems 4 and 6, respectively.

###### Remark 3.

Define $r^*$ as the maximum number such that the assumption on the sampling pattern given in Theorem 1 holds true. Assume that there exists a completion of the matrix of rank $r'$. Then, according to Theorem 7, with probability one, exactly one of the two cases of Theorem 7 holds for $r'$.

## VI Numerical Results

In this section, we vary the value of the rank $r$ and consider a uniform number $g$ of noisy entries in each column, comparing the bounds given in (7) (noiseless) and (12) (column-wise noisy). For example, $g = 1$ means that each column has one noisy entry. The corresponding bounds result in different fractions of sampled entries, which are shown in Figure 1.
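The comparison described above can be reproduced in spirit as follows. The specific values of $d$, $\epsilon$, and the ranks swept are illustrative choices of ours, not the ones behind Figure 1.

```python
import math

def bound_noiseless(d, r, eps):
    """Per-column sample bound from (7): l > max{12 log(d/eps) + 12, 2r}."""
    return max(12 * math.log(d / eps) + 12, 2 * r)

def bound_columnwise(d, r, g, eps):
    """Smallest integer l satisfying the column-wise noisy bound (12):
    l - 12(g+1) log(l/(g+1)) > max{12(log(d/eps) + g + 1), 2r, r + g + 1}."""
    k = g + 1
    rhs = max(12 * (math.log(d / eps) + k), 2 * r, r + g + 1)
    l = k  # start where the log argument equals 1, then scan upward
    while l - 12 * k * math.log(l / k) <= rhs:
        l += 1
    return l

# Illustrative sweep over the rank, with one noisy entry per column.
for r in (5, 10, 20):
    print(r, bound_noiseless(500, r, 0.05), bound_columnwise(500, r, 1, 0.05))
```

As expected, the noisy bound (12) demands more samples per column than the noiseless bound (7) for the same rank, and the gap quantifies the price of robustness.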

## VII Conclusions

We studied the conditions on the sampling patterns for the completion of a low-rank matrix corrupted with sparse noise. Both a general sparse noise model and a column-wise sparse noise model are considered. Using these results, an open question in [3] is resolved with improved guarantees. Furthermore, assuming that the rank of the original matrix is not given, we provide an analysis to verify whether the rank of a given valid completion is indeed the actual rank of the matrix. The approach in this paper can be easily extended to other tensor structures, such as Tucker rank, tensor-train rank, CP rank, and multi-view data, since the corresponding results without noise are given in [11, 12, 13, 14, 10, 15, 16].

Finding computationally efficient algorithms that achieve close to these bounds is an open problem. Some of the existing algorithms use alternating minimization based approaches [3, 7], which could also be extended to tensors following approaches in [17, 18, 19].

## Acknowledgment

This work was supported in part by the U.S. National Science Foundation (NSF) under grant CIF1064575, and in part by the U.S. Office of Naval Research (ONR) under grant N000141410667. The authors would like to thank Ruijiu Mao of Purdue University for helpful discussions.