# Rank Determination for Low-Rank Data Completion

Recently, fundamental conditions on the sampling patterns have been obtained for finite completability of low-rank matrices or tensors given the corresponding ranks. In this paper, we consider the scenario where the rank is not given and we aim to approximate the unknown rank based on the location of sampled entries and some given completion. We consider a number of data models, including single-view matrix, multi-view matrix, CP tensor, tensor-train tensor and Tucker tensor. For each of these data models, we provide an upper bound on the rank when an arbitrary low-rank completion is given. We characterize these bounds both deterministically, i.e., with probability one given that the sampling pattern satisfies certain combinatorial properties, and probabilistically, i.e., with high probability given that the sampling probability is above some threshold. Moreover, for both single-view matrix and CP tensor, we are able to show that the obtained upper bound is exactly equal to the unknown rank if the lowest-rank completion is given. Furthermore, we provide numerical experiments for the case of single-view matrix, where we use nuclear norm minimization to find a low-rank completion of the sampled data and we observe that in most of the cases the proposed upper bound on the rank is equal to the true rank.

## I Introduction

The low-rank data completion problem is concerned with completing a matrix or tensor given a subset of its entries and some rank constraints. Various applications can be found in many fields, including image and signal processing [1, 2], data mining [3], network coding [4], compressed sensing [5, 6, 7], visual data reconstruction [8], etc. There is an extensive literature on optimization methods for this problem, including convex relaxations of rank [9, 10, 11, 7, 12], non-convex approaches [13], and alternating minimization [14, 15]. More recently, fundamental conditions on the sampling pattern that lead to different numbers of completions (unique, finitely many, or infinitely many) given the rank constraints have been investigated in [16, 17, 18, 19, 20].

However, in many practical low-rank data completion problems, the rank may not be known a priori. In this paper, we investigate this problem and aim to approximate the rank based on the given entries, where it is assumed that the original data is generically chosen from the manifold corresponding to the unknown rank. The only existing work that treats this problem for single-view matrix data based on the sampling pattern is [21], which requires some strong assumptions, including the existence of a completion whose rank is a lower bound on the unknown true rank. We start by investigating the single-view matrix to provide a new analysis that does not require such an assumption, and we also extend our approach to treat the CP rank tensor model. Moreover, we further generalize our approach to treat vector-rank data models, including the multi-view matrix, the Tucker rank tensor and the tensor-train (TT) rank tensor. For each of these data models, we obtain an upper bound on the unknown scalar rank, or a component-wise upper bound on the unknown vector rank, deterministically based on the sampling pattern and the rank of a given completion. We also obtain such a bound that holds with high probability based on the sampling probability. Moreover, for the single-view matrix, we provide numerical results to show how tight our probabilistic bounds on the rank are (in terms of the sampling probability). In particular, we use nuclear norm minimization to find a completion and demonstrate our proposed method in obtaining a tight bound on the unknown rank.

We take advantage of the geometric analysis of the manifold of the corresponding data, which leads to fundamental conditions on the sampling pattern (independent of the values of the entries) [16, 22, 18, 23, 20], such that given an arbitrary low-rank completion we can provide a tight upper bound on the rank. To illustrate how such an approximation is even possible, consider the following example. Assume that a rank-$r$ matrix is chosen generically from the corresponding manifold. Hence, any $r \times r$ submatrix of this matrix is full rank with probability one (due to the genericity assumption). Moreover, note that any $(r+1) \times (r+1)$ submatrix of this matrix is not full rank. As a result, by observing the sampled entries we can find bounds on the rank. Using the analysis in [16, 22, 18, 23, 20] on finite completability of the sampled data (finite number of completions) for different data models, we characterize both deterministic and probabilistic bounds on the unknown rank.
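This observation is easy to verify numerically; a minimal sketch, with an arbitrary choice of $n = 8$ and $r = 3$ (the construction of the generic matrix as a product of random factors is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3

# A generic rank-r matrix, built as a product of random n x r and r x n factors.
U = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Any r x r submatrix is full rank with probability one...
rows = rng.choice(n, size=r, replace=False)
cols = rng.choice(n, size=r, replace=False)
assert np.linalg.matrix_rank(U[np.ix_(rows, cols)]) == r

# ...while any (r+1) x (r+1) submatrix is rank deficient.
rows1 = rng.choice(n, size=r + 1, replace=False)
cols1 = rng.choice(n, size=r + 1, replace=False)
assert np.linalg.matrix_rank(U[np.ix_(rows1, cols1)]) == r
```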

The remainder of the paper is organized as follows. In Section II, we introduce the data models and the problem statement. In Sections III and IV, we characterize our deterministic and probabilistic bounds for the scalar-rank cases (single-view matrix and CP tensor) and the vector-rank cases (multi-view matrix, Tucker tensor and TT tensor), respectively. Finally, Section V concludes the paper.

## II Data Models and Problem Statement

### II-A Matrix Models

#### II-A1 Single-View Matrix

Assume that the sampled matrix is chosen generically from the manifold of the matrices of rank , where is unknown. The matrix is called a basis for if each column of can be written as a linear combination of the columns of . Denote as the binary sampling pattern matrix that is of the same size as and if is observed and otherwise, where represents the entry corresponding to row number and column number . Moreover, define as the matrix obtained from sampling according to , i.e.,

$$\mathbf{U}_{\Omega}(\vec{x})=\begin{cases}\mathbf{U}(\vec{x}) & \text{if }\ \Omega(\vec{x})=1,\\ 0 & \text{if }\ \Omega(\vec{x})=0.\end{cases}\tag{1}$$

#### II-A2 Multi-View Matrix

The matrix is sampled. Denote a partition of as where and represent the first and second views of data, respectively. The sampling pattern is defined as , where and represent the sampling patterns corresponding to the first and second views of data, respectively. Assume that , and , and also is chosen generically from the manifold structure with above parameters. Denote which is assumed unknown.

### II-B Tensor Models

Assume that a -way tensor is sampled. For the sake of simplicity in notation, define , and . Denote as the binary sampling pattern tensor that is of the same size as and if is observed and otherwise, where represents an entry of tensor with coordinate . Moreover, define as the tensor obtained from sampling according to , i.e.,

$$\mathbf{U}_{\Omega}(\vec{x})=\begin{cases}\mathbf{U}(\vec{x}) & \text{if }\ \Omega(\vec{x})=1,\\ 0 & \text{if }\ \Omega(\vec{x})=0.\end{cases}\tag{2}$$

For each subtensor of the tensor , define as the number of observed entries in according to the sampling pattern .

Define the matrix as the -th unfolding of the tensor , such that , where and are two bijective mappings.

Let be the -th matricization of the tensor , such that , where is a bijective mapping. Observe that for any arbitrary tensor , the first matricization and the first unfolding are the same, i.e., .

In what follows, we introduce three different tensor ranks, i.e., the CP rank, Tucker rank and TT rank.

#### II-B1 CP Decomposition

The CP rank of a tensor , , is defined as the minimum number such that there exist for and , such that

$$\mathbf{U}=\sum_{l=1}^{r}\mathbf{a}^{l}_{1}\otimes\mathbf{a}^{l}_{2}\otimes\cdots\otimes\mathbf{a}^{l}_{d},\tag{3}$$

or equivalently,

$$\mathbf{U}(x_1,x_2,\ldots,x_d)=\sum_{l=1}^{r}\mathbf{a}^{l}_{1}(x_1)\,\mathbf{a}^{l}_{2}(x_2)\cdots\mathbf{a}^{l}_{d}(x_d),\tag{4}$$

where denotes the tensor product (outer product) and denotes the -th entry of vector . Note that is a rank- tensor, .

#### II-B2 Tucker Decomposition

Given and , the product is defined as

$$\mathbf{U}'(x_1,\cdots,x_{i-1},k_i,x_{i+1},\cdots,x_d)\triangleq\sum_{x_i=1}^{n_i}\mathbf{U}(x_1,\cdots,x_{i-1},x_i,x_{i+1},\cdots,x_d)\,\mathbf{X}(x_i,k_i).\tag{5}$$

The Tucker rank of a tensor is defined as where , i.e., the rank of the -th matricization, . The Tucker decomposition of is given by

$$\mathbf{U}(\vec{x})=\sum_{k_1=1}^{m_1}\cdots\sum_{k_d=1}^{m_d}\mathbf{C}(k_1,\ldots,k_d)\,\mathbf{T}_1(k_1,x_1)\cdots\mathbf{T}_d(k_d,x_d),\tag{6}$$

or in short

$$\mathbf{U}=\mathbf{C}\times_{i=1}^{d}\mathbf{T}_i,\tag{7}$$

where is the core tensor and are orthogonal matrices.

#### II-B3 TT Decomposition

The separation or TT rank of a tensor is defined as where , i.e., the rank of the -th unfolding, . Note that in general and also is simply the conventional matrix rank when . The TT decomposition of a tensor is given by

$$\mathbf{U}(\vec{x})=\sum_{k_1=1}^{u_1}\cdots\sum_{k_{d-1}=1}^{u_{d-1}}\mathbf{U}^{(1)}(x_1,k_1)\left(\prod_{i=2}^{d-1}\mathbf{U}^{(i)}(k_{i-1},x_i,k_i)\right)\mathbf{U}^{(d)}(k_{d-1},x_d),\tag{8}$$

or in short

$$\mathbf{U}=\mathbf{U}^{(1)}\cdots\mathbf{U}^{(d)},\tag{9}$$

where the -way tensors for and matrices and are the components of this decomposition.
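The relation between the TT components and the unfolding ranks can be checked numerically; a small sketch, with dimensions and TT ranks chosen arbitrarily by us:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 4
u = [2, 3, 2]  # TT ranks u_1, ..., u_{d-1}

# Random TT components: the first is n x u_1, the middle ones are
# u_{i-1} x n x u_i three-way tensors, and the last is u_{d-1} x n.
cores = [rng.standard_normal((n, u[0]))]
cores += [rng.standard_normal((u[i - 1], n, u[i])) for i in range(1, d - 1)]
cores += [rng.standard_normal((u[d - 2], n))]

# Contract the components into the full d-way tensor U, as in (8)-(9).
U = cores[0]
for C in cores[1:]:
    U = np.tensordot(U, C, axes=([-1], [0]))

# The i-th unfolding (first i modes as rows) has rank at most u_i.
for i in range(1, d):
    unfolding = U.reshape(n ** i, n ** (d - i))
    assert np.linalg.matrix_rank(unfolding) <= u[i - 1]
```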

For each matrix or tensor model, we assume that the true rank of or is or which is unknown, and also or is chosen generically from the corresponding manifold.

### II-C Problem Statement

For each one of the above data models, we are interested in obtaining the upper bound on the unknown scalar-rank or component-wise upper bound on the unknown vector-rank , deterministically based on the sampling pattern or and the rank of a given completion. Also, we aim to provide such bound that holds with high probability based only on the sampling probability of the entries and the rank of a given completion. Moreover, for the single-view matrix model and CP-rank tensor model, where the rank is a scalar, we provide both deterministic and probabilistic conditions such that the unknown rank can be exactly determined.

## III Scalar-Rank Cases

### III-A Single-View Matrix

Previously, this problem has been treated in [21], where strong assumptions including the existence of a completion with rank have been used. In this section, we provide an analysis that does not require such assumption and moreover our analysis can be extended to multi-view data and tensors in the following sections. Furthermore, we show the tightness of our theoretical bounds via numerical examples.

#### III-A1 Deterministic Rank Analysis

The following assumption will be used frequently in this subsection.

Assumption : Each column of the sampled matrix includes at least sampled entries.

Consider an arbitrary column of the sampled matrix , where . Let denote the number of observed entries in the -th column of . Assumption implies that .

We construct a binary valued matrix , called the constraint matrix, based on and a given number . Specifically, we construct columns with binary entries based on the locations of the observed entries in , such that each column has exactly entries equal to one. Assume that are the row indices of all observed entries in this column. Let be the matrix corresponding to this column, defined as follows: for any , the -th column has the value in rows and zeros elsewhere. Define the binary constraint matrix as [16], where .
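One possible reading of this construction can be sketched as follows: for each column of the sampling pattern with $l$ observed entries, we take the first $r$ observed rows as a common support and pair each remaining observed row with them, producing $l-r$ binary columns with exactly $r+1$ ones each. The function name and the particular choice of the first $r$ observed rows are our own illustration, not fixed by [16]:

```python
import numpy as np

def constraint_matrix(Omega, r):
    """For each column of Omega with l >= r + 1 observed entries, emit
    l - r binary columns, each with exactly r + 1 ones: the first r
    observed rows plus one further observed row."""
    n1, n2 = Omega.shape
    cols = []
    for j in range(n2):
        obs = np.flatnonzero(Omega[:, j])
        base, rest = obs[:r], obs[r:]
        for i in rest:
            c = np.zeros(n1, dtype=int)
            c[base] = 1
            c[i] = 1
            cols.append(c)
    return np.column_stack(cols)

Omega = np.ones((5, 3), dtype=int)        # fully observed toy pattern
breve = constraint_matrix(Omega, r=2)
assert breve.shape == (5, 3 * (5 - 2))    # l - r = 3 columns per column of Omega
assert (breve.sum(axis=0) == 3).all()     # exactly r + 1 ones per column
```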

Assumption : There exists a submatrix (specified by a subset of rows and a subset of columns, not necessarily consecutive) of such that and for any and any submatrix of we have

$$r\,f(\breve{\Omega}''_{r})-r^{2}\geq K',\tag{10}$$

where denotes the number of nonzero rows of .

Note that exhaustive enumeration is needed in order to check whether or not Assumption holds. Hence, the deterministic analysis cannot be used in practice for large-scale data. However, it serves as the basis of the subsequent probabilistic analysis that will lead to a simple lower bound on the sampling probability such that Assumption holds with high probability, which is of practical value.

In the following, we restate Theorem in [16] which will be used later.

###### Lemma 1.

For almost every , there are finitely many completions of the sampled matrix if and only if Assumptions and hold.

Recall that the true rank is assumed unknown.

###### Definition 1.

Let denote the set of all natural numbers such that both Assumptions and hold.

###### Lemma 2.

There exists a number such that .

###### Proof.

Assume that and . It suffices to show . By contradiction, assume that . Therefore, according to Lemma 1, there exist infinitely many completions of of rank . Consider the decomposition , where and are the matrices of variables. Then, each observed entry of results in a polynomial in terms of the entries of $\mathbf{X}$ and $\mathbf{Y}$:

$$\mathbf{U}(i,j)=\sum_{l=1}^{r-1}\mathbf{X}(i,l)\,\mathbf{Y}(l,j),\tag{11}$$

and let denote the set of all such polynomials. Since there exist infinitely many completions of of rank , the maximum number of algebraically independent polynomials among all the polynomials in the set is less than , which is the dimension of the manifold of matrices of rank [24]; since otherwise, according to Bernstein’s theorem [25], there are at most finitely many completions. Hence, as we have and thus , the maximum number of algebraically independent polynomials in is less than as well, which is the dimension of the manifold of matrices of rank . Therefore, according to Bernstein’s theorem, with probability one there exist infinitely many completions of the sampled matrix of rank , and this contradicts the assumption. ∎

The following theorem provides a relationship between the unknown rank and .

###### Theorem 1.

With probability one, exactly one of the following statements holds

(i) ;

(ii) For any arbitrary completion of the sampled matrix of rank , we have .

###### Proof.

Suppose that there does not exist a completion of the sampled matrix of rank such that . Therefore, it is easily verified that statement (ii) holds and statement (i) does not hold. On the other hand, assume that there exists a completion of the sampled matrix of rank , where . Hence, statement (ii) does not hold and to complete the proof it suffices to show that with probability one, statement (i) holds.

Observe that , and therefore Assumption holds. Hence, each column of includes at least observed entries. On the other hand, the existence of a completion of the sampled matrix of rank results in the existence of a basis such that each column of is a linear combination of the columns of , and thus there exists such that . Hence, given , each observed entry results in a degree- polynomial in terms of the entries of as follows

$$\mathbf{U}(i,j)=\sum_{l=1}^{r}\mathbf{X}(i,l)\,\mathbf{Y}(l,j).\tag{12}$$

Consider the first column of and recall that it includes at least observed entries. The genericity of the coefficients of the above-mentioned polynomials implies that, using of the observed entries, the first column of can be determined uniquely. This is because a system of linearly independent linear equations in as many variables has a unique solution. Then, there exists at least one more observed entry besides these observed entries in the first column of , and it can be written as a linear combination of the observed entries that have been used to obtain the first column of . Let , , denote the observed entries that have been used to obtain the first column of and denote the other observed entry. Hence, the existence of a completion of the sampled matrix of rank results in an equation of the following form

$$\mathbf{U}(i_{r+1},1)=\sum_{l=1}^{r}t_l\,\mathbf{U}(i_l,1),\tag{13}$$

where ’s are constant scalars, . Assume that , i.e., statement (i) does not hold. Then, note that and is chosen generically from the manifold of rank- matrices, and therefore an equation of the form of (13) holds with probability zero. Moreover, according to Lemma 1, there exist at most finitely many completions of the sampled matrix of rank . Therefore, there exists a completion of of rank with probability zero, which contradicts the initial assumption that there exists a completion of the sampled matrix of rank , where . ∎
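The key step of this proof, equation (13), can be illustrated numerically: for a generic $\mathbf{U}=\mathbf{X}\mathbf{Y}$ of rank $r$, the coefficients $t_l$ that express one extra row of $\mathbf{X}$ through $r$ other rows reproduce the corresponding entries of $\mathbf{U}$ in every column. A sketch with dimensions of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r = 6, 5, 2
X = rng.standard_normal((n1, r))
Y = rng.standard_normal((r, n2))
U = X @ Y                       # generic rank-r matrix

rows = [0, 1, 2]                # the rows i_1, ..., i_r and i_{r+1}
# Solve X[i_{r+1}] = sum_l t_l X[i_l]; generically any r rows of X
# are linearly independent, so the system has a unique solution.
t = np.linalg.solve(X[rows[:r], :].T, X[rows[r], :])

# The same coefficients reproduce U(i_{r+1}, j) for every column j, as in (13).
assert np.allclose(t @ U[rows[:r], :], U[rows[r], :])
```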

###### Corollary 1.

Consider an arbitrary number . Similar to Theorem 1, it follows that with probability one, exactly one of the following holds

(i) ;

(ii) For any arbitrary completion of the sampled matrix of rank , we have .

As a result of Corollary 1, we have the following.

###### Corollary 2.

Assuming that there exists a rank- completion of the sampled matrix such that , then with probability one .

###### Corollary 3.

Let denote an optimal solution to the following NP-hard optimization problem

$$\underset{\mathbf{U}'\in\mathbb{R}^{n_1\times n_2}}{\text{minimize}}\ \ \operatorname{rank}(\mathbf{U}')\qquad\text{subject to}\ \ \mathbf{U}'_{\Omega}=\mathbf{U}_{\Omega}.\tag{14}$$

Also, let denote a suboptimal solution to the above optimization problem. Then, Corollary 1 yields the following statements:

(i) If , then with probability one.

(ii) If , then with probability one.

###### Remark 1.

One challenge of applying Corollary 3 or any of the other obtained deterministic results is the computation of , which involves exhaustive enumeration to check Assumption . Next, for each number , we provide a lower bound on the sampling probability in terms of that ensures with high probability. Consequently, we do not need to compute but instead we can certify the above results with high probability.

#### III-A2 Probabilistic Rank Analysis

The following lemma is a re-statement of Theorem in [16], which is the probabilistic version of Lemma 1.

###### Lemma 3.

Suppose and that each column of the sampled matrix is observed in at least entries, uniformly at random and independently across entries, where

$$l>\max\left\{12\,\log\!\left(\frac{n_1}{\epsilon}\right)+12,\ 2r\right\}.\tag{15}$$

Also, assume that . Then, with probability at least , .

The following lemma is taken from [23] and will be used to derive a lower bound on the sampling probability that leads to the similar statement as Theorem 1 with high probability.

###### Lemma 4.

Consider a vector with entries where each entry is observed with probability independently from the other entries. If , then with probability at least , more than entries are observed.

The following proposition characterizes the probabilistic version of Theorem 1.

###### Proposition 1.

Suppose , and that each entry of the sampled matrix is observed uniformly at random and independently across entries with probability , where

$$p>\frac{1}{n_1}\max\left\{12\,\log\!\left(\frac{n_1}{\epsilon}\right)+12,\ 2r\right\}+\frac{1}{\sqrt[4]{n_1}}.\tag{16}$$

Then, with probability at least , we have .

###### Proof.

Consider an arbitrary column of and note that, by Lemma 4, the number of observed entries in this column of is greater than with probability at least . Therefore, the number of sampled entries in each column satisfies

$$l>\max\left\{12\,\log\!\left(\frac{n_1}{\epsilon}\right)+12,\ 2r\right\},\tag{17}$$

with probability at least . Thus, by Lemma 3, with probability at least , we have . ∎

Finally, we have the following probabilistic version of Corollary 3.

###### Corollary 4.

Assume that and and (16) holds for , where denotes an optimal solution to the optimization problem (14). Then, according to Proposition 1 and Corollary 3, with probability at least , . Similarly, assume that and and (16) holds for , where denotes a suboptimal solution to the optimization problem (14). Then, with probability at least , .

#### III-A3 Numerical Results

In Fig. 1 and Fig. 2, the x-axis represents the sampling probability and the y-axis denotes the value of . The color scale represents the lower bound on the probability of the event . For example, as we can observe in Fig. 1, for any we have with probability at least (approximately, based on the color scale, since the corresponding points are orange) given that .

We consider the sampled matrix and in Fig. 1 and Fig. 2, respectively. In particular, for fixed values of the sampling probability and , we first find by trial-and-error a “small” such that (16) holds. Then, according to Proposition 1, we conclude that with probability at least , .
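Under our reading of (16), the "small" certified rank for given $p$, $n_1$ and $\epsilon$ can also be found by a direct scan over $r$ rather than trial-and-error. A sketch; the function names and the sample values $n_1=500$, $\epsilon=0.05$ are our own choices:

```python
import math

def certifies(p, n1, eps, r):
    """Our reading of (16): p > (1/n1) max{12 log(n1/eps) + 12, 2r} + 1/n1**(1/4)."""
    return p > max(12 * math.log(n1 / eps) + 12, 2 * r) / n1 + n1 ** -0.25

def rank_upper_bound(p, n1, eps):
    """Largest r certified by (16) for this p; 0 means no rank is certified."""
    r = 0
    while r + 1 < n1 and certifies(p, n1, eps, r + 1):
        r += 1
    return r

n1, eps = 500, 0.05
bounds = {p: rank_upper_bound(p, n1, eps) for p in (0.4, 0.6, 0.8)}
# Larger sampling probabilities certify larger ranks.
assert bounds[0.4] <= bounds[0.6] <= bounds[0.8]
```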

The purpose of Fig. 3 is to show how tight our proposed upper bounds on the rank can be. Here, we first generate a random matrix of a given rank by multiplying a random matrix and a random matrix (entries are drawn according to a uniform distribution on real numbers within an interval). Then, each entry of the randomly generated matrix is sampled uniformly at random and independently across entries with some sampling probability . Afterwards, we apply the nuclear norm minimization method proposed in [26] for matrix completion, where the non-convex objective function in (14) is relaxed by using the nuclear norm, the convex envelope of the rank function, as follows

$$\underset{\mathbf{U}'\in\mathbb{R}^{n_1\times n_2}}{\text{minimize}}\ \ \|\mathbf{U}'\|_{*}\qquad\text{subject to}\ \ \mathbf{U}'_{\Omega}=\mathbf{U}_{\Omega},\tag{18}$$

where denotes the nuclear norm of . Let denote an optimal solution to (18) and recall that denotes an optimal solution to (14). Since (18) is a convex relaxation of (14), we conclude that is a suboptimal solution to (14), and therefore . We used the Matlab program found online [27] to solve (18).
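For illustration, a minimal soft-impute-style iteration for a regularized form of (18) can be written in a few lines of Python. This is a sketch, not the solver of [26, 27]: the shrinkage threshold `tau`, the iteration count, and the test matrix are our own arbitrary choices.

```python
import numpy as np

def soft_impute(M, Omega, tau=0.1, iters=500):
    """Fill unobserved entries with the current estimate, then
    soft-threshold the singular values (a standard proximal
    iteration for nuclear-norm-penalized completion)."""
    U = np.zeros_like(M)
    for _ in range(iters):
        filled = np.where(Omega, M, U)
        W, s, Vt = np.linalg.svd(filled, full_matrices=False)
        U = (W * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
    return U

rng = np.random.default_rng(0)
n, r = 20, 1
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # true rank-r matrix
Omega = rng.random((n, n)) < 0.8                               # ~80% of entries observed

U_hat = soft_impute(M, Omega)
rank_hat = int((np.linalg.svd(U_hat, compute_uv=False) > 1e-3).sum())
# By Corollary 3, the rank of a completion upper-bounds the true rank w.h.p.
assert rank_hat >= r
assert np.linalg.norm(U_hat - M) < 0.1 * np.linalg.norm(M)
```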

As an example, we generate a random matrix (of the same size as the matrix in Fig. 1) of rank as described above for and some values of the sampling probability . Then, we obtain the rank of the completion given by (18) and denote it by . Due to the randomness of the sampled matrix, we repeat this procedure 5 times. We calculate the “gap” in each of these runs and denote the maximum and minimum among these 5 numbers by and , respectively. Hence, and represent the loosest (worst) and tightest (best) gaps between the rank obtained by (18) and the rank of the original sampled matrix over the 5 runs, respectively. In Fig. 3, the maximum and minimum gaps are plotted as a function of the rank of the matrix, for different sampling probabilities.

We have the following observations.

• According to Fig. 1, for and we can ensure that the rank of any completion is an upper bound on the rank of the sampled matrix or with probability at least and , respectively.

• As we can observe in Fig. 3(a)-(d), the defined gap is always nonnegative, which is consistent with the previous observation that for and we can certify that, with high probability ( ), the rank of any completion is an upper bound on the rank of the sampled matrix or .

• For and , for which we have theoretical results (as mentioned in the first observation), the gap obtained by (18) is very close to zero. This phenomenon (for which we do not have a rigorous justification) shows that as soon as we can certify our proposed theoretical results by increasing the sampling probability (i.e., as soon as the rank of a completion provably upper-bounds the rank of the sampled matrix or ), the upper bound found through (18) becomes very tight; in some cases this bound is exactly equal to (red curves) and in some cases it is almost equal to (blue curves). However, these gaps are not small (especially the blue curves) for and , and note that according to Fig. 1, for these values of we cannot guarantee that the bounds on the rank hold with high probability.

### III-B CP-Rank Tensor

In this subsection, we assume that the sampled tensor is chosen generically from the manifold of tensors of rank , where is unknown.

Assumption : Each row of the -th matricization of the sampled tensor, i.e., includes at least observed entries.

We construct a binary valued tensor , called the constraint tensor, based on and a given number . Consider any subtensor of the tensor . The sampled tensor includes subtensors that belong to , and let for denote these subtensors. Define a binary valued tensor , where , and its entries are described as follows. We can look at as tensors, each belonging to . For each of the mentioned tensors in , we set the entries corresponding to of the observed entries equal to . For each of the other observed entries, we pick one of the tensors of and set its corresponding entry (the same location as that specific observed entry) equal to , and set the rest of the entries equal to . In the case that , we simply ignore .

By putting together all tensors in dimension , we construct a binary valued tensor , where , and call it the constraint tensor. Observe that each subtensor of that belongs to includes exactly nonzero entries. An example of the construction of is given in [18].

Assumption : contains a subtensor such that and for any and any subtensor of we have

 (19)

where denotes the number of nonzero rows of the -th matricization of .

The following lemma is a re-statement of Theorem in [18].

###### Lemma 5.

For almost every , there are only finitely many rank- completions of the sampled tensor if and only if Assumptions and hold.

###### Definition 2.

Let denote the set of all natural numbers such that both Assumptions and hold.

###### Lemma 6.

There exists a number such that .

###### Proof.

The proof is similar to the proof of Lemma 2, with the only difference that the dimension of the manifold of CP rank- tensors is [18], which is an increasing function of . ∎

The following theorem gives an upper bound on the unknown rank .

###### Theorem 2.

For almost every , with probability one, exactly one of the following statements holds

(i) ;

(ii) For any arbitrary completion of the sampled tensor of rank , we have .

###### Proof.

Similar to the proof of Theorem 1, it suffices to show that there exists a completion of of CP rank , where , with probability zero. Define as the basis of the rank- CP decomposition of as in (3), where is a rank- tensor and is defined in (3) for and . Define and . Observe that .

Observe that each row of includes at least observed entries since Assumption holds. Moreover, the existence of a completion of the sampled tensor of rank results in the existence of a basis such that there exists and . As a result, given , each observed entry of results in a degree- polynomial in terms of the entries of as

$$\mathbf{U}(\vec{x})=\sum_{l=1}^{r}\mathbf{V}_l(x_1,\ldots,x_{d-1})\,\mathbf{a}^{l}_{d}(x_d).\tag{20}$$

Note that and each row of includes at least observed entries. Consider of the observed entries of the first row of and we denote them by , , , where the last component of the vector is equal to one, . Similar to the proof of Theorem 1, genericity of results in

$$\mathbf{U}(\vec{x}_{r+1})=\sum_{l=1}^{r}t_l\,\mathbf{U}(\vec{x}_{l}),\tag{21}$$

where ’s are constant scalars, . On the other hand, according to Lemma 5, there exist at most finitely many completions of the sampled tensor of rank . Moreover, an equation of the form of (21) holds with probability zero, as and is chosen generically from the manifold of rank- tensors. Therefore, there exists a completion of of rank with probability zero. ∎

###### Corollary 5.

Consider an arbitrary number . Similar to Theorem 2, it follows that with probability one, exactly one of the following holds

(i) ;

(ii) For any arbitrary completion of the sampled tensor of rank , we have .

###### Corollary 6.

Assuming that there exists a CP rank- completion of the sampled tensor such that , we conclude that with probability one .

###### Corollary 7.

Let denote an optimal solution to the following NP-hard optimization problem

$$\underset{\mathbf{U}'\in\mathbb{R}^{n_1\times\cdots\times n_d}}{\text{minimize}}\ \ \operatorname{rank}_{\text{CP}}(\mathbf{U}')\qquad\text{subject to}\ \ \mathbf{U}'_{\Omega}=\mathbf{U}_{\Omega}.\tag{22}$$

Assume that . Then, Corollary 6 implies that with probability one.

The following lemma is Lemma in [18], which is the probabilistic version of Lemma 5 in terms of the sampling probability.

###### Lemma 7.

Assume that , , and . Moreover, assume that the sampling probability satisfies

$$p>\frac{1}{n^{d-2}}\max\left\{27\,\log\!\left(\frac{n}{\epsilon}\right)+9\,\log\!\left(\frac{2r(d-2)}{\epsilon}\right)+18,\ 6r\right\}+\frac{1}{\sqrt[4]{n^{d-2}}}.\tag{23}$$

Then, with probability at least , we have .

The following corollary is the probabilistic version of Corollaries 6 and 7.

###### Corollary 8.

Assuming that there exists a CP rank- completion of the sampled tensor such that the conditions given in Lemma 7 hold, with the sampling probability satisfying (23), we conclude that with probability at least we have . Therefore, given that (23) holds for and denotes an optimal solution to the optimization problem (22), with probability at least we have .

## IV Vector-Rank Cases

### IV-A Multi-View Matrix

The following assumptions will be used frequently in this subsection.

Assumption : Each column of and includes at least and sampled entries, respectively.

We construct a binary valued matrix called constraint matrix for multi-view matrix as , where