# Fundamental Conditions for Low-CP-Rank Tensor Completion

We consider the problem of low canonical polyadic (CP) rank tensor completion. A completion is a tensor whose entries agree with the observed entries and whose rank matches the given CP rank. We analyze the manifold structure corresponding to the tensors with the given rank and define a set of polynomials based on the sampling pattern and the CP decomposition. We then show that finite completability of the sampled tensor is equivalent to having a certain number of algebraically independent polynomials among the defined polynomials. Our proposed approach characterizes the maximum number of algebraically independent polynomials in terms of a simple geometric structure of the sampling pattern, and we thereby obtain a deterministic necessary and sufficient condition on the sampling pattern for finite completability of the sampled tensor. Moreover, assuming that the entries of the tensor are sampled independently with probability p and using the mentioned deterministic analysis, we propose a combinatorial method to derive a lower bound on the sampling probability p, or equivalently, on the number of sampled entries, that guarantees finite completability with high probability. We also show that the existing result for the matrix completion problem can be used to obtain a loose lower bound on the sampling probability p. In addition, we obtain deterministic and probabilistic conditions for unique completability. The number of samples required for finite or unique completability obtained by the proposed analysis on the CP manifold is orders of magnitude lower than that obtained by the existing analysis on the Grassmannian manifold.


## I Introduction

The fast progress in data science and technology has given rise to extensive applications of multi-way datasets, which allow us to take advantage of the inherent correlations across different attributes. Classical matrix analysis is limited in its ability to exploit the correlations across different features from a multi-way perspective. In contrast, the analysis of multi-way data (tensors), which was originally proposed in the field of psychometrics and has recently found applications in machine learning and signal processing, is capable of taking full advantage of these correlations [1, 2, 3, 4, 5, 6]. The problem of low-rank tensor completion, i.e., reconstructing a tensor from a subset of its entries given the rank, which is in general NP-hard [7], arises in compressed sensing [8, 9, 10], visual data reconstruction [11, 12], seismic data processing [13, 14, 15], etc. Existing approaches to low-rank data completion mainly focus on convex relaxations of the matrix rank [16, 17, 18, 19, 20] or different convex relaxations of tensor ranks [10, 21, 22, 23].

Tensors formed from real-world datasets usually have a low-rank structure. The manifold of low-rank tensors has recently been investigated in several works [4, 24, 25]. In this paper, we focus on the canonical polyadic (CP) decomposition [26, 27, 28, 29] and the corresponding CP rank, but in general there are other well-known tensor decompositions, including the Tucker decomposition [30, 31, 32], the tensor-train (TT) decomposition [33, 34], the tubal rank decomposition [35], and several other methods [36, 37]. Note that most of the existing literature on tensor completion based on various optimization formulations uses the CP rank [10, 38].

In this paper, we study the fundamental conditions on the sampling pattern that ensure a finite number of completions or a unique completion, where these fundamental conditions are independent of the correlations among the entries of the tensor, in contrast to assumptions commonly adopted in the literature such as incoherence. Given the rank of a matrix, Pimentel-Alarcón et al. [39] obtain such fundamental conditions on the sampling pattern for finite completability of the matrix. Previously, we treated the same problem for the multi-view matrix [40], for a tensor given its Tucker rank [24], and for a tensor given its TT rank [25]. In this paper, the structure of the CP decomposition and the geometry of the corresponding manifold are investigated to obtain the fundamental conditions for finite completability of a tensor given its CP rank.

To emphasize the contribution of this work, we highlight the differences and challenges in comparison with the Tucker and TT tensor models. In the CP decomposition, the notion of tensor multiplication is different from those for Tucker and TT, and therefore the geometry of the manifold and the algebraic variety are completely different. Moreover, in the CP decomposition we are dealing with a sum of several tensor products, which is not the case in the Tucker and TT decompositions, and therefore the equivalence classes or geometric patterns that are needed to study the algebraic variety are different. Finally, the CP rank is a scalar, and the ranks of the matricizations and unfoldings are not given, in contrast with the Tucker and TT models.

Let U denote the sampled tensor and Ω denote the binary sampling pattern tensor that is of the same dimension and size as U. The entries of Ω that correspond to the observed entries of U are equal to 1 and the rest of the entries are set to 0. Assume that the entries of U are sampled independently with probability p. This paper is mainly concerned with treating the following three problems.

Problem (i): Given the CP rank, characterize the necessary and sufficient conditions on the sampling pattern Ω under which there exist only finitely many completions of U.

We consider the CP decomposition of the sampled tensor, where all rank-1 tensors in this decomposition are unknown and we only observe some entries of U. Then, each sampled entry results in a polynomial whose variables are the entries of the rank-1 tensors in the CP decomposition. We propose a novel analysis on the CP manifold to obtain the maximum number of algebraically independent polynomials, among all polynomials corresponding to the sampled entries, in terms of the geometric structure of the sampling pattern Ω. We show that if the maximum number of algebraically independent polynomials equals the dimension of the corresponding manifold, then the sampled tensor is finitely completable. Due to the fundamental differences between the CP decomposition and the Tucker or TT decompositions, this analysis is completely different from our previous works [24, 25]. Moreover, note that our proposed algebraic geometry analysis on the CP manifold is not a simple generalization of the existing analysis on the Grassmannian manifold [39], even though the CP decomposition is a generalization of the rank factorization of a matrix, as almost every step needs to be developed anew.

Problem (ii): Characterize conditions on the sampling pattern to ensure that there is exactly one completion for the given CP rank.

Similar to Problem (i), our approach is to study the algebraic independence of the polynomials corresponding to the samples. We exploit the properties of a set of minimally algebraically dependent polynomials to add constraints on the sampling pattern such that each of the rank-1 tensors in the CP decomposition can be determined uniquely.

Problem (iii): Provide a lower bound on the total number of sampled entries or the sampling probability such that the proposed conditions on the sampling pattern for finite and unique completability are satisfied with high probability.

We develop several combinatorial tools, together with our previous graph-theory results in [24], to obtain lower bounds on the total number of sampled entries, i.e., lower bounds on the sampling probability p, such that the deterministic conditions for Problems (i) and (ii) are met with high probability. In particular, it is shown in [38] that a certain number of samples is required to recover a tensor of rank r. Recall that in this paper we obtain the number of samples needed to ensure finite/unique completability independently of the completion algorithm. As we show later, the existing analysis on the Grassmannian manifold leads to one sample-complexity requirement for finite/unique completability, whereas our proposed analysis on the CP manifold guarantees finiteness of the number of completions with significantly fewer samples than given in [38]. Hence, the fundamental conditions for tensor completion motivate new optimization formulations to close the gap in the number of required samples.

The remainder of this paper is organized as follows. In Section II, the preliminaries and problem statement are presented. In Section III, we develop necessary and sufficient deterministic conditions for finite completability. In Section IV, we develop probabilistic conditions for finite completability. In Section V, we consider unique completability and obtain both deterministic and probabilistic conditions. Some numerical results are provided in Section VI. Finally, Section VII concludes the paper.

## II Background

### II-A Preliminaries and Notations

In this paper, it is assumed that a d-way tensor U is sampled. Throughout this paper, we use the CP rank as the rank of a tensor, which is defined as the minimum number r such that there exist vectors a_l^i for 1 ≤ l ≤ r and 1 ≤ i ≤ d with

 U = ∑_{l=1}^{r} a_l^1 ⊗ a_l^2 ⊗ ⋯ ⊗ a_l^d, (1)

or equivalently,

 U(x_1, x_2, …, x_d) = ∑_{l=1}^{r} a_l^1(x_1) a_l^2(x_2) ⋯ a_l^d(x_d), (2)

where ⊗ denotes the tensor product (outer product), U(x_1, x_2, …, x_d) denotes the entry of the sampled tensor with coordinates (x_1, x_2, …, x_d), and a_l^i(x_i) denotes the x_i-th entry of the vector a_l^i. Note that a_l^1 ⊗ a_l^2 ⊗ ⋯ ⊗ a_l^d is a rank-1 tensor, l = 1, …, r.
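The definitions (1) and (2) can be made concrete in a few lines. The following Python sketch (an illustration of the definitions, with arbitrarily chosen dimensions) builds a three-way tensor of CP rank at most r as a sum of outer products and checks the entrywise formula (2):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, r = 3, 4, 5, 2

# Vectors a_l^i, stored as the columns of factor matrices A1, A2, A3.
A1, A2, A3 = (rng.normal(size=(n, r)) for n in (n1, n2, n3))

# (1): U is the sum over l of the outer products a_l^1 ⊗ a_l^2 ⊗ a_l^3.
U = sum(np.einsum('i,j,k->ijk', A1[:, l], A2[:, l], A3[:, l]) for l in range(r))

# (2): each entry of U is a sum of products of single vector entries.
x1, x2, x3 = 1, 2, 3
assert np.isclose(U[x1, x2, x3],
                  sum(A1[x1, l] * A2[x2, l] * A3[x3, l] for l in range(r)))
```

Each observed entry U(x_1, x_2, x_3) thus pins down one polynomial equation in the factor entries, which is the viewpoint used throughout the paper.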

Denote by Ω the binary sampling pattern tensor that is of the same size as U, with Ω(x_1, …, x_d) = 1 if U(x_1, …, x_d) is observed and Ω(x_1, …, x_d) = 0 otherwise.

For a nonempty proper subset J of {1, …, d}, let U_(J) be the unfolding of the tensor U corresponding to the set J, i.e., the matrix whose rows are indexed by the coordinates in J and whose columns are indexed by the remaining coordinates, via two bijective mappings between tensor coordinates and matrix indices. For J = {i}, we denote the corresponding unfolding by U_(i) and call it the i-th matricization of the tensor U.

### II-B Problem Statement and a Motivating Example

We are interested in finding necessary and sufficient deterministic conditions on the sampling pattern tensor under which there are infinite, finite, or unique completions of the sampled tensor that satisfy the given CP rank constraint. Furthermore, we are interested in finding probabilistic conditions on the number of samples or the sampling probability that ensure the obtained deterministic conditions for finite and unique completability hold with high probability.

To motivate our proposed analysis on the CP manifold, we compare the following two approaches using a simple example to emphasize the necessity of our proposed analysis: (i) analyzing each of the unfoldings individually, and (ii) analyzing based on the CP decomposition.

Consider a three-way tensor U with CP rank 1 and two entries along each dimension. Assume that the entries U(1,1,1), U(2,1,1), U(1,2,1), and U(1,1,2) are observed. As a result of Lemma 8 in this paper, all unfoldings of this tensor are rank-1 matrices. It is shown in Section II of [24] that, having only these four entries of a rank-1 matrix, there are infinitely many completions for it. As a result, any unfolding of U has infinitely many completions given only the corresponding rank constraint. Next, using the CP decomposition (1), we show that there are only finitely many completions of the sampled tensor of CP rank 1.

Define x = a_1^1(1), x′ = a_1^1(2), y = a_1^2(1), y′ = a_1^2(2), z = a_1^3(1), and z′ = a_1^3(2). Then, according to (1), we have the following

 U(1,1,1) = xyz,   U(2,2,1) = x′y′z,
 U(2,1,1) = x′yz,  U(2,1,2) = x′yz′,
 U(1,2,1) = xy′z,  U(1,2,2) = xy′z′,
 U(1,1,2) = xyz′,  U(2,2,2) = x′y′z′. (3)

Recall that U(1,1,1), U(2,1,1), U(1,2,1), and U(1,1,2) are the observed entries. Hence, the unknown entries can be determined uniquely in terms of the observed entries as

 U(2,2,1) = x′y′z  = U(2,1,1) U(1,2,1) / U(1,1,1),
 U(2,1,2) = x′yz′  = U(2,1,1) U(1,1,2) / U(1,1,1),
 U(1,2,2) = xy′z′  = U(1,2,1) U(1,1,2) / U(1,1,1),
 U(2,2,2) = x′y′z′ = U(2,1,1) U(1,2,1) U(1,1,2) / U(1,1,1)². (4)

Therefore, based on the CP decomposition, the sampled tensor is finitely (in fact, uniquely) completable. Hence, this example illustrates that collapsing a tensor into a matrix results in a loss of information, which motivates the investigation of the tensor CP manifold.
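The identities in (4) can be checked numerically. The sketch below (our own illustration, with a generically drawn rank-1 tensor) verifies that the four observed entries determine the remaining four:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic rank-1 2x2x2 tensor U = a ⊗ b ⊗ c with nonzero factors.
a, b, c = (rng.uniform(1.0, 2.0, size=2) for _ in range(3))
U = np.einsum('i,j,k->ijk', a, b, c)

# The four observed entries (1-based: U(1,1,1), U(2,1,1), U(1,2,1), U(1,1,2)).
o111, o211, o121, o112 = U[0, 0, 0], U[1, 0, 0], U[0, 1, 0], U[0, 0, 1]

# The remaining entries follow uniquely from (4).
assert np.isclose(U[1, 1, 0], o211 * o121 / o111)             # U(2,2,1)
assert np.isclose(U[1, 0, 1], o211 * o112 / o111)             # U(2,1,2)
assert np.isclose(U[0, 1, 1], o121 * o112 / o111)             # U(1,2,2)
assert np.isclose(U[1, 1, 1], o211 * o121 * o112 / o111**2)   # U(2,2,2)
```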

## III Deterministic Conditions for Finite Completability

In this section, we characterize the necessary and sufficient condition on the sampling pattern for finite completability of the sampled tensor given its CP rank. In Section III-A, we define a polynomial based on each observed entry and, through studying the geometry of the manifold of the corresponding CP rank, we transform the problem of finite completability of U into the problem of having a sufficient number of algebraically independent polynomials among those defined for the observed entries. In Section III-B, a binary tensor is constructed based on the sampling pattern Ω, which allows us to study the algebraic independence of a subset of polynomials among all polynomials defined based on the samples. In Section III-C, we characterize the connection between the maximum number of algebraically independent polynomials among all the defined polynomials and finite completability of the sampled tensor.

### III-A Geometry

Suppose that the sampled tensor U is chosen generically from the manifold of tensors of rank r. Assume that the vectors a_l^i are unknown for 1 ≤ l ≤ r and 1 ≤ i ≤ d. For notational simplicity, collect these vectors into tuples, one per dimension i. Note that each of the sampled entries results in a polynomial in terms of the entries of the a_l^i's as in (2).

Here, we briefly mention the following two facts to highlight the fundamentals of our proposed analysis.

• Fact 1: As can be seen from (2), any observed entry U(x_1, …, x_d) results in an equation that involves one entry of each a_l^i, l = 1, …, r, i = 1, …, d. Considering the entries of the a_l^i's as variables (the right-hand side of (2)), each observed entry results in a polynomial in terms of these variables. Moreover, for any observed entry U(x_1, …, x_d), the value of x_i specifies the location of the entry of a_l^i that is involved in the corresponding polynomial, i = 1, …, d and l = 1, …, r.

• Fact 2: It can be concluded from Bernstein's theorem [41] that in a system of n polynomials in n variables with coefficients chosen generically, the polynomials are algebraically independent with probability one, and therefore there exist only finitely many solutions. Moreover, in a system of n + 1 polynomials in n variables (or fewer), the polynomials are algebraically dependent with probability one.
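Algebraic independence of polynomials with generic coefficients can be probed numerically via the Jacobian criterion: the polynomials are algebraically independent exactly when the Jacobian of the polynomial map has full row rank at a generic point. The following sketch (our own illustration, not part of the paper's machinery) shows both cases of Fact 2 with generic quadratics:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def jacobian_rank(grads, x):
    """Rank of the Jacobian [∂p_j/∂x_i] of a polynomial map at the point x."""
    return np.linalg.matrix_rank(np.array([g(x) for g in grads]))

# Generic quadratics p_j(x) = x^T Q_j x; the gradient is (Q_j + Q_j^T) x.
Qs = [rng.normal(size=(n, n)) for _ in range(4)]
grads = [lambda x, Q=Q: (Q + Q.T) @ x for Q in Qs]
x0 = rng.normal(size=n)

rank3 = jacobian_rank(grads[:3], x0)   # n polynomials in n variables
rank4 = jacobian_rank(grads, x0)       # n + 1 polynomials in n variables
print(rank3, rank4)
```

Generically, `rank3` equals n = 3 (independent, finitely many solutions), while the Jacobian of four polynomials in three variables is a 4 × 3 matrix and can never have rank 4, matching the dependence claim.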

The following assumption will be used frequently in this paper.

Assumption 1: Each row of the d-th matricization of the sampled tensor, i.e., U_(d), includes at least r observed entries.

###### Lemma 1.

Given the a_l^i's for 1 ≤ l ≤ r and 1 ≤ i ≤ d − 1, and given Assumption 1, the a_l^d's can be determined uniquely.

###### Proof.

As can be seen from (2), once the a_l^i's for i ≤ d − 1 are given, each observed entry in the i-th row of U_(d) results in a degree-1 polynomial in terms of the r entries a_1^d(i), …, a_r^d(i). Since Assumption 1 holds, for each row of unknowns, which has r variables, we have at least r degree-1 polynomials. Genericity of the coefficients of these polynomials results in each such row being determined uniquely. ∎
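Lemma 1 amounts to solving, for each row index of the d-th matricization, a small linear system with generic coefficients. A Python sketch under these assumptions (d = 3, random factors, r distinct samples per row, all choices ours for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, n3, r = 4, 5, 6, 2

# Random CP factors; U(x1, x2, x3) = sum_l a_l^1(x1) a_l^2(x2) a_l^3(x3).
A1, A2, A3 = (rng.normal(size=(n, r)) for n in (n1, n2, n3))
U = np.einsum('il,jl,kl->ijk', A1, A2, A3)

# Given A1 and A2, each row A3[k, :] solves an r x r linear system built
# from r observed entries in row k of the 3rd matricization (the degree-1
# polynomials in the proof of Lemma 1).
A3_hat = np.zeros((n3, r))
for k in range(n3):
    flat = rng.choice(n1 * n2, size=r, replace=False)    # r distinct samples
    samples = [divmod(int(s), n2) for s in flat]
    M = np.array([A1[i] * A2[j] for i, j in samples])    # r x r coefficients
    b = np.array([U[i, j, k] for i, j in samples])
    A3_hat[k] = np.linalg.solve(M, b)

assert np.allclose(A3_hat, A3)
```

With generic factors the coefficient matrix M is nonsingular with probability one, so each row of the last factor is recovered exactly.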

As a result of Lemma 1, we can obtain the a_l^d's in terms of the entries of the a_l^i's for 1 ≤ i ≤ d − 1. As mentioned earlier, each observed entry is equivalent to a polynomial of the form (2). Consider all such polynomials excluding those that have been used to obtain the a_l^d's (r samples in each row of U_(d)), and denote this set of polynomials, written in terms of the entries of the a_l^i's for 1 ≤ i ≤ d − 1, by P(Ω).

We are interested in defining an equivalence class such that each class includes exactly one decomposition among all rank-r decompositions of a particular tensor, and the pattern in Lemma 3 characterizes such an equivalence class. Lemma 2 below is a re-statement of a lemma in [25], which characterizes such an equivalence class, or equivalently a geometric pattern, for a matrix instead of a tensor. This lemma will be used to prove Lemma 3 later.

###### Lemma 2.

Let a matrix be chosen generically from the manifold of matrices of the given rank, and let a full-rank matrix be given. Then, there exists a unique rank factorization of the chosen matrix such that a fixed submatrix (specified by a subset of rows and a subset of columns, not necessarily consecutive) of one factor is equal to the given full-rank matrix.

In Lemma 3, we generalize Lemma 2 and characterize the similar pattern for a multi-way tensor. Assuming that the fixed submatrix in Lemma 2 consists of the first columns and the first rows of the factor and is equal to the identity matrix, this pattern is called the canonical decomposition. For a rank-2 matrix, the canonical decomposition fixes this leading block of one factor to the identity, so that the remaining entries of both factors are determined uniquely, and the decomposition can equivalently be written as a sum of two rank-1 terms. The canonical decomposition generalizes to a multi-way tensor by fixing the corresponding entries of the rank-1 tensors in the CP decomposition (1) in the same fashion.

###### Lemma 3.

Let be a fixed number and define . Assume that the full rank matrix and matrices with nonzero entries for are given. Also, let denote an arbitrary submatrix of , , where and for . Then, with probability one, there exists exactly one rank- decomposition of such that , .

###### Proof.

First we claim that there exists at most one rank- decomposition of such that , . We assume that , and also is given. Then, it suffices to show that the rest of the entries of can be determined in at most a unique way (no more than one solution) in terms of the given parameters such that (1) holds. Note that if a variable can be determined uniquely through two different ways (two sets of samples or equations), in general either it can be determined uniquely if both ways result in the same value or it does not have any solution otherwise. Let denote the row number of submatrix for and denote the row numbers of submatrix .

As the first step of proving our claim, we show that can be determined uniquely. Consider the subtensor which includes entries. Having CP decomposition (2), each entry of results in one degree- polynomial in terms of the entries of with coefficients in terms of the entries of ’s. Let the matrix represent the entries of . Moreover, define where for and is the -th column of .

Observe that CP decomposition (2) for the entries of can be written as . Recall that is full rank, and therefore ’s are linearly independent, . Also, a system of equations with at least linearly independent degree- polynomials in variables does not have more than one solution. Hence, ’s are also linearly independent for , and therefore is full rank. As a result, can be determined uniquely. In the second step, similar to the first step, we can show that the rest of ’s have at most one solution having one entry of which has been already obtained.

Finally, we also claim that there exists at least one rank-r decomposition of the tensor satisfying the stated constraints. We show this by induction on d. For the base case, this is a result of Lemma 2. The induction hypothesis states that the claim holds for d − 1, and we need to show that it also holds for d. By merging two dimensions for each of the rank-1 tensors of the corresponding CP decomposition and using the induction hypothesis, this step reduces to showing that a rank-1 matrix can be decomposed into two vectors such that one component of one of them is given, which is again a special case of Lemma 2 for the rank-1 scenario. ∎

Consider the set of all possible tuples of factors for a given U without any polynomial constraint. Lemma 3 results in a pattern that characterizes exactly one rank-r decomposition among all rank-r decompositions, and therefore the dimension of this set is equal to the number of unknowns, i.e., the number of entries of the a_l^i's excluding those that are involved in the patterns of Lemma 3.

###### Lemma 4.

For almost every U, the sampled tensor U is finitely completable if and only if the maximum number of algebraically independent polynomials in P(Ω) is equal to the dimension characterized above.

###### Proof.

The proof is omitted due to its similarity to the proof of the corresponding lemma in [24], with the only difference being that here the dimension is that of the CP manifold instead of the dimension of the core tensor in the Tucker decomposition. ∎

### III-B Constraint Tensor

In this section, we provide a procedure to construct a binary tensor ˘Ω based on Ω such that each polynomial in P(Ω) can be represented by one subtensor of ˘Ω of width one along the last dimension. Using ˘Ω, we are able to recognize the observed entries that have been used to obtain the a_l^d's in terms of the entries of the a_l^i's, 1 ≤ i ≤ d − 1, and we can study the algebraic independence of the polynomials in P(Ω), which is directly related to finite completability through Lemma 4.

For each subtensor Y of the sampled tensor U, let N_Ω(Y) denote the number of sampled entries in Y. Specifically, consider any subtensor Y of U of width one along the d-th dimension. Then, since r of the polynomials have been used to obtain the a_l^d's, the subtensor Y contributes N_Ω(Y) − r polynomial equations in terms of the entries of the a_l^i's, 1 ≤ i ≤ d − 1, among all polynomials in P(Ω).

The sampled tensor U includes n_d subtensors of width one along the d-th dimension; let Y_i for 1 ≤ i ≤ n_d denote these subtensors and define K_i = N_Ω(Y_i) − r. For each Y_i, define a binary tensor ˘Y_i of width K_i along the last dimension, whose entries are described as follows. We can view ˘Y_i as K_i tensors, each of width one along the last dimension. For each of the mentioned K_i tensors, we set the entries corresponding to the r observed entries of Y_i that are used to obtain the a_l^d's equal to 1. For each of the other K_i observed entries that have not been used to obtain the a_l^d's, we pick one of the K_i tensors of ˘Y_i and set its corresponding entry (the same location as that specific observed entry) equal to 1, and set the rest of the entries equal to 0. In the case that K_i = 0, we simply ignore Y_i, i.e., ˘Y_i is empty.

By putting together all the ˘Y_i's along the d-th dimension, we construct a binary valued tensor ˘Ω of width K = K_1 + ⋯ + K_{n_d} along the last dimension, and call it the constraint tensor. Observe that each subtensor of ˘Ω of width one along the last dimension includes exactly r + 1 nonzero entries. In the following, we show this procedure for an example.

###### Example 1.

Consider an example in which d = 3, r = 2, and U has three entries along each dimension. Assume that Ω(x_1, x_2, x_3) = 1 if (x_1, x_2, x_3) ∈ S and Ω(x_1, x_2, x_3) = 0 otherwise, where

 S = {(1,1,1), (1,2,1), (2,3,1), (3,3,1), (1,1,2), (2,1,2), (3,2,2), (1,3,3), (3,2,3)},

represents the set of observed entries. Hence, 4 observed entries belong to Y_1, 3 observed entries belong to Y_2, and 2 observed entries belong to Y_3, so that N_Ω(Y_1) = 4, N_Ω(Y_2) = 3, and N_Ω(Y_3) = 2. Hence, K_1 = 2, K_2 = 1, and K_3 = 0, and therefore the constraint tensor ˘Ω is a 3 × 3 × 3 binary tensor.

Also, assume that the entries that we use to obtain the a_l^3's in terms of the entries of the a_l^1's and a_l^2's are U(2,3,1) and U(3,3,1) from Y_1, U(1,1,2) and U(2,1,2) from Y_2, and the two observed entries of Y_3. Note that ˘Ω(2,3,1) = ˘Ω(3,3,1) = ˘Ω(2,3,2) = ˘Ω(3,3,2) = 1 (corresponding to the entries of Y_1 that have been used to obtain the a_l^3's), and also for the two other observed entries of Y_1 we have ˘Ω(1,1,1) = 1 (corresponding to U(1,1,1)) and ˘Ω(1,2,2) = 1 (corresponding to U(1,2,1)), and the rest of the entries of ˘Y_1 are equal to zero. Similarly, ˘Ω(1,1,3) = ˘Ω(2,1,3) = ˘Ω(3,2,3) = 1 and the rest of the entries of ˘Y_2 are equal to zero.

Then, ˘Ω(x_1, x_2, x_3) = 1 if (x_1, x_2, x_3) ∈ ˘S and ˘Ω(x_1, x_2, x_3) = 0 otherwise, where

 ˘S = {(1,1,1), (1,2,2), (2,3,1), (2,3,2), (3,3,1), (3,3,2), (1,1,3), (2,1,3), (3,2,3)}.

Note that each subtensor of ˘Ω of width one along the last dimension represents one of the polynomials in P(Ω), besides showing the polynomials that have been used to obtain the a_l^d's. More specifically, consider such a subtensor of ˘Ω with its r + 1 nonzero entries. Observe that exactly r of them correspond to the observed entries that have been used to obtain the a_l^d's. Hence, this subtensor represents a polynomial obtained after replacing the entries of the a_l^d's by their expressions in terms of the entries of the a_l^i's, 1 ≤ i ≤ d − 1, i.e., a polynomial in P(Ω).
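The construction of the constraint tensor can be sketched in code. In the Python sketch below, the choice of which r observed entries per slice are designated as "used" (here, simply the first r in lexicographic order, which differs from the choice made in Example 1) and the ordering of the resulting columns are our own assumptions; the shape of the output does not depend on these choices:

```python
import numpy as np

def constraint_tensor(Omega, r):
    """Build the constraint tensor from a binary sampling pattern Omega
    (shape n1 x ... x n_{d-1} x n_d) given the CP rank r."""
    *head, nd = Omega.shape
    cols = []
    for i in range(nd):
        obs = list(zip(*np.nonzero(Omega[..., i])))   # observed positions in Y_i
        used, rest = obs[:r], obs[r:]                 # assumption: first r are "used"
        for extra in rest:                            # one column per leftover sample
            col = np.zeros(head, dtype=int)
            for pos in used + [extra]:
                col[pos] = 1                          # r + 1 nonzero entries
            cols.append(col)
    return np.stack(cols, axis=-1) if cols else np.zeros((*head, 0), dtype=int)

# Example 1: d = 3, r = 2, 3 x 3 x 3 sampling pattern.
S = [(1,1,1), (1,2,1), (2,3,1), (3,3,1), (1,1,2),
     (2,1,2), (3,2,2), (1,3,3), (3,2,3)]
Omega = np.zeros((3, 3, 3), dtype=int)
for (x1, x2, x3) in S:
    Omega[x1 - 1, x2 - 1, x3 - 1] = 1
breve = constraint_tensor(Omega, r=2)
print(breve.shape)   # K = (4-2) + (3-2) + (2-2) = 3 columns
```

As in the example, K_1 = 2, K_2 = 1, and K_3 = 0, so the constraint tensor is 3 × 3 × 3 and every column carries exactly r + 1 = 3 ones.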

### III-C Algebraic Independence

In this section, we obtain the maximum number of algebraically independent polynomials in P(Ω) in terms of the simple geometric structure of the nonzero entries of ˘Ω, i.e., the locations of the sampled entries. On the other hand, Lemma 4 provides the required number of algebraically independent polynomials in P(Ω) for finite completability. Hence, at the end of this section, we obtain the necessary and sufficient deterministic conditions on the sampling pattern for finite completability.

According to Lemma 3, as we consider one particular equivalence class, some of the entries of the a_l^i's are known, namely those fixed by the pattern in the statement of the lemma. Therefore, in order to find the number of variables (unknown entries of the a_l^i's) in a set of polynomials, we should subtract the number of known entries in the corresponding pattern from the total number of involved entries. Also, recall that the sampled tensor is chosen generically from the corresponding manifold, and therefore, according to Fact 2, the independence of the polynomials can be studied through the number of variables involved in each subset of them.

###### Definition 1.

Let ˘Ω′ be a subtensor of the constraint tensor ˘Ω of width t along the last dimension. Let m_i(˘Ω′) denote the number of nonzero rows of the i-th matricization of ˘Ω′. Also, let P(˘Ω′) denote the set of polynomials that correspond to the nonzero entries of ˘Ω′.

The following lemma gives an upper bound on the maximum number of algebraically independent polynomials in the set P(˘Ω′) for an arbitrary subtensor ˘Ω′ of the constraint tensor. Note that P(˘Ω′) includes exactly t polynomials, as each subtensor of width one along the last dimension represents one polynomial.

###### Lemma 5.

Suppose that Assumption 1 holds. Consider an arbitrary subtensor ˘Ω′ of the constraint tensor ˘Ω. The maximum number of algebraically independent polynomials in P(˘Ω′) is at most

 r((∑_{i=1}^{d−1} m_i(˘Ω′)) − min{max{m_1(˘Ω′), …, m_{d−1}(˘Ω′)}, r} − (d−2)). (5)
###### Proof.

As a consequence of Fact 2, the maximum number of algebraically independent polynomials in a subset of polynomials of P(˘Ω′) is at most equal to the total number of variables that are involved in the corresponding polynomials. Note that, by the structure of (2) and Fact 1, the number of entries of the a_l^i's that are involved in the polynomials is r m_i(˘Ω′), i = 1, …, d − 1. Therefore, the total number of entries of the a_l^i's that are involved in the polynomials is r ∑_{i=1}^{d−1} m_i(˘Ω′). However, some of the entries of the a_l^i's are known, and depending on the equivalence class, we should subtract them from the total number of involved entries.

For the fixed pattern in Lemma 3, it is easily verified which entries of the a_l^i's are known in each equivalence class. Since the pattern is not fixed in general, the maximum number of known entries of the a_l^i's that are involved in the polynomials is r (min{max{m_1(˘Ω′), …, m_{d−1}(˘Ω′)}, r} + (d − 2)), which results in the number of variables of the a_l^i's that are involved in the polynomials being at most (5). ∎
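The bound (5) depends only on the numbers of nonzero rows m_i of the matricizations of the subtensor, so it is straightforward to compute. A small helper (our own sketch) evaluates it on a single-column subtensor with the r + 1 nonzero entries of Example 1:

```python
import numpy as np

def max_indep_bound(breve_sub, r):
    """Upper bound (5) for a subtensor of the constraint tensor,
    of shape n1 x ... x n_{d-1} x t (so ndim equals the tensor order d)."""
    d = breve_sub.ndim
    # m_i: number of nonzero rows of the i-th matricization, i = 1..d-1.
    m = [int((breve_sub.sum(axis=tuple(a for a in range(d) if a != i)) > 0).sum())
         for i in range(d - 1)]
    return r * (sum(m) - min(max(m), r) - (d - 2))

# One column (one polynomial) with r + 1 = 3 nonzero entries at
# positions (1,1), (2,3), (3,3): m_1 = 3, m_2 = 2.
sub = np.zeros((3, 3, 1), dtype=int)
for (x1, x2) in [(1, 1), (2, 3), (3, 3)]:
    sub[x1 - 1, x2 - 1, 0] = 1
print(max_indep_bound(sub, r=2))   # 2 * ((3 + 2) - min(3, 2) - 1) = 4
```

Here the bound 4 exceeds the single polynomial in the subtensor, consistent with that polynomial being algebraically independent on its own.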

The set of polynomials corresponding to ˘Ω′, i.e., P(˘Ω′), is called minimally algebraically dependent if the polynomials in P(˘Ω′) are algebraically dependent but the polynomials in every proper subset of P(˘Ω′) are algebraically independent. The following lemma, which is a lemma in [24], provides an important property of a set of minimally algebraically dependent polynomials. This lemma will be used later to derive the maximum number of algebraically independent polynomials in P(˘Ω′).

###### Lemma 6.

Suppose that Assumption 1 holds. Consider an arbitrary subtensor ˘Ω′ of the constraint tensor ˘Ω of width t along the last dimension. Assume that the polynomials in P(˘Ω′) are minimally algebraically dependent. Then, the number of variables (unknown entries of the a_l^i's) that are involved in P(˘Ω′) is equal to t − 1.

Given an arbitrary subtensor ˘Ω′ of the constraint tensor ˘Ω, we are interested in obtaining the maximum number of algebraically independent polynomials in P(˘Ω′) based on the structure of the nonzero entries of ˘Ω′. The next lemma characterizes this number in terms of a simple geometric structure of the nonzero entries of ˘Ω′.

###### Lemma 7.

Suppose that Assumption 1 holds. Consider an arbitrary subtensor ˘Ω′ of the constraint tensor ˘Ω of width t along the last dimension. The polynomials in P(˘Ω′) are algebraically independent if and only if for any t′ ∈ {1, …, t} and any subtensor ˘Ω′′ of ˘Ω′ of width t′ along the last dimension we have

 r((∑_{i=1}^{d−1} m_i(˘Ω′′)) − min{max{m_1(˘Ω′′), …, m_{d−1}(˘Ω′′)}, r} − (d−2)) ≥ t′. (6)
###### Proof.

First, assume that all polynomials in P(˘Ω′) are algebraically independent. By contradiction, assume that there exists a subtensor ˘Ω′′ of ˘Ω′ for which (6) does not hold. Note that P(˘Ω′′) includes t′ polynomials. On the other hand, according to Lemma 5, the maximum number of algebraically independent polynomials in P(˘Ω′′) is no greater than the LHS of (6), and therefore the polynomials in P(˘Ω′′) are not algebraically independent. Hence, the polynomials in P(˘Ω′) are not algebraically independent either, a contradiction.

In order to prove the other direction of the statement, assume that the polynomials in P(˘Ω′) are algebraically dependent. Hence, there exists a subset of the polynomials that is minimally algebraically dependent; denote it by P(˘Ω′′), where ˘Ω′′ is a subtensor of ˘Ω′ of width t′ along the last dimension. As stated in Lemma 6, the number of variables involved in the polynomials in P(˘Ω′′) is equal to t′ − 1. On the other hand, in the proof of Lemma 5, we showed that the number of involved variables is at least equal to the LHS of (6). Therefore, the LHS of (6) is less than or equal to t′ − 1, or equivalently

 r((∑_{i=1}^{d−1} m_i(˘Ω′′)) − min{max{m_1(˘Ω′′), …, m_{d−1}(˘Ω′′)}, r} − (d−2)) < t′,

which violates (6). ∎

Finally, the following theorem characterizes the necessary and sufficient condition on for finite completability of the sampled tensor .

###### Theorem 1.

Suppose that Assumption 1 holds. For almost every U, the sampled tensor U is finitely completable if and only if ˘Ω contains a subtensor ˘Ω′ of width t along the last dimension such that (i) t equals the dimension characterized in Lemma 4, and (ii) for any t′ ∈ {1, …, t} and any subtensor ˘Ω′′ of ˘Ω′, (6) holds.
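Condition (ii) of the theorem can be checked by brute force over all column subsets of a candidate subtensor. The sketch below (our own illustration, exponential in t and intended only for small instances) reuses the quantity from (5)/(6):

```python
import itertools
import numpy as np

def lhs_of_6(sub, r):
    """LHS of (6) for a subtensor sub of shape n1 x ... x n_{d-1} x t'."""
    d = sub.ndim
    m = [int((sub.sum(axis=tuple(a for a in range(d) if a != i)) > 0).sum())
         for i in range(d - 1)]
    return r * (sum(m) - min(max(m), r) - (d - 2))

def condition_ii_holds(breve_sub, r):
    """Check (6) for every nonempty subset of columns of breve_sub."""
    t = breve_sub.shape[-1]
    for tp in range(1, t + 1):
        for cols in itertools.combinations(range(t), tp):
            if lhs_of_6(breve_sub[..., list(cols)], r) < tp:
                return False
    return True

# A single-polynomial subtensor with r + 1 = 3 nonzero entries satisfies (6):
sub = np.zeros((3, 3, 1), dtype=int)
for (x1, x2) in [(1, 1), (2, 3), (3, 3)]:
    sub[x1 - 1, x2 - 1, 0] = 1
print(condition_ii_holds(sub, r=2))   # True
```

A subtensor violating (6) for some column subset, such as one containing an all-zero column, makes the checker return False.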

###### Proof.

Lemma 4 states that for almost every , there exist finitely many completions of the sampled tensor if and only if includes algebraically independent polynomials. Moreover, according to Lemma 7, polynomials corresponding to a subtensor