# Spectral Norm and Nuclear Norm of a Third Order Tensor

The spectral norm and the nuclear norm of a third order tensor play an important role in the tensor completion and recovery problem. We show that the spectral norm of a third order tensor is equal to the square root of the spectral norm of three fourth order positive semi-definite bisymmetric tensors, and the square roots of the nuclear norms of those three fourth order positive semi-definite bisymmetric tensors are lower bounds of the nuclear norm of that third order tensor. This provides a way to estimate and to evaluate the spectral norm and the nuclear norm of that third order tensor. Some upper and lower bounds for the spectral norm and nuclear norm of a third order tensor, by spectral radii and nuclear norms of some symmetric matrices, are presented.


## 1 Introduction

The spectral norm and the nuclear norm of a third order tensor play an important role in the tensor completion and recovery problem [8, 10]. It is NP-hard to compute them [1]. Studying them further is an active research topic [2, 3, 4].

In this paper, unless otherwise stated, all the discussions are carried out in the field of real numbers. The spectral norm of a third order tensor is the largest singular value of that tensor. The nuclear norm is the dual norm of the spectral norm. Hence, singular values of a third order tensor form the basis of the spectral norm and the nuclear norm. Recall that the product of a (possibly rectangular) matrix and the transpose of that matrix is a positive semi-definite symmetric (square) matrix. There is a one-to-one correspondence between the singular values of the original matrix and the square roots of the eigenvalues of that positive semi-definite symmetric matrix. Then the spectral norm of the original matrix is equal to the square root of the spectral radius of that positive semi-definite symmetric matrix. Does such a relation still exist for a third order tensor? In the next section, we give an affirmative answer to this question. We show that if we contract a third order tensor with itself on one index, then we get a fourth order positive semi-definite bisymmetric tensor. A real number is a singular value of that third order tensor if and only if it is the square root of an M-eigenvalue of that fourth order positive semi-definite bisymmetric tensor. Thus, the spectral norm of that third order tensor is the square root of the spectral norm of that fourth order positive semi-definite bisymmetric tensor.

In Section 3, we show that the square root of the nuclear norm of that fourth order positive semi-definite bisymmetric tensor is a lower bound of the nuclear norm of that third order tensor. The equality may not hold in general.

The equality between the spectral norm of a third order tensor and the spectral norm of a fourth order positive semi-definite bisymmetric tensor does not change the complexity of the problem, but it provides us with an alternative way to attack the problem. In Sections 4 and 5, by this relation, we present several upper and lower bounds for the spectral norm of a third order tensor, by spectral radii of some symmetric matrices. In Section 6, we establish some relations between these upper and lower bounds, and thus give a range for the spectral norm of that third order tensor.

In Section 7, we present some lower bounds for the nuclear norm of a third order tensor, by the nuclear norms of some symmetric matrices.

Some final remarks are made in Section 8.

## 2 Spectral Norm

Suppose that $d_1, d_2$ and $d_3$ are positive integers. Without loss of generality, we may assume that $d_1 \le d_2 \le d_3$.

Let $\mathbb{R}^{d_1 \times d_2 \times d_3}$ be the space of third order tensors of dimension $d_1 \times d_2 \times d_3$. The singular values of a tensor $A = (a_{ijk}) \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ are defined as follows [5].

###### Definition 2.1

A real number $\lambda$ is called a singular value of $A = (a_{ijk}) \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ if there are vectors $x \in \mathbb{R}^{d_1}$, $y \in \mathbb{R}^{d_2}$ and $z \in \mathbb{R}^{d_3}$ such that the following equations are satisfied: For $i = 1, \cdots, d_1$,

$$\sum_{j=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}\, y_j z_k = \lambda x_i; \qquad (2.1)$$

for $j = 1, \cdots, d_2$,

$$\sum_{i=1}^{d_1}\sum_{k=1}^{d_3} a_{ijk}\, x_i z_k = \lambda y_j; \qquad (2.2)$$

for $k = 1, \cdots, d_3$,

$$\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} a_{ijk}\, x_i y_j = \lambda z_k; \qquad (2.3)$$

and

$$x^\top x = y^\top y = z^\top z = 1. \qquad (2.4)$$

Then $x$, $y$ and $z$ are called the corresponding singular vectors.

If $\lambda$ is a singular value of $A$, with singular vectors $x$, $y$ and $z$, then by definition, $-\lambda$ is also a singular value of $A$, with singular vectors $-x$, $y$ and $z$. For $A = (a_{ijk}), B = (b_{ijk}) \in \mathbb{R}^{d_1 \times d_2 \times d_3}$, their inner product is defined as

$$\langle A, B\rangle := \sum_{i=1}^{d_1}\sum_{j=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}b_{ijk}.$$

In a special case, if $B$ is rank-one, i.e., $B = x \otimes y \otimes z$ for some nonzero vectors $x \in \mathbb{R}^{d_1}$, $y \in \mathbb{R}^{d_2}$ and $z \in \mathbb{R}^{d_3}$, or equivalently $b_{ijk} = x_i y_j z_k$ for all $i$, $j$ and $k$, then

$$\langle A, x \otimes y \otimes z\rangle \equiv \sum_{i=1}^{d_1}\sum_{j=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}\, x_i y_j z_k.$$
###### Definition 2.2

The spectral norm of $A \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ is defined [1, 2, 3, 4] as

$$\|A\| := \max\left\{\langle A, x \otimes y \otimes z\rangle : x^\top x = y^\top y = z^\top z = 1,\ x \in \mathbb{R}^{d_1},\ y \in \mathbb{R}^{d_2},\ z \in \mathbb{R}^{d_3}\right\}. \qquad (2.5)$$

Then the spectral norm of $A$ is equal to the largest singular value of $A$ [1, 2, 3, 4].

We now consider fourth order bisymmetric tensors.

###### Definition 2.3

Let $\mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ be the space of fourth order tensors of dimension $d_1 \times d_2 \times d_1 \times d_2$. Let $T = (t_{ijpq}) \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$. The tensor $T$ is called bisymmetric if for all $i, p = 1, \cdots, d_1$ and $j, q = 1, \cdots, d_2$, we have

$$t_{ijpq} = t_{pqij}.$$

The tensor $T$ is called positive semi-definite if for any $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$,

$$\langle T, x \otimes y \otimes x \otimes y\rangle \equiv \sum_{i,p=1}^{d_1}\sum_{j,q=1}^{d_2} t_{ijpq}\, x_i y_j x_p y_q \ge 0.$$

The tensor $T$ is called positive definite if for any nonzero $x \in \mathbb{R}^{d_1}$ and nonzero $y \in \mathbb{R}^{d_2}$,

$$\langle T, x \otimes y \otimes x \otimes y\rangle \equiv \sum_{i,p=1}^{d_1}\sum_{j,q=1}^{d_2} t_{ijpq}\, x_i y_j x_p y_q > 0.$$

The spectral norm of $T$ is defined by

$$\|T\| := \max\left\{\left|\langle T, x \otimes y \otimes x \otimes y\rangle\right| : x^\top x = y^\top y = 1,\ x \in \mathbb{R}^{d_1},\ y \in \mathbb{R}^{d_2}\right\}. \qquad (2.6)$$

We may check that (2.6) defines a norm in $\mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$.

###### Definition 2.4

Suppose that $T = (t_{ijpq}) \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ is bisymmetric. A real number $\mu$ is called an M-eigenvalue of $T$ if there are vectors $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$ such that the following equations are satisfied: For $i = 1, \cdots, d_1$,

$$\sum_{p=1}^{d_1}\sum_{j,q=1}^{d_2} t_{ijpq}\, y_j x_p y_q = \mu x_i; \qquad (2.7)$$

for $j = 1, \cdots, d_2$,

$$\sum_{i,p=1}^{d_1}\sum_{q=1}^{d_2} t_{ijpq}\, x_i x_p y_q = \mu y_j; \qquad (2.8)$$

and

$$x^\top x = y^\top y = 1. \qquad (2.9)$$

Then $x$ and $y$ are called the corresponding M-eigenvectors.

###### Theorem 2.5

Suppose that $T \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ is bisymmetric. Then its M-eigenvalues always exist. The spectral norm of $T$ is equal to the largest absolute value of its M-eigenvalues. Furthermore, $T$ is positive semi-definite if and only if all of its M-eigenvalues are nonnegative; $T$ is positive definite if and only if all of its M-eigenvalues are positive. If $T$ is positive semi-definite, then its spectral norm is equal to its largest M-eigenvalue.

Proof Consider the optimization problem

$$\min\left\{\langle T, x \otimes y \otimes x \otimes y\rangle : x^\top x = y^\top y = 1,\ x \in \mathbb{R}^{d_1},\ y \in \mathbb{R}^{d_2}\right\}. \qquad (2.10)$$

Since the objective function is continuous and the feasible region is compact, this optimization problem always has an optimal solution. Since the linear independence constraint qualification is satisfied, the optimality condition holds at that optimal solution. By optimization theory, the optimality condition of (2.10) has the form (2.7)-(2.9), and the optimal Lagrangian multiplier always exists at the solution. This shows that $T$ always has an M-eigenvalue.

Suppose that $\mu$ is an M-eigenvalue of $T$ with corresponding M-eigenvectors $x$ and $y$. By (2.7) and (2.8), we have

$$\mu = \langle T, x \otimes y \otimes x \otimes y\rangle.$$

By this and (2.6), the spectral norm of $T$ is equal to the largest absolute value of its M-eigenvalues. By this and (2.10), $T$ is positive semi-definite if and only if all of its M-eigenvalues are nonnegative; $T$ is positive definite if and only if all of its M-eigenvalues are positive. If $T$ is positive semi-definite, then all of its M-eigenvalues are nonnegative. This implies that its spectral norm is equal to its largest M-eigenvalue in this case. □

For $d_1 = d_2 = 3$, the elastic tensor in solid mechanics has the form of $T$, with two additional symmetry properties between two pairs of its indices. Then, the positive definiteness condition of $T$ corresponds to the strong ellipticity condition in solid mechanics. In 2009, M-eigenvalues were introduced for the elastic tensor to characterize the strong ellipticity condition in [7]. An algorithm for computing the largest M-eigenvalue was presented in [9]. Also see [6] for details. Here, we extend M-eigenvalues to general bisymmetric tensors and study their spectral norms.

For $A = (a_{ijk}) \in \mathbb{R}^{d_1 \times d_2 \times d_3}$, consider its contraction with itself on the third index, $T = (t_{ijpq}) \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$, defined by

$$t_{ijpq} = \sum_{k=1}^{d_3} a_{ijk}a_{pqk}. \qquad (2.11)$$

Then $T$ is bisymmetric. For any $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$,

$$\langle T, x \otimes y \otimes x \otimes y\rangle = \sum_{k=1}^{d_3}\left(\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} a_{ijk}\, x_i y_j\right)^2 \ge 0.$$

Hence $T$ is also positive semi-definite.
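As a sanity check, the construction (2.11) can be carried out numerically. The following sketch (our own illustration, not part of the paper; the dimensions are arbitrary) builds $T$ with numpy and verifies bisymmetry and the positive semi-definiteness of its $d_1d_2 \times d_1d_2$ unfolding:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, d3 = 3, 4, 5
A = rng.standard_normal((d1, d2, d3))

# (2.11): t_{ijpq} = sum_k a_{ijk} a_{pqk}
T = np.einsum('ijk,pqk->ijpq', A, A)

# Bisymmetry: t_{ijpq} = t_{pqij}
assert np.allclose(T, T.transpose(2, 3, 0, 1))

# The unfolding with row index (i,j) and column index (p,q) is a Gram
# matrix, hence symmetric positive semi-definite.
M = T.reshape(d1 * d2, d1 * d2)
assert np.linalg.eigvalsh(M).min() > -1e-10
```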

###### Theorem 2.6

Let $A$ and $T$ be constructed as above. Then $\lambda$ is a nonzero singular value of $A$, with $x$, $y$ and $z$ as its corresponding singular vectors, if and only if $\lambda$ is a square root of an M-eigenvalue of $T$, with $x$ and $y$ as corresponding M-eigenvectors. This also implies that the spectral norm of $A$ is equal to the square root of the largest M-eigenvalue of $T$.

Proof Suppose that $\lambda \ne 0$ is a singular value of $A$, with corresponding singular vectors $x$, $y$ and $z$, satisfying (2.1)-(2.4). Multiply (2.1) and (2.2) by $\lambda$ and substitute

$$\lambda z_k = \sum_{p=1}^{d_1}\sum_{q=1}^{d_2} a_{pqk}\, x_p y_q$$

into these two equations. We see that $\mu = \lambda^2$ is an M-eigenvalue of $T$, with $x$ and $y$ as the corresponding M-eigenvectors.

On the other hand, assume that $\mu > 0$ is an M-eigenvalue of $T$, with corresponding M-eigenvectors $x$ and $y$, satisfying (2.7)-(2.9), where $T$ is constructed as above. Let $\lambda = \sqrt{\mu}$ and define $z \in \mathbb{R}^{d_3}$ with

$$z_k = \frac{1}{\lambda}\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} a_{ijk}\, x_i y_j.$$

Then (2.3) is satisfied. We have

$$z^\top z = \frac{1}{\lambda^2}\sum_{k=1}^{d_3}\left(\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} a_{ijk}\, x_i y_j\right)\left(\sum_{p=1}^{d_1}\sum_{q=1}^{d_2} a_{pqk}\, x_p y_q\right) = \frac{1}{\mu}\sum_{i,p=1}^{d_1}\sum_{j,q=1}^{d_2}\left(\sum_{k=1}^{d_3} a_{ijk}a_{pqk}\right)x_i y_j x_p y_q = \frac{1}{\mu}\sum_{i=1}^{d_1}\left(\sum_{p=1}^{d_1}\sum_{j,q=1}^{d_2} t_{ijpq}\, y_j x_p y_q\right)x_i = \sum_{i=1}^{d_1} x_i^2 = 1.$$

This proves (2.4). We also have

$$\sum_{j=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}\, y_j z_k = \frac{1}{\lambda}\sum_{p=1}^{d_1}\sum_{j,q=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}a_{pqk}\, y_j x_p y_q = \frac{1}{\lambda}\sum_{p=1}^{d_1}\sum_{j,q=1}^{d_2} t_{ijpq}\, y_j x_p y_q = \frac{\mu x_i}{\lambda} = \lambda x_i.$$

This proves (2.1). We may prove (2.2) similarly. Hence, $\lambda$ is a singular value of $A$, with $x$, $y$ and $z$ as the corresponding singular vectors.

By Theorem 2.5, we now conclude that the spectral norm of $A$ is equal to the square root of the largest M-eigenvalue of $T$. □

Example 1 Let the entries of $A \in \mathbb{R}^{2 \times 2 \times 3}$ be

$$a_{111} = 4,\ a_{121} = 1,\ a_{112} = 3,\ a_{122} = 2,\ a_{113} = 2,\ a_{123} = -1,$$
$$a_{211} = -1,\ a_{221} = 2,\ a_{212} = -5,\ a_{222} = 1,\ a_{213} = 3,\ a_{223} = 4.$$

Calculating the spectral norm of $A$ by definition, we obtain its value numerically. By (2.11), the distinct entries of $T$ (up to the bisymmetry $t_{ijpq} = t_{pqij}$) are $t_{1111} = 29$, $t_{1112} = 8$, $t_{1121} = -13$, $t_{1122} = 19$, $t_{1212} = 6$, $t_{1221} = -14$, $t_{1222} = 0$, $t_{2121} = 35$, $t_{2122} = 5$, $t_{2222} = 21$. Calculating the spectral norm of $T$ by definition, we see that its square root is equal to the spectral norm of $A$, as Theorem 2.6 asserts. □
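Theorem 2.6 can be checked numerically on Example 1. The following sketch is our own code, not the authors'; the grid-search approach is an assumption that is adequate here only because $d_1 = d_2 = 2$, so each unit vector is parameterized by a single angle.

```python
import numpy as np

# Example 1: A[i-1, j-1, k-1] = a_{ijk}
A = np.array([[[4.0, 3, 2], [1, 2, -1]],
              [[-1, -5, 3], [2, 1, 4]]])

# Spectral norm of A: for a fixed unit x, the maximum of <A, x⊗y⊗z> over
# unit y, z is the largest singular value of the 2x3 matrix sum_i x_i A[i].
thetas = np.linspace(0.0, np.pi, 20001)
norm_A = max(np.linalg.norm(np.cos(t) * A[0] + np.sin(t) * A[1], 2)
             for t in thetas)

# Largest M-eigenvalue of T: maximize <T, x⊗y⊗x⊗y> over unit x, y
# (vectorized 2D grid over the two angles).
T = np.einsum('ijk,pqk->ijpq', A, A)
grid = np.linspace(0.0, np.pi, 721)
X = np.stack([np.cos(grid), np.sin(grid)], axis=1)   # candidate unit vectors
W = np.einsum('ijpq,ai,ap->ajq', T, X, X)            # W[a] = T(x_a, ., x_a, .)
vals = np.einsum('ajq,bj,bq->ab', W, X, X)           # <T, x⊗y⊗x⊗y> on the grid
mu_max = vals.max()

# Theorem 2.6: ||A||^2 equals the largest M-eigenvalue of T.
assert abs(norm_A**2 - mu_max) < 0.05
```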

###### Corollary 2.7

We may also consider the contraction of $A$ with itself over its second index or its first index. Then we have a tensor in $\mathbb{R}^{d_1 \times d_3 \times d_1 \times d_3}$ and a tensor in $\mathbb{R}^{d_2 \times d_3 \times d_2 \times d_3}$, respectively. Theorem 2.6 is true for $A$ and these two fourth order positive semi-definite bisymmetric tensors too.

Our numerical computation confirms the results of Theorem 2.6 and Corollary 2.7.

## 3 Nuclear Norm

The nuclear norm is somewhat more important in the tensor completion and recovery problem [8, 10].

###### Definition 3.1

The nuclear norm of $A \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ is defined [1, 4] as

$$\|A\|_* := \inf\left\{\sum_{i=1}^r |\lambda_i| : A = \sum_{i=1}^r \lambda_i u_i \otimes v_i \otimes w_i,\ u_i^\top u_i = v_i^\top v_i = w_i^\top w_i = 1,\ \lambda_i \in \mathbb{R},\ u_i \in \mathbb{R}^{d_1},\ v_i \in \mathbb{R}^{d_2},\ w_i \in \mathbb{R}^{d_3},\ i = 1, \cdots, r\right\}. \qquad (3.12)$$

Then we have [1, 4]

$$\|A\|_* = \max\left\{\langle A, B\rangle : \|B\| = 1,\ B \in \mathbb{R}^{d_1 \times d_2 \times d_3}\right\}. \qquad (3.13)$$

We may define the nuclear norm of a tensor in $\mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ similarly.

###### Definition 3.2

The nuclear norm of $T \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ is defined as

$$\|T\|_* := \inf\left\{\sum_{i=1}^r |\lambda_i| : T = \sum_{i=1}^r \lambda_i u_i \otimes v_i \otimes w_i \otimes s_i,\ u_i^\top u_i = v_i^\top v_i = w_i^\top w_i = s_i^\top s_i = 1,\ \lambda_i \in \mathbb{R},\ u_i, w_i \in \mathbb{R}^{d_1},\ v_i, s_i \in \mathbb{R}^{d_2},\ i = 1, \cdots, r\right\}. \qquad (3.14)$$

Then we have the following theorem.

###### Theorem 3.3

Suppose that $A \in \mathbb{R}^{d_1 \times d_2 \times d_3}$, and $T$ is constructed by (2.11). Assume that $\|A\|_*$ and $\|T\|_*$ are defined by (3.12) and (3.14), respectively. Then

$$\|T\|_* \le \|A\|_*^2 \le d_3 \|T\|_*. \qquad (3.15)$$

Proof For any $\epsilon > 0$, by (3.12), we have a positive integer $r$ and $\lambda_i \in \mathbb{R}$, $u_i \in \mathbb{R}^{d_1}$, $v_i \in \mathbb{R}^{d_2}$, $w_i \in \mathbb{R}^{d_3}$ such that

$$u_i^\top u_i = v_i^\top v_i = w_i^\top w_i = 1,$$

for $i = 1, \cdots, r$, and

$$A = \sum_{i=1}^r \lambda_i u_i \otimes v_i \otimes w_i$$

and

$$\|A\|_* + \epsilon \ge \sum_{i=1}^r |\lambda_i|.$$

By (2.11), we have

$$T = \sum_{i,j=1}^r \lambda_i\lambda_j\alpha_{ij}\, u_i \otimes v_i \otimes u_j \otimes v_j,$$

where $\alpha_{ij} = w_i^\top w_j$ satisfies $|\alpha_{ij}| \le 1$. Then by (3.14), we have

$$(\|A\|_* + \epsilon)^2 \ge \sum_{i,j=1}^r |\lambda_i||\lambda_j||\alpha_{ij}| \ge \|T\|_*$$

for any $\epsilon > 0$. This proves the first inequality in (3.15).

For the second inequality in (3.15), suppose that $B \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ is such that

$$\|B\| = 1 \quad {\rm and} \quad \langle A, B\rangle = \|A\|_*.$$

For simplicity of notation, denote by $A_k$ the $d_1 \times d_2$ matrix with $(A_k)_{ij} = a_{ijk}$ for all $i$ and $j$. Similarly, we have matrices $B_k$ for $k = 1, \cdots, d_3$. Multiplying the $k$th slice $B_k$ by $-1$ does not change the spectral norm of $B$ (cf. (2.5)). Since $\|A\|_*$ is the maximum of $\langle A, B\rangle$ over all tensors $B$ with unit spectral norm, we must have

$$\langle A_k, B_k\rangle \ge 0 \quad {\rm for\ all\ } k = 1, \cdots, d_3, \quad {\rm and} \quad \|A\|_* = \sum_{k=1}^{d_3}\langle A_k, B_k\rangle.$$

Let the tensor $S = (s_{ijpq})$ be defined similarly to $T$ for $B$, i.e., $s_{ijpq} = \sum_{k=1}^{d_3} b_{ijk}b_{pqk}$. It follows from Theorem 2.5 that

$$\|S\| = 1.$$

Then, by the fourth order analogue of (3.13), and since $\langle T, S\rangle = \sum_{k,l=1}^{d_3}\langle A_k, B_l\rangle^2$, we have

$$\|T\|_* \ge \langle T, S\rangle \ge \sum_{k=1}^{d_3}\langle A_k, B_k\rangle^2 \ge \frac{1}{d_3}\left(\sum_{k=1}^{d_3}\langle A_k, B_k\rangle\right)^2 = \frac{1}{d_3}\|A\|_*^2.$$

The second inequality in (3.15) is thus proved. □

Numerical computations show that strict inequality may hold in (3.15).

###### Corollary 3.4

We may also consider the contraction of $A$ with itself over its second index or its first index. Then we have a tensor in $\mathbb{R}^{d_1 \times d_3 \times d_1 \times d_3}$ and a tensor in $\mathbb{R}^{d_2 \times d_3 \times d_2 \times d_3}$, respectively. Theorem 3.3 is true for $A$ and these two fourth order positive semi-definite bisymmetric tensors too.

Numerical computation shows that the nuclear norms of these three fourth order positive semi-definite bisymmetric tensors can be different for the same third order tensor $A$.

## 4 Upper Bounds

Theorems 2.6 and 3.3 connect the spectral norm and nuclear norm of a third order tensor with the spectral norms and nuclear norms of three fourth order positive semi-definite bisymmetric tensors. This does not change the complexity of the problem, but it provides us with an alternative way to attack the problem. In particular, a fourth order bisymmetric tensor has more structure, such as the diagonal structure. In 2009, Wang, Qi and Zhang [9] presented a practical method for computing the largest M-eigenvalue of a fourth order bisymmetric tensor. Thus, we may apply that method to compute the spectral norm of a fourth order bisymmetric tensor.

We first present an attainable bound for a fourth order bisymmetric tensor.

Let $T = (t_{ijpq}) \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ be a bisymmetric tensor. We may unfold $T$ to a matrix $M \in \mathbb{R}^{d_1d_2 \times d_1d_2}$, where $(i, j)$ is regarded as one index and $(p, q)$ is regarded as another index. Since $T$ is bisymmetric, the matrix $M$ is symmetric. Note that even if $T$ is positive semi-definite, $M$ may not be positive semi-definite. On the other hand, if $M$ is positive semi-definite, $T$ is always positive semi-definite. If $T$ is constructed from a third order tensor $A$ as in the previous sections, it can be shown that the corresponding matrix $M$ is indeed positive semi-definite. We do not go into this detail.

We say that $T$ is rank-one if there are nonzero vectors $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$ such that $T = x \otimes y \otimes x \otimes y$.

###### Theorem 4.1

Suppose that $T \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ is a bisymmetric tensor. Let the symmetric matrix $M$ be constructed as above. Then the spectral radius of $M$ is an upper bound of the spectral norm of $T$. This upper bound is attained if $T$ is rank-one. Thus, this upper bound is attainable even if $T$ is the contraction of a third order tensor with itself by (2.11).

Proof The spectral radius of the symmetric matrix $M$ can be calculated as follows:

$$\rho(M) = \max\left\{\left|s^\top M s\right| : s^\top s = 1,\ s \in \mathbb{R}^{d_1d_2}\right\}. \qquad (4.16)$$

We may fold $s \in \mathbb{R}^{d_1d_2}$ to a matrix $S = (s_{ij}) \in \mathbb{R}^{d_1 \times d_2}$. Then

$$s^\top M s = \langle T, S \otimes S\rangle \equiv \sum_{i,p=1}^{d_1}\sum_{j,q=1}^{d_2} t_{ijpq}\, s_{ij}s_{pq}.$$

On the other hand, let $s_{ij} = x_i y_j$ for unit vectors $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$. Then the vector $s$, corresponding to the matrix $S = xy^\top$, satisfies $s^\top s = 1$. Compare the maximization problems in (2.6) and (4.16). The feasible region of (2.6) corresponds to a subset of the feasible region of (4.16), and on this subset the two objective functions are equal. Thus, the optimal objective function value of (4.16), i.e., the spectral radius of the symmetric matrix $M$, is an upper bound of the optimal objective function value of (2.6), i.e., the spectral norm of $T$. When $T$ is rank-one, $M$ is a rank-one symmetric matrix, and an optimizer $s$ of (4.16) folds to a rank-one matrix $S$; hence it corresponds to a feasible point of (2.6), and the two optimal values are equal. Then the upper bound is attained in this case. If $A$ is rank-one, then $T$ formed by (2.11) is also rank-one. Thus, this upper bound is attainable even if $T$ is formed by (2.11). □

Example 1 (Continued) In this example, we have

$$M = \begin{pmatrix} 29 & 8 & -13 & 19\\ 8 & 6 & -14 & 0\\ -13 & -14 & 35 & 5\\ 19 & 0 & 5 & 21 \end{pmatrix}.$$

By calculation, we obtain the spectral radius of $M$. Its square root gives an upper bound for the spectral norm of $A$. □

As in Corollaries 2.7 and 3.4, if we take the contraction of a third order tensor $A$ with itself on the first or the second index, we may get different upper bounds for the spectral norm of $A$. Hence, there are in total three upper bounds for the spectral norm of a third order tensor. For Example 1, the two other upper bounds are not better than the one above. Also, this approach involves the calculation of the spectral radius of a $d_1d_2 \times d_1d_2$ (or $d_1d_3 \times d_1d_3$ or $d_2d_3 \times d_2d_3$) symmetric matrix. When $d_1$, $d_2$ and $d_3$ are large, this approach involves the calculation of the spectral radius of a high dimensional symmetric matrix.
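For Example 1, the Theorem 4.1 bound can be computed with a few lines of numpy (our own sketch, not the authors' code):

```python
import numpy as np

# Example 1: A[i-1, j-1, k-1] = a_{ijk}
A = np.array([[[4.0, 3, 2], [1, 2, -1]],
              [[-1, -5, 3], [2, 1, 4]]])
T = np.einsum('ijk,pqk->ijpq', A, A)   # (2.11)
M = T.reshape(4, 4)                    # row index (i,j), column index (p,q)

expected = np.array([[29.0, 8, -13, 19],
                     [8, 6, -14, 0],
                     [-13, -14, 35, 5],
                     [19, 0, 5, 21]])
assert np.array_equal(M, expected)

rho = np.abs(np.linalg.eigvalsh(M)).max()   # spectral radius of M
upper = np.sqrt(rho)                        # upper bound for ||A||
```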

We now present a different way to obtain this upper bound. Consider the contraction of $A$ with itself on the second and third indices. This results in a matrix $B = (b_{ij}) \in \mathbb{R}^{d_1 \times d_1}$, with

$$b_{ij} = \sum_{k=1}^{d_2}\sum_{l=1}^{d_3} a_{ikl}a_{jkl}. \qquad (4.17)$$

Then $B$ is a symmetric matrix.

###### Theorem 4.2

Let $A \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ and let $B$ be constructed by (4.17). The matrix $B$ is positive semi-definite. The square root of its spectral radius is an upper bound of the spectral norm of $A$. This upper bound is equal to the upper bound stated in Theorem 4.1, when $T$ in Theorem 4.1 is the contraction of $A$ with itself on its first index. Thus, this upper bound is also attainable.

Proof We may unfold $A$ to a matrix $\hat A \in \mathbb{R}^{d_1 \times d_2d_3}$, where $(j, k)$ is regarded as one index. The spectral norm of the matrix $\hat A$ can be calculated as

$$\|\hat A\| = \max\left\{x^\top \hat A s : x^\top x = s^\top s = 1,\ x \in \mathbb{R}^{d_1},\ s \in \mathbb{R}^{d_2d_3}\right\}. \qquad (4.18)$$

Compare the maximization problems in (2.5) and (4.18). The feasible region of (2.5) corresponds to a subset of the feasible region of (4.18), and on this subset the two objective functions are equal. Hence, the optimal objective function value of (4.18), i.e., the spectral norm of the matrix $\hat A$, is an upper bound of the optimal objective function value of (2.5), i.e., the spectral norm of $A$. The spectral norm of the matrix $\hat A$ is the largest singular value of $\hat A$, which is equal to the square root of the spectral radius of $\hat A\hat A^\top$. We now can recognize that $B = \hat A\hat A^\top$. Thus, $B$ is symmetric and positive semi-definite, and the square root of its spectral radius is an upper bound of the spectral norm of $A$.

When $T$ in Theorem 4.1 is the contraction of $A$ with itself on its first index, the upper bound obtained there is equal to the upper bound obtained here. In fact, in this case, the upper bound stated in Theorem 4.1 is the square root of the spectral radius of $\hat A^\top\hat A$, while the upper bound given here is the square root of the spectral radius of $\hat A\hat A^\top$. By linear algebra, they are equal. Hence, this upper bound is also attainable. □

As $B$ is a $d_1 \times d_1$ symmetric matrix, this approach is relatively easy to handle. We may also consider the contraction of $A$ with itself on the first and third indices, or on the first and second indices. This results in another way to calculate the two other upper bounds for the spectral norm of $A$.
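A minimal sketch (our own code) of the Theorem 4.2 computation for Example 1; it also checks the identification of $B$ with the Gram matrix of the unfolding of $A$ used in the proof:

```python
import numpy as np

# Example 1: A[i-1, j-1, k-1] = a_{ijk}
A = np.array([[[4.0, 3, 2], [1, 2, -1]],
              [[-1, -5, 3], [2, 1, 4]]])

# (4.17): b_{ij} = sum_{k,l} a_{ikl} a_{jkl}
B = np.einsum('ikl,jkl->ij', A, A)
assert np.array_equal(B, np.array([[35.0, -13], [-13, 56]]))

rho_B = np.linalg.eigvalsh(B).max()   # B is PSD, so this is rho(B)
upper = np.sqrt(rho_B)                # upper bound for ||A||

# Consistency with the proof: rho(B) equals the squared largest singular
# value of the unfolding of A into a d1 x (d2 d3) matrix.
A1 = A.reshape(2, 6)
assert abs(rho_B - np.linalg.norm(A1, 2)**2) < 1e-9
```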

## 5 Lower Bounds

We present two attainable lower bounds for the spectral norm of a bisymmetric tensor in this section.

Let $T = (t_{ijpq}) \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ be a bisymmetric tensor. We say that $T$ is diagonal with respect to its first and third indices if $t_{ijpq} = 0$ whenever $i \ne p$. We say that $T$ is diagonal with respect to its second and fourth indices if $t_{ijpq} = 0$ whenever $j \ne q$.

###### Theorem 5.1

Let $T = (t_{ijpq}) \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ be a bisymmetric tensor. A lower bound for the spectral norm of $T$ is the maximum of the spectral radii of the $d_1 \times d_1$ symmetric matrices $(t_{ijpj})$, where $j$ is fixed, for $j = 1, \cdots, d_2$. This lower bound is attained if $T$ is diagonal with respect to its second and fourth indices. Another lower bound for the spectral norm of $T$ is the maximum of the spectral radii of the $d_2 \times d_2$ symmetric matrices $(t_{ijiq})$, where $i$ is fixed, for $i = 1, \cdots, d_1$. This lower bound is attained if $T$ is diagonal with respect to its first and third indices.

Proof Fix $j$. Let $y$ be the unit vector in $\mathbb{R}^{d_2}$ such that its $j$th component is $1$ and its other components are zero. Then the objective function of (2.6) is equal to

$$\langle T, x \otimes y \otimes x \otimes y\rangle = \sum_{i,p=1}^{d_1} t_{ijpj}\, x_i x_p.$$

Let $x$ be a unit eigenvector of the symmetric matrix $(t_{ijpj})$, corresponding to its eigenvalue of the largest absolute value, so that

$$\left|\sum_{i,p=1}^{d_1} t_{ijpj}\, x_i x_p\right| = \rho\left((t_{ijpj})\right),$$

where $\rho((t_{ijpj}))$ is the spectral radius of the symmetric matrix $(t_{ijpj})$. This is true for $j = 1, \cdots, d_2$. Hence, the maximum of the spectral radii of the symmetric matrices $(t_{ijpj})$, where $j$ is fixed, for $j = 1, \cdots, d_2$, is a lower bound for the spectral norm of $T$. Let $T$ be diagonal with respect to its second and fourth indices. Then, for any unit vectors $x$ and $y$, the objective function value of (2.6) is a convex combination, with weights $y_j^2$, of the quantities $\sum_{i,p=1}^{d_1} t_{ijpj}x_ix_p$, for $j = 1, \cdots, d_2$, so its absolute value cannot exceed the maximum of the spectral radii of the symmetric matrices $(t_{ijpj})$. Then this lower bound is attained in this case. The other conclusion can be proved similarly. □

Example 1 (Continued) In this example, fixing $j = 1$ and $j = 2$, respectively, we have two symmetric matrices

$$\begin{pmatrix} 29 & -13\\ -13 & 35 \end{pmatrix}, \qquad \begin{pmatrix} 6 & 0\\ 0 & 21 \end{pmatrix}.$$

Their spectral radii are $32 + \sqrt{178} \approx 45.342$ and $21$, respectively. The maximum of these two spectral radii is $32 + \sqrt{178}$. This gives a lower bound of the spectral norm of $T$. Its square root, approximately $6.734$, gives a lower bound for the spectral norm of $A$.

Similarly, fixing $i = 1$ and $i = 2$, respectively, we have two symmetric matrices

$$\begin{pmatrix} 29 & 8\\ 8 & 6 \end{pmatrix}, \qquad \begin{pmatrix} 35 & 5\\ 5 & 21 \end{pmatrix}.$$

Their spectral radii are approximately $31.509$ and $36.602$, respectively. The maximum of these two spectral radii is $28 + \sqrt{74} \approx 36.602$. Its square root, approximately $6.050$, gives another lower bound for the spectral norm of $A$. □
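The slice computations above can be reproduced with a short numpy sketch (our own code, not the authors'):

```python
import numpy as np

# Example 1: A[i-1, j-1, k-1] = a_{ijk}
A = np.array([[[4.0, 3, 2], [1, 2, -1]],
              [[-1, -5, 3], [2, 1, 4]]])
T = np.einsum('ijk,pqk->ijpq', A, A)   # (2.11)

slices_j = [T[:, j, :, j] for j in range(2)]   # (t_{ijpj}), j fixed
slices_i = [T[i, :, i, :] for i in range(2)]   # (t_{ijiq}), i fixed
assert np.array_equal(slices_j[0], np.array([[29.0, -13], [-13, 35]]))
assert np.array_equal(slices_j[1], np.array([[6.0, 0], [0, 21]]))

# Lower bound: the maximum spectral radius over the fixed-j slices;
# its square root bounds ||A|| from below (Theorems 2.6 and 5.1).
L = max(np.abs(np.linalg.eigvalsh(S)).max() for S in slices_j)
assert abs(L - (32 + np.sqrt(178))) < 1e-9   # rho of the j = 1 slice
lower = np.sqrt(L)
```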

A question is: for which kind of third order tensor $A$ are these two lower bounds attained?

As in Corollaries 2.7 and 3.4, if we take the contraction of a third order tensor $A$ with itself on the first or the second index, we may get different lower bounds for the spectral norm of $A$. Hence, there are in total six lower bounds for the spectral norm of a third order tensor. In particular, if we take the contraction on the first index of the third order tensor in Example 1, we get a lower bound which is close to the true value of the spectral norm of $A$. Motivated by this accuracy, we calculated randomly generated examples of tensors, and found that the lower bounds obtained in this way are frequently close to the true value. This shows that for such a third order tensor, there is a good chance of obtaining a good lower bound in this way.

In this approach, the spectral radii of the $d_1 \times d_1$ symmetric matrices $(t_{ijpj})$, for $j = 1, \cdots, d_2$, are calculated. This only involves relatively low dimensional matrices. Therefore, this approach is relatively efficient.

## 6 Relation

The first lower bound in Theorem 5.1 may be denoted as

$$L = \max\left\{\rho\left((t_{ijpj})\right) : j {\rm \ is\ fixed},\ j = 1, \cdots, d_2\right\}.$$

Suppose that $T$ is constructed by (2.11) from a third order tensor $A$. Then

$$L = \max\left\{\max\left\{\sum_{i,p=1}^{d_1}\sum_{k=1}^{d_3} a_{ijk}a_{pjk}\, x_i x_p : x^\top x = 1,\ x \in \mathbb{R}^{d_1}\right\} : j = 1, \cdots, d_2\right\}.$$

On the other hand, the spectral radius of the matrix $B$, constructed by (4.17), is as follows:

$$\rho(B) = \max\left\{\sum_{i,p=1}^{d_1}\sum_{j=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}a_{pjk}\, x_i x_p : x^\top x = 1,\ x \in \mathbb{R}^{d_1}\right\}.$$

Then we find that

$$\rho(B) \le d_2 L.$$

Combining this with Theorems 4.2 and 5.1, we have the following theorem.

###### Theorem 6.1

Let $A$, $B$ and $L$ be constructed as above. Then we have

$$\frac{1}{d_2}\rho(B) \le L \le \|A\|^2 \le \rho(B) \le d_2 L.$$

This establishes a range for $\|A\|$ by either $L$ or $\rho(B)$. We may contract on other indices and obtain similar results. Combining them together, we may get a better range for $\|A\|$.
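The chain of inequalities in Theorem 6.1 can be verified numerically on Example 1 (our own sketch; the spectral norm of $A$ is estimated by a fine grid search over one angle, which is adequate only because $d_1 = 2$):

```python
import numpy as np

# Example 1: A[i-1, j-1, k-1] = a_{ijk}
A = np.array([[[4.0, 3, 2], [1, 2, -1]],
              [[-1, -5, 3], [2, 1, 4]]])
T = np.einsum('ijk,pqk->ijpq', A, A)        # (2.11)
B = np.einsum('ikl,jkl->ij', A, A)          # (4.17)
rho_B = np.linalg.eigvalsh(B).max()         # B is PSD
L = max(np.abs(np.linalg.eigvalsh(T[:, j, :, j])).max() for j in range(2))

# ||A|| via grid search: max over unit x of the largest singular value of
# the 2x3 matrix sum_i x_i A[i].
thetas = np.linspace(0.0, np.pi, 20001)
norm_A = max(np.linalg.norm(np.cos(t) * A[0] + np.sin(t) * A[1], 2)
             for t in thetas)

# Theorem 6.1: (1/d2) rho(B) <= L <= ||A||^2 <= rho(B) <= d2 L
d2, tol = 2, 1e-4
assert rho_B / d2 <= L + tol
assert L <= norm_A**2 + tol
assert norm_A**2 <= rho_B + tol
assert rho_B <= d2 * L + tol
```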

## 7 Lower Bounds for Nuclear Norms

Suppose that $T \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ is a bisymmetric tensor. Let the symmetric matrix $M$ be constructed as in Section 4. Then $M$ is a matrix flattening of the tensor $T$. As in Lemma 3.1 of [2], there is a one to one correspondence between $d_1d_2 \times d_1d_2$ symmetric matrices and bisymmetric tensors in $\mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$. Hence, with an argument similar to the proof of Proposition 4.1 of [2], we have the following result.

###### Theorem 7.1

Suppose that $T \in \mathbb{R}^{d_1 \times d_2 \times d_1 \times d_2}$ is a bisymmetric tensor. Let the symmetric matrix $M$ be constructed as in Section 4. Then $\|T\|_* \ge \|M\|_*$.

Combining Theorems 3.3 and 7.1, we have $\|A\|_*^2 \ge \|T\|_* \ge \|M\|_*$, a lower bound for the nuclear norm of a third order tensor by the nuclear norm of a matrix. Note that the nuclear norm of a tensor is NP-hard to compute, while the nuclear norm of a matrix is relatively easy to compute.
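A minimal sketch (our own code) of this lower bound on Example 1:

```python
import numpy as np

# Example 1: A[i-1, j-1, k-1] = a_{ijk}
A = np.array([[[4.0, 3, 2], [1, 2, -1]],
              [[-1, -5, 3], [2, 1, 4]]])
T = np.einsum('ijk,pqk->ijpq', A, A)   # (2.11)
M = T.reshape(4, 4)                    # flattening of T

# M is symmetric positive semi-definite here, so its nuclear norm (the sum
# of its singular values) is simply its trace, sum_{ijk} a_{ijk}^2 = 91.
nuc_M = np.abs(np.linalg.eigvalsh(M)).sum()
assert abs(nuc_M - 91) < 1e-9

lower = np.sqrt(nuc_M)   # lower bound for the nuclear norm ||A||_*
```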

## 8 Final Remarks

In [3], it was shown that the spectral norm and the nuclear norm of a tensor are equal to the spectral norm and the nuclear norm of the Tucker core of that tensor, respectively. As the size of the Tucker core may be smaller than the size of the original tensor, we may possibly combine our results with that approach.

We may also explore more algorithms like the one in [9] to compute the largest M-eigenvalue of a fourth order positive semi-definite bisymmetric tensor, and use them for computing the spectral norm of a third order tensor.

We hope that some further research may explore more applications of the equality between singular values of a third order tensor and M-eigenvalues of the related fourth order positive semi-definite bisymmetric tensor.

Acknowledgment The authors are thankful to Yannan Chen for the discussion on Theorem 4.2, and the calculation, and to Qun Wang for the calculation.

## References

• [1] S. Friedland and L.H. Lim, “Nuclear norm of higher-order tensors”, Mathematics of Computation 87 (2018) 1255-1281.
• [2] S. Hu, “Relations of the nuclear norm of a tensor and its matrix flattenings”, Linear Algebra and Its Applications 478 (2015) 188-199.
• [3] B. Jiang, D. Yang and S. Zhang, “Tensor and its tucker core: The invariance relationships”, Numerical Linear Algebra with Applications 24 (2017) e2086.
• [4] Z. Li, “Bounds of the spectral norm and the nuclear norm of a tensor based on tensor partitions”, SIAM J. Matrix Analysis and Applications 37 (2016) 1440-1452.
• [5] L.H. Lim, “Singular values and eigenvalues of tensors: a variational approach”, 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Puerto Vallarta, Mexico (2005) 129-132.
• [6] L. Qi, H. Chen and Y. Chen, Tensor Eigenvalues and Their Applications, Springer, New York, 2018.
• [7] L. Qi, H.H. Dai and D. Han, “Conditions for strong ellipticity and M-eigenvalues”, Frontiers of Mathematics in China 4 (2009) 349-364.
• [8] Q. Song, H. Ge, J. Caverlee and X. Hu, “Tensor completion algorithms in big data analytics”, ACM Transactions on Knowledge Discovery from Data 13 (2019) Article 6.
• [9] Y. Wang, L. Qi and X. Zhang, “A practical method for computing the largest M-eigenvalue of a fourth-order partially symmetric tensor”, Numerical Linear Algebra with Applications 16 (2009) 137-150.
• [10] M. Yuan and C.H. Zhang, “On tensor completion via nuclear norm minimization”, Foundations of Computational Mathematics 16 (2016) 1031-1068.