
# The low-rank approximation of fourth-order partial-symmetric and conjugate partial-symmetric tensors

We present an orthogonal matrix outer product decomposition for fourth-order conjugate partial-symmetric (CPS) tensors and show that the greedy successive rank-one approximation (SROA) algorithm can recover this decomposition exactly. Based on this matrix decomposition, the CP rank of a CPS tensor can be bounded by the corresponding matrix rank, which can be applied to low-rank tensor completion. Additionally, we give a rank-one equivalence property for CPS tensors based on the matrix SVD, which can be applied to the rank-one approximation of CPS tensors.

02/06/2020


## 1 Introduction

Tensor decomposition and approximation have significant applications in computer vision, data mining, statistical estimation and so on; we refer to kolda2009tensor for a survey. Moreover, tensors arising from real applications commonly carry special symmetric structure. For instance, the symmetric outer product decomposition is particularly important in the process of blind identification of under-determined mixtures comon2008symmetric .

Jiang et al. jiang2016characterizing studied multivariate complex polynomial functions that always take real values. They proposed the conjugate partial-symmetric (CPS) tensor, a generalization of the Hermitian matrix, to characterize such functions. Various examples of conjugate partial-symmetric tensors arise in engineering applications from signal processing, electrical engineering, and control theory aubry2013ambiguity ; de2007fourth . Ni et al. Ni2019Hermitian and Nie et al. Nie2019Hermitian studied Hermitian tensor decomposition. Motivated by Lieven et al. de2007fourth , we propose a new orthogonal matrix outer product decomposition model for CPS tensors, which exploits the orthogonality of the constituent matrices.

It is well known that, unlike the matrix case, the best rank-r approximation of a general tensor may not exist, and even when it does, it is NP-hard to compute de2008tensor . The greedy successive rank-one approximation (SROA) algorithm can be applied to compute a rank-r approximation of a tensor; however, theoretical guarantees for obtaining the best rank-r approximation are less developed. Zhang et al. zhang2001rank first proved that the successive algorithm exactly recovers the symmetric and orthogonal decomposition of real symmetrically and orthogonally decomposable tensors. Fu et al. 2018Successive showed that the SROA algorithm can exactly recover unitarily decomposable CPS tensors. We offer a theoretical guarantee of the SROA algorithm for our matrix decomposition model of CPS tensors.

Many multi-dimensional data from real practice are fourth-order tensors and can be modeled as low-CP-rank tensors. However, it is very difficult to compute the CP rank of a tensor. Jiang et al. Jiang2018Low showed that the CP rank can be bounded by the rank of the corresponding square unfolding matrix of the tensor. Following this idea, we study low-rank tensor completion for fourth-order partial-symmetric tensors in particular.

Recently, Jiang et al. 2015Tensor proposed convex relaxations for a tensor optimization problem closely related to the best rank-one approximation problem for symmetric tensors. They proved an equivalence property between a rank-one symmetric tensor and its unfolding matrix. Yang et al. Yuning2016Rank studied the rank-one equivalence property for general real tensors. Based on these rank-one equivalence properties, the above tensor optimization problem can be cast as a matrix optimization problem, which alleviates the difficulty of solving the tensor problem. In line with this idea, we study the rank-one equivalence property for fourth-order CPS tensors and transform the best rank-one tensor approximation problem into a matrix optimization problem.

In Section 2, we give some notation and definitions. The matrix-based outer product approximation model is proposed in Section 3, together with the successive matrix rank-one approximation (SMROA) algorithm used to solve it. We show that the SMROA algorithm can exactly recover the matrix outer product decomposition or approximation of a CPS tensor in Section 4. Section 5 briefly discusses applications of our model. In Section 6, we present the rank-one equivalence property of fourth-order CPS tensors and propose an application based on it. Numerical examples are given in Section 7.

## 2 Preliminaries

All tensors in this paper are fourth-order. For any complex number c, \bar{c} denotes the conjugate of c. "∘" denotes the outer product of matrices, namely A = X ∘ Y means that

 A_{ijkl} = X_{ij} Y_{kl}.

S^n denotes the set of n-by-n symmetric matrices; according to the context, the entries of these matrices can be complex or real, without causing ambiguity. The inner product between two fourth-order tensors A and B is defined as

 \langle A, B \rangle = \sum_{i,j,k,l=1}^{n} A_{ijkl} \overline{B}_{ijkl}.
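As a concrete illustration of the two definitions above (a minimal numpy sketch; the names are our own), the outer product of two matrices and the tensor inner product can be computed as follows. The final check uses the identity ⟨X∘Y, X∘Y⟩ = ‖X‖_F² ‖Y‖_F²:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Outer product of matrices: A_{ijkl} = X_{ij} Y_{kl}
A = np.einsum('ij,kl->ijkl', X, Y)

# Inner product <A, B> = sum_{ijkl} A_{ijkl} conj(B_{ijkl})
def inner(A, B):
    return np.sum(A * B.conj())

# Consistency check: <X∘Y, X∘Y> = ||X||_F^2 * ||Y||_F^2
lhs = inner(A, A)
rhs = inner(X, X) * inner(Y, Y)
assert np.allclose(lhs, rhs)
```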
###### Definition 1.

A fourth-order tensor A is called symmetric if it is invariant under all permutations of its indices, i.e.,

 A_{ijkl} = A_{\pi(ijkl)}, \quad i, j, k, l = 1,\dots,n,

for any permutation \pi.
###### Definition 2.

Ni2019Hermitian A fourth-order complex tensor A is called a Hermitian tensor if

 A_{i_1 i_2 j_1 j_2} = \overline{A}_{j_1 j_2 i_1 i_2}.

Jiang et al. jiang2016characterizing introduced the concept of conjugate partial-symmetric tensors as follows.

###### Definition 3.

A fourth-order complex tensor A is called conjugate partial-symmetric (CPS) if

 A_{ijkl} = A_{\pi(ij)\pi(kl)}, \quad A_{ijkl} = \overline{A}_{klij}, \quad i, j, k, l = 1,\dots,n.
###### Definition 4.

A fourth-order tensor A is called partial-symmetric if

 A_{ijkl} = A_{\pi(ij)\pi(kl)} = A_{\pi(kl)\pi(ij)}, \quad i, j, k, l = 1,\dots,n.
###### Example 1.

de2007fourth In the blind source separation problem, the cumulant tensor is computed as

 C = \sum_{r=1}^{R} k_r\, a_r \circ \overline{a}_r \circ \overline{a}_r \circ a_r.

By a permutation of the indices, it is in fact a conjugate partial-symmetric tensor.

###### Definition 5.

The square unfolding M(A) of a fourth-order tensor A is defined as

 M(A)_{(j-1)n+i,\,(l-1)n+k} = A_{ijkl}.
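The square unfolding can be written as a single transpose-and-reshape in numpy (an illustrative sketch; the function name is ours). With 0-based indices the definition reads M[j·n+i, l·n+k] = A[i,j,k,l]:

```python
import numpy as np

def square_unfold(A):
    """Square unfolding M(A): with 0-based indices,
    M[j*n + i, l*n + k] = A[i, j, k, l]."""
    n = A.shape[0]
    return A.transpose(1, 0, 3, 2).reshape(n * n, n * n)

n = 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n, n, n))
M = square_unfold(A)
# Spot-check one entry against the definition
i, j, k, l = 1, 0, 1, 1
assert M[j * n + i, l * n + k] == A[i, j, k, l]
```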

## 3 Matrix outer product approximation model

Jiang et al. Jiang2018Low introduced the notion of M-decomposition for an even-order tensor A, which is exactly a rank-one decomposition of the square unfolding M(A), together with the notion of tensor M-rank.

Let M(A) = \sum_{i=1}^{n^2} \sigma_i u_i v_i^* be the SVD of the square unfolding M(A). Folding u_i and \overline{v}_i back into n × n matrices U_i and V_i, A has the following decomposition form

 A = \sum_{i=1}^{n^2} \sigma_i\, U_i \circ V_i, (1)

where \sigma_i \ge 0, and \langle U_i, U_j \rangle = \langle V_i, V_j \rangle = \delta_{ij} for i, j = 1,\dots,n^2.

We are particularly interested in tensors with symmetric structure. Analogous to Lieven et al. de2007fourth , we prove that a CPS tensor admits the following matrix-based decomposition.

###### Theorem 1.

If A is a conjugate partial-symmetric tensor, then it can be decomposed as

 A = \sum_{i=1}^{r} \lambda_i\, E_i \circ \overline{E}_i, (2)

where \lambda_i \in \mathbb{R}, the E_i \in S^n are symmetric matrices, and \langle E_i, E_j \rangle = \delta_{ij} for i, j = 1,\dots,r. The decomposition is unique when the \lambda_i are different from each other.

###### Proof.

Since A is conjugate partial-symmetric, the unfolding matrix M(A) is Hermitian and can be decomposed as

 M(A) = \sum_{i=1}^{r} \lambda_i e_i e_i^*,

where \lambda_i \in \mathbb{R} and the unit vectors e_i \in \mathbb{C}^{n^2} are mutually orthogonal. Fold each e_i into a matrix E_i via (E_i)_{jk} = (e_i)_{(k-1)n+j}; then the E_i are mutually orthogonal, that is, \langle E_i, E_j \rangle = \delta_{ij}. In this case, we have A = \sum_{i=1}^{r} \lambda_i E_i \circ \overline{E}_i.

From the eigen-decomposition of M(A), we have e_i = \lambda_i^{-1} M(A) e_i whenever \lambda_i \neq 0. Since A_{ijkl} = A_{jikl} for all i, j, k, l, the two rows of M(A) indexed by the pairs (i,j) and (j,i) coincide, so the corresponding entries of e_i coincide as well; thus each E_i is symmetric. The uniqueness of the decomposition follows naturally from the eigen-decomposition of a Hermitian matrix. ∎
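The construction in this proof can be checked numerically. The sketch below (our own, using the folding convention E[i,j] = e[(j-1)n+i]) builds a CPS tensor from symmetric matrices, eigendecomposes its Hermitian square unfolding, folds the eigenvectors back into matrices, and verifies that the decomposition is recovered:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def rand_sym(n):
    """Random complex symmetric (not Hermitian) matrix."""
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.T) / 2

# Build a CPS tensor A = sum_r lam_r * B_r ∘ conj(B_r), lam_r real, B_r symmetric
lams = [2.0, -1.0]
Bs = [rand_sym(n) for _ in lams]
A = sum(l * np.einsum('ij,kl->ijkl', B, B.conj()) for l, B in zip(lams, Bs))

# Square unfolding M[j*n+i, l*n+k] = A[i,j,k,l]; it is Hermitian for CPS A
M = A.transpose(1, 0, 3, 2).reshape(n * n, n * n)
assert np.allclose(M, M.conj().T)

# Eigendecompose M and fold each eigenvector e into E with E[i,j] = e[j*n+i]
w, V = np.linalg.eigh(M)
A_rec = np.zeros_like(A)
for lam, e in zip(w, V.T):
    E = e.reshape(n, n).T
    # Eigenvectors of nonzero eigenvalues fold into symmetric matrices
    assert abs(lam) < 1e-8 or np.allclose(E, E.T)
    A_rec += lam * np.einsum('ij,kl->ijkl', E, E.conj())
assert np.allclose(A_rec, A)
```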

###### Remark 1.

Jiang et al. jiang2016characterizing gave a decomposition theorem for CPS tensors similar to Theorem 1. However, they established it from the viewpoint of polynomial decomposition and did not explore the mutual orthogonality of the matrices in the decomposition model.

###### Definition 6.

Let A be a CPS tensor. Define

 \mathrm{rank}_M(A) = \min\Big\{ r \,\Big|\, A = \sum_{i=1}^{r} \lambda_i E_i \circ \overline{E}_i,\ \lambda_i \in \mathbb{R},\ E_i \in S^n \Big\}.

\mathrm{rank}_M(A) is actually the strongly symmetric M-rank defined by Jiang et al. Jiang2018Low . For symmetric tensors, they also proved the equivalence between \mathrm{rank}_M(A) and

 \mathrm{rank}_{sm}(A) = \min\Big\{ r \,\Big|\, A = \sum_{i=1}^{r} \lambda_i E_i \circ E_i,\ \lambda_i \in \mathbb{R},\ E_i \in \mathbb{C}^{n \times n} \Big\}.

This equivalence also holds for fourth-order CPS tensors.

###### Theorem 2.

Let A be a CPS tensor; then \mathrm{rank}_M(A) = \mathrm{rank}_{sm}(A).

###### Proof.

It is obvious that \mathrm{rank}_{sm}(A) \le \mathrm{rank}_M(A). On the other hand, given a decomposition attaining \mathrm{rank}_{sm}(A), the symmetry of A allows the E_i to be taken symmetric, so \mathrm{rank}_M(A) \le \mathrm{rank}_{sm}(A), and we obtain the desired conclusion. ∎

###### Corollary 3.

Let A be a real partial-symmetric tensor; then one has

 A = \sum_{i=1}^{r} \lambda_i\, E_i \circ E_i, (3)

where \lambda_i \in \mathbb{R}, the E_i are real symmetric matrices with \langle E_i, E_j \rangle = \delta_{ij} for i, j = 1,\dots,r, and r \le n(n+1)/2.

###### Proof.

The first part is obvious according to Theorem 1. Since all matrices belonging to S^n form an n(n+1)/2-dimensional vector space, we have r \le n(n+1)/2. ∎

Fu et al. gave a rank-one decomposition in vector form for CPS tensors, based on Theorem 1, as follows.

###### Theorem 4.

(fu2018decompositions, Theorem 3.2) A is CPS if and only if it has the following partial-symmetric decomposition

 A = \sum_i \lambda_i\, \overline{a}_i \circ \overline{a}_i \circ a_i \circ a_i,

where \lambda_i \in \mathbb{R} and a_i \in \mathbb{C}^n. That is, a CPS tensor can be decomposed as a sum of rank-one CPS tensors.

However, when we restrict the decomposition to the real domain, it no longer holds, since \sum_i \lambda_i\, a_i \circ a_i \circ a_i \circ a_i with \lambda_i \in \mathbb{R}, a_i \in \mathbb{R}^n can only represent symmetric tensors. Thus, an extended rank-one approximation model for partial-symmetric tensors can be proposed based on Corollary 3.

###### Corollary 5.

Let A be a partial-symmetric tensor; then it can be decomposed as a sum of simple low-rank partial-symmetric tensors,

 A = \sum_i \lambda_i \big( p_i \circ p_i \circ q_i \circ q_i + q_i \circ q_i \circ p_i \circ p_i \big). (4)
###### Proof.

From Corollary 3, a partial-symmetric tensor can be written as A = \sum_{i=1}^{r} \lambda_i E_i \circ E_i, where the E_i are symmetric. Each E_i admits an eigen-decomposition E_i = \sum_{j=1}^{r_i} \beta_j^i u_j^i (u_j^i)^\top, thus

 A = \sum_{i=1}^{r} \lambda_i \Big( \sum_{j=1}^{r_i} \beta_j^i u_j^i (u_j^i)^\top \Big) \circ \Big( \sum_{k=1}^{r_i} \beta_k^i u_k^i (u_k^i)^\top \Big) = \sum_{i=1}^{r} \lambda_i \sum_{j=1}^{r_i} \sum_{k=j}^{r_i} \beta_j^i \beta_k^i \big( u_j^i \circ u_j^i \circ u_k^i \circ u_k^i + u_k^i \circ u_k^i \circ u_j^i \circ u_j^i \big).

The desired decomposition form follows. ∎

###### Remark 2.

From the proof of Corollary 5, we can see that the number of terms in (4) is determined by the ranks r_i of the matrices E_i. Whether this decomposition form is the most compact one will be part of our future work.

We can discuss the case of skew partial-symmetric tensor in parallel.

###### Theorem 6.

We call A skew partial-symmetric if

 A_{ijkl} = A_{\pi(ij)\pi(kl)} = -A_{\pi(kl)\pi(ij)}, \quad i, j, k, l = 1,2,\dots,n.

Then one has

 A = \sum_i \lambda_i \big( U_i \circ V_i - V_i \circ U_i \big),

and

 A = \sum_i \lambda_i \big( p_i \circ p_i \circ q_i \circ q_i - q_i \circ q_i \circ p_i \circ p_i \big).
###### Proof.

M(A) is skew-symmetric according to the definition of the skew partial-symmetric tensor, so it admits a decomposition of the form \sum_i \lambda_i ( u_i v_i^\top - v_i u_i^\top ). The rest of the proof is similar to that for the partial-symmetric case, and we omit it. ∎

Based on Theorem 1, we propose the following matrix outer product approximation model for CPS tensors:

 \min_{\lambda_i \in \mathbb{R},\ X_i \in S^n}\ \Big\| A - \sum_{i=1}^{r} \lambda_i X_i \circ \overline{X}_i \Big\|_F^2 \quad \text{s.t.}\ \langle X_i, X_j \rangle = \delta_{ij}. (5)

The successive rank-one approximation algorithm can also be applied to conjugate partial-symmetric tensors to find matrix outer product decompositions or approximations, as shown in Algorithm 1.

The main optimization problem in Algorithm 1 can be expressed as

 (\lambda^*, X^*) \in \arg\min_{\|X\|_F = 1,\ X \in S^n,\ \lambda \in \mathbb{R}} \| A - \lambda X \circ \overline{X} \|_F^2. (6)

The objective function of (6) can be rewritten as

 \| A - \lambda X \circ \overline{X} \|_F^2 = \|A\|_F^2 + \lambda^2 - 2\lambda \langle A, X \circ \overline{X} \rangle,

from which we can derive that problem (6) is equivalent to

 X^* \in \arg\max_{\|X\|_F = 1,\ X \in S^n} \big| \langle A, X \circ \overline{X} \rangle \big|, (7)

and \lambda^* = \langle A, X^* \circ \overline{X}^* \rangle. We can solve (7) by transforming it into a matrix eigenproblem as follows:

 x^* \in \arg\max_{\|x\| = 1,\ x \in \mathbb{C}^{n^2}} \big| x^* M(A) x \big|. (8)
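A minimal sketch of the successive procedure (our own illustrative code, not the paper's Algorithm 1 verbatim): each step solves (8) by taking the dominant eigenpair of the square unfolding of the current residual, folds the eigenvector into a matrix, and deflates:

```python
import numpy as np

def unfold(A, n):
    return A.transpose(1, 0, 3, 2).reshape(n * n, n * n)

def smroa(A, r):
    """Greedy successive matrix rank-one approximation of a CPS tensor A:
    each step solves the eigenproblem (8) on the residual and deflates."""
    n = A.shape[0]
    R, terms = A.copy(), []
    for _ in range(r):
        w, V = np.linalg.eigh(unfold(R, n))   # Hermitian for CPS residuals
        k = np.argmax(np.abs(w))              # max |x* M x| over ||x|| = 1
        lam = w[k]
        X = V[:, k].reshape(n, n).T           # fold eigenvector into a matrix
        terms.append((lam, X))
        R = R - lam * np.einsum('ij,kl->ijkl', X, X.conj())
    return terms, R

# On an exactly rank-one CPS tensor, one step recovers it
rng = np.random.default_rng(3)
n = 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2
B /= np.linalg.norm(B)
A = 2.0 * np.einsum('ij,kl->ijkl', B, B.conj())
terms, R = smroa(A, 1)
assert np.isclose(terms[0][0], 2.0)
assert np.allclose(R, 0)
```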
###### Remark 3.

Zhang et al. zhang proved that if A is symmetric, then

 \min_{\|x_i\| = 1,\ i=1,2,3,4} \| A - \lambda\, x_1 \circ x_2 \circ x_3 \circ x_4 \|_F = \min_{\|x\| = 1} \| A - \lambda\, x \circ x \circ x \circ x \|_F;

and if A is symmetric with respect to the first two and the last two modes respectively, then

 \min_{\|x_i\| = 1,\ i=1,2,3,4} \| A - \lambda\, x_1 \circ x_2 \circ x_3 \circ x_4 \|_F = \min_{\|x\| = \|y\| = 1} \| A - \lambda\, x \circ x \circ y \circ y \|_F.

It is obvious that for a partial-symmetric tensor we also have

 \min_{\|X_i\|_F = 1,\ X_i \in \mathbb{R}^{n \times n},\ \lambda \in \mathbb{R}} \| A - \lambda\, X_1 \circ X_2 \|_F = \min_{\|X\|_F = 1,\ X \in S^n,\ \lambda \in \mathbb{R}} \| A - \lambda\, X \circ X \|_F.
###### Remark 4.

It is well known that (6) is equivalent to the nearest Kronecker product problem golub2013matrix as below:

 (\lambda^*, X^*) \in \arg\min_{\|X\|_F = 1,\ X \in S^n,\ \lambda \in \mathbb{R}} \| A - \lambda\, X \otimes \overline{X} \|_F^2,

where "⊗" denotes the Kronecker product of matrices.

## 4 Exact Recovery for CPS tensors

In this section, we give the theoretical analysis of exact recovery for CPS tensors by the SMROA algorithm.

###### Theorem 7.

Let A be a CPS tensor with \mathrm{rank}_M(A) = r, that is,

 A = \sum_{i=1}^{r} \lambda_i\, E_i \circ \overline{E}_i.

If the \lambda_i are different from each other, then the SMROA algorithm obtains the exact decomposition of A after r iterations.

We first claim the following lemma before proving the above theorem.

###### Lemma 8.

Let A be a CPS tensor with \mathrm{rank}_M(A) = r, that is,

 A = \sum_{i=1}^{r} \lambda_i\, E_i \circ \overline{E}_i,

where the \lambda_i are different from each other. Suppose

 \hat{X}_1 \in \arg\max_{X \in S^n,\ \|X\|_F = 1} \big| \langle A, X \circ \overline{X} \rangle \big|, \quad \hat{\lambda}_1 = \langle A, \hat{X}_1 \circ \overline{\hat{X}}_1 \rangle.

Then there exists j such that

 \hat{\lambda}_1 = \lambda_j, \quad \hat{X}_1 = E_j.
###### Proof.

According to Theorem 1, the E_i are mutually orthogonal, thus \{E_i\}_{i=1}^{r} can be extended to an orthonormal basis of S^n. Let \hat{X}_1 = \sum_i x_i E_i in this basis. Since \|\hat{X}_1\|_F = 1, we have \sum_i |x_i|^2 = 1. Reorder the indices such that

 |\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_r|. (9)

Then we obtain

 \big| \langle A, \hat{X}_1 \circ \overline{\hat{X}}_1 \rangle \big| = \Big| \sum_{i=1}^{r} \lambda_i |x_i|^2 \Big| \le |\lambda_1|.

On the other hand, the optimality leads to

 \big| \langle A, \hat{X}_1 \circ \overline{\hat{X}}_1 \rangle \big| \ge \big| \langle A, E_1 \circ \overline{E}_1 \rangle \big| = |\lambda_1|.

Hence,

 |\lambda_1| \le \big| \langle A, \hat{X}_1 \circ \overline{\hat{X}}_1 \rangle \big| = |\hat{\lambda}_1| \le |\lambda_1|.

So |\hat{\lambda}_1| = |\lambda_1|. Therefore |x_1| = 1, x_i = 0 for any i \ge 2, and

 \hat{\lambda}_1 = \langle A, \hat{X}_1 \circ \overline{\hat{X}}_1 \rangle = \langle A, E_1 \circ \overline{E}_1 \rangle = \lambda_1.

Since |x_1| = 1, replacing E_1 by x_1 E_1 leaves the decomposition unchanged, and we have \hat{X}_1 = E_1. ∎

Now, we prove Theorem 7.

###### Proof.

By Lemma 8, there exists j such that \hat{\lambda}_1 = \lambda_j, \hat{X}_1 = E_j. Let

 A_1 = A - \hat{\lambda}_1 \hat{X}_1 \circ \overline{\hat{X}}_1 = \sum_{i \neq j} \lambda_i\, E_i \circ \overline{E}_i,

and

 \hat{X}_2 \in \arg\max_{X \in S^n,\ \|X\|_F = 1} \big| \langle A_1, X \circ \overline{X} \rangle \big|, \quad \hat{\lambda}_2 = \langle A_1, \hat{X}_2 \circ \overline{\hat{X}}_2 \rangle.

By a proof similar to that of Lemma 8, there exists k \neq j such that \hat{\lambda}_2 = \lambda_k, \hat{X}_2 = E_k. Repeating this argument, we obtain a permutation \pi on \{1,\dots,r\} such that

 \hat{\lambda}_j = \lambda_{\pi(j)}, \quad \hat{X}_j = E_{\pi(j)}, \quad j = 1,2,\dots,r. ∎

###### Corollary 9.

Let

 A = \sum_{i=1}^{r} \lambda_i\, E_i \circ \overline{E}_i,

where \{E_i\} is a subset of an orthonormal basis of S^n, the \lambda_i are different from each other, and they are ordered as

 |\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_r|.

Suppose (\hat{\lambda}_i, \hat{X}_i), i = 1,\dots,r, is the output of the SMROA algorithm for input A. Then \hat{\lambda}_i = \lambda_i and \hat{X}_i = E_i, up to a unimodular factor, for i = 1,\dots,r.

This corollary follows directly from the proof of Lemma 8.

###### Remark 5.

According to the proof of Lemma 8, if A is a partial-symmetric tensor with entries whose imaginary parts are not zero, the SMROA algorithm may fail.

## 5 Applications of Matrix Outer Product Model

### 5.1 Low-CP-Rank Tensor Completion

The following theorem shows that the CP rank of a CPS tensor can be bounded by \mathrm{rank}_M(A).

###### Theorem 10.

For a CPS tensor A, it holds that

 \mathrm{rank}_M(A) \le \mathrm{rank}_{CP}(A) \le r^2\, \mathrm{rank}_M(A),

where r is the number of terms in the decomposition (2).

Data such as colored video with a static background are likely to form low-CP-rank tensors Jiang2018Low . The completion problem for this kind of data can be formulated as

 \min\ \mathrm{rank}_{CP}(\mathcal{X}) \quad \text{s.t.}\ P_\Omega(\mathcal{X}) = P_\Omega(A). (10)

Problem (10) is intractable to solve directly, since the CP rank of a tensor is in general hard to compute. Here we follow the idea of Jiang et al. Jiang2018Low to handle the completion problem for partial-symmetric tensors. According to Theorem 10, we may approximate (10) by

 \min\ \mathrm{rank}_M(\mathcal{X}) \quad \text{s.t.}\ P_\Omega(\mathcal{X}) = P_\Omega(A). (11)

The following example gives an intuitive explanation for the rationality of the approximation.

###### Example 2.

Suppose , is partial-symmetric. Then , while is a low-rank matrix.

(11) can be relaxed to a low-rank matrix approximation problem with a nuclear norm regularization term,

 \min\ \mu \|X\|_* + \tfrac{1}{2} \| P_\Omega(X) - P_\Omega(A) \|_F^2 \quad \text{s.t.}\ X = M(\mathcal{X}). (12)

\mathcal{X} in the above problems is required to be partial-symmetric, and the sample set \Omega is partial-symmetric as well.

We can apply the Fixed Point Continuation (FPC) algorithm Ma2011Fixed to solve (12). The convergence of this algorithm is guaranteed Ma2011Fixed . Since the iteration in Algorithm 2 does not change the symmetry of the iterates when A and the initial point are partial-symmetric, the solution is still partial-symmetric.
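As an illustration of the FPC-style fixed-point iteration on a plain matrix completion instance (a sketch under our own naming; the step X ← D_{τμ}(X − τ P_Ω(X − A)) is the standard proximal-gradient form of the scheme in Ma2011Fixed):

```python
import numpy as np

def svt(X, tau):
    """Soft-threshold singular values: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fpc(A_obs, mask, mu, tau=1.0, iters=2000):
    """Fixed-point (proximal gradient) iteration for
    min mu*||X||_* + 0.5*||P_Omega(X - A)||_F^2."""
    X = np.zeros_like(A_obs)
    for _ in range(iters):
        X = svt(X - tau * mask * (X - A_obs), tau * mu)
    return X

# Complete a rank-one matrix observed on ~70% of its entries
rng = np.random.default_rng(4)
u = rng.standard_normal((8, 1))
A = u @ u.T
mask = rng.random(A.shape) < 0.7
X = fpc(A * mask, mask, mu=0.01)
assert np.linalg.norm((X - A) * mask) < 5e-2 * np.linalg.norm(A)
```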

### 5.2 Low Rank Matrix Outer Product Approximation

Parallel to the sparse rank-one approximation problem, we can also discuss the following low-rank matrix outer product "rank-one" approximation, based on the matrix decomposition model proposed in the last section:

 \min\ \| A - \alpha\, X \circ X \|_F^2 + \lambda \|X\|_* \quad \text{s.t.}\ \alpha \in \mathbb{R},\ X \in S^n,\ \|X\|_F = 1, (13)

where A is partial-symmetric.

We modify the proximal linearized minimization algorithm (PLMA) proposed by Bolte et al. Bolte2014Proximal to solve problem (13). The iterative scheme is

 \hat{X}^{k+1} \in \arg\min_X \Big\{ f(\alpha_k, X^k) + \langle X - X^k, \nabla_X f(\alpha_k, X^k) \rangle + \tfrac{t_k}{2} \|X - X^k\|_F^2 + \lambda \|X\|_* \Big\},
 X^{k+1} = \hat{X}^{k+1} / \|\hat{X}^{k+1}\|_F, \quad \alpha_{k+1} = \langle A, X^{k+1} \circ X^{k+1} \rangle, (14)

where t_k > 0 is a stepsize, f(\alpha, X) = \| A - \alpha\, X \circ X \|_F^2, and

 \nabla_X f(\alpha, X) = 4\alpha^2 \|X\|_F^2 X - 4\alpha\, A X, (15)

where (AX)_{ij} = \sum_{k,l} A_{ijkl} X_{kl}.
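The gradient formula (15) can be verified against finite differences, reading AX as the contraction (AX)_{ij} = Σ_{k,l} A_{ijkl} X_{kl}; the sketch below is our own and illustrative only:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
# Random real partial-symmetric A: symmetrize in (i,j), in (k,l), and swap pairs
G = rng.standard_normal((n, n, n, n))
A = G + G.transpose(1, 0, 2, 3)
A = A + A.transpose(0, 1, 3, 2)
A = A + A.transpose(2, 3, 0, 1)

def f(alpha, X):
    """f(alpha, X) = ||A - alpha * X∘X||_F^2."""
    T = A - alpha * np.einsum('ij,kl->ijkl', X, X)
    return np.sum(T * T)

def grad_X(alpha, X):
    """Formula (15) with AX read as the contraction (AX)_{ij}."""
    AX = np.einsum('ijkl,kl->ij', A, X)
    return 4 * alpha**2 * np.sum(X * X) * X - 4 * alpha * AX

alpha = 0.7
X = rng.standard_normal((n, n))
X = (X + X.T) / 2

# Central finite differences, entry by entry
eps = 1e-5
G_fd = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = eps
        G_fd[i, j] = (f(alpha, X + E) - f(alpha, X - E)) / (2 * eps)
assert np.allclose(grad_X(alpha, X), G_fd, rtol=1e-4, atol=1e-4)
```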

To solve (14), one can use the singular value thresholding operator for the nuclear norm; see Cai2008A .

###### Lemma 11.

(Cai2008A, Theorem 2.1) Let X \in \mathbb{R}^{n_1 \times n_2} be an arbitrary matrix and U \Sigma V^T be its SVD. It is known that

 \partial \|X\|_* = \{ U V^T + W :\ W \in \mathbb{R}^{n_1 \times n_2},\ U^T W = 0,\ W V = 0,\ \|W\|_2 \le 1 \},

 D_\tau(X) = \arg\min_Y\ \tfrac{1}{2} \|Y - X\|_F^2 + \tau \|Y\|_* = U D_\tau(\Sigma) V^T = U_0 (\Sigma_0 - \tau I) V_0^T, (16)

where \Sigma_0, U_0, V_0 correspond to the singular values greater than \tau and their associated singular vectors.
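The operator D_τ in (16) is straightforward to implement (an illustrative sketch): compute the SVD and soft-threshold the singular values:

```python
import numpy as np

def svt(X, tau):
    """D_tau(X): SVD, then soft-threshold the singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Sanity check: singular values 3 and 1 with tau = 2 keep only 3 - 2 = 1
D = svt(np.diag([3.0, 1.0]), 2.0)
assert np.allclose(D, np.diag([1.0, 0.0]))
```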

Thus we can compute the analytical solution of (14), that is,

 \hat{X}^{k+1} = D_{\lambda / t_k}\Big( X^k - \tfrac{1}{t_k} \nabla_X f(\alpha_k, X^k) \Big). (17)
###### Lemma 12.

The sublevel sets of the objective function of (13) are bounded.

This is obvious since the constraint \|X\|_F = 1 bounds X, and the objective tends to infinity as |\alpha| \to \infty.

For the iterative scheme, we have the following sufficient descent property.

###### Lemma 13.

Let f be continuously differentiable and let its gradient \nabla_X f be locally L_f-Lipschitz continuous. Then for any t_k > L_f, it holds that

 f(\alpha_{k+1}, X^{k+1}) + \lambda \|X^{k+1}\|_* \le f(\alpha_k, X^k) + \lambda \|X^k\|_* - \tfrac{t_k - L_f}{2} \|X^k - \hat{X}^{k+1}\|^2.
###### Proof.

Since \nabla_X f is locally L_f-Lipschitz continuous, we have

 f(\alpha_k, \hat{X}^{k+1}) \le f(\alpha_k, X^k) + \langle \hat{X}^{k+1} - X^k, \nabla_X f(\alpha_k, X^k) \rangle + \tfrac{L_f}{2} \|\hat{X}^{k+1} - X^k\|_F^2. (18)

According to (14), we also obtain that

 \langle \hat{X}^{k+1} - X^k, \nabla_X f(\alpha_k, X^k) \rangle + \tfrac{t_k}{2} \|\hat{X}^{k+1} - X^k\|_F^2 + \lambda \|\hat{X}^{k+1}\|_* \le \lambda \|X^k\|_*. (19)

Adding (18) and (19), we have

 f(\alpha_k, \hat{X}^{k+1}) + \lambda \|\hat{X}^{k+1}\|_* \le f(\alpha_k, X^k) + \lambda \|X^k\|_* - \tfrac{t_k - L_f}{2} \|X^k - \hat{X}^{k+1}\|^2.

Since \alpha_{k+1} minimizes

 f(\alpha, X^{k+1}) + \lambda \|X^{k+1}\|_*

over \alpha, we obtain that

 f(\alpha_{k+1}, X^{k+1}) + \lambda \|X^{k+1}\|_* \le f(\alpha_k, \hat{X}^{k+1}) + \lambda \|\hat{X}^{k+1}\|_*.

The desired inequality then follows. ∎

###### Theorem 14.

For the sequence generated by (17), any of its cluster points is a stationary point of (13).

The proof is similar to that in 0A ; Wang2017Low , and we omit it.

## 6 The equivalence property of CPS tensors

In this section, we prove that a fourth-order CPS tensor T is rank-one if and only if its square unfolding M(T) is rank-one. We first prove the following lemma.

###### Lemma 15.

If T is rank-one, then there exist \lambda \in \mathbb{R} and x \in \mathbb{C}^n such that T = \lambda\, x \circ x \circ \overline{x} \circ \overline{x}.

###### Proof.

Since T is rank-one, we have

 T = x \circ y \circ w \circ z, \quad x, y, w, z \in \mathbb{C}^n.

According to the conjugate symmetry of T, the matricization along the first two and the last two modes is Hermitian; thus there exists \lambda \in \mathbb{R} such that w \circ z = \lambda\, \overline{x} \circ \overline{y}. This further implies that there are \lambda_1, \lambda_2 \in \mathbb{C} with \lambda_1 \lambda_2 = \lambda such that w = \lambda_1 \overline{x} and z = \lambda_2 \overline{y}. Therefore, we have

 T = x \circ y \circ w \circ z = \lambda\, x \circ y \circ \overline{x} \circ \overline{y}.

On the other hand, T is symmetric with respect to the first and the second indices, so y is proportional to x and, after rescaling,

 T = \lambda\, x \circ x \circ \overline{x} \circ \overline{x}. ∎

Next we prove that T is rank-one if and only if M(T) is a rank-one matrix.

###### Lemma 16.

Let T be a CPS tensor. If the unfolding matrix M(T) is rank-one, then T is a rank-one tensor.

###### Proof.

According to Lemma 15, can be represented as

 T=r∑i=1λiai∘a