# Robust Tensor Completion Using Transformed Tensor SVD

In this paper, we study robust tensor completion by using transformed tensor singular value decomposition (SVD), which employs unitary transform matrices instead of discrete Fourier transform matrix that is used in the traditional tensor SVD. The main motivation is that a lower tubal rank tensor can be obtained by using other unitary transform matrices than that by using discrete Fourier transform matrix. This would be more effective for robust tensor completion. Experimental results for hyperspectral, video and face datasets have shown that the recovery performance for the robust tensor completion problem by using transformed tensor SVD is better in PSNR than that by using Fourier transform and other robust tensor completion methods.

## 1 Introduction

Tensors (multi-dimensional arrays) are generalizations of vectors and matrices, and serve as a powerful tool for modeling multi-dimensional data such as videos [29], color images [36, 40], hyperspectral images [11, 35, 49], and electroencephalography (EEG) recordings [8]. Owing to its multilinear algebraic properties, a tensor can take full advantage of its structure to provide better understanding and higher accuracy of multi-dimensional data. In many tensor data applications [9, 20, 40, 27, 33, 37, 41, 47, 50], tensor data sets are often corrupted and/or incomplete owing to various unpredictable or unavoidable situations. This motivates us to perform tensor completion and tensor robust principal component analysis for multi-dimensional data processing.

Compared with matrix completion and robust principal component analysis, tensor completion and tensor robust principal component analysis are far from well-studied. The main issues are the definitions of tensor ranks and tensor decompositions. In the matrix case, it has been shown that the nuclear norm is the convex envelope of the matrix rank over the unit ball of the spectral norm [12, 43]. By solving a convex programming problem, one can recover a low-rank matrix exactly with overwhelming probability from a small fraction of its entries, even when some of them are corrupted, provided that the corruptions are reasonably sparse [4, 5, 7, 39, 42].

Unlike the matrix case, there exist several different definitions of the rank of a tensor. For instance, the CANDECOMP/PARAFAC (CP) rank is defined as the minimal number of rank-one outer products needed to represent a tensor, which is NP-hard to compute in general [26]. Although many authors [19, 22] have recovered some special low CP rank tensors by different methods, it is often computationally intractable to determine the CP rank or its best convex approximation. The Tensor Train (TT) rank [38] is generated by the TT decomposition using the link structure of each core tensor. Owing to this link structure, the TT rank is effective mainly for the completion of higher-order tensors. Bengua et al. [2] proposed a novel TT rank based approach for color image and video completion. However, this method may be challenged when the third dimension of the data is large, as for hyperspectral data. The Tucker rank (multi-rank) is a vector whose entries are derived from the factors of the Tucker decomposition [45]. Liu et al. [29] proposed to use the sum of the nuclear norms of the unfolding matrices of a tensor to recover a low Tucker rank tensor. However, the sum of the nuclear norms of the unfolding matrices of a tensor is not the convex envelope of the sum of the ranks of the unfolding matrices [44]. Moreover, Mu et al. [34] showed that the sum of nuclear norms of unfolding matrices is suboptimal and proposed a square deal method to recover a low-rank, high-order tensor; however, for third-order tensors the square deal method utilizes the information of only one mode of the unfolding matrices. Other extensions can be found in [13] and the references therein. In [16], Gu et al. established exact recovery of the two components (the low-rank tensor and the entrywise sparse corruption tensor) under restricted eigenvalue conditions. In [18], Huang et al. proposed a tensor robust principal component analysis model with an exact recovery guarantee under certain tensor incoherence conditions.

The tensor-tensor product (t-product) and the associated algebraic constructions based on the Fourier transform, the cosine transform, and arbitrary invertible transforms, for tensors of order three or higher, are studied in [25, 23, 32], respectively. Within this framework, Kilmer et al. [25] introduced an SVD-like factorization called the tensor SVD as well as the definition of tubal rank. Compared with other tensor decompositions, the tensor SVD has been shown to be superior in capturing the spatial-shifting correlation that is ubiquitous in real-world data [25, 32, 24, 53, 51]. Moreover, the tubal nuclear norm is the convex envelope of the tubal average rank within the unit ball of the tensor spectral norm. Motivated by these results, Zhang et al. [52] derived theoretical performance bounds for the model proposed in [53] using the tensor SVD algebraic framework for third-order tensor recovery from limited sampling. Zhou et al. [54] proposed a novel factorization method based on the tensor nuclear norm in the Fourier domain for solving the third-order tensor completion problem. Hu et al. [17] proposed a twist tensor nuclear norm for tensor completion, which relaxes the tensor multi-rank of the twist tensor in the Fourier domain. Unlike tensor completion, robust tensor completion is more complex due to the sparse noise in the observations. Jiang and Ng [21] showed that one can recover a low tubal rank tensor exactly with overwhelming probability by simply solving a convex program whose objective function is a weighted combination of the tubal nuclear norm, a convex surrogate of the tubal rank, and the $\ell_1$-norm. Recently, Lu et al. [31] considered the tensor robust principal component analysis problem and proposed a tensor nuclear norm based on the t-product and the tensor SVD in the Fourier domain, where a theoretical guarantee for exact recovery was also provided.

The main aim of this paper is to study robust tensor completion problems by using the transformed tensor SVD, which employs unitary transform matrices instead of the discrete Fourier transform matrix in the tensor SVD. The main motivation is that a lower tubal rank tensor can be obtained by using other unitary transform matrices than by using the discrete Fourier transform matrix, which makes the approach more effective for robust tensor completion. The main contributions of this paper are as follows. (i) We show that one can recover a low transformed tubal rank tensor exactly with overwhelming probability, provided that its rank is sufficiently small and its corrupted entries are reasonably sparse. Because of the use of unitary transformations, new results on the convex envelope of the rank, the subgradient formula, and the tensor basis are required in the proof. (ii) We propose a new unitary transformation that leads to significantly better recovery results compared with the use of the Fourier transform. (iii) Experimental results for hyperspectral images, face images, and video data show that the recovery performance of the transformed tensor SVD is better in PSNR than that of the Fourier-transform-based tensor SVD and other tensor completion methods.

The outline of this paper is as follows. In Section 2, we introduce the transformed tensor SVD. In Section 3, we analyze the robust tensor completion problem and the algorithm for solving the model. In Section 4, numerical results are presented to show the effectiveness of the proposed tensor SVD for the robust tensor completion problem. Finally, some concluding remarks are given in Section 5. All proofs are deferred to the Appendix.

### 1.1 Notation and Preliminaries

Throughout this paper, the fields of real numbers and complex numbers are denoted as $\mathbb{R}$ and $\mathbb{C}$, respectively. Tensors are denoted by Euler script letters and matrices by boldface capital letters. For a third-order tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, we denote its $(i,j,k)$-th entry as $\mathcal{A}_{ijk}$ and use the Matlab notation $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,j,:)$ and $\mathcal{A}(:,:,k)$ to denote the $i$-th horizontal, $j$-th lateral and $k$-th frontal slices, respectively. Specifically, the frontal slice $\mathcal{A}(:,:,i)$ is denoted compactly as $\mathbf{A}^{(i)}$. $\mathcal{A}(i,j,:)$ denotes a tubal fiber obtained by fixing the first two indices and varying the third index. Moreover, a tensor tube of size $1 \times 1 \times n_3$ is denoted as $\mathbf{a}$ and a tensor column of size $n_1 \times 1 \times n_3$ is denoted as $\vec{\mathbf{a}}$.

The inner product of $\mathbf{A}, \mathbf{B} \in \mathbb{C}^{n_1 \times n_2}$ is given by $\langle \mathbf{A}, \mathbf{B} \rangle = \operatorname{tr}(\mathbf{A}^H \mathbf{B})$, where $\mathbf{A}^H$ denotes the conjugate transpose of $\mathbf{A}$ and $\operatorname{tr}(\cdot)$ denotes the matrix trace. For a vector $\mathbf{v} \in \mathbb{C}^n$, the $\ell_2$-norm is $\|\mathbf{v}\|_2 = \sqrt{\sum_i |v_i|^2}$. The spectral norm of a matrix $\mathbf{A}$ is denoted as $\|\mathbf{A}\| = \max_i \sigma_i(\mathbf{A})$, where $\sigma_i(\mathbf{A})$ is the $i$-th largest singular value of $\mathbf{A}$. The nuclear norm of a matrix is defined as $\|\mathbf{A}\|_* = \sum_i \sigma_i(\mathbf{A})$. For a tensor $\mathcal{A}$, the $\ell_1$-norm is defined as $\|\mathcal{A}\|_1 = \sum_{i,j,k} |\mathcal{A}_{ijk}|$, the infinity norm is defined as $\|\mathcal{A}\|_\infty = \max_{i,j,k} |\mathcal{A}_{ijk}|$, and the Frobenius norm is defined as $\|\mathcal{A}\|_F = \sqrt{\sum_{i,j,k} |\mathcal{A}_{ijk}|^2}$. Suppose that $\mathcal{L}$ is a tensor operator; then its operator norm is defined as $\|\mathcal{L}\|_{\mathrm{op}} = \sup_{\|\mathcal{A}\|_F \le 1} \|\mathcal{L}(\mathcal{A})\|_F$.

## 2 Transformed Tensor Singular Value Decomposition

Let $\mathbf{\Phi} \in \mathbb{C}^{n_3 \times n_3}$ be a unitary transform matrix with $\mathbf{\Phi}\mathbf{\Phi}^H = \mathbf{\Phi}^H\mathbf{\Phi} = \mathbf{I}_{n_3}$, where $\mathbf{I}_{n_3}$ is the $n_3 \times n_3$ identity matrix. $\hat{\mathcal{A}}_{\mathbf{\Phi}}$ represents the third-order tensor obtained by multiplying $\mathbf{\Phi}$ on all tubes along the third dimension of $\mathcal{A}$, i.e.,

$$\operatorname{vec}\big(\hat{\mathcal{A}}_{\mathbf{\Phi}}(i,j,:)\big) = \mathbf{\Phi}\big(\operatorname{vec}(\mathcal{A}(i,j,:))\big),$$

where $\operatorname{vec}(\cdot)$ is the vectorization operator that maps a tensor tube to a vector. Here we write $\hat{\mathcal{A}}_{\mathbf{\Phi}} = \mathbf{\Phi}[\mathcal{A}]$. Moreover, one can get $\mathcal{A}$ from $\hat{\mathcal{A}}_{\mathbf{\Phi}}$ by applying $\mathbf{\Phi}^H$ along the third dimension of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, i.e., $\mathcal{A} = \mathbf{\Phi}^H[\hat{\mathcal{A}}_{\mathbf{\Phi}}]$.
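As a minimal NumPy sketch of this tube-wise transform (the function names are ours, not from the paper; any unitary `Phi` of size $n_3 \times n_3$ works):

```python
import numpy as np

def transform_tubes(A, Phi):
    """Apply a unitary transform Phi to every tube A(i, j, :).

    A   : (n1, n2, n3) array (third-order tensor)
    Phi : (n3, n3) unitary matrix
    Returns the transformed tensor corresponding to Phi[A].
    """
    # result[i, j, k] = sum_l Phi[k, l] * A[i, j, l]
    return np.einsum('kl,ijl->ijk', Phi, A)

def inverse_transform_tubes(A_hat, Phi):
    """Recover A from Phi[A] by applying Phi^H along the third dimension."""
    return np.einsum('kl,ijl->ijk', Phi.conj().T, A_hat)
```

Because `Phi` is unitary, `inverse_transform_tubes(transform_tubes(A, Phi), Phi)` returns `A` up to rounding error.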

We construct a block diagonal matrix based on the frontal slices of a tensor $\mathcal{A}$ as follows:

$$\overline{\mathcal{A}} = \operatorname{blockdiag}(\mathcal{A}) := \begin{pmatrix} \mathbf{A}^{(1)} & & & \\ & \mathbf{A}^{(2)} & & \\ & & \ddots & \\ & & & \mathbf{A}^{(n_3)} \end{pmatrix}.$$

Also, we can convert the block diagonal matrix back into a tensor by the following fold operator:

$$\operatorname{fold}\big(\operatorname{blockdiag}(\mathcal{A})\big) = \operatorname{fold}(\overline{\mathcal{A}}) := \mathcal{A}.$$

Kernfeld et al. [23] defined the $\mathbf{\Phi}$-product between two tensors by slice-wise products in the transform domain, where $\mathbf{\Phi}$ is an arbitrary invertible transform. In this paper, we are mainly interested in the $\mathbf{\Phi}$-product based on unitary transforms.

###### Definition 1

The $\mathbf{\Phi}$-product of $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{C}^{n_2 \times n_4 \times n_3}$ is a tensor $\mathcal{C} \in \mathbb{C}^{n_1 \times n_4 \times n_3}$, which is given by

$$\mathcal{C} = \mathcal{A} \diamond_{\mathbf{\Phi}} \mathcal{B} = \mathbf{\Phi}^H\Big[\operatorname{fold}\big(\operatorname{blockdiag}(\hat{\mathcal{A}}_{\mathbf{\Phi}}) \times \operatorname{blockdiag}(\hat{\mathcal{B}}_{\mathbf{\Phi}})\big)\Big],$$

where $\times$ denotes the standard matrix product.

The t-product [25] of $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$ is a tensor $\mathcal{C} \in \mathbb{R}^{n_1 \times n_4 \times n_3}$ given by

$$\mathcal{C} = \mathcal{A} * \mathcal{B} = \operatorname{Foldvec}\big(\operatorname{Circ}(\mathcal{A}) \times \operatorname{Vec}(\mathcal{B})\big), \qquad (2.1)$$

where $\operatorname{Foldvec}(\cdot)$ is an operation that takes $\operatorname{Vec}(\mathcal{C})$ back into the tensor $\mathcal{C}$, i.e., $\operatorname{Foldvec}(\operatorname{Vec}(\mathcal{C})) = \mathcal{C}$,

$$\operatorname{Vec}(\mathcal{B}) = \begin{pmatrix} \mathbf{B}^{(1)} \\ \mathbf{B}^{(2)} \\ \vdots \\ \mathbf{B}^{(n_3)} \end{pmatrix},$$

and

$$\operatorname{Circ}(\mathcal{A}) = \begin{pmatrix} \mathbf{A}^{(1)} & \mathbf{A}^{(n_3)} & \mathbf{A}^{(n_3-1)} & \cdots & \mathbf{A}^{(2)} \\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(3)} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \mathbf{A}^{(n_3)} & \mathbf{A}^{(n_3-1)} & \cdots & \mathbf{A}^{(2)} & \mathbf{A}^{(1)} \end{pmatrix}.$$

The t-product (2.1) can be seen as a special case of Definition 1. Recall that the block circulant matrix $\operatorname{Circ}(\mathcal{A})$ can be block-diagonalized by the discrete Fourier transform matrix $\mathbf{F}_{n_3}$, and the diagonal blocks are the frontal slices of $\hat{\mathcal{A}}_{\mathbf{F}_{n_3}}$, i.e.,

$$(\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_1}) \times \operatorname{Circ}(\mathcal{A}) \times (\mathbf{F}_{n_3}^H \otimes \mathbf{I}_{n_2}) = \operatorname{blockdiag}(\hat{\mathcal{A}}_{\mathbf{F}_{n_3}}),$$

where $\otimes$ is the Kronecker product. It follows that

$$\begin{aligned}
\mathcal{A} * \mathcal{B} &= \operatorname{Foldvec}\big(\operatorname{Circ}(\mathcal{A}) \times \operatorname{Vec}(\mathcal{B})\big) \\
&= \operatorname{Foldvec}\big((\mathbf{F}_{n_3}^H \otimes \mathbf{I}_{n_1}) \times \operatorname{blockdiag}(\hat{\mathcal{A}}_{\mathbf{F}_{n_3}}) \times (\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_2}) \times \operatorname{Vec}(\mathcal{B})\big) \\
&= \operatorname{Foldvec}\big((\mathbf{F}_{n_3}^H \otimes \mathbf{I}_{n_1}) \times \operatorname{blockdiag}(\hat{\mathcal{A}}_{\mathbf{F}_{n_3}}) \times \operatorname{Vec}(\hat{\mathcal{B}}_{\mathbf{F}_{n_3}})\big) \\
&= \mathbf{F}_{n_3}^H\Big[\operatorname{fold}\big(\operatorname{blockdiag}(\hat{\mathcal{A}}_{\mathbf{F}_{n_3}}) \times \operatorname{blockdiag}(\hat{\mathcal{B}}_{\mathbf{F}_{n_3}})\big)\Big] \\
&= \mathcal{A} \diamond_{\mathbf{F}_{n_3}} \mathcal{B}.
\end{aligned}$$
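The derivation above suggests a simple way to compute the t-product numerically: transform along the third mode with the FFT, multiply matching frontal slices, and invert the transform. A sketch (our own helper, not the authors' code; NumPy's FFT normalization makes the circulant identity hold as written):

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Equivalent to the circulant definition (2.1): take the DFT of every
    tube, multiply matching frontal slices, then invert the transform.
    """
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    # slice-wise matrix products in the Fourier domain
    C_hat = np.einsum('ipk,pjk->ijk', A_hat, B_hat)
    return np.fft.ifft(C_hat, axis=2)
```

For real inputs the result is real up to rounding error, matching the block-circulant construction directly.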

Based on the $\mathbf{\Phi}$-product, we have the definitions of the conjugate transpose of a tensor, the identity tensor, the unitary tensor, and the inner product between two tensors.

###### Definition 2

The conjugate transpose of $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ with respect to $\mathbf{\Phi}$ is the tensor $\mathcal{A}^H \in \mathbb{C}^{n_2 \times n_1 \times n_3}$ obtained by

$$\mathcal{A}^H = \mathbf{\Phi}^H\Big[\operatorname{fold}\big(\operatorname{blockdiag}(\hat{\mathcal{A}}_{\mathbf{\Phi}})^H\big)\Big].$$
###### Definition 3

[23, Proposition 4.1] The identity tensor $\mathcal{I}_{\mathbf{\Phi}}$ (with respect to $\mathbf{\Phi}$) is defined to be a tensor such that $\mathcal{I}_{\mathbf{\Phi}} = \mathbf{\Phi}^H[\mathcal{E}]$, where $\mathcal{E}$ is the tensor with each frontal slice being the identity matrix.

###### Definition 4

[23, Definition 5.1] A tensor $\mathcal{Q} \in \mathbb{C}^{n \times n \times n_3}$ is unitary with respect to the $\mathbf{\Phi}$-product if it satisfies

$$\mathcal{Q}^H \diamond_{\mathbf{\Phi}} \mathcal{Q} = \mathcal{Q} \diamond_{\mathbf{\Phi}} \mathcal{Q}^H = \mathcal{I}_{\mathbf{\Phi}},$$

where $\mathcal{I}_{\mathbf{\Phi}}$ is the identity tensor.

###### Definition 5

The inner product of $\mathcal{A}, \mathcal{B} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ is defined as

$$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i=1}^{n_3} \big\langle \mathbf{A}^{(i)}, \mathbf{B}^{(i)} \big\rangle = \big\langle \overline{\mathcal{A}}_{\mathbf{\Phi}}, \overline{\mathcal{B}}_{\mathbf{\Phi}} \big\rangle. \qquad (2.2)$$

In the above definition, $\langle \mathbf{A}^{(i)}, \mathbf{B}^{(i)} \rangle$ is the standard inner product of two matrices. In addition, a tensor is said to be diagonal if each of its frontal slices is a diagonal matrix [25]. Based on the above definitions, we have the following transformed tensor SVD with respect to $\mathbf{\Phi}$.

###### Theorem 1

[23, Theorem 5.1] Suppose that $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$. Then $\mathcal{A}$ can be factorized as follows:

$$\mathcal{A} = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S} \diamond_{\mathbf{\Phi}} \mathcal{V}^H, \qquad (2.3)$$

where $\mathcal{U} \in \mathbb{C}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{C}^{n_2 \times n_2 \times n_3}$ are unitary tensors with respect to the $\mathbf{\Phi}$-product, and $\mathcal{S} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ is a diagonal tensor.

The tensors $\mathcal{U}$, $\mathcal{S}$ and $\mathcal{V}$ in the transformed tensor SVD can be computed from the SVDs of the frontal slices of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, which is summarized in Algorithm 1.
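Algorithm 1 amounts to slice-wise SVDs in the transform domain. A hedged NumPy sketch (our own function names; `Phi` is any unitary $n_3 \times n_3$ matrix):

```python
import numpy as np

def transformed_tsvd(A, Phi):
    """Transformed tensor SVD (Theorem 1), computed slice by slice.

    Transform along the third mode, take an SVD of every frontal slice
    in the transform domain, then map the factors back with Phi^H.
    """
    n1, n2, n3 = A.shape
    A_hat = np.einsum('kl,ijl->ijk', Phi, A)
    U_hat = np.zeros((n1, n1, n3), dtype=complex)
    S_hat = np.zeros((n1, n2, n3), dtype=complex)
    V_hat = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(A_hat[:, :, k])
        U_hat[:, :, k] = U
        np.fill_diagonal(S_hat[:, :, k], s)  # diagonal frontal slices
        V_hat[:, :, k] = Vh.conj().T
    back = lambda T: np.einsum('kl,ijl->ijk', Phi.conj().T, T)
    return back(U_hat), back(S_hat), back(V_hat)
```

The factors satisfy $\mathcal{A} = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S} \diamond_{\mathbf{\Phi}} \mathcal{V}^H$, which can be checked by multiplying the transformed slices back together.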

###### Remark 2.1

For computational efficiency, we also use the skinny transformed tensor SVD. For any $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ with transformed tubal rank $r$ (see the following definition), the skinny transformed tensor SVD is given by $\mathcal{A} = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S} \diamond_{\mathbf{\Phi}} \mathcal{V}^H$, where $\mathcal{U} \in \mathbb{C}^{n_1 \times r \times n_3}$ and $\mathcal{V} \in \mathbb{C}^{n_2 \times r \times n_3}$ satisfy $\mathcal{U}^H \diamond_{\mathbf{\Phi}} \mathcal{U} = \mathcal{I}_{\mathbf{\Phi}}$ and $\mathcal{V}^H \diamond_{\mathbf{\Phi}} \mathcal{V} = \mathcal{I}_{\mathbf{\Phi}}$, and $\mathcal{S} \in \mathbb{C}^{r \times r \times n_3}$ is a diagonal tensor.

Based on the transformed tensor SVD given in Theorem 1, the tensor tubal rank can be defined as follows.

###### Definition 6

The tubal multi-rank of a tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ is a vector $\mathbf{r} \in \mathbb{R}^{n_3}$ whose $i$-th entry is the rank of the $i$-th frontal slice of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, i.e., $r_i = \operatorname{rank}(\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(i)})$. The tensor tubal rank, denoted as $\operatorname{rank}_t(\mathcal{A})$, is defined as the number of nonzero singular tubes of $\mathcal{S}$, where $\mathcal{S}$ comes from the transformed tensor SVD of $\mathcal{A} = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S} \diamond_{\mathbf{\Phi}} \mathcal{V}^H$, i.e.,

$$\operatorname{rank}_t(\mathcal{A}) = \#\{i : \mathcal{S}(i,i,:) \neq 0\} = \max_i r_i. \qquad (2.4)$$

It follows from [21] that the tensor spectral norm of $\mathcal{A}$ with respect to $\mathbf{\Phi}$, denoted as $\|\mathcal{A}\|$, can be defined as $\|\mathcal{A}\| = \|\overline{\mathcal{A}}_{\mathbf{\Phi}}\|$. In other words, the tensor spectral norm of $\mathcal{A}$ equals the matrix spectral norm of its block diagonal form in the transform domain. Moreover, if a tensor operator $\mathcal{L}$ can be represented via the $\mathbf{\Phi}$-product with a tensor $\mathcal{L}$, i.e., $\mathcal{L}(\mathcal{A}) = \mathcal{L} \diamond_{\mathbf{\Phi}} \mathcal{A}$, then $\|\mathcal{L}\|_{\mathrm{op}} = \|\mathcal{L}\|$. The aim of this paper is to recover a low transformed tubal rank tensor, which motivates us to introduce the following definition of a tensor nuclear norm.

###### Definition 7

The transformed tubal nuclear norm of a tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, denoted as $\|\mathcal{A}\|_{\mathrm{TTNN}}$, is the sum of the nuclear norms of all the frontal slices of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, i.e., $\|\mathcal{A}\|_{\mathrm{TTNN}} = \sum_{i=1}^{n_3} \|\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(i)}\|_*$.
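Definition 7 translates directly into code: transform along the third mode and sum the slice-wise nuclear norms. A small sketch (our own helper):

```python
import numpy as np

def ttnn(A, Phi):
    """Transformed tubal nuclear norm: the sum of the nuclear norms of
    the frontal slices of A in the Phi transform domain (Definition 7)."""
    A_hat = np.einsum('kl,ijl->ijk', Phi, A)
    return sum(np.linalg.norm(A_hat[:, :, k], 'nuc')
               for k in range(A.shape[2]))
```

With `Phi` equal to the identity, this reduces to the sum of the nuclear norms of the original frontal slices.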

Next we show that the transformed tubal nuclear norm (TTNN) of a tensor is the convex envelope of the sum of the elements of the tensor tubal multi-rank over the unit ball of the tensor spectral norm. This is why the TTNN is useful for low transformed tubal rank tensor recovery. We remark that this is a new result in the literature, and the proof differs from [12, Theorem 1] because we consider complex-valued matrices and tensors.

###### Lemma 1

For any tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, let $\operatorname{ranksum}(\mathcal{A}) = \sum_{i=1}^{n_3} r_i$ denote the sum of the entries of the transformed tubal multi-rank. Then $\|\mathcal{A}\|_{\mathrm{TTNN}}$ is the convex envelope of the function $\operatorname{ranksum}(\mathcal{A})$ on the set $\{\mathcal{A} : \|\mathcal{A}\| \le 1\}$.

The proof can be found in Appendix A. Next we introduce two kinds of tensor bases, which play important roles in the tensor coordinate decomposition as well as in the tensor incoherence conditions introduced in the sequel.

###### Definition 8

(i) The transformed column basis with respect to $\mathbf{\Phi}$, denoted as $\vec{\mathfrak{e}}_i$, is a tensor of size $n_1 \times 1 \times n_3$ whose $i$-th tube has every entry equal to 1 and whose remaining entries are 0. Its associated conjugate transpose $\vec{\mathfrak{e}}_i^H$ is called the transformed row basis with respect to $\mathbf{\Phi}$. (ii) The transformed tube basis with respect to $\mathbf{\Phi}$, denoted as $\mathring{\mathfrak{e}}_k$, is a tensor of size $1 \times 1 \times n_3$ whose $k$-th entry equals 1 and whose remaining entries equal 0.

Denote by $\mathcal{E}_{ijk}$ the unit tensor with the $(i,j,k)$-th entry equal to 1 and all others equal to 0. Based on Definition 8, $\mathcal{E}_{ijk}$ can be expressed as

$$\mathcal{E}_{ijk} = \mathbf{\Phi}\big[\vec{\mathfrak{e}}_i \diamond_{\mathbf{\Phi}} \mathring{\mathfrak{e}}_k \diamond_{\mathbf{\Phi}} \vec{\mathfrak{e}}_j^H\big]. \qquad (2.5)$$

Then for a third-order tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, we can decompose it as

$$\mathcal{A} = \sum_{i,j,k} \langle \mathcal{E}_{ijk}, \mathcal{A} \rangle\, \mathcal{E}_{ijk} = \sum_{i,j,k} \mathcal{A}_{ijk}\, \mathcal{E}_{ijk}. \qquad (2.6)$$

These properties will be used many times in the proof of our main results in Section 3. These new bases are different from existing bases and they are required in the proof of unitary transform-based tensor recovery.

## 3 Recovery Results by Transformed Tensor SVD

Suppose that we are given a third-order tensor $\mathcal{X} = \mathcal{L}_0 + \mathcal{E}_0$, where $\mathcal{L}_0$ has low transformed tubal rank with respect to $\mathbf{\Phi}$ and $\mathcal{E}_0$ is a sparse corruption tensor; the transformed tubal rank of $\mathcal{L}_0$ is not known. Moreover, we have no idea about the locations of the nonzero entries of $\mathcal{E}_0$, not even how many there are. We would like to recover $\mathcal{L}_0$ from a set of observed entries of $\mathcal{X}$. We use the TTNN of a tensor to obtain a low-rank solution and the $\ell_1$-norm to obtain a sparse solution. Mathematically, the model can be stated as follows:

$$\min_{\mathcal{L}, \mathcal{E}} \; \|\mathcal{L}\|_{\mathrm{TTNN}} + \lambda \|\mathcal{E}\|_1, \quad \text{s.t.} \quad P_\Omega(\mathcal{L} + \mathcal{E}) = P_\Omega(\mathcal{X}), \qquad (3.7)$$

where $\lambda > 0$ is a penalty parameter and $P_\Omega$ is the linear projection such that the entries in the set $\Omega$ are given while the remaining entries are missing.

We remark that the convex optimization problems constructed in [53, 52, 21] can be seen as special cases of (3.7), which aim to solve tensor completion and tensor robust principal component analysis, respectively. For instance, if the unitary transform is the discrete Fourier transform, the transformed tubal nuclear norm reduces to the tubal nuclear norm (TNN).

Here we need some incoherence conditions on $\mathcal{L}_0$ to ensure that it is not sparse.

###### Definition 9

Assume that $\operatorname{rank}_t(\mathcal{L}_0) = r$ and that its skinny transformed tensor SVD is $\mathcal{L}_0 = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S} \diamond_{\mathbf{\Phi}} \mathcal{V}^H$. $\mathcal{L}_0$ is said to satisfy the transformed tensor incoherence conditions with parameter $\mu > 0$ if

$$\max_{i=1,\dots,n_1} \|\mathcal{U}^H \diamond_{\mathbf{\Phi}} \vec{\mathfrak{e}}_i\|_F \le \sqrt{\frac{\mu r}{n_1}}, \qquad (3.8)$$

$$\max_{j=1,\dots,n_2} \|\mathcal{V}^H \diamond_{\mathbf{\Phi}} \vec{\mathfrak{e}}_j\|_F \le \sqrt{\frac{\mu r}{n_2}}, \qquad (3.9)$$

and

$$\|\mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{V}^H\|_\infty \le \sqrt{\frac{\mu r}{n_1 n_2 n_3}}, \qquad (3.10)$$

where $\vec{\mathfrak{e}}_i$ and $\vec{\mathfrak{e}}_j$ are the tensor bases with respect to $\mathbf{\Phi}$.

For convenience, we denote $n_{(1)} = \max\{n_1, n_2\}$ and $n_{(2)} = \min\{n_1, n_2\}$. The main result of this paper can be stated in the following theorem.

###### Theorem 2

Suppose that $\mathcal{L}_0 \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ obeys (3.8)-(3.10), and that the observation set $\Omega$ is uniformly distributed among all sets of its cardinality. Also suppose that each observed entry is independently corrupted with probability $\gamma$. Then there exist universal constants $c_1, c_2 > 0$ such that, with probability at least $1 - c_1 (n_{(1)} n_3)^{-c_2}$, the recovery of $\mathcal{L}_0$ with a properly chosen $\lambda$ is exact, provided that

$$r \le \frac{c_r\, n_{(2)}}{\mu \big(\log(n_{(1)} n_3)\big)^2} \quad \text{and} \quad \gamma \le c_\gamma, \qquad (3.11)$$

where $c_r$ and $c_\gamma$ are two positive constants.

###### Remark 3.1

By the inner product given in Definition 5, a direct generalization of the transformed tensor incoherence conditions listed in [21] to an arbitrary unitary transform is

$$\max_{i=1,\dots,n_1} \|\mathcal{U}^H \diamond_{\mathbf{\Phi}} \vec{\mathfrak{e}}_i\|_F \le \sqrt{\frac{\mu r}{n_1 n_3}}, \qquad \max_{j=1,\dots,n_2} \|\mathcal{V}^H \diamond_{\mathbf{\Phi}} \vec{\mathfrak{e}}_j\|_F \le \sqrt{\frac{\mu r}{n_2 n_3}},$$

and

$$\|\mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{V}^H\|_\infty \le \sqrt{\frac{\mu r}{n_1 n_2 n_3^2}}.$$

The right-hand sides of these three inequalities are obviously smaller than those given in (3.8)-(3.10), which means that our exact recovery conditions are weaker than those of [21].

The idea of the proof is to employ convex analysis to derive conditions under which one can certify that the pair $(\mathcal{L}_0, \mathcal{E}_0)$ is the unique minimizer of (3.7), and to explicitly show that the conditions in Theorem 2 are met with overwhelming probability. The main tools of our proof are the non-commutative Bernstein inequality (NBI) and the golfing scheme [4, 15]. The detailed proof is given in Appendix B.

### 3.1 Optimization Algorithm

In this subsection, we develop a symmetric Gauss-Seidel based multi-block alternating direction method of multipliers (sGS-ADMM) [6, 28] to solve the robust tensor completion problem (3.7). The efficiency of the sGS-ADMM has been validated in many fields; see, e.g., [6, 28, 1, 46, 51] and the references therein. Let $\mathcal{M} = \mathcal{L} + \mathcal{E}$. Problem (3.7) can be rewritten as

$$\min_{\mathcal{L}, \mathcal{E}, \mathcal{M}} \; \|\mathcal{L}\|_{\mathrm{TTNN}} + \lambda \|\mathcal{E}\|_1 \quad \text{s.t.} \quad \mathcal{L} + \mathcal{E} = \mathcal{M}, \;\; P_\Omega(\mathcal{M}) = P_\Omega(\mathcal{X}). \qquad (3.12)$$

The augmented Lagrangian function of (3.12) is defined by

$$\mathcal{L}_\beta(\mathcal{L}, \mathcal{E}, \mathcal{M}, \mathcal{Z}) := \|\mathcal{L}\|_{\mathrm{TTNN}} + \lambda \|\mathcal{E}\|_1 - \langle \mathcal{Z}, \mathcal{L} + \mathcal{E} - \mathcal{M} \rangle + \frac{\beta}{2}\|\mathcal{L} + \mathcal{E} - \mathcal{M}\|_F^2,$$

where $\mathcal{Z}$ is the Lagrangian multiplier and $\beta > 0$ is the penalty parameter. Let $\mathbb{D} := \{\mathcal{M} : P_\Omega(\mathcal{M}) = P_\Omega(\mathcal{X})\}$. The iteration scheme of the sGS-ADMM can be described as follows:

$$\begin{aligned}
\mathcal{M}^{k+\frac{1}{2}} &= \arg\min_{\mathcal{M} \in \mathbb{D}} \; \mathcal{L}_\beta(\mathcal{L}^k, \mathcal{E}^k, \mathcal{M}, \mathcal{Z}^k), & (3.13)\\
\mathcal{L}^{k+1} &= \arg\min_{\mathcal{L}} \; \mathcal{L}_\beta(\mathcal{L}, \mathcal{E}^k, \mathcal{M}^{k+\frac{1}{2}}, \mathcal{Z}^k), & (3.14)\\
\mathcal{M}^{k+1} &= \arg\min_{\mathcal{M} \in \mathbb{D}} \; \mathcal{L}_\beta(\mathcal{L}^{k+1}, \mathcal{E}^k, \mathcal{M}, \mathcal{Z}^k), & (3.15)\\
\mathcal{E}^{k+1} &= \arg\min_{\mathcal{E}} \; \mathcal{L}_\beta(\mathcal{L}^{k+1}, \mathcal{E}, \mathcal{M}^{k+1}, \mathcal{Z}^k), & (3.16)\\
\mathcal{Z}^{k+1} &= \mathcal{Z}^k - \tau \beta\, (\mathcal{L}^{k+1} + \mathcal{E}^{k+1} - \mathcal{M}^{k+1}), & (3.17)
\end{aligned}$$

where $\tau$ is the step-length.

The solution of (3.13) and (3.15) with respect to $\mathcal{M}$ can be given by

$$\mathcal{M}_{ijk} = \begin{cases} \mathcal{X}_{ijk}, & \text{if } (i,j,k) \in \Omega, \\ \big(\mathcal{L} + \mathcal{E} - \tfrac{1}{\beta}\mathcal{Z}\big)_{ijk}, & \text{otherwise}. \end{cases} \qquad (3.18)$$
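The $\mathcal{M}$-update (3.18) is a simple masked assignment. A sketch (our own function names; `mask` is a boolean array marking the observation set $\Omega$):

```python
import numpy as np

def update_M(X, L, E, Z, beta, mask):
    """M-update (3.18): keep the observed entries of X on Omega
    (mask == True); elsewhere use (L + E - Z / beta)."""
    return np.where(mask, X, L + E - Z / beta)
```

Note that the update always satisfies the constraint $P_\Omega(\mathcal{M}) = P_\Omega(\mathcal{X})$ exactly, which is used later when the KKT residuals are measured.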

Similar to the proximal mapping of the nuclear norm of a matrix, the proximal mapping of the TTNN of a tensor can be given explicitly, as stated in the following theorem.

###### Theorem 3

For any $\lambda > 0$ and $\mathcal{Y} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, the minimizer of the following problem

$$\min_{\mathcal{X}} \; \Big\{\lambda \|\mathcal{X}\|_{\mathrm{TTNN}} + \frac{1}{2}\|\mathcal{X} - \mathcal{Y}\|_F^2\Big\} \qquad (3.19)$$

is given by

$$\operatorname{Prox}_{\lambda \|\cdot\|_{\mathrm{TTNN}}}(\mathcal{Y}) := \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S}_\lambda \diamond_{\mathbf{\Phi}} \mathcal{V}^H, \qquad (3.20)$$

where $\mathcal{Y} = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S} \diamond_{\mathbf{\Phi}} \mathcal{V}^H$ and $\mathcal{S}_\lambda = \mathbf{\Phi}^H\big[\max\{\hat{\mathcal{S}}_{\mathbf{\Phi}} - \lambda, 0\}\big]$.

By the definition of the TTNN, problem (3.19) can be rewritten as

$$\min_{\mathcal{X}} \; \sum_{i=1}^{n_3} \Big\{\lambda \|\hat{\mathbf{X}}_{\mathbf{\Phi}}^{(i)}\|_* + \frac{1}{2}\|\hat{\mathbf{X}}_{\mathbf{\Phi}}^{(i)} - \hat{\mathbf{Y}}_{\mathbf{\Phi}}^{(i)}\|_F^2\Big\}. \qquad (3.21)$$

By [3, Theorem 2.1], the minimizer of (3.21) is given by

$$\hat{\mathbf{X}}_{\mathbf{\Phi}}^{(i)} = \hat{\mathbf{U}}_{\mathbf{\Phi}}^{(i)}\, \hat{\mathbf{\Sigma}}_\lambda^{(i)}\, \big(\hat{\mathbf{V}}_{\mathbf{\Phi}}^{(i)}\big)^H,$$

where $\hat{\mathbf{\Sigma}}_\lambda^{(i)} = \max\{\hat{\mathbf{S}}_{\mathbf{\Phi}}^{(i)} - \lambda, 0\}$. By applying the inverse unitary transform along the third dimension, we get that the optimal solution of (3.19) is given by (3.20).
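Theorem 3 amounts to singular value thresholding of each frontal slice in the transform domain. A hedged NumPy sketch (our own helper, not the authors' code):

```python
import numpy as np

def prox_ttnn(Y, Phi, lam):
    """Proximal mapping of lam * TTNN (Theorem 3): soft-threshold the
    singular values of each frontal slice in the Phi transform domain."""
    n1, n2, n3 = Y.shape
    Y_hat = np.einsum('kl,ijl->ijk', Phi, Y)
    X_hat = np.zeros_like(Y_hat, dtype=complex)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Y_hat[:, :, k], full_matrices=False)
        X_hat[:, :, k] = (U * np.maximum(s - lam, 0.0)) @ Vh
    return np.einsum('kl,ijl->ijk', Phi.conj().T, X_hat)
```

For `Phi` equal to the identity, this reduces to independent matrix singular value thresholding of the frontal slices.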

The subproblem with respect to $\mathcal{L}$ in (3.14) can be described as

$$\mathcal{L}^{k+1} = \arg\min_{\mathcal{L}} \; \Big\{\|\mathcal{L}\|_{\mathrm{TTNN}} + \frac{\beta}{2}\Big\|\mathcal{L} + \mathcal{E}^k - \mathcal{M}^{k+\frac{1}{2}} - \frac{1}{\beta}\mathcal{Z}^k\Big\|_F^2\Big\}. \qquad (3.22)$$

By Theorem 3, the minimizer of problem (3.22) is given by

$$\mathcal{L}^{k+1} = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S}_{\frac{1}{\beta}} \diamond_{\mathbf{\Phi}} \mathcal{V}^H, \qquad (3.23)$$

where $\mathcal{M}^{k+\frac{1}{2}} + \frac{1}{\beta}\mathcal{Z}^k - \mathcal{E}^k = \mathcal{U} \diamond_{\mathbf{\Phi}} \mathcal{S} \diamond_{\mathbf{\Phi}} \mathcal{V}^H$ and $\mathcal{S}_{\frac{1}{\beta}} = \mathbf{\Phi}^H\big[\max\{\hat{\mathcal{S}}_{\mathbf{\Phi}} - \frac{1}{\beta}, 0\}\big]$. The minimizer with respect to $\mathcal{E}$ in (3.16) can be given by

$$\mathcal{E}^{k+1} = \operatorname{sgn}(\mathcal{H}) \circ \max\Big\{|\mathcal{H}| - \frac{\lambda}{\beta},\, 0\Big\}, \qquad (3.24)$$

where $\mathcal{H} = \mathcal{M}^{k+1} + \frac{1}{\beta}\mathcal{Z}^k - \mathcal{L}^{k+1}$, $\circ$ denotes the point-wise product, and $\operatorname{sgn}(\cdot)$ denotes the signum function, i.e.,

$$\operatorname{sgn}(y) := \begin{cases} 1, & \text{if } y > 0, \\ 0, & \text{if } y = 0, \\ -1, & \text{if } y < 0. \end{cases}$$
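The $\mathcal{E}$-update (3.24) is entrywise soft-thresholding, which can be written in one line (our own helper):

```python
import numpy as np

def soft_threshold(H, tau):
    """Entrywise soft-thresholding as in (3.24):
    sgn(H) * max(|H| - tau, 0), applied elementwise."""
    return np.sign(H) * np.maximum(np.abs(H) - tau, 0.0)
```

This is the proximal mapping of `tau * ||.||_1`, so `soft_threshold(H, lam / beta)` gives the $\mathcal{E}$-update directly.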

Since only two blocks of the objective function in (3.12) are nonsmooth and the remaining block does not appear in the objective, the sGS-ADMM is convergent [28, Theorem 3]. The sGS-ADMM for solving (3.12) is stated in Algorithm 2.

## 4 Experimental Results

In this section, numerical results are presented to show the effectiveness of the proposed method for robust tensor completion. We compare the transformed tensor SVD with the sum of nuclear norms of unfolding matrices of a tensor plus a sparse tensor (SNN) [14], tensor SVD using Fourier transform (t-SVD (Fourier)) [31], and low-rank tensor completion by parallel matrix factorization (TMac) [48]. All the experiments are performed under Windows 7 and MATLAB R2018a running on a desktop (Intel Core i7, @ 3.40GHz, 8.00G RAM).

### 4.1 Experimental Setting

The sampling ratio of observations is defined as $\rho = |\Omega| / (n_1 n_2 n_3)$, where $\Omega$ is generated uniformly at random and $|\Omega|$ denotes the number of observed entries. In order to evaluate the performance of the different methods on real-world tensors, the peak signal-to-noise ratio (PSNR) is used to measure the quality of the estimated tensors, which is defined as follows:

$$\mathrm{PSNR} := 10 \log_{10} \frac{n_1 n_2 n_3\, (L_{\max} - L_{\min})^2}{\|\mathcal{L} - \mathcal{L}_0\|_F^2},$$

where $\mathcal{L}$ is the recovered solution, $\mathcal{L}_0$ is the ground-truth tensor, and $L_{\max}$ and $L_{\min}$ are the maximal and minimal entries of $\mathcal{L}_0$, respectively.
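For reference, the PSNR formula above can be computed as follows (our own helper; shapes and normalization follow the definition in the text):

```python
import numpy as np

def psnr(L, L0):
    """PSNR of a recovered tensor L against the ground truth L0,
    with peak value L0.max() - L0.min() as in the text."""
    n = L0.size                       # n1 * n2 * n3
    peak = L0.max() - L0.min()
    mse = np.linalg.norm(L - L0) ** 2  # squared Frobenius norm
    return 10.0 * np.log10(n * peak ** 2 / mse)
```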

As suggested by Theorem 2, we set $\lambda$ accordingly and adjust it slightly to obtain the best possible results. In all experiments, $\lambda$ is selected from a candidate set of values for the Fourier transform and from another candidate set for the unitary and wavelet transforms. Moreover, the step-length $\tau$ is set to guarantee convergence [28], and $\beta$ is chosen to attain the highest PSNR values in Algorithm 2. The Karush-Kuhn-Tucker (KKT) conditions of problem (3.12) are given by

$$\begin{cases} \mathcal{Z} \in \partial \|\mathcal{L}\|_{\mathrm{TTNN}}, \quad \mathcal{Z} \in \partial\big(\lambda \|\mathcal{E}\|_1\big), \\ \mathcal{L} + \mathcal{E} = \mathcal{M}, \quad P_\Omega(\mathcal{M}) = P_\Omega(\mathcal{X}), \end{cases}$$

where $\partial \|\mathcal{L}\|_{\mathrm{TTNN}}$ and $\partial(\lambda \|\mathcal{E}\|_1)$ denote the subdifferentials of $\|\cdot\|_{\mathrm{TTNN}}$ and $\lambda \|\cdot\|_1$, respectively. Note that $P_\Omega(\mathcal{M}) = P_\Omega(\mathcal{X})$ is always satisfied in each iteration of the sGS-ADMM. We measure the accuracy of an approximate optimal solution by the following relative KKT residual:

$$\eta_{\mathrm{res}} := \max\{\eta_z, \eta_e, \eta_m\},$$

where

$$\eta_z = \frac{\|\mathcal{L} - \operatorname{Prox}_{\|\cdot\|_{\mathrm{TTNN}}}(\mathcal{Z} + \mathcal{L})\|_F}{1 + \|\mathcal{Z}\|_F + \|\mathcal{L}\|_F}, \quad
\eta_e = \frac{\|\mathcal{E} - \operatorname{Prox}_{\lambda\|\cdot\|_1}(\mathcal{Z} + \mathcal{E})\|_F}{1 + \|\mathcal{Z}\|_F + \|\mathcal{E}\|_F}, \quad
\eta_m = \frac{\|\mathcal{L} + \mathcal{E} - \mathcal{M}\|_F}{1 + \|\mathcal{L}\|_F + \|\mathcal{E}\|_F + \|\mathcal{M}\|_F.}$$

Here $\operatorname{Prox}_f(\cdot)$ is the proximal mapping of a function $f$, i.e., $\operatorname{Prox}_f(\mathcal{Y}) := \arg\min_{\mathcal{X}}\{f(\mathcal{X}) + \frac{1}{2}\|\mathcal{X} - \mathcal{Y}\|_F^2\}$. The stopping criterion of the algorithm is that $\eta_{\mathrm{res}}$ falls below a given tolerance, and the maximum number of iterations is set to 500.

For the sparsity level of $\mathcal{E}_0$, a fraction $\gamma$ of its entries, chosen uniformly at random, are corrupted by additive independent and identically distributed noise from a standard Gaussian distribution, which generates the sparse tensor $\mathcal{E}_0$. The testing real-world tensors are rescaled entrywise to $[0, 1]$.

In the following tests, we consider two different kinds of transformations in the proposed method. The first one is a Daubechies 4 (db4) discrete wavelet transform in the periodic mode [10], used to compute the transformed tensor SVD (called t-SVD (wavelet)). The second one constructs a unitary transform matrix from the data itself (called t-SVD (data)). We unfold the tensor into a matrix along the third dimension and take the singular value decomposition of this unfolding matrix; the left singular factor then yields the optimal transformation $\mathbf{\Phi}$ for obtaining a low-rank matrix approximation of the unfolding, in the sense of

$$\min_{\operatorname{rank}(\mathbf{B}) = k,\ \text{unitary } \mathbf{\Phi}} \; \|\mathbf{\Phi}\mathbf{A} - \mathbf{B}\|_F^2.$$

In practice, the initial estimator in the robust tensor completion problem obtained by using the Fourier transform (i.e., the t-SVD completion method) can be used to generate $\mathbf{\Phi}$.
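The data-dependent transform can be sketched as follows (our own helper; we assume $n_1 n_2 \ge n_3$ so that the left singular factor of the unfolding is a full $n_3 \times n_3$ unitary matrix):

```python
import numpy as np

def data_transform(A):
    """Build the data-dependent unitary transform used by t-SVD (data):
    unfold A along the third mode and take the left singular vectors of
    the n3 x (n1*n2) unfolding; return Phi = U^H."""
    n1, n2, n3 = A.shape
    A3 = A.reshape(n1 * n2, n3).T              # mode-3 unfolding
    U, _, _ = np.linalg.svd(A3, full_matrices=False)
    return U.conj().T                          # unitary, n3 x n3
```

In practice `A` would be an initial estimate of the underlying tensor (e.g., the Fourier-based t-SVD completion result), as described above.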

### 4.2 Hyperspectral Data

In this subsection, we use three hyperspectral datasets, the Samson, Jasper Ridge, and Urban datasets [55], to show the effectiveness of the proposed method. The testing datasets are third-order tensors (length × width × channels). We describe the three datasets as follows:

• For the Samson dataset, we only utilize a region of each image, where each pixel is recorded at 156 frequency channels covering wavelengths from 401nm to 889nm. The spectral resolution is thus as fine as 3.13nm.

• For the Jasper Ridge dataset, each pixel is recorded at 224 frequency channels with wavelengths ranging from 380nm to 2500nm. The spectral resolution is up to 9.46nm. Since this hyperspectral image is too complex to obtain the ground truth, a subimage of the original image is considered. Due to dense water vapor and atmospheric effects, we only retain 198 channels.

• For the Urban dataset, each pixel corresponds to a square ground area. There are 210 wavelengths ranging from 400nm to 2500nm, which results in a spectral resolution of 10nm. Due to dense water vapor and atmospheric effects, 162 channels of this dataset are retained.

We consider the robust tensor completion problem for the testing hyperspectral data with different sampling ratios and corruption ratios. Figure 4.1 displays visual comparisons of the different methods for the Jasper Ridge dataset. We can observe that the visual quality obtained by t-SVD (data) is better than that obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet). The PSNR values obtained by the different methods are displayed in Table 4.1. The PSNR values obtained by t-SVD (data) are much higher than those obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) for different sampling and corruption ratios, and the improvements of t-SVD (data) are very impressive, especially for small sampling ratios. The performance of t-SVD (wavelet) is better than that of SNN, TMac, and t-SVD (Fourier) in terms of PSNR values for the Samson and Jasper Ridge datasets. For the Urban dataset, the PSNR values obtained by t-SVD (Fourier) are slightly higher than those of t-SVD (wavelet), especially for large sampling ratios.

### 4.3 Video Data

In this subsection, we test three videos (length × width × frames), Carphone, Galleon, and Announcer, to show the effectiveness of the proposed method on the robust tensor completion problem, where the first channel of all frames in the original data is used. We choose only a subset of the frames of these videos to reduce the computational time.

We display visual comparisons for the testing data in robust tensor completion by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) in Figure 4.2. We can see that the images recovered by t-SVD (data) are better than those recovered by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) in terms of visual quality. The t-SVD (data) method retains more details than SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) for the three testing videos.

We also show the PSNR values obtained by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) for the Carphone, Galleon, and Announcer data with different sampling and corruption ratios in Table 4.2. It can be seen that the PSNR values obtained by t-SVD (data) are higher than those of SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet); the PSNR values of t-SVD (data) improve by around 2dB compared with those of t-SVD (Fourier) on these data. For the Carphone and Galleon videos, the performance of t-SVD (wavelet) is better than that of t-SVD (Fourier) in terms of PSNR values, while the PSNR values obtained by t-SVD (Fourier) are slightly higher than those obtained by SNN, TMac, and t-SVD (wavelet) for large sampling ratios.

### 4.4 Face Data

In this subsection, we use the extended Yale face database B (http://vision.ucsd.edu/iskwak/ExtYaleDatabase/ExtYaleB.html) to test the robust tensor completion problem. To reduce the computational time, we crop each original image to contain the face and resize it. Moreover, we only choose the first 30 subjects and 25 illuminations in our test.

The visual comparisons of SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) for the extended Yale face database B are shown in Figure 4.3. It can be seen that the images obtained by t-SVD (data) are clearer than those obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet).

In Table 4.3, we show the PSNR values for different sampling and corruption ratios for the extended Yale face database B in robust tensor completion. It can be seen that the PSNR values obtained by t-SVD (data) are higher than those obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) across these settings. The PSNR values of t-SVD (data) improve by around 2dB over those of t-SVD (Fourier).

## 5 Concluding Remarks

We have studied robust tensor completion problems by using the transformed tensor SVD, which employs other unitary transform matrices instead of the discrete Fourier transform matrix. The algebraic structure of the associated tensor product between two tensors need not be known in general, and the tensor product can be defined via the unitary transform directly. With this generalized tensor product, we have shown that one can recover a low transformed tubal rank tensor exactly with overwhelming probability, provided that its rank is sufficiently small and its corrupted entries are reasonably sparse. Moreover, we have proposed an "optimal" data-dependent transform method for the robust tensor completion problem for third-order tensors. Numerical examples on many real-world tensors show the usefulness of the transformed tensor SVD method with wavelet and data-dependent transforms, and demonstrate that the performance of the proposed method is better than that of existing tensor completion methods.

## Appendix A. Proof of Lemma 1

For convenience, we denote $\operatorname{ranksum}(\mathcal{X}) := \sum_{i=1}^{n_3} r_i$. If the spectral norm of $\mathcal{X}$ is less than or equal to $1$, the conjugate of the transformed tubal multi-rank function on the unit ball of the tensor spectral norm can be defined as

$$\Upsilon^\sharp(\mathcal{Y}) = \sup_{\|\mathcal{X}\| \le 1} \big(\operatorname{Re}(\langle \mathcal{Y}, \mathcal{X} \rangle) - \operatorname{ranksum}(\mathcal{X})\big).$$

Then by the von Neumann’s theorem and the tensor inner product given in Definition 5, we can get

 (5.25)

where denotes the -th largest singular value of Let and . Since , we can choose and . Thus

 Re(⟨Y,X⟩