1 Introduction
Tensors (multidimensional arrays) are generalizations of vectors and matrices, and they serve as a powerful tool for modeling multidimensional data such as videos
[29], color images [36, 40], hyperspectral images [11, 35, 49], and electroencephalography (EEG) signals [8]. Owing to its multilinear algebraic properties, a tensor can exploit the full structure of multidimensional data to provide better understanding and higher accuracy. In many tensor data applications [9, 20, 40, 27, 33, 37, 41, 47, 50], tensor data sets are often corrupted and/or incomplete owing to various unpredictable or unavoidable situations. This motivates us to study tensor completion and tensor robust principal component analysis for multidimensional data processing.
Compared with matrix completion and robust principal component analysis, tensor completion and tensor robust principal component analysis are far from well-studied. The main issues are the definitions of tensor ranks and tensor decompositions. In the matrix case, it has been shown that the nuclear norm is the convex envelope of the matrix rank over a unit ball of the spectral norm [12, 43]. By solving a convex programming problem, one can recover a low-rank matrix exactly with overwhelming probability from a small fraction of its entries, even when part of them are corrupted, provided that the corruptions are reasonably sparse [4, 5, 7, 39, 42]. Unlike the matrix case, there exist several different definitions of the rank of a tensor. For instance, the CANDECOMP/PARAFAC (CP) rank is defined as the minimal number of rank-one outer products needed to represent a tensor, and it is NP-hard to compute in general [26]. Although many authors [19, 22] have recovered some special low CP rank tensors by different methods, it is often computationally intractable to determine the CP rank or its best convex approximation. The tensor train (TT) rank [38] is generated by the TT decomposition through the link structure of the core tensors. Because of this link structure, the TT rank is effective mainly for the completion of higher-order tensors. Bengua et al. [2] proposed a novel TT-rank-based approach for color image and video completion. However, this method may be challenged when the third dimension of the data is large, as in hyperspectral data. The Tucker rank (multirank) is a vector whose entries are derived from the factors of the Tucker decomposition [45]. Liu et al. [29] proposed to use the sum of the nuclear norms of the unfolding matrices of a tensor to recover a low Tucker rank tensor. However, this sum is not the convex envelope of the sum of the ranks of the unfolding matrices [44]. Moreover, Mu et al. [34] showed that the sum of nuclear norms of unfolding matrices is suboptimal and proposed a square deal method to recover low-rank, higher-order tensors; for third-order tensors, however, the square deal method utilizes the information of only one unfolding mode. Other extensions can be found in [13] and the references therein. In [16], Gu et al. established exact recovery of the two components (the low-rank tensor and the entrywise sparse corruption tensor) under restricted eigenvalue conditions. In
[18], Huang et al. proposed a tensor robust principal component analysis model with an exact recovery guarantee under certain tensor incoherence conditions. The tensor-tensor product (t-product) and the associated algebraic constructions based on the Fourier transform, the cosine transform, and arbitrary invertible transforms for tensors of order three or higher are studied in [25], [23], and [32], respectively. Within this framework, Kilmer et al. [25] introduced an SVD-like factorization called the tensor SVD, together with the definition of the tubal rank. Compared with other tensor decompositions, the tensor SVD has been shown to be superior in capturing the spatial-shifting correlation that is ubiquitous in real-world data [25, 32, 24, 53, 51]. Moreover, the tubal nuclear norm is the convex envelope of the tubal average rank within the unit ball of the tensor spectral norm. Motivated by these results, Zhang et al. [52] derived theoretical performance bounds for the model proposed in [53], using the tensor SVD algebraic framework, for third-order tensor recovery from limited sampling. Zhou et al. [54]
proposed a novel factorization method based on the tensor nuclear norm in the Fourier domain for solving the third-order tensor completion problem. Hu et al. [17] proposed a twist tensor nuclear norm for tensor completion, which relaxes the tensor multirank of the twist tensor in the Fourier domain. Unlike tensor completion, robust tensor completion is more complex because of the sparse noise in the observations. Jiang and Ng [21] showed that one can recover a low tubal rank tensor exactly with overwhelming probability by solving a convex program whose objective function is a weighted combination of the tubal nuclear norm, a convex surrogate of the tubal rank, and the $\ell_1$ norm. Recently, Lu et al. [31] considered the tensor robust principal component analysis problem and proposed a tensor nuclear norm based on the t-product and tensor SVD in the Fourier domain, together with a theoretical guarantee for exact recovery.

The main aim of this paper is to study robust tensor completion problems using the transformed tensor SVD, which employs general unitary transform matrices instead of the discrete Fourier transform matrix in the tensor SVD. The main motivation is that a lower tubal rank tensor can often be obtained by using unitary transform matrices other than the discrete Fourier transform matrix, which makes the approach more effective for robust tensor completion. The main contributions of this paper are as follows. (i) We show that one can recover a low transformed tubal rank tensor exactly with overwhelming probability, provided that its rank is sufficiently small and its corrupted entries are reasonably sparse. Because of the use of unitary transformations, new results on the convex envelope of the rank function, the subgradient formula, and the tensor basis are required in the proof. (ii) We propose a new unitary transformation that leads to significantly better recovery results than the Fourier transform. (iii) Experimental results on hyperspectral, face, and video data show that the recovery performance of the transformed tensor SVD, measured in PSNR, is better than that of the Fourier-based tensor SVD and other tensor completion methods.
The outline of this paper is as follows. In Section 2, we introduce the transformed tensor SVD. In Section 3, we analyze the robust tensor completion problem and the algorithm for solving the model. In Section 4, numerical results are presented to show the effectiveness of the proposed tensor SVD for the robust tensor completion problem. Finally, some concluding remarks are given in Section 5. All proofs are deferred to the Appendix.
1.1 Notation and Preliminaries
Throughout this paper, the fields of real and complex numbers are denoted by $\mathbb{R}$ and $\mathbb{C}$, respectively. Tensors and matrices are denoted by Euler script letters (e.g., $\mathcal{A}$) and boldface capital letters (e.g., $\mathbf{A}$), respectively. For a third-order tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, we denote its $(i,j,k)$-th entry by $a_{ijk}$ and use the Matlab notation $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,j,:)$, and $\mathcal{A}(:,:,k)$ to denote the $i$-th horizontal, $j$-th lateral, and $k$-th frontal slices, respectively. Specifically, the frontal slice $\mathcal{A}(:,:,k)$ is denoted compactly by $\mathbf{A}^{(k)}$. $\mathcal{A}(i,j,:)$ denotes a tubal fiber obtained by fixing the first two indices and varying the third index. Moreover, a tensor tube of size $1 \times 1 \times n_3$ is denoted by $\dot{\mathbf{a}}$ and a tensor column of size $n_1 \times 1 \times n_3$ is denoted by $\vec{\mathbf{a}}$.
The inner product of $\mathbf{A}, \mathbf{B} \in \mathbb{C}^{n_1 \times n_2}$ is given by $\langle \mathbf{A}, \mathbf{B} \rangle = \mathrm{tr}(\mathbf{A}^H \mathbf{B})$, where $\mathbf{A}^H$ denotes the conjugate transpose of $\mathbf{A}$ and $\mathrm{tr}(\cdot)$ denotes the matrix trace. For a vector $\mathbf{v} \in \mathbb{C}^n$, the $\ell_2$ norm is $\|\mathbf{v}\|_2 = \sqrt{\sum_i |v_i|^2}$. The spectral norm of a matrix $\mathbf{A}$ is denoted by $\|\mathbf{A}\| = \max_i \sigma_i(\mathbf{A})$, where $\sigma_i(\mathbf{A})$ is the $i$-th largest singular value of $\mathbf{A}$. The nuclear norm of a matrix is defined as $\|\mathbf{A}\|_* = \sum_i \sigma_i(\mathbf{A})$. For a tensor $\mathcal{A}$, the $\ell_1$ norm is defined as $\|\mathcal{A}\|_1 = \sum_{i,j,k} |a_{ijk}|$, the infinity norm is defined as $\|\mathcal{A}\|_\infty = \max_{i,j,k} |a_{ijk}|$, and the Frobenius norm is defined as $\|\mathcal{A}\|_F = \sqrt{\sum_{i,j,k} |a_{ijk}|^2}$. Suppose that $\mathcal{L}$ is a tensor operator; then its operator norm is defined as $\|\mathcal{L}\|_{\mathrm{op}} = \sup_{\|\mathcal{A}\|_F \le 1} \|\mathcal{L}(\mathcal{A})\|_F$.
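As a quick illustration (our own sketch, not from the authors' code), the elementwise tensor norms above can be computed in a few lines of MATLAB:

```matlab
% Minimal sketch: the elementwise tensor norms defined above,
% computed for a random third-order tensor.
A = randn(30, 40, 20);          % example tensor of size 30 x 40 x 20
l1norm   = sum(abs(A(:)));      % l1 norm: sum of absolute values of all entries
infnorm  = max(abs(A(:)));      % infinity norm: largest absolute entry
frobnorm = norm(A(:));          % Frobenius norm: l2 norm of the vectorized tensor
```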
2 Transformed Tensor Singular Value Decomposition
Let $\mathbf{\Phi} \in \mathbb{C}^{n_3 \times n_3}$ be a unitary transform matrix with $\mathbf{\Phi} \mathbf{\Phi}^H = \mathbf{\Phi}^H \mathbf{\Phi} = \mathbf{I}_{n_3}$, where $\mathbf{I}_{n_3}$
is the $n_3 \times n_3$ identity matrix.
$\hat{\mathcal{A}}$ represents the third-order tensor obtained by multiplying each tube along the third dimension of $\mathcal{A}$ by $\mathbf{\Phi}$, i.e., $\hat{\mathcal{A}}(i,j,:) = \mathrm{ivec}(\mathbf{\Phi}\, \mathrm{vec}(\mathcal{A}(i,j,:)))$, where $\mathrm{vec}(\cdot)$ is the vectorization operator that maps a tensor tube to a vector and $\mathrm{ivec}(\cdot)$ is its inverse. Here we write $\hat{\mathcal{A}} = \Phi[\mathcal{A}]$. Moreover, one can get $\mathcal{A}$ from $\hat{\mathcal{A}}$ by applying the inverse transform $\mathbf{\Phi}^H$ along the third dimension of $\hat{\mathcal{A}}$, i.e., $\mathcal{A} = \Phi^H[\hat{\mathcal{A}}]$.
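To make the transform concrete, here is a minimal MATLAB sketch of $\Phi[\cdot]$ and its inverse (our own illustration; the function name transform3 is not from the paper):

```matlab
% Minimal sketch: apply a unitary transform Phi to every tube A(i,j,:).
function Ahat = transform3(A, Phi)
    [n1, n2, n3] = size(A);
    T = reshape(A, n1*n2, n3).';              % n3 x (n1*n2): each column is a tube
    Ahat = reshape((Phi * T).', n1, n2, n3);  % transform all tubes at once, fold back
end
% Since Phi is unitary, the inverse transform is A = transform3(Ahat, Phi').
```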
We construct a block diagonal matrix based on the frontal slices of $\hat{\mathcal{A}}$ as follows:
$$\mathrm{bdiag}(\hat{\mathcal{A}}) = \begin{bmatrix} \hat{\mathbf{A}}^{(1)} & & \\ & \ddots & \\ & & \hat{\mathbf{A}}^{(n_3)} \end{bmatrix}.$$
Also, we can convert the block diagonal matrix back into a tensor by the following fold operator: $\mathrm{fold}(\mathrm{bdiag}(\hat{\mathcal{A}})) = \hat{\mathcal{A}}$.
Kernfeld et al. [23] defined a product between two tensors via slice-wise products in the transform domain, where the transform is an arbitrary invertible transform. In this paper, we are mainly interested in the $\diamond_\Phi$-product, which is based on unitary transforms.
Definition 1
The $\diamond_\Phi$-product of $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{C}^{n_2 \times n_4 \times n_3}$ is a tensor $\mathcal{C} \in \mathbb{C}^{n_1 \times n_4 \times n_3}$ given by $\mathcal{C} = \mathcal{A} \diamond_\Phi \mathcal{B} = \Phi^H[\hat{\mathcal{C}}]$ with
$$\hat{\mathbf{C}}^{(k)} = \hat{\mathbf{A}}^{(k)} \hat{\mathbf{B}}^{(k)}, \quad k = 1, \dots, n_3,$$
where $\hat{\mathbf{A}}^{(k)} \hat{\mathbf{B}}^{(k)}$ denotes the standard matrix product.
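A minimal MATLAB sketch of Definition 1 (assuming the transform3 helper above; the function name is ours):

```matlab
% Minimal sketch: the transformed t-product C = A <>_Phi B, computed by
% standard matrix products of the frontal slices in the transform domain.
function C = tprod_phi(A, B, Phi)
    [n1, ~, n3] = size(A);
    n4 = size(B, 2);
    Ahat = transform3(A, Phi);
    Bhat = transform3(B, Phi);
    Chat = zeros(n1, n4, n3);
    for k = 1:n3
        Chat(:,:,k) = Ahat(:,:,k) * Bhat(:,:,k);   % slice-wise matrix product
    end
    C = transform3(Chat, Phi');                    % return to the original domain
end
```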
The t-product [25] of $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$ is the tensor $\mathcal{C} \in \mathbb{R}^{n_1 \times n_4 \times n_3}$ given by

(2.1) $\mathcal{C} = \mathcal{A} * \mathcal{B} = \mathrm{fold}\big(\mathrm{bcirc}(\mathcal{A})\, \mathrm{unfold}(\mathcal{B})\big),$

where $\mathrm{fold}(\cdot)$ is the operation that takes the block column of frontal slices back into a tensor, i.e., $\mathrm{fold}(\mathrm{unfold}(\mathcal{B})) = \mathcal{B}$,
$$\mathrm{unfold}(\mathcal{B}) = \begin{bmatrix} \mathbf{B}^{(1)} \\ \mathbf{B}^{(2)} \\ \vdots \\ \mathbf{B}^{(n_3)} \end{bmatrix},
\quad \text{and} \quad
\mathrm{bcirc}(\mathcal{A}) = \begin{bmatrix} \mathbf{A}^{(1)} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(2)} \\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{A}^{(n_3)} & \mathbf{A}^{(n_3-1)} & \cdots & \mathbf{A}^{(1)} \end{bmatrix}.$$
The t-product (2.1) can be seen as a special case of Definition 1. Recall that a block circulant matrix can be block diagonalized by the discrete Fourier transform matrix $\mathbf{F}_{n_3}$, and the resulting diagonal blocks are the frontal slices of $\hat{\mathcal{A}}$, i.e.,
$$(\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_1})\, \mathrm{bcirc}(\mathcal{A})\, (\mathbf{F}_{n_3}^H \otimes \mathbf{I}_{n_2}) = \mathrm{bdiag}(\hat{\mathcal{A}}),$$
where $\otimes$ is the Kronecker product. It follows that $\mathcal{A} * \mathcal{B}$ can be computed by the slice-wise products $\hat{\mathbf{C}}^{(k)} = \hat{\mathbf{A}}^{(k)} \hat{\mathbf{B}}^{(k)}$, i.e., the $\diamond_\Phi$-product with $\mathbf{\Phi} = \mathbf{F}_{n_3}$ (up to normalization).
Based on the $\diamond_\Phi$-product, we have the following definitions of the conjugate transpose of a tensor, the identity tensor, the unitary tensor, and the inner product between two tensors.
Definition 2
The conjugate transpose $\mathcal{A}^H \in \mathbb{C}^{n_2 \times n_1 \times n_3}$ of $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ with respect to $\mathbf{\Phi}$ is the tensor obtained by $\big(\widehat{\mathcal{A}^H}\big)^{(k)} = \big(\hat{\mathbf{A}}^{(k)}\big)^H$, $k = 1, \dots, n_3$.
Definition 3
[23, Proposition 4.1] The identity tensor $\mathcal{I}_\Phi$ (with respect to $\mathbf{\Phi}$) is defined to be the tensor such that $\mathcal{A} \diamond_\Phi \mathcal{I}_\Phi = \mathcal{A} = \mathcal{I}_\Phi \diamond_\Phi \mathcal{A}$, where $\mathcal{I}_\Phi = \Phi^H[\hat{\mathcal{I}}]$ with each frontal slice of $\hat{\mathcal{I}}$ being the identity matrix.
Definition 4
[23, Definition 5.1] A tensor $\mathcal{U} \in \mathbb{C}^{n \times n \times n_3}$ is unitary with respect to the $\diamond_\Phi$-product if it satisfies
$$\mathcal{U}^H \diamond_\Phi \mathcal{U} = \mathcal{U} \diamond_\Phi \mathcal{U}^H = \mathcal{I}_\Phi,$$
where $\mathcal{I}_\Phi$ is the identity tensor.
Definition 5
The inner product of $\mathcal{A}, \mathcal{B} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ is defined as

(2.2) $\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{k=1}^{n_3} \big\langle \mathbf{A}^{(k)}, \mathbf{B}^{(k)} \big\rangle.$

In the above definition, $\langle \mathbf{A}^{(k)}, \mathbf{B}^{(k)} \rangle$ is the standard inner product of two matrices. Since $\mathbf{\Phi}$ is unitary, we also have $\langle \mathcal{A}, \mathcal{B} \rangle = \langle \hat{\mathcal{A}}, \hat{\mathcal{B}} \rangle$. In addition, a tensor is said to be diagonal if each of its frontal slices is a diagonal matrix [25]. Based on the above definitions, we have the following transformed tensor SVD with respect to $\mathbf{\Phi}$.
Theorem 1
[23, Theorem 5.1] Suppose that $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$. Then $\mathcal{A}$ can be factorized as follows:

(2.3) $\mathcal{A} = \mathcal{U} \diamond_\Phi \mathcal{S} \diamond_\Phi \mathcal{V}^H,$

where $\mathcal{U} \in \mathbb{C}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{C}^{n_2 \times n_2 \times n_3}$ are unitary tensors with respect to the $\diamond_\Phi$-product, and $\mathcal{S} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ is a diagonal tensor.
The tensors $\mathcal{U}$, $\mathcal{S}$, and $\mathcal{V}$ in the transformed tensor SVD can be computed from the matrix SVDs of the frontal slices $\hat{\mathbf{A}}^{(k)}$, $k = 1, \dots, n_3$, as summarized in Algorithm 1.
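The following MATLAB sketch mirrors this computation (our illustration of the procedure in Algorithm 1, not the authors' released code; it reuses the transform3 helper above):

```matlab
% Minimal sketch: transformed tensor SVD via matrix SVDs of the
% frontal slices of Ahat = Phi[A].
function [U, S, V] = tsvd_phi(A, Phi)
    [n1, n2, n3] = size(A);
    Uhat = zeros(n1, n1, n3); Shat = zeros(n1, n2, n3); Vhat = zeros(n2, n2, n3);
    Ahat = transform3(A, Phi);
    for k = 1:n3
        [Uhat(:,:,k), Shat(:,:,k), Vhat(:,:,k)] = svd(Ahat(:,:,k));
    end
    U = transform3(Uhat, Phi');   % so that A = U <>_Phi S <>_Phi V^H
    S = transform3(Shat, Phi');
    V = transform3(Vhat, Phi');
end
```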
Remark 2.1
For computational efficiency, we also use the skinny transformed tensor SVD. For any $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ with $\mathrm{rank}_t(\mathcal{A}) = r$ (see Definition 6 below), the skinny transformed tensor SVD is given by $\mathcal{A} = \mathcal{U} \diamond_\Phi \mathcal{S} \diamond_\Phi \mathcal{V}^H$, where $\mathcal{U} \in \mathbb{C}^{n_1 \times r \times n_3}$ and $\mathcal{V} \in \mathbb{C}^{n_2 \times r \times n_3}$ satisfy $\mathcal{U}^H \diamond_\Phi \mathcal{U} = \mathcal{I}_\Phi$ and $\mathcal{V}^H \diamond_\Phi \mathcal{V} = \mathcal{I}_\Phi$, and $\mathcal{S} \in \mathbb{C}^{r \times r \times n_3}$ is a diagonal tensor.
Based on the transformed tensor SVD given in Theorem 1, the tensor tubal rank can be defined as follows.
Definition 6
The tubal multirank of a tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ is a vector $\mathbf{r} \in \mathbb{R}^{n_3}$ whose $k$-th entry is the rank of the $k$-th frontal slice of $\hat{\mathcal{A}}$, i.e., $r_k = \mathrm{rank}(\hat{\mathbf{A}}^{(k)})$. The tensor tubal rank, denoted by $\mathrm{rank}_t(\mathcal{A})$, is defined as the number of nonzero singular tubes of $\mathcal{S}$, where $\mathcal{S}$ comes from the transformed tensor SVD of $\mathcal{A} = \mathcal{U} \diamond_\Phi \mathcal{S} \diamond_\Phi \mathcal{V}^H$, i.e.,

(2.4) $\mathrm{rank}_t(\mathcal{A}) = \#\{ i : \mathcal{S}(i,i,:) \neq \mathbf{0} \} = \max_k r_k.$
It follows from [21] that the tensor spectral norm of $\mathcal{A}$ with respect to $\mathbf{\Phi}$, denoted by $\|\mathcal{A}\|$, can be defined as $\|\mathcal{A}\| = \|\mathrm{bdiag}(\hat{\mathcal{A}})\|$. In other words, the tensor spectral norm of $\mathcal{A}$ equals the matrix spectral norm of its block diagonal form in the transform domain. Moreover, if a tensor operator $\mathcal{L}$ can be represented as the $\diamond_\Phi$-product with a tensor $\mathcal{T}$, i.e., $\mathcal{L}(\mathcal{A}) = \mathcal{T} \diamond_\Phi \mathcal{A}$, then $\|\mathcal{L}\|_{\mathrm{op}} = \|\mathcal{T}\|$. The aim of this paper is to recover a low transformed tubal rank tensor, which motivates us to introduce the following definition of a tensor nuclear norm.
Definition 7
The transformed tubal nuclear norm (TTNN) of a tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, denoted by $\|\mathcal{A}\|_{\mathrm{TTNN}}$, is the sum of the nuclear norms of all the frontal slices of $\hat{\mathcal{A}}$, i.e., $\|\mathcal{A}\|_{\mathrm{TTNN}} = \sum_{k=1}^{n_3} \|\hat{\mathbf{A}}^{(k)}\|_*$.
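For illustration, the tubal multirank, the tubal rank, and the TTNN can be computed together from the transformed slices (a sketch under the notation above; the function name and the rank-truncation tolerance tol are ours):

```matlab
% Minimal sketch: tubal multirank, tubal rank, and TTNN of a tensor A.
function [multirank, tubalrank, ttnn] = tensor_ranks(A, Phi, tol)
    Ahat = transform3(A, Phi);
    n3 = size(A, 3);
    multirank = zeros(n3, 1);
    ttnn = 0;
    for k = 1:n3
        s = svd(Ahat(:,:,k));           % singular values of the k-th slice
        multirank(k) = sum(s > tol);    % rank of the k-th transformed slice
        ttnn = ttnn + sum(s);           % accumulate the slice nuclear norms
    end
    tubalrank = max(multirank);         % number of nonzero singular tubes
end
```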
Next we show that the TTNN of a tensor is the convex envelope of the sum of the entries of its tensor tubal multirank over a unit ball of the tensor spectral norm. This is why the TTNN is useful for low transformed tubal rank tensor recovery. We remark that this is a new result in the literature, and the proof is different from [12, Theorem 1] because we consider complex-valued matrices and tensors.
Lemma 1
For any tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, let $\mathrm{rank}_{\mathrm{sum}}(\mathcal{A}) = \sum_{k=1}^{n_3} \mathrm{rank}(\hat{\mathbf{A}}^{(k)})$ denote the sum of the transformed tubal multirank. Then $\|\cdot\|_{\mathrm{TTNN}}$ is the convex envelope of the function $\mathrm{rank}_{\mathrm{sum}}(\cdot)$ on the set $\{ \mathcal{A} : \|\mathcal{A}\| \le 1 \}$.
The proof can be found in Appendix A. Next we introduce two kinds of tensor bases, which play important roles in the tensor coordinate decomposition as well as in the tensor incoherence conditions introduced in the sequel.
Definition 8
(i) The transformed column basis with respect to $\mathbf{\Phi}$, denoted by $\vec{\mathbf{e}}_i$, is a tensor of size $n_1 \times 1 \times n_3$ such that the $i$-th tube of $\Phi[\vec{\mathbf{e}}_i]$ has every entry equal to 1 and all other entries equal to 0. Its conjugate transpose $\vec{\mathbf{e}}_i^{\,H}$ is called the transformed row basis with respect to $\mathbf{\Phi}$. (ii) The transformed tube basis with respect to $\mathbf{\Phi}$, denoted by $\dot{\mathbf{e}}_k$, is a tensor of size $1 \times 1 \times n_3$ such that the $k$-th entry of $\Phi[\dot{\mathbf{e}}_k]$ equals 1 and the remaining entries equal 0.
Denote by $\mathcal{E}_{ijk}$ the unit tensor whose $(i,j,k)$-th entry equals 1 and whose other entries equal 0. Based on Definition 8, $\mathcal{E}_{ijk}$ can be expressed as

(2.5) $\mathcal{E}_{ijk} = \vec{\mathbf{e}}_i \diamond_\Phi \dot{\mathbf{e}}_k \diamond_\Phi \vec{\mathbf{e}}_j^{\,H}.$

Then any third-order tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$ can be decomposed as

(2.6) $\mathcal{A} = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} \langle \mathcal{E}_{ijk}, \mathcal{A} \rangle\, \mathcal{E}_{ijk}.$
These properties will be used many times in the proofs of our main results in Section 3. These bases are different from existing ones, and they are required in the proof of unitary transform-based tensor recovery.
3 Recovery Results by Transformed Tensor SVD
Suppose that we are given a third-order tensor $\mathcal{X} = \mathcal{L}_0 + \mathcal{S}_0$, where $\mathcal{L}_0$ has low transformed tubal rank with respect to $\mathbf{\Phi}$ and $\mathcal{S}_0$ is a sparse corruption tensor; the transformed tubal rank of $\mathcal{L}_0$ is not known. Moreover, we do not know the locations of the nonzero entries of $\mathcal{S}_0$, nor even how many there are. We would like to recover $\mathcal{L}_0$ from a set of observed entries of $\mathcal{X}$. We use the TTNN to obtain a low-rank solution and the $\ell_1$ norm to obtain a sparse solution. Mathematically, the model can be stated as follows:

(3.7) $\min_{\mathcal{L}, \mathcal{S}}\ \|\mathcal{L}\|_{\mathrm{TTNN}} + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{P}_\Omega(\mathcal{L} + \mathcal{S}) = \mathcal{P}_\Omega(\mathcal{X}),$

where $\lambda > 0$ is a penalty parameter and $\mathcal{P}_\Omega$ is the linear projection such that the entries in the set $\Omega$ are given while the remaining entries are missing.
We remark that the convex optimization problems constructed in [53, 52, 21] can be seen as special cases of (3.7), which aim to solve the tensor completion and tensor robust principal component analysis problems, respectively. For instance, if the unitary transform is the discrete Fourier transform, the TTNN reduces to the tubal nuclear norm (TNN).
Here we need some incoherence conditions on $\mathcal{L}_0$ to ensure that it is not sparse.
Definition 9
Assume that $\mathrm{rank}_t(\mathcal{L}_0) = r$ and that its skinny transformed tensor SVD is $\mathcal{L}_0 = \mathcal{U} \diamond_\Phi \mathcal{S} \diamond_\Phi \mathcal{V}^H$. $\mathcal{L}_0$ is said to satisfy the transformed tensor incoherence conditions with parameter $\mu > 0$ if

(3.8) $\max_{i=1,\dots,n_1} \big\| \mathcal{U}^H \diamond_\Phi \vec{\mathbf{e}}_i \big\|_F \le \sqrt{\frac{\mu r}{n_1}},$

(3.9) $\max_{j=1,\dots,n_2} \big\| \mathcal{V}^H \diamond_\Phi \vec{\mathbf{e}}_j \big\|_F \le \sqrt{\frac{\mu r}{n_2}},$

and

(3.10) $\big\| \mathcal{U} \diamond_\Phi \mathcal{V}^H \big\|_\infty \le \sqrt{\frac{\mu r}{n_1 n_2 n_3}},$

where $\vec{\mathbf{e}}_i$ and $\vec{\mathbf{e}}_j$ are the transformed tensor bases with respect to $\mathbf{\Phi}$ given in Definition 8.
For convenience, we denote $n_{(1)} = \max(n_1, n_2)$ and $n_{(2)} = \min(n_1, n_2)$. The main result of this paper can be stated in the following theorem.
Theorem 2
Suppose that $\mathcal{L}_0 \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ obeys (3.8)-(3.10), and that the observation set $\Omega$
is uniformly distributed among all sets of cardinality
$m = \rho n_1 n_2 n_3$. Also suppose that each observed entry is independently corrupted with probability $\gamma$. Then there exist universal constants $c_1, c_2 > 0$ such that, with probability at least $1 - c_1 (n_{(1)} n_3)^{-c_2}$, the recovery of $\mathcal{L}_0$ with $\lambda = 1/\sqrt{\rho\, n_{(1)} n_3}$ is exact, provided that

(3.11) $r \le \frac{c_r\, n_{(2)}}{\mu\, \big(\log (n_{(1)} n_3)\big)^2} \quad \text{and} \quad \gamma \le c_\gamma,$

where $c_r$ and $c_\gamma$ are two positive constants.
Remark 3.1
By the inner product given in Definition 5, a direct generalization of the tensor incoherence conditions listed in [21] to an arbitrary unitary transform would be
$$\max_{i=1,\dots,n_1} \big\| \mathcal{U}^H \diamond_\Phi \vec{\mathbf{e}}_i \big\|_F \le \sqrt{\frac{\mu r}{n_1 n_3}}, \qquad \max_{j=1,\dots,n_2} \big\| \mathcal{V}^H \diamond_\Phi \vec{\mathbf{e}}_j \big\|_F \le \sqrt{\frac{\mu r}{n_2 n_3}},$$
and
$$\big\| \mathcal{U} \diamond_\Phi \mathcal{V}^H \big\|_\infty \le \sqrt{\frac{\mu r}{n_1 n_2 n_3^2}}.$$
The right-hand sides of these three inequalities are obviously smaller than those given in (3.8)-(3.10), which means that our exact recovery conditions are weaker than those of [21].
The idea of the proof is to employ convex analysis to derive conditions under which one can check whether the pair $(\mathcal{L}_0, \mathcal{S}_0)$ is the unique minimizer of (3.7), and to show explicitly that the conditions in Theorem 2 are met with overwhelming probability. The main tools of our proof are the noncommutative Bernstein inequality (NBI) and the golfing scheme [4, 15]. The detailed proof is given in Appendix B.
3.1 Optimization Algorithm
In this subsection, we develop a symmetric Gauss-Seidel based multi-block alternating direction method of multipliers (sGS-ADMM) [6, 28] to solve the robust tensor completion problem (3.7). The sGS-ADMM has been shown to be efficient in many applications, e.g., see [6, 28, 1, 46, 51] and the references therein. Let $\mathcal{Z} = \mathcal{L} + \mathcal{S}$. Problem (3.7) can be rewritten as

(3.12) $\min_{\mathcal{L}, \mathcal{S}, \mathcal{Z}}\ \|\mathcal{L}\|_{\mathrm{TTNN}} + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{L} + \mathcal{S} = \mathcal{Z}, \quad \mathcal{P}_\Omega(\mathcal{Z}) = \mathcal{P}_\Omega(\mathcal{X}).$

The augmented Lagrangian function of (3.12) is defined by
$$L_\beta(\mathcal{L}, \mathcal{S}, \mathcal{Z}; \Lambda) = \|\mathcal{L}\|_{\mathrm{TTNN}} + \lambda \|\mathcal{S}\|_1 + \langle \Lambda, \mathcal{L} + \mathcal{S} - \mathcal{Z} \rangle + \frac{\beta}{2} \|\mathcal{L} + \mathcal{S} - \mathcal{Z}\|_F^2,$$
where $\Lambda$ is the Lagrangian multiplier and $\beta > 0$ is the penalty parameter. Let $\mathbb{Z} = \{ \mathcal{Z} : \mathcal{P}_\Omega(\mathcal{Z}) = \mathcal{P}_\Omega(\mathcal{X}) \}$. The iteration scheme of the sGS-ADMM can be described as follows:

(3.13) $\mathcal{Z}^{k+\frac12} = \arg\min_{\mathcal{Z} \in \mathbb{Z}}\ L_\beta(\mathcal{L}^k, \mathcal{S}^k, \mathcal{Z}; \Lambda^k),$

(3.14) $\mathcal{L}^{k+1} = \arg\min_{\mathcal{L}}\ L_\beta(\mathcal{L}, \mathcal{S}^k, \mathcal{Z}^{k+\frac12}; \Lambda^k),$

(3.15) $\mathcal{Z}^{k+1} = \arg\min_{\mathcal{Z} \in \mathbb{Z}}\ L_\beta(\mathcal{L}^{k+1}, \mathcal{S}^k, \mathcal{Z}; \Lambda^k),$

(3.16) $\mathcal{S}^{k+1} = \arg\min_{\mathcal{S}}\ L_\beta(\mathcal{L}^{k+1}, \mathcal{S}, \mathcal{Z}^{k+1}; \Lambda^k),$

(3.17) $\Lambda^{k+1} = \Lambda^k + \tau \beta\, (\mathcal{L}^{k+1} + \mathcal{S}^{k+1} - \mathcal{Z}^{k+1}),$

where $\tau > 0$ is the steplength.
The solution of (3.13) (and similarly (3.15)) with respect to $\mathcal{Z}$ can be given explicitly by

(3.18) $\mathcal{Z}^{k+\frac12} = \mathcal{P}_\Omega(\mathcal{X}) + \mathcal{P}_{\Omega^c}\Big( \mathcal{L}^k + \mathcal{S}^k + \frac{1}{\beta} \Lambda^k \Big),$

where $\Omega^c$ denotes the complement of $\Omega$.
Similar to the proximal mapping of the nuclear norm of a matrix, we give the proximal mapping of the TTNN of a tensor. The proximal mapping of $\mu \|\cdot\|_{\mathrm{TTNN}}$ at $\mathcal{Y}$ is given in the following theorem.
Theorem 3
For any $\mu > 0$ and $\mathcal{Y} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, the minimizer of the following problem

(3.19) $\min_{\mathcal{X}}\ \mu \|\mathcal{X}\|_{\mathrm{TTNN}} + \frac{1}{2} \|\mathcal{X} - \mathcal{Y}\|_F^2$

is given by

(3.20) $\mathcal{X}^* = \mathcal{U} \diamond_\Phi \mathcal{S}_\mu \diamond_\Phi \mathcal{V}^H,$

where $\mathcal{Y} = \mathcal{U} \diamond_\Phi \mathcal{S} \diamond_\Phi \mathcal{V}^H$ and $\mathcal{S}_\mu$ is obtained by shrinking the singular values in the transform domain, $\hat{\mathbf{S}}_\mu^{(k)} = \max\big( \hat{\mathbf{S}}^{(k)} - \mu,\ 0 \big)$, $k = 1, \dots, n_3$.
By the definition of the TTNN and the unitary invariance of the Frobenius norm, problem (3.19) can be rewritten as

(3.21) $\min_{\hat{\mathcal{X}}}\ \sum_{k=1}^{n_3} \Big( \mu \big\| \hat{\mathbf{X}}^{(k)} \big\|_* + \frac{1}{2} \big\| \hat{\mathbf{X}}^{(k)} - \hat{\mathbf{Y}}^{(k)} \big\|_F^2 \Big).$

By [3, Theorem 2.1], the minimizer of (3.21) is given by the singular value thresholding of each frontal slice,
$$\hat{\mathbf{X}}^{(k)} = \hat{\mathbf{U}}^{(k)} \max\big( \hat{\mathbf{S}}^{(k)} - \mu,\ 0 \big)\, \big( \hat{\mathbf{V}}^{(k)} \big)^H,$$
where $\hat{\mathbf{Y}}^{(k)} = \hat{\mathbf{U}}^{(k)} \hat{\mathbf{S}}^{(k)} ( \hat{\mathbf{V}}^{(k)} )^H$ is the SVD of $\hat{\mathbf{Y}}^{(k)}$. By applying the inverse unitary transform along the third dimension, we obtain that the optimal solution of (3.19) is given by (3.20).
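In code, this proximal mapping amounts to slice-wise singular value thresholding in the transform domain (a sketch consistent with Theorem 3; the function name prox_ttnn is ours):

```matlab
% Minimal sketch: proximal mapping of mu*||.||_TTNN at Y (Theorem 3).
function X = prox_ttnn(Y, mu, Phi)
    Yhat = transform3(Y, Phi);
    Xhat = zeros(size(Y));
    for k = 1:size(Y, 3)
        [U, S, V] = svd(Yhat(:,:,k), 'econ');
        s = max(diag(S) - mu, 0);          % shrink the singular values by mu
        Xhat(:,:,k) = U * diag(s) * V';    % thresholded slice
    end
    X = transform3(Xhat, Phi');
end
```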
The subproblem with respect to $\mathcal{L}$ in (3.14) can be written as

(3.22) $\mathcal{L}^{k+1} = \arg\min_{\mathcal{L}}\ \|\mathcal{L}\|_{\mathrm{TTNN}} + \frac{\beta}{2} \Big\| \mathcal{L} - \Big( \mathcal{Z}^{k+\frac12} - \mathcal{S}^k - \frac{1}{\beta} \Lambda^k \Big) \Big\|_F^2.$

By Theorem 3, the minimizer of problem (3.22) is given by

(3.23) $\mathcal{L}^{k+1} = \mathcal{U} \diamond_\Phi \mathcal{S}_{1/\beta} \diamond_\Phi \mathcal{V}^H,$

where $\mathcal{Z}^{k+\frac12} - \mathcal{S}^k - \frac{1}{\beta} \Lambda^k = \mathcal{U} \diamond_\Phi \mathcal{S} \diamond_\Phi \mathcal{V}^H$ and $\mathcal{S}_{1/\beta}$ is obtained from $\mathcal{S}$ as in Theorem 3 with $\mu = 1/\beta$. The minimizer with respect to $\mathcal{S}$ in (3.16) is given by the entrywise soft-thresholding operator,

(3.24) $\mathcal{S}^{k+1} = \mathrm{sign}(\mathcal{T}) \odot \max\Big( |\mathcal{T}| - \frac{\lambda}{\beta},\ 0 \Big), \qquad \mathcal{T} = \mathcal{Z}^{k+1} - \mathcal{L}^{k+1} - \frac{1}{\beta} \Lambda^k,$

where $\odot$ denotes the pointwise product and $\mathrm{sign}(\cdot)$ denotes the signum function, i.e., $\mathrm{sign}(t) = t / |t|$ if $t \neq 0$ and $\mathrm{sign}(0) = 0$.
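Putting the pieces together, one sweep of the scheme (3.13)-(3.17) can be sketched as follows. This follows our reconstruction of the updates above (with Omega a logical mask of observed entries and M the observed data tensor), not the authors' released implementation:

```matlab
% Minimal sketch: one sGS-ADMM sweep for (3.12), using prox_ttnn above.
function [L, S, Lam] = sgs_admm_step(L, S, Lam, M, Omega, lambda, beta, tau, Phi)
    % (3.13), i.e. (3.18): keep observed entries, fill in the rest
    Z = L + S + Lam / beta;  Z(Omega) = M(Omega);
    % (3.14): L-update via the TTNN proximal mapping with mu = 1/beta
    L = prox_ttnn(Z - S - Lam / beta, 1 / beta, Phi);
    % (3.15): the symmetric (backward) Z-update
    Z = L + S + Lam / beta;  Z(Omega) = M(Omega);
    % (3.16): S-update by entrywise soft thresholding, cf. (3.24)
    T = Z - L - Lam / beta;
    S = sign(T) .* max(abs(T) - lambda / beta, 0);
    % (3.17): multiplier update with steplength tau
    Lam = Lam + tau * beta * (L + S - Z);
end
```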
4 Experimental Results
In this section, numerical results are presented to show the effectiveness of the proposed method for robust tensor completion. We compare the transformed tensor SVD with the sum of nuclear norms of unfolding matrices of a tensor plus a sparse tensor (SNN) [14] (https://tonyzqin.wordpress.com/), the tensor SVD using the Fourier transform (tSVD (Fourier)) [31] (https://canyilu.github.io/publications/), and low-rank tensor completion by parallel matrix factorization (TMac) [48] (https://xuyangyang.github.io/TMac/). All the experiments are performed under Windows 7 and MATLAB R2018a on a desktop with an Intel Core i7 CPU (3.40 GHz) and 8.00 GB RAM.
4.1 Experimental Setting
The sampling ratio of the observations is defined as $\rho = |\Omega| / (n_1 n_2 n_3)$, where $\Omega$ is generated uniformly at random and $n_1 n_2 n_3$ is the number of entries of $\mathcal{X}$. In order to evaluate the performance of the different methods on real-world tensors, the peak signal-to-noise ratio (PSNR) is used to measure the quality of the estimated tensors, which is defined as follows:
$$\mathrm{PSNR} = 10 \log_{10} \frac{n_1 n_2 n_3\, (x_{\max} - x_{\min})^2}{\|\hat{\mathcal{X}} - \mathcal{X}\|_F^2},$$
where $\hat{\mathcal{X}}$ is the recovered tensor, $\mathcal{X}$ is the ground-truth tensor, and $x_{\max}$ and $x_{\min}$ are the maximal and minimal entries of $\mathcal{X}$, respectively.
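A small sketch of the PSNR computation as reconstructed above (the function name is ours):

```matlab
% Minimal sketch: PSNR between a recovered tensor Xrec and the ground truth X.
function p = tensor_psnr(Xrec, X)
    peak = max(X(:)) - min(X(:));               % dynamic range of the ground truth
    mse  = norm(Xrec(:) - X(:))^2 / numel(X);   % mean squared error
    p    = 10 * log10(peak^2 / mse);
end
```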
As suggested by Theorem 2, we set $\lambda$ to the theoretical value and adjust it slightly to obtain the best possible results; the candidate values are tuned separately for the Fourier transform and for the wavelet and data-dependent unitary transforms. Moreover, the steplength $\tau$ is set so as to guarantee convergence [28], and the penalty parameter $\beta$ is chosen to attain the highest PSNR values in Algorithm 2. The Karush-Kuhn-Tucker (KKT) conditions of problem (3.12) are given by
$$0 \in \partial \|\mathcal{L}\|_{\mathrm{TTNN}} + \Lambda, \qquad 0 \in \partial \big( \lambda \|\mathcal{S}\|_1 \big) + \Lambda, \qquad \mathcal{L} + \mathcal{S} = \mathcal{Z}, \qquad \mathcal{P}_\Omega(\mathcal{Z}) = \mathcal{P}_\Omega(\mathcal{X}),$$
where $\partial \|\mathcal{L}\|_{\mathrm{TTNN}}$ and $\partial \|\mathcal{S}\|_1$ denote the subdifferentials of the TTNN and the $\ell_1$ norm, respectively. Note that $\mathcal{P}_\Omega(\mathcal{Z}) = \mathcal{P}_\Omega(\mathcal{X})$ is always satisfied in each iteration of the sGS-ADMM. We measure the accuracy of an approximate optimal solution by the relative KKT residual
$$\eta = \max\{ \eta_{\mathcal{L}},\ \eta_{\mathcal{S}},\ \eta_{\mathcal{Z}} \},$$
where
$$\eta_{\mathcal{L}} = \frac{\big\| \mathcal{L} - \mathrm{Prox}_{\|\cdot\|_{\mathrm{TTNN}}}(\mathcal{L} - \Lambda) \big\|_F}{1 + \|\mathcal{L}\|_F + \|\Lambda\|_F}, \quad \eta_{\mathcal{S}} = \frac{\big\| \mathcal{S} - \mathrm{Prox}_{\lambda \|\cdot\|_1}(\mathcal{S} - \Lambda) \big\|_F}{1 + \|\mathcal{S}\|_F + \|\Lambda\|_F}, \quad \eta_{\mathcal{Z}} = \frac{\|\mathcal{L} + \mathcal{S} - \mathcal{Z}\|_F}{1 + \|\mathcal{Z}\|_F}.$$
Here $\mathrm{Prox}_f$ is the proximal mapping of $f$, i.e., $\mathrm{Prox}_f(\mathcal{Y}) = \arg\min_{\mathcal{X}} \{ f(\mathcal{X}) + \frac{1}{2} \|\mathcal{X} - \mathcal{Y}\|_F^2 \}$. The stopping criterion of the algorithm is that $\eta$ falls below a given tolerance, and the maximum number of iterations is set to 500.
For the sparsity level of $\mathcal{S}_0$, a fraction $\gamma$ of the entries are corrupted uniformly at random by additive independent and identically distributed noise from a standard Gaussian distribution, which generates the sparse tensor $\mathcal{S}_0$. The entries of the testing real-world tensors are rescaled to $[0, 1]$.

Table 4.1: PSNR values obtained by different methods on the hyperspectral datasets; for each dataset, the two blocks of three rows correspond to the two tested sampling ratios.

Dataset      | γ   | SNN   | TMac  | tSVD (Fourier) | tSVD (wavelet) | tSVD (data)
Samson       | 0.1 | 30.22 | 23.15 | 38.53 | 45.43 | 53.30
             | 0.2 | 29.63 | 19.35 | 34.80 | 41.29 | 50.68
             | 0.3 | 28.06 | 16.82 | 32.26 | 38.22 | 45.87
             | 0.1 | 32.43 | 23.28 | 40.87 | 47.34 | 54.40
             | 0.2 | 30.84 | 19.42 | 36.82 | 44.69 | 52.49
             | 0.3 | 29.35 | 16.86 | 33.58 | 39.46 | 48.28
Jasper Ridge | 0.1 | 30.13 | 21.73 | 39.22 | 40.60 | 45.13
             | 0.2 | 27.92 | 18.76 | 36.38 | 37.20 | 41.13
             | 0.3 | 26.00 | 16.47 | 33.63 | 33.59 | 37.38
             | 0.1 | 31.98 | 21.84 | 40.78 | 42.64 | 46.76
             | 0.2 | 29.61 | 18.81 | 37.88 | 38.87 | 43.13
             | 0.3 | 27.49 | 16.51 | 35.15 | 35.81 | 39.33
Urban        | 0.1 | 27.88 | 22.20 | 38.78 | 39.10 | 47.76
             | 0.2 | 26.13 | 18.43 | 36.08 | 35.70 | 44.51
             | 0.3 | 24.69 | 16.06 | 33.39 | 32.16 | 39.63
             | 0.1 | 30.31 | 22.29 | 40.76 | 42.94 | 49.76
             | 0.2 | 27.89 | 18.45 | 37.77 | 38.44 | 45.98
             | 0.3 | 26.06 | 16.07 | 34.98 | 33.37 | 42.26

Table 4.2: PSNR values obtained by different methods on the video datasets; for each video, the two blocks of three rows correspond to the two tested sampling ratios.

Video     | γ   | SNN   | TMac  | tSVD (Fourier) | tSVD (wavelet) | tSVD (data)
Carphone  | 0.1 | 26.80 | 20.86 | 30.70 | 31.21 | 32.38
          | 0.2 | 24.88 | 17.35 | 28.82 | 28.30 | 30.14
          | 0.3 | 23.39 | 15.19 | 27.21 | 27.32 | 28.09
          | 0.1 | 28.74 | 21.03 | 32.57 | 32.73 | 34.06
          | 0.2 | 26.35 | 17.47 | 30.13 | 30.25 | 31.29
          | 0.3 | 24.68 | 15.27 | 28.16 | 28.18 | 29.10
Galleon   | 0.1 | 24.56 | 20.47 | 27.44 | 29.18 | 29.80
          | 0.2 | 22.14 | 17.19 | 25.63 | 26.55 | 27.55
          | 0.3 | 19.07 | 15.12 | 24.05 | 24.26 | 25.84
          | 0.1 | 26.86 | 20.75 | 29.37 | 30.67 | 31.71
          | 0.2 | 24.32 | 17.37 | 27.01 | 28.44 | 29.03
          | 0.3 | 22.10 | 15.23 | 25.08 | 26.26 | 26.93
Announcer | 0.1 | 29.58 | 21.14 | 37.90 | 38.24 | 39.44
          | 0.2 | 27.55 | 17.52 | 35.35 | 35.07 | 36.57
          | 0.3 | 26.36 | 15.33 | 33.02 | 31.94 | 34.14
          | 0.1 | 31.57 | 21.24 | 39.62 | 40.45 | 41.28
          | 0.2 | 28.67 | 17.58 | 36.61 | 35.91 | 37.97
          | 0.3 | 27.50 | 15.37 | 34.15 | 33.54 | 35.23
In the following tests, we consider two different transformations in the proposed method. The first is the Daubechies 4 (db4) discrete wavelet transform in the periodic mode [10], used to compute the transformed tensor SVD (called tSVD (wavelet)). The second constructs a unitary transform matrix from the data itself (called tSVD (data)). Specifically, $\mathcal{X}$ is unfolded along the third dimension into a matrix $\mathbf{X}_{(3)} \in \mathbb{R}^{n_3 \times n_1 n_2}$, and we take the singular value decomposition $\mathbf{X}_{(3)} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^H$. It is interesting to see that $\mathbf{\Phi} = \mathbf{U}^H$ is the optimal transformation for obtaining a low-rank matrix from $\mathbf{X}_{(3)}$: since $\mathbf{U}^H \mathbf{X}_{(3)} = \mathbf{\Sigma} \mathbf{V}^H$, keeping the $k$ leading rows yields the best rank-$k$ approximation of $\mathbf{X}_{(3)}$ by the Eckart-Young theorem. In practice, the ground-truth tensor is not available, so an initial estimator of the robust tensor completion problem obtained with the Fourier transform (i.e., the tSVD completion method) is used to generate $\mathbf{\Phi}$.
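A minimal sketch of this construction (Xinit denotes the initial estimate, e.g., from the Fourier-based completion; names are ours):

```matlab
% Minimal sketch: build the data-dependent unitary transform Phi = U^H from
% the SVD of the mode-3 unfolding of an initial estimate Xinit.
function Phi = data_transform(Xinit)
    [n1, n2, n3] = size(Xinit);
    X3 = reshape(Xinit, n1*n2, n3).';   % mode-3 unfolding, size n3 x (n1*n2)
    [U, ~, ~] = svd(X3, 'econ');        % X3 = U * Sigma * V'
    Phi = U';                           % rows of Phi are the left singular vectors
end
```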
4.2 Hyperspectral Data
In this subsection, we use three hyperspectral datasets, Samson, Jasper Ridge, and Urban [55], to show the effectiveness of the proposed method. The testing datasets are third-order tensors (length × width × channels). We describe the three datasets as follows:

- For the Samson dataset, we only utilize a region of 95 × 95 pixels in each image, where each pixel is recorded at 156 frequency channels covering the wavelengths from 401 nm to 889 nm. The spectral resolution is thus up to 3.13 nm. The size of the resulting tensor is 95 × 95 × 156.

- For the Jasper Ridge dataset, each pixel is recorded at 224 frequency channels with wavelengths ranging from 380 nm to 2500 nm. The spectral resolution is up to 9.46 nm. Since this hyperspectral image is too complex to obtain the ground truth, a subimage of 100 × 100 pixels, cropped from a fixed position in the original image, is considered. Owing to dense water vapor and atmospheric effects, only 198 channels remain. Therefore, the size of the resulting tensor is 100 × 100 × 198.

- For the Urban dataset, there are 307 × 307 pixels in each image, each of which corresponds to a 2 × 2 m² area. In this image, there are 210 wavelengths ranging from 400 nm to 2500 nm, which results in a spectral resolution of 10 nm. Owing to dense water vapor and atmospheric effects, 162 channels of this dataset are retained. Hence, the size of the resulting tensor is 307 × 307 × 162.
We consider the robust tensor completion problem for the testing hyperspectral data with different sampling ratios and corruption levels $\gamma$. Figure 4.1 displays the visual comparisons of the different methods on the Jasper Ridge dataset. We observe that the visual quality obtained by tSVD (data) is better than that obtained by SNN, TMac, tSVD (Fourier), and tSVD (wavelet). The PSNR values obtained by the different methods are displayed in Table 4.1. The PSNR values obtained by tSVD (data) are much higher than those obtained by SNN, TMac, tSVD (Fourier), and tSVD (wavelet) for all tested sampling ratios and $\gamma$; the improvements of tSVD (data) are very impressive, especially for small $\gamma$. The performance of tSVD (wavelet) is better than that of SNN, TMac, and tSVD (Fourier) in terms of PSNR values for the Samson and Jasper Ridge datasets. For the Urban dataset, the PSNR values obtained by tSVD (Fourier) are slightly higher than those obtained by tSVD (wavelet), especially for large $\gamma$.
4.3 Video Data
In this subsection, we use three videos (length × width × frames), Carphone, Galleon, and Announcer (https://media.xiph.org/video/derf/), to show the effectiveness of the proposed method on the robust tensor completion problem, where only the first channel of each frame of the original data is used. Moreover, only a subset of the frames of each video is used to reduce the computational time.
We display the visual comparisons for the testing data in robust tensor completion by SNN, TMac, tSVD (Fourier), tSVD (wavelet), and tSVD (data) in Figure 4.2. We can see that the images recovered by tSVD (data) are better than those recovered by SNN, TMac, tSVD (Fourier), and tSVD (wavelet) in terms of visual quality; tSVD (data) keeps more details than the other methods for the three testing videos.
We also report the PSNR values obtained by SNN, TMac, tSVD (Fourier), tSVD (wavelet), and tSVD (data) for the Carphone, Galleon, and Announcer data with different sampling ratios and corruption levels $\gamma$ in Table 4.2. It can be seen that the PSNR values obtained by tSVD (data) are higher than those obtained by SNN, TMac, tSVD (Fourier), and tSVD (wavelet); for these data, the PSNR values of tSVD (data) improve on those of tSVD (Fourier) by around 2 dB. For the Carphone and Galleon videos, the performance of tSVD (wavelet) is better than that of tSVD (Fourier) in terms of PSNR values, while for large $\gamma$ the PSNR values obtained by tSVD (Fourier) are slightly higher than those obtained by SNN, TMac, and tSVD (wavelet).
4.4 Face Data
In this subsection, we use the extended Yale Face Database B (http://vision.ucsd.edu/iskwak/ExtYaleDatabase/ExtYaleB.html) to test the robust tensor completion problem. To reduce the computational time, we crop each original image to contain the face and resize it. Moreover, we only choose the first 30 subjects and 25 illuminations in our test, so the third dimension of the resulting testing tensor is 750 (30 subjects × 25 illuminations).
Table 4.3: PSNR values obtained by different methods on the extended Yale Face Database B; the two blocks of three rows correspond to the two tested sampling ratios.

γ   | SNN   | TMac  | tSVD (Fourier) | tSVD (wavelet) | tSVD (data)
0.1 | 24.57 | 19.51 | 26.00 | 26.85 | 28.76
0.2 | 22.89 | 17.60 | 24.06 | 24.73 | 26.26
0.3 | 21.76 | 15.77 | 22.22 | 22.97 | 23.75
0.1 | 26.59 | 19.54 | 28.07 | 28.86 | 30.67
0.2 | 24.53 | 17.79 | 25.54 | 26.02 | 27.60
0.3 | 23.06 | 15.93 | 23.59 | 24.14 | 25.29
The visual comparisons of SNN, TMac, tSVD (Fourier), tSVD (wavelet), and tSVD (data) on the extended Yale Face Database B are shown in Figure 4.3. It can be seen that the images obtained by tSVD (data) are clearer than those obtained by SNN, TMac, tSVD (Fourier), and tSVD (wavelet).
In Table 4.3, we report the PSNR values for different sampling ratios and corruption levels $\gamma$ on the extended Yale Face Database B in robust tensor completion. It can be seen that the PSNR values obtained by tSVD (data) are higher than those obtained by SNN, TMac, tSVD (Fourier), and tSVD (wavelet) for all tested sampling ratios and $\gamma$, and are around 2 dB higher than those of tSVD (Fourier).
5 Concluding Remarks
We have studied robust tensor completion problems by using the transformed tensor SVD, which employs general unitary transform matrices instead of the discrete Fourier transform matrix. The algebraic structure of the associated tensor-tensor product need not be known in general, and the tensor product can be defined directly via the unitary transform. With this generalized tensor product, we have shown that one can recover a low transformed tubal rank tensor exactly with overwhelming probability, provided that its rank is sufficiently small and its corrupted entries are reasonably sparse. Moreover, we have proposed an "optimal" data-dependent transform for the robust tensor completion of third-order tensors. Numerical examples on many real-world tensors show the usefulness of the transformed tensor SVD with wavelet and data-dependent transforms, and demonstrate that the performance of the proposed method is better than that of existing tensor completion methods.
Appendix A. Proof of Lemma 1
For convenience, we denote $\phi(\mathcal{A}) = \sum_{k=1}^{n_3} \mathrm{rank}(\hat{\mathbf{A}}^{(k)})$ and $\mathbb{B} = \{ \mathcal{A} \in \mathbb{C}^{n_1 \times n_2 \times n_3} : \|\mathcal{A}\| \le 1 \}$. For tensors whose spectral norm is less than or equal to 1, the conjugate of the transformed tubal multirank function on the unit ball of the tensor spectral norm can be defined as
$$\phi^*(\mathcal{B}) = \sup_{\mathcal{A} \in \mathbb{B}} \big\{ \langle \mathcal{B}, \mathcal{A} \rangle - \phi(\mathcal{A}) \big\}.$$
Then, by von Neumann's trace theorem and the tensor inner product given in Definition 5, we can get

(5.25) $\langle \mathcal{B}, \mathcal{A} \rangle = \sum_{k=1}^{n_3} \big\langle \hat{\mathbf{B}}^{(k)}, \hat{\mathbf{A}}^{(k)} \big\rangle \le \sum_{k=1}^{n_3} \sum_{i=1}^{n_{(2)}} \sigma_i\big( \hat{\mathbf{B}}^{(k)} \big)\, \sigma_i\big( \hat{\mathbf{A}}^{(k)} \big),$

where $\sigma_i(\cdot)$ denotes the $i$-th largest singular value of a matrix. Let $r_k = \mathrm{rank}(\hat{\mathbf{A}}^{(k)})$, so that $\phi(\mathcal{A}) = \sum_k r_k$. Since $\|\mathcal{A}\| \le 1$, each $\sigma_i(\hat{\mathbf{A}}^{(k)}) \le 1$, and equality in (5.25) is attained by choosing $\hat{\mathbf{A}}^{(k)}$ with the same singular vectors as $\hat{\mathbf{B}}^{(k)}$ and with $\sigma_i(\hat{\mathbf{A}}^{(k)}) = 1$ for $i \le r_k$ and $0$ otherwise. Thus
$$\phi^*(\mathcal{B}) = \sup_{\{r_k\}} \sum_{k=1}^{n_3} \Big( \sum_{i=1}^{r_k} \sigma_i\big( \hat{\mathbf{B}}^{(k)} \big) - r_k \Big) = \sum_{k=1}^{n_3} \sum_{i=1}^{n_{(2)}} \max\big\{ \sigma_i\big( \hat{\mathbf{B}}^{(k)} \big) - 1,\ 0 \big\}.$$