Robust Tensor Completion Using Transformed Tensor SVD

In this paper, we study robust tensor completion by using the transformed tensor singular value decomposition (SVD), which employs unitary transform matrices instead of the discrete Fourier transform matrix used in the traditional tensor SVD. The main motivation is that a lower tubal rank tensor can be obtained by using unitary transform matrices other than the discrete Fourier transform matrix, which makes the approach more effective for robust tensor completion. Experimental results on hyperspectral, video and face datasets show that, for the robust tensor completion problem, the recovery performance of the transformed tensor SVD is better in terms of PSNR than that of the Fourier transform based tensor SVD and other robust tensor completion methods.

1 Introduction

Tensors (multi-dimensional arrays) are generalizations of vectors and matrices, and they serve as a powerful tool for modeling multi-dimensional data such as videos [29], color images [36, 40], hyperspectral images [11, 35, 49], and electroencephalography (EEG) signals [8]. Based on its multilinear algebraic properties, a tensor can take full advantage of its structure to provide a better understanding and higher accuracy for multi-dimensional data. In many tensor data applications [9, 20, 40, 27, 33, 37, 41, 47, 50], tensor data sets are often corrupted and/or incomplete owing to various unpredictable or unavoidable situations. This motivates us to perform tensor completion and tensor robust principal component analysis for multi-dimensional data processing.

Compared with matrix completion and robust principal component analysis, tensor completion and tensor robust principal component analysis are far from being well-studied. The main issues are the definitions of tensor ranks and tensor decompositions. In the matrix case, it has been shown that the nuclear norm is the convex envelope of the matrix rank over a unit ball of the spectral norm [12, 43]. By solving a convex programming problem, one can recover a low rank matrix exactly with overwhelming probability from a small fraction of its entries, even when some of them are corrupted, provided that the corruptions are reasonably sparse [4, 5, 7, 39, 42].

Unlike the matrix case, there exist different kinds of definitions of the rank of a tensor. For instance, the CANDECOMP/PARAFAC (CP) rank is defined as the minimal number of rank-one outer products needed to represent a tensor, which is NP-hard to compute in general [26]. Although many authors [19, 22] have recovered some special low CP rank tensors by different methods, it is often computationally intractable to determine the CP rank or its best convex approximation. The Tensor Train (TT) rank [38] is generated by the TT decomposition using the link structure of its core tensors. Because of this link structure, the TT rank is efficient only for higher-order tensors in tensor completion. Bengua et al. [2] proposed a novel TT rank based approach for color image and video completion. However, this method may be challenged when the third dimension of the data is large, as in hyperspectral data. The Tucker rank (multi-rank) is a vector whose entries can be derived from the factors of the Tucker decomposition [45]. Liu et al. [29] proposed to use the sum of the nuclear norms of the unfolding matrices of a tensor to recover a low Tucker rank tensor. However, the sum of the nuclear norms of the unfolding matrices of a tensor is not the convex envelope of the sum of the ranks of the unfolding matrices [44]. Moreover, Mu et al. [34] showed that the sum of nuclear norms of the unfolding matrices is suboptimal and proposed a square deal method to recover a low-rank, high-order tensor, although the square deal method only utilizes the information of one mode of the unfolding matrices for third-order tensors. Other extensions can be found in [13] and the references therein. In [16], Gu et al. provided a perfect recovery of two components (the low-rank tensor and the entrywise sparse corruption tensor) under restricted eigenvalue conditions. In [18], Huang et al. proposed a tensor robust principal component analysis model with an exact recovery guarantee under certain tensor incoherence conditions.

The tensor-tensor product (t-product) and the associated algebraic constructions based on the Fourier transform, the cosine transform, and arbitrary invertible transforms for tensors of order three or higher are studied in [25, 23, 32], respectively. Within this framework, Kilmer et al. [25] introduced an SVD-like factorization called the tensor SVD as well as the definition of tubal rank. Compared with other tensor decompositions, the tensor SVD has been shown to be superior in capturing the spatial-shifting correlation that is ubiquitous in real-world data [25, 32, 24, 53, 51]. Moreover, the tubal nuclear norm is the convex envelope of the tubal average rank within the unit ball of the tensor spectral norm. Motivated by these results, Zhang et al. [52] derived theoretical performance bounds for the model proposed in [53], which uses the tensor SVD algebraic framework for third-order tensor recovery from limited sampling. Zhou et al. [54] proposed a novel factorization method based on the tensor nuclear norm in the Fourier domain for solving the third-order tensor completion problem. Hu et al. [17] proposed a twist tensor nuclear norm for tensor completion, which relaxes the tensor multi-rank of the twist tensor in the Fourier domain. Unlike tensor completion, robust tensor completion is more complex because of the sparse noise in the observations. Jiang and Ng [21] showed that one can recover a low tubal rank tensor exactly with overwhelming probability by simply solving a convex program whose objective function is a weighted combination of the tubal nuclear norm, a convex surrogate of the tubal rank, and the $\ell_1$-norm. Recently, Lu et al. [31] considered the tensor robust principal component analysis problem and proposed a tensor nuclear norm based on the t-product and the tensor SVD in the Fourier domain, where a theoretical guarantee for exact recovery was also provided.

The main aim of this paper is to study robust tensor completion problems by using the transformed tensor SVD, which employs unitary transform matrices instead of the discrete Fourier transform matrix in the tensor SVD. The main motivation is that a lower tubal rank tensor can be obtained by using unitary transform matrices other than the discrete Fourier transform matrix, which would be more effective for robust tensor completion. The main contributions of this paper are as follows. (i) We show that one can recover a low transformed tubal rank tensor exactly with overwhelming probability provided that its rank is sufficiently small and its corrupted entries are reasonably sparse. Because of the use of unitary transformations, new results on the convex envelope of the rank, the subgradient formula and the tensor bases are required in the proof. (ii) We propose a new unitary transformation that can lead to significantly better recovery results than the Fourier transform. (iii) Experimental results for hyperspectral and face images and video data show that the recovery performance obtained by using the transformed tensor SVD is better in terms of PSNR than that obtained by using the Fourier transform in the tensor SVD and by other tensor completion methods.

The outline of this paper is as follows. In Section 2, we introduce the transformed tensor SVD. In Section 3, we analyze the robust tensor completion problem and the algorithm for solving the model. In Section 4, numerical results are presented to show the effectiveness of the proposed tensor SVD for the robust tensor completion problem. Finally, some concluding remarks are given in Section 5. All proofs are deferred to the Appendix.

1.1 Notation and Preliminaries

Throughout this paper, the fields of real numbers and complex numbers are denoted by $\mathbb{R}$ and $\mathbb{C}$, respectively. Tensors and matrices are denoted by Euler script letters and boldface capital letters, respectively. For a third-order tensor $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$, we denote its $(i,j,k)$-th entry as $a_{ijk}$ and use the Matlab notation $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,j,:)$ and $\mathcal{A}(:,:,k)$ to denote the $i$-th horizontal, the $j$-th lateral and the $k$-th frontal slices, respectively. Specifically, the $k$-th frontal slice $\mathcal{A}(:,:,k)$ is denoted compactly as $\mathbf{A}^{(k)}$. $\mathcal{A}(i,j,:)$ denotes a tubal fiber obtained by fixing the first two indices and varying the third index. Moreover, a subtensor of size $1\times 1\times n_3$ is called a tensor tube and a subtensor of size $n_1\times 1\times n_3$ is called a tensor column.

The inner product of two matrices $\mathbf{A},\mathbf{B}\in\mathbb{C}^{n_1\times n_2}$ is given by $\langle\mathbf{A},\mathbf{B}\rangle=\mathrm{tr}(\mathbf{A}^{H}\mathbf{B})$, where $\mathbf{A}^{H}$ denotes the conjugate transpose of $\mathbf{A}$ and $\mathrm{tr}(\cdot)$ denotes the matrix trace. For a vector $\mathbf{v}$, the $\ell_2$-norm is $\|\mathbf{v}\|_2=\sqrt{\sum_i|v_i|^2}$. The spectral norm of a matrix $\mathbf{A}$ is denoted as $\|\mathbf{A}\|=\sigma_1(\mathbf{A})$, where $\sigma_i(\mathbf{A})$ is the $i$-th largest singular value of $\mathbf{A}$. The nuclear norm of a matrix is defined as $\|\mathbf{A}\|_*=\sum_i\sigma_i(\mathbf{A})$. For a tensor $\mathcal{A}$, the $\ell_1$-norm is defined as $\|\mathcal{A}\|_1=\sum_{i,j,k}|a_{ijk}|$, the infinity norm is defined as $\|\mathcal{A}\|_\infty=\max_{i,j,k}|a_{ijk}|$ and the Frobenius norm is defined as $\|\mathcal{A}\|_F=\sqrt{\sum_{i,j,k}|a_{ijk}|^2}$. Suppose that $\mathcal{T}$ is a tensor operator; then its operator norm is defined as $\|\mathcal{T}\|_{\mathrm{op}}=\sup_{\|\mathcal{A}\|_F\leq 1}\|\mathcal{T}(\mathcal{A})\|_F$.
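As an illustration, a minimal NumPy sketch of these norms is given below; the tensor and matrix names, shapes and values are ours and purely illustrative.

import numpy as np

A = np.random.randn(4, 5, 6) + 1j * np.random.randn(4, 5, 6)   # a third-order tensor
l1_norm  = np.sum(np.abs(A))                  # ell_1 norm: sum of absolute values of all entries
inf_norm = np.max(np.abs(A))                  # infinity norm: largest absolute entry
fro_norm = np.sqrt(np.sum(np.abs(A) ** 2))    # Frobenius norm

M = np.random.randn(4, 5)                     # a matrix
spectral_norm = np.linalg.norm(M, 2)                         # largest singular value
nuclear_norm  = np.sum(np.linalg.svd(M, compute_uv=False))   # sum of singular values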

2 Transformed Tensor Singular Value Decomposition

Let $\mathbf{\Phi}\in\mathbb{C}^{n_3\times n_3}$ be a unitary transform matrix with $\mathbf{\Phi}\mathbf{\Phi}^{H}=\mathbf{\Phi}^{H}\mathbf{\Phi}=\mathbf{I}_{n_3}$, where $\mathbf{I}_{n_3}$ is the $n_3\times n_3$ identity matrix.

$\hat{\mathcal{A}}_{\mathbf{\Phi}}$ represents the third-order tensor obtained via multiplying by $\mathbf{\Phi}$ on all tubes along the third dimension of $\mathcal{A}$, i.e.,

$\mathrm{vec}\big(\hat{\mathcal{A}}_{\mathbf{\Phi}}(i,j,:)\big)=\mathbf{\Phi}\,\mathrm{vec}\big(\mathcal{A}(i,j,:)\big),\quad i=1,\dots,n_1,\ j=1,\dots,n_2,$

where $\mathrm{vec}(\cdot)$ is the vectorization operator that maps a tensor tube to a vector. Here we write $\hat{\mathcal{A}}_{\mathbf{\Phi}}=\mathbf{\Phi}[\mathcal{A}]$. Moreover, one can get $\mathcal{A}$ from $\hat{\mathcal{A}}_{\mathbf{\Phi}}$ by using the inverse operation $\mathbf{\Phi}^{H}$ along the third dimension of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, i.e., $\mathcal{A}=\mathbf{\Phi}^{H}[\hat{\mathcal{A}}_{\mathbf{\Phi}}]$.
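A minimal NumPy sketch of this mode-3 transform and its inverse is given below; the function names are ours, and any unitary matrix (e.g., a normalized DFT or wavelet matrix) can be used for Phi.

import numpy as np

def transform(A, Phi):
    """hat(A)_Phi: multiply every tube A(i, j, :) by the unitary matrix Phi."""
    return np.einsum('kl,ijl->ijk', Phi, A)

def inverse_transform(Ahat, Phi):
    """Recover A from hat(A)_Phi by applying the conjugate transpose of Phi."""
    return np.einsum('kl,ijl->ijk', Phi.conj().T, Ahat)

# Quick check with a random unitary matrix.
n1, n2, n3 = 4, 5, 6
A = np.random.randn(n1, n2, n3)
Phi = np.linalg.qr(np.random.randn(n3, n3) + 1j * np.random.randn(n3, n3))[0]
assert np.allclose(inverse_transform(transform(A, Phi), Phi), A)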

We construct a block diagonal matrix based on the frontal slices of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$ as follows:

$\mathrm{blockdiag}(\hat{\mathcal{A}}_{\mathbf{\Phi}})=\mathrm{Diag}\big(\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(1)},\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(2)},\dots,\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(n_3)}\big).$

Also, we can convert the block diagonal matrix back into a tensor by the corresponding fold operator: $\mathrm{fold}\big(\mathrm{blockdiag}(\hat{\mathcal{A}}_{\mathbf{\Phi}})\big)=\hat{\mathcal{A}}_{\mathbf{\Phi}}$.

Kernfeld et al. [23] defined a transform-based product between two tensors via the products of their frontal slices in the transform domain, where the transform can be an arbitrary invertible transform. In this paper, we are mainly interested in the product $\diamond_{\mathbf{\Phi}}$ based on unitary transforms $\mathbf{\Phi}$.

Definition 1

The $\diamond_{\mathbf{\Phi}}$-product of $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$ and $\mathcal{B}\in\mathbb{C}^{n_2\times n_4\times n_3}$ is a tensor $\mathcal{C}\in\mathbb{C}^{n_1\times n_4\times n_3}$, which is given by

$\mathcal{C}=\mathcal{A}\diamond_{\mathbf{\Phi}}\mathcal{B}=\mathbf{\Phi}^{H}\Big[\mathrm{fold}\big(\mathrm{blockdiag}(\hat{\mathcal{A}}_{\mathbf{\Phi}})\cdot\mathrm{blockdiag}(\hat{\mathcal{B}}_{\mathbf{\Phi}})\big)\Big],$

where $\cdot$ denotes the standard matrix product.

The t-product [25] of $\mathcal{A}\in\mathbb{R}^{n_1\times n_2\times n_3}$ and $\mathcal{B}\in\mathbb{R}^{n_2\times n_4\times n_3}$ is a tensor $\mathcal{C}\in\mathbb{R}^{n_1\times n_4\times n_3}$ given by

$\mathcal{C}=\mathcal{A}*\mathcal{B}=\mathrm{Fold}\big(\mathrm{bcirc}(\mathcal{A})\cdot\mathrm{unfold}(\mathcal{B})\big),$   (2.1)

where $\mathrm{Fold}$ is an operation that takes $\mathrm{unfold}(\mathcal{B})$ back into a tensor, i.e., $\mathrm{Fold}(\mathrm{unfold}(\mathcal{B}))=\mathcal{B}$, with $\mathrm{unfold}(\mathcal{B})=[(\mathbf{B}^{(1)})^{T},(\mathbf{B}^{(2)})^{T},\dots,(\mathbf{B}^{(n_3)})^{T}]^{T}$,

and $\mathrm{bcirc}(\mathcal{A})$ is the block circulant matrix generated by the frontal slices of $\mathcal{A}$.

The t-product (2.1) can be seen as a special case of Definition 1. Recall that the block circulant matrix can be block diagonalized by the discrete Fourier transform matrix $\mathbf{F}_{n_3}$ and the diagonal blocks are the frontal slices of $\hat{\mathcal{A}}_{\mathbf{F}}$, i.e.,

$(\mathbf{F}_{n_3}\otimes\mathbf{I}_{n_1})\cdot\mathrm{bcirc}(\mathcal{A})\cdot(\mathbf{F}_{n_3}^{-1}\otimes\mathbf{I}_{n_2})=\mathrm{blockdiag}(\hat{\mathcal{A}}_{\mathbf{F}}),$

where $\otimes$ is the Kronecker product. It follows that the frontal slices of $\hat{\mathcal{C}}_{\mathbf{F}}$, the Fourier transform of $\mathcal{C}=\mathcal{A}*\mathcal{B}$ along the third dimension, are the products of the corresponding frontal slices of $\hat{\mathcal{A}}_{\mathbf{F}}$ and $\hat{\mathcal{B}}_{\mathbf{F}}$, so the t-product can be computed slice by slice in the Fourier domain.
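For concreteness, here is a small NumPy sketch of this Fourier-domain computation of the t-product; the function name is ours, and it assumes real-valued inputs so that the imaginary part of the result is only numerical noise.

import numpy as np

def t_product_fft(A, B):
    """t-product of real tensors A (n1 x n2 x n3) and B (n2 x n4 x n3) via the FFT."""
    Ahat = np.fft.fft(A, axis=2)
    Bhat = np.fft.fft(B, axis=2)
    n3 = A.shape[2]
    # Multiply corresponding frontal slices in the Fourier domain.
    Chat = np.stack([Ahat[:, :, k] @ Bhat[:, :, k] for k in range(n3)], axis=2)
    return np.real(np.fft.ifft(Chat, axis=2))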

Based on the $\diamond_{\mathbf{\Phi}}$-product, we have the definitions of the conjugate transpose of a tensor, the identity tensor, the unitary tensor, and the inner product between two tensors.

Definition 2

The conjugate transpose of $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$ with respect to $\mathbf{\Phi}$, denoted by $\mathcal{A}^{H}$, is the tensor obtained by conjugate transposing each frontal slice of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$ and then applying the inverse transform along the third dimension, i.e., $\big(\widehat{\mathcal{A}^{H}}\big)_{\mathbf{\Phi}}^{(k)}=\big(\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(k)}\big)^{H}$ for $k=1,\dots,n_3$.

Definition 3

[23, Proposition 4.1] The identity tensor $\mathcal{I}_{\mathbf{\Phi}}\in\mathbb{C}^{n\times n\times n_3}$ (with respect to $\mathbf{\Phi}$) is defined to be the tensor such that $\mathcal{I}_{\mathbf{\Phi}}=\mathbf{\Phi}^{H}[\mathcal{J}]$, where $\mathcal{J}\in\mathbb{C}^{n\times n\times n_3}$ is the tensor with each frontal slice being the $n\times n$ identity matrix.

Definition 4

[23, Definition 5.1] A tensor $\mathcal{U}\in\mathbb{C}^{n\times n\times n_3}$ is unitary with respect to the $\diamond_{\mathbf{\Phi}}$-product if it satisfies

$\mathcal{U}^{H}\diamond_{\mathbf{\Phi}}\mathcal{U}=\mathcal{U}\diamond_{\mathbf{\Phi}}\mathcal{U}^{H}=\mathcal{I}_{\mathbf{\Phi}},$

where $\mathcal{I}_{\mathbf{\Phi}}$ is the identity tensor.

Definition 5

The inner product of $\mathcal{A},\mathcal{B}\in\mathbb{C}^{n_1\times n_2\times n_3}$ is defined as

$\langle\mathcal{A},\mathcal{B}\rangle=\sum_{k=1}^{n_3}\big\langle\mathbf{A}^{(k)},\mathbf{B}^{(k)}\big\rangle.$   (2.2)

In the above definition, $\langle\mathbf{A}^{(k)},\mathbf{B}^{(k)}\rangle=\mathrm{tr}\big((\mathbf{A}^{(k)})^{H}\mathbf{B}^{(k)}\big)$ is the standard inner product of two matrices. In addition, a tensor is said to be diagonal if each of its frontal slices is a diagonal matrix [25]. Based on the above definitions, we have the following transformed tensor SVD with respect to $\mathbf{\Phi}$.

Theorem 1

[23, Theorem 5.1] Suppose that $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$. Then $\mathcal{A}$ can be factorized as follows:

$\mathcal{A}=\mathcal{U}\diamond_{\mathbf{\Phi}}\mathcal{S}\diamond_{\mathbf{\Phi}}\mathcal{V}^{H},$   (2.3)

where $\mathcal{U}\in\mathbb{C}^{n_1\times n_1\times n_3}$ and $\mathcal{V}\in\mathbb{C}^{n_2\times n_2\times n_3}$ are unitary tensors with respect to the $\diamond_{\mathbf{\Phi}}$-product, and $\mathcal{S}\in\mathbb{C}^{n_1\times n_2\times n_3}$ is a diagonal tensor.

The tensors $\mathcal{U}$, $\mathcal{S}$ and $\mathcal{V}$ in the transformed tensor SVD can be computed from the SVDs of the frontal slices of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, as summarized in Algorithm 1.

Input: $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$, unitary transform matrix $\mathbf{\Phi}$.
  1: $\hat{\mathcal{A}}_{\mathbf{\Phi}}=\mathbf{\Phi}[\mathcal{A}]$;
  2: for $k=1,2,\dots,n_3$ do
  3: compute the matrix SVD $\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(k)}=\mathbf{U}_k\mathbf{S}_k\mathbf{V}_k^{H}$;
  4: set $\hat{\mathbf{U}}_{\mathbf{\Phi}}^{(k)}=\mathbf{U}_k$, $\hat{\mathbf{S}}_{\mathbf{\Phi}}^{(k)}=\mathbf{S}_k$, $\hat{\mathbf{V}}_{\mathbf{\Phi}}^{(k)}=\mathbf{V}_k$;
  5: end for
  6: $\mathcal{U}=\mathbf{\Phi}^{H}[\hat{\mathcal{U}}_{\mathbf{\Phi}}]$, $\mathcal{S}=\mathbf{\Phi}^{H}[\hat{\mathcal{S}}_{\mathbf{\Phi}}]$, $\mathcal{V}=\mathbf{\Phi}^{H}[\hat{\mathcal{V}}_{\mathbf{\Phi}}]$.
Output: $\mathcal{U}$, $\mathcal{S}$, $\mathcal{V}$.

Algorithm 1 Transformed tensor SVD for third-order tensors [23]
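The following NumPy sketch mirrors Algorithm 1 under our own naming; it assumes the transform helpers sketched in Section 2 and returns the factors in the original domain. Up to floating-point error, the factors satisfy $\mathcal{A}=\mathcal{U}\diamond_{\mathbf{\Phi}}\mathcal{S}\diamond_{\mathbf{\Phi}}\mathcal{V}^{H}$.

import numpy as np

def transformed_tsvd(A, Phi):
    """Transformed tensor SVD of A (n1 x n2 x n3) with a unitary matrix Phi (n3 x n3)."""
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    Ahat = np.einsum('kl,ijl->ijk', Phi, A)                 # step 1: transform along the tubes
    Uhat = np.zeros((n1, n1, n3), dtype=complex)
    Shat = np.zeros((n1, n2, n3), dtype=complex)
    Vhat = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3):                                     # steps 2-5: slice-wise matrix SVDs
        U, s, Vh = np.linalg.svd(Ahat[:, :, k])
        Uhat[:, :, k] = U
        Shat[np.arange(r), np.arange(r), k] = s
        Vhat[:, :, k] = Vh.conj().T
    inv = lambda T: np.einsum('kl,ijl->ijk', Phi.conj().T, T)
    return inv(Uhat), inv(Shat), inv(Vhat)                  # step 6: back to the original domain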
Remark 2.1

For computational efficiency, we also use the skinny transformed tensor SVD. For any $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$ with tubal rank $r$ (see the following definition), the skinny transformed tensor SVD is given by $\mathcal{A}=\mathcal{U}\diamond_{\mathbf{\Phi}}\mathcal{S}\diamond_{\mathbf{\Phi}}\mathcal{V}^{H}$, where $\mathcal{U}\in\mathbb{C}^{n_1\times r\times n_3}$ and $\mathcal{V}\in\mathbb{C}^{n_2\times r\times n_3}$ satisfy $\mathcal{U}^{H}\diamond_{\mathbf{\Phi}}\mathcal{U}=\mathcal{I}_{\mathbf{\Phi}}$ and $\mathcal{V}^{H}\diamond_{\mathbf{\Phi}}\mathcal{V}=\mathcal{I}_{\mathbf{\Phi}}$, and $\mathcal{S}\in\mathbb{C}^{r\times r\times n_3}$ is a diagonal tensor.

Based on the transformed tensor SVD given in Theorem 1, the tensor tubal rank can be defined as follows.

Definition 6

The tubal multi-rank of a tensor $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$ is a vector $\mathbf{r}\in\mathbb{R}^{n_3}$ whose $k$-th entry is the rank of the $k$-th frontal slice of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, i.e., $r_k=\mathrm{rank}\big(\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(k)}\big)$. The tensor tubal rank, denoted as $\mathrm{rank}_t(\mathcal{A})$, is defined as the number of nonzero singular tubes of $\mathcal{S}$, where $\mathcal{S}$ comes from the transformed tensor SVD of $\mathcal{A}$, i.e.,

$\mathrm{rank}_t(\mathcal{A})=\#\{i:\ \mathcal{S}(i,i,:)\neq\mathbf{0}\}.$   (2.4)

It follows from [21] that the tensor spectral norm of $\mathcal{A}$ with respect to $\mathbf{\Phi}$, denoted as $\|\mathcal{A}\|$, can be defined as $\|\mathcal{A}\|=\|\mathrm{blockdiag}(\hat{\mathcal{A}}_{\mathbf{\Phi}})\|$. In other words, the tensor spectral norm of $\mathcal{A}$ equals the matrix spectral norm of its block diagonal form in the transform domain. Moreover, if a tensor operator can be represented as the $\diamond_{\mathbf{\Phi}}$-product with a fixed tensor, then its operator norm equals the tensor spectral norm of that tensor. The aim of this paper is to recover a low transformed tubal rank tensor, which motivates us to introduce the following definition of tensor nuclear norm.

Definition 7

The transformed tubal nuclear norm of a tensor $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$, denoted as $\|\mathcal{A}\|_{\mathrm{TTNN}}$, is the sum of the nuclear norms of all the frontal slices of $\hat{\mathcal{A}}_{\mathbf{\Phi}}$, i.e., $\|\mathcal{A}\|_{\mathrm{TTNN}}=\sum_{k=1}^{n_3}\big\|\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(k)}\big\|_*$.

Next we show that the transformed tubal nuclear norm (TTNN) of a tensor is the convex envelope of the sum of the entries of the tensor tubal multi-rank over a unit ball of the tensor spectral norm. This is why the TTNN is useful for low transformed tubal rank tensor recovery. We remark that this is a new result in the literature, and the proof is different from that of [12, Theorem 1] because we consider complex-valued matrices and tensors.
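To make Definitions 6-7 concrete, the following NumPy sketch (with our own names and a numerical tolerance) computes the transformed tubal multi-rank, the tubal rank, the TTNN and the tensor spectral norm directly from the frontal slices in the transform domain.

import numpy as np

def transformed_ranks_and_norms(A, Phi, tol=1e-10):
    Ahat = np.einsum('kl,ijl->ijk', Phi, A)                       # frontal slices in the transform domain
    sv = [np.linalg.svd(Ahat[:, :, k], compute_uv=False) for k in range(A.shape[2])]
    multi_rank = [int(np.sum(s > tol)) for s in sv]               # rank of each transformed frontal slice
    tubal_rank = max(multi_rank)                                  # number of nonzero singular tubes
    ttnn = float(sum(np.sum(s) for s in sv))                      # sum of slice-wise nuclear norms
    spectral = float(max(np.max(s) for s in sv))                  # spectral norm of the block diagonal form
    return multi_rank, tubal_rank, ttnn, spectral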

Lemma 1

For any tensor $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$, let $\mathrm{rank}_{\mathrm{sum}}(\mathcal{A})=\sum_{k=1}^{n_3}\mathrm{rank}\big(\hat{\mathbf{A}}_{\mathbf{\Phi}}^{(k)}\big)$ denote the sum of the transformed tubal multi-rank. Then $\|\cdot\|_{\mathrm{TTNN}}$ is the convex envelope of the function $\mathrm{rank}_{\mathrm{sum}}(\cdot)$ on the set $\{\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}:\ \|\mathcal{A}\|\leq 1\}$.

The proof can be found in Appendix A. Next we introduce two kinds of tensor bases, which play important roles in the tensor coordinate decomposition as well as in the tensor incoherence conditions introduced in the sequel.

Definition 8

(i) The transformed column basis with respect to $\mathbf{\Phi}$, denoted as $\mathring{\mathbf{e}}_{i}$, is a tensor of size $n_1\times 1\times n_3$ whose $i$-th tube in the transform domain is the all-ones tube (each entry in the $i$-th tube is 1) and whose remaining entries are equal to 0. Its associated conjugate transpose $\mathring{\mathbf{e}}_{i}^{H}$ is called the transformed row basis with respect to $\mathbf{\Phi}$. (ii) The transformed tube basis with respect to $\mathbf{\Phi}$, denoted as $\dot{\mathbf{e}}_{k}$, is a tensor of size $1\times 1\times n_3$ with the $k$-th entry of $\dot{\mathbf{e}}_{k}$ equal to 1 and the remaining entries equal to 0.

Denote by $\mathcal{E}_{ijk}$ the unit tensor whose $(i,j,k)$-th entry is equal to 1 and whose other entries are equal to 0. Based on Definition 8, $\mathcal{E}_{ijk}$ can be expressed as

$\mathcal{E}_{ijk}=\mathring{\mathbf{e}}_{i}\diamond_{\mathbf{\Phi}}\dot{\mathbf{e}}_{k}\diamond_{\mathbf{\Phi}}\mathring{\mathbf{e}}_{j}^{H}.$   (2.5)

Then, for a third-order tensor $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$, we can decompose it as

$\mathcal{A}=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\sum_{k=1}^{n_3}\langle\mathcal{E}_{ijk},\mathcal{A}\rangle\,\mathcal{E}_{ijk}.$   (2.6)

These properties will be used many times in the proofs of our main results in Section 3. These new bases are different from existing bases, and they are required in the proof of the unitary transform-based tensor recovery.

3 Recovery Results by Transformed Tensor SVD

Suppose that we are given a third-order tensor $\mathcal{L}_0\in\mathbb{R}^{n_1\times n_2\times n_3}$ that has a low transformed tubal rank with respect to $\mathbf{\Phi}$ and is also corrupted by a sparse tensor $\mathcal{E}_0$, where the transformed tubal rank of $\mathcal{L}_0$ is not known. Moreover, we have no idea about the locations of the nonzero entries of $\mathcal{E}_0$, not even how many there are. We would like to recover $\mathcal{L}_0$ from a set $\Omega$ of observed entries of $\mathcal{L}_0+\mathcal{E}_0$. We use the TTNN of a tensor to obtain a low rank solution and the $\ell_1$-norm to obtain a sparse solution. Mathematically, the model can be stated as follows:

$\min_{\mathcal{L},\,\mathcal{E}}\ \|\mathcal{L}\|_{\mathrm{TTNN}}+\lambda\|\mathcal{E}\|_1\quad\mathrm{s.t.}\quad\mathcal{P}_\Omega(\mathcal{L}+\mathcal{E})=\mathcal{P}_\Omega(\mathcal{L}_0+\mathcal{E}_0),$   (3.7)

where $\lambda>0$ is a penalty parameter and $\mathcal{P}_\Omega$ is the linear projection such that the entries in the set $\Omega$ are given while the remaining entries are missing.

We remark that the convex optimization problems constructed in [53, 52, 21] can be seen as special cases of (3.7), which aim to solve the tensor completion and tensor robust principal component analysis, respectively. For instance, if the unitary transform is based on discrete Fourier transform, the transformed tubal nuclear norm can be replaced by the tubal nuclear norm (TNN).

Here we need some incoherence conditions on $\mathcal{L}_0$ to ensure that it is not sparse.

Definition 9

Assume that $\mathcal{L}_0\in\mathbb{R}^{n_1\times n_2\times n_3}$ has tubal rank $r$ and that its skinny transformed tensor SVD is $\mathcal{L}_0=\mathcal{U}\diamond_{\mathbf{\Phi}}\mathcal{S}\diamond_{\mathbf{\Phi}}\mathcal{V}^{H}$. Then $\mathcal{L}_0$ is said to satisfy the transformed tensor incoherence conditions with parameter $\mu>0$ if

(3.8)
(3.9)

and

(3.10)

where $\mathring{\mathbf{e}}_{i}$ and $\mathring{\mathbf{e}}_{j}$ are the transformed tensor bases with respect to $\mathbf{\Phi}$ given in Definition 8.

For convenience, we denote $n_{(1)}=\max(n_1,n_2)$ and $n_{(2)}=\min(n_1,n_2)$. The main result of this paper can be stated in the following theorem.

Theorem 2

Suppose that $\mathcal{L}_0$ obeys (3.8)-(3.10), and that the observation set $\Omega$ is uniformly distributed among all sets of cardinality $m$. Also suppose that each observed entry is independently corrupted with probability $\gamma$. Then, there exist universal constants such that, with overwhelming probability, the recovery of $\mathcal{L}_0$ by solving (3.7) is exact, provided that

(3.11)

holds, where the two constants appearing in (3.11) are positive.

Remark 3.1

By the inner product given in Definition 5, a direct generalization of the transformed tensor incoherence conditions listed in [21] to an arbitrary unitary transform would yield three inequalities analogous to (3.8)-(3.10). The right-hand sides of those three inequalities are obviously smaller than the ones given in (3.8)-(3.10), which means that our exact recovery conditions are weaker than those of [21].

The idea of the proof is to employ convex analysis to derive conditions under which one can check whether the pair $(\mathcal{L}_0,\mathcal{E}_0)$ is the unique minimizer of (3.7), and to explicitly show that the conditions in Theorem 2 are met with overwhelming probability. The main tools of our proof are the non-commutative Bernstein inequality (NBI) and the golfing scheme [4, 15]. The detailed proof is given in Appendix B.

3.1 Optimization Algorithm

In this subsection, we develop a symmetric Gauss-Seidel based multi-block alternating direction method of multipliers (sGS-ADMM) [6, 28] to solve the robust tensor completion problem (3.7). The efficiency of the sGS-ADMM has been validated in many fields, see, e.g., [6, 28, 1, 46, 51] and the references therein. After introducing an auxiliary variable, problem (3.7) can be rewritten as

(3.12)

The augmented Lagrangian function of (3.12) is defined by

where $\Lambda$ is the Lagrangian multiplier and $\beta>0$ is the penalty parameter. The iteration scheme of the sGS-ADMM can be described as follows:

(3.13)
(3.14)
(3.15)
(3.16)
(3.17)

where is the step-length.

The solution with respect to can be given by

(3.18)

Similar to the proximal mapping of the nuclear norm of a matrix, the proximal mapping of the TTNN of a tensor is given in the following theorem.

Theorem 3

For any $\mathcal{Y}\in\mathbb{C}^{n_1\times n_2\times n_3}$ and $\tau>0$, the minimizer of the following problem

$\min_{\mathcal{X}}\ \tau\|\mathcal{X}\|_{\mathrm{TTNN}}+\frac{1}{2}\|\mathcal{X}-\mathcal{Y}\|_F^2$   (3.19)

is given by

$\mathcal{X}^{*}=\mathcal{U}\diamond_{\mathbf{\Phi}}\mathcal{S}_{\tau}\diamond_{\mathbf{\Phi}}\mathcal{V}^{H},$   (3.20)

where $\mathcal{Y}=\mathcal{U}\diamond_{\mathbf{\Phi}}\mathcal{S}\diamond_{\mathbf{\Phi}}\mathcal{V}^{H}$ is the transformed tensor SVD of $\mathcal{Y}$ and $\mathcal{S}_{\tau}$ is obtained by soft-thresholding the transformed singular values, $\hat{\mathbf{S}}_{\tau,\mathbf{\Phi}}^{(k)}=\max\big\{\hat{\mathbf{S}}_{\mathbf{\Phi}}^{(k)}-\tau,\,0\big\}$, $k=1,\dots,n_3$.

By the definition of the TTNN, problem (3.19) can be rewritten as $n_3$ independent matrix problems in the transform domain,

$\min_{\hat{\mathbf{X}}_{\mathbf{\Phi}}^{(k)}}\ \tau\big\|\hat{\mathbf{X}}_{\mathbf{\Phi}}^{(k)}\big\|_{*}+\frac{1}{2}\big\|\hat{\mathbf{X}}_{\mathbf{\Phi}}^{(k)}-\hat{\mathbf{Y}}_{\mathbf{\Phi}}^{(k)}\big\|_F^2,\quad k=1,\dots,n_3.$   (3.21)

By [3, Theorem 2.1], the minimizer of each subproblem in (3.21) is given by singular value thresholding of $\hat{\mathbf{Y}}_{\mathbf{\Phi}}^{(k)}$ with threshold $\tau$. By using the inverse unitary transform along the third dimension, we get that the optimal solution of (3.19) is given by (3.20).
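A minimal NumPy sketch of this proximal mapping is given below (our own function name); it performs the slice-wise singular value soft-thresholding in the transform domain described in Theorem 3 and then transforms back.

import numpy as np

def prox_ttnn(Y, Phi, tau):
    """Proximal mapping of tau * TTNN at Y, for a unitary transform matrix Phi."""
    n1, n2, n3 = Y.shape
    Yhat = np.einsum('kl,ijl->ijk', Phi, Y)
    Xhat = np.zeros_like(Yhat, dtype=complex)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Yhat[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
        Xhat[:, :, k] = (U * s) @ Vh
    return np.einsum('kl,ijl->ijk', Phi.conj().T, Xhat)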

The subproblem with respect to $\mathcal{L}$ in (3.14) can be described as

(3.22)

By Theorem 3, the minimizer of problem (3.22) is given by

(3.23)

where the input tensor and the threshold are formed from the current iterates as in (3.22). The minimizer with respect to $\mathcal{E}$ in (3.16) can be given by

(3.24)

where $\odot$ denotes the point-wise product and $\mathrm{sign}(\cdot)$ denotes the signum function, i.e.,

$\mathrm{sign}(x)=\begin{cases}1, & x>0,\\ 0, & x=0,\\ -1, & x<0.\end{cases}$
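For real-valued tensors this soft-thresholding step can be written in a couple of NumPy lines (our own function name; thr stands for the threshold used in (3.24)):

import numpy as np

def soft_threshold(W, thr):
    """Entrywise soft-thresholding: sign(W) * max(|W| - thr, 0)."""
    return np.sign(W) * np.maximum(np.abs(W) - thr, 0.0)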

Since only two blocks of variables appear nonsmoothly in the objective function of (3.12) and the remaining block does not appear in the objective at all, the sGS-ADMM is convergent [28, Theorem 3]. The sGS-ADMM for solving (3.12) is stated in Algorithm 2.

Input: the observed data $\mathcal{P}_\Omega(\mathcal{L}_0+\mathcal{E}_0)$, the transform $\mathbf{\Phi}$, and the model and algorithm parameters. For $t=0,1,2,\dots$, perform the following steps:
  1: Compute the auxiliary variable by the first update (3.13);
  2: Compute $\mathcal{L}^{t+1}$ via (3.23);
  3: Recompute the auxiliary variable by the second (symmetric) update (3.15);
  4: Compute $\mathcal{E}^{t+1}$ via (3.24);
  5: Update the multiplier by (3.17);
  6: If a stopping criterion is not met, set $t:=t+1$ and go to step 1.

Algorithm 2 A symmetric Gauss-Seidel based multi-block ADMM for solving (3.12)

4 Experimental Results

In this section, numerical results are presented to show the effectiveness of the proposed method for robust tensor completion. We compare the transformed tensor SVD with the sum of nuclear norms of unfolding matrices of a tensor plus a sparse tensor (SNN) [14] (https://tonyzqin.wordpress.com/), the tensor SVD using the Fourier transform (t-SVD (Fourier)) [31] (https://canyilu.github.io/publications/), and low-rank tensor completion by parallel matrix factorization (TMac) [48] (https://xu-yangyang.github.io/TMac/). All the experiments are performed under Windows 7 and MATLAB R2018a running on a desktop (Intel Core i7 @ 3.40GHz, 8.00GB RAM).

4.1 Experimental Setting

The sampling ratio of observations is defined as $\rho=|\Omega|/(n_1n_2n_3)$, where $\Omega$ is generated uniformly at random and $n_1n_2n_3$ denotes the number of entries of the tensor. In order to evaluate the performance of different methods on real-world tensors, the peak signal-to-noise ratio (PSNR) is used to measure the quality of the estimated tensors, which is defined as follows:

$\mathrm{PSNR}=10\log_{10}\dfrac{n_1n_2n_3\,(x_{\max}-x_{\min})^2}{\|\hat{\mathcal{X}}-\mathcal{X}\|_F^2},$

where $\hat{\mathcal{X}}$ is the recovered solution, $\mathcal{X}$ is the ground-truth tensor, and $x_{\max}$ and $x_{\min}$ are the maximal and minimal entries of $\mathcal{X}$, respectively.
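A small NumPy helper implementing this PSNR measure (under the formula above, which is our reconstruction of one common convention) could look like this:

import numpy as np

def psnr(X_rec, X):
    """PSNR between the recovered tensor X_rec and the ground truth X."""
    peak = X.max() - X.min()
    mse = np.mean((X_rec - X) ** 2)           # ||X_rec - X||_F^2 / (n1*n2*n3)
    return 10.0 * np.log10(peak ** 2 / mse)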

As suggested by Theorem 2, we set the regularization parameter $\lambda$ accordingly and adjust it slightly to obtain the best possible results. In all experiments, $\lambda$ is selected from one candidate set for the Fourier transform and from another candidate set for the unitary (data-dependent) and wavelet transforms. Moreover, the step-length is set so as to guarantee convergence [28] and the penalty parameter is chosen to achieve the highest PSNR values in Algorithm 2. The Karush-Kuhn-Tucker (KKT) conditions of problem (3.12) are given by

where and denote the subdifferentials of and , respectively. Note that is always satisfied in each iteration of the sGS-ADMM. We measure the accuracy of an approximate optimal solution by using the following relative KKT residual:

where

Here $\mathrm{Prox}_f(\cdot)$ denotes the proximal mapping of a function $f$. The stopping criterion of the algorithm is that the relative KKT residual falls below a prescribed tolerance, and the maximum number of iterations is set to 500.

For the sparse corruption, a fraction $\gamma$ of the entries is corrupted, uniformly at random, by additive independent and identically distributed noise drawn from a standard Gaussian distribution, which generates the sparse tensor $\mathcal{E}_0$. The testing real-world tensors are rescaled to $[0,1]$.
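The following NumPy sketch generates synthetic observations in the spirit of this setting (our own names; whether the corrupted fraction is taken over all entries or only the observed ones is our assumption):

import numpy as np

def make_observations(X, sampling_ratio, gamma, seed=0):
    """Random sampling mask plus sparse Gaussian corruption of the observed entries."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < sampling_ratio        # observation set Omega
    E = np.zeros_like(X)
    corrupt = mask & (rng.random(X.shape) < gamma)     # corrupted subset of the observed entries
    E[corrupt] = rng.standard_normal(np.count_nonzero(corrupt))
    return mask, (X + E) * mask                        # P_Omega(L0 + E0)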

Figure 4.1: (a) Original images for the Jasper Ridge dataset; (b) Observed images ( sampling ratio and corrupted entries); (c) Recovered images by SNN [PSNR = 26.00]; (d) Recovered images by TMac [PSNR = 16.47]; (e) Recovered images by t-SVD (Fourier) [PSNR = 33.63]; (f) Recovered images by t-SVD (wavelet) [PSNR = 33.59]; (g) Recovered images by t-SVD (data) [PSNR = 37.38].
SNN   TMac   t-SVD (Fourier)   t-SVD (wavelet)   t-SVD (data)
Samson 0.1 30.22 23.15 38.53 45.43 53.30
0.2 29.63 19.35 34.80 41.29 50.68
0.3 28.06 16.82 32.26 38.22 45.87
0.1 32.43 23.28 40.87 47.34 54.40
0.2 30.84 19.42 36.82 44.69 52.49
0.3 29.35 16.86 33.58 39.46 48.28
Jasper Ridge 0.1 30.13 21.73 39.22 40.60 45.13
0.2 27.92 18.76 36.38 37.20 41.13
0.3 26.00 16.47 33.63 33.59 37.38
0.1 31.98 21.84 40.78 42.64 46.76
0.2 29.61 18.81 37.88 38.87 43.13
0.3 27.49 16.51 35.15 35.81 39.33
Urban 0.1 27.88 22.20 38.78 39.10 47.76
0.2 26.13 18.43 36.08 35.70 44.51
0.3 24.69 16.06 33.39 32.16 39.63
0.1 30.31 22.29 40.76 42.94 49.76
0.2 27.89 18.45 37.77 38.44 45.98
0.3 26.06 16.07 34.98 33.37 42.26
Table 4.1: PSNR values obtained by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) for the hyperspectral datasets. The boldface number is the best and the underlined number is the second best.
Figure 4.2: Recovered images by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet) and t-SVD (data) in robust tensor completion for video data with sampling ratio and corrupted entries. (a) Original images. (b) Observed images. (c) SNN. (d) TMac. (e) t-SVD (Fourier). (f) t-SVD (wavelet). (g) t-SVD (data).

SNN   TMac   t-SVD (Fourier)   t-SVD (wavelet)   t-SVD (data)
Carphone 0.1 26.80 20.86 30.70 31.21 32.38
0.2 24.88 17.35 28.82 28.30 30.14
0.3 23.39 15.19 27.21 27.32 28.09
0.1 28.74 21.03 32.57 32.73 34.06
0.2 26.35 17.47 30.13 30.25 31.29
0.3 24.68 15.27 28.16 28.18 29.10
Galleon 0.1 24.56 20.47 27.44 29.18 29.80
0.2 22.14 17.19 25.63 26.55 27.55
0.3 19.07 15.12 24.05 24.26 25.84
0.1 26.86 20.75 29.37 30.67 31.71
0.2 24.32 17.37 27.01 28.44 29.03
0.3 22.10 15.23 25.08 26.26 26.93
Announcer 0.1 29.58 21.14 37.90 38.24 39.44
0.2 27.55 17.52 35.35 35.07 36.57
0.3 26.36 15.33 33.02 31.94 34.14
0.1 31.57 21.24 39.62 40.45 41.28
0.2 28.67 17.58 36.61 35.91 37.97
0.3 27.50 15.37 34.15 33.54 35.23
Table 4.2: The PSNR values of video data by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data). The boldface number is the best and the underlined number is the second best.

In the following tests, we consider two different kinds of transformations in the proposed method. The first one is a Daubechies 4 (db4) discrete wavelet transform in the periodic mode [10], used to compute the transformed tensor SVD (called t-SVD (wavelet)). The second one constructs a unitary transform matrix from the data themselves (called t-SVD (data)): the tensor is unfolded into a matrix along the third dimension, and we take the singular value decomposition of this unfolding matrix; the resulting singular vectors along the third mode yield the unitary transform matrix, which is the optimal transformation for obtaining a low rank matrix from the unfolding of the tensor. In practice, the initial estimator obtained for the robust tensor completion problem by using the Fourier transform (i.e., the t-SVD (Fourier) completion method) can be used to generate this data-dependent transform.
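A possible NumPy sketch of this construction is shown below; the orientation of the mode-3 unfolding and the use of the left singular vectors are our assumptions, and the input would typically be the initial Fourier-based estimate mentioned above.

import numpy as np

def data_transform(X):
    """Build a unitary transform matrix Phi (n3 x n3) from the mode-3 unfolding of X."""
    n1, n2, n3 = X.shape                      # assumes n3 <= n1 * n2
    unfold3 = X.reshape(n1 * n2, n3).T        # mode-3 unfolding: every column is a tube of X
    U, _, _ = np.linalg.svd(unfold3, full_matrices=False)
    return U.conj().T                         # rows of Phi are the conjugated left singular vectors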

4.2 Hyperspectral Data

In this subsection, we use three hyperspectral datasets, the Samson, Jasper Ridge, and Urban datasets [55], to show the effectiveness of the proposed method. The testing datasets are third-order tensors (length × width × channels). We describe the three datasets in the following:

  • For the Samson dataset, we only utilize a subregion of each image, where each pixel is recorded at 156 frequency channels covering the wavelengths from 401 nm to 889 nm. The spectral resolution is thus up to 3.13 nm.

  • For the Jasper Ridge dataset, each pixel is recorded at 224 frequency channels with wavelengths ranging from 380 nm to 2500 nm. The spectral resolution is up to 9.46 nm. Since this hyperspectral image is too complex to obtain the ground truth, a subimage is considered. Due to dense water vapor and atmospheric effects, we retain only 198 channels.

  • For the Urban dataset, there are 210 wavelengths ranging from 400 nm to 2500 nm, which results in a spectral resolution of 10 nm. Only 162 channels of this dataset are retained due to dense water vapor and atmospheric effects.

We consider the robust tensor completion problem for the testing hyperspectral data with different sampling ratios and corruption levels. Figure 4.1 displays the visual comparisons of the different methods for the Jasper Ridge dataset. We can observe that the visual quality obtained by t-SVD (data) is better than that obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet). The PSNR values obtained by the different methods are displayed in Table 4.1. We can observe that the PSNR values obtained by t-SVD (data) are much higher than those obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) for the different sampling ratios and corruption levels. The improvements of t-SVD (data) are very impressive, especially for the smaller values. The performance of t-SVD (wavelet) is better than that of SNN, TMac, and t-SVD (Fourier) in terms of PSNR values for the Samson and Jasper Ridge datasets. For the Urban dataset, the PSNR values obtained by t-SVD (Fourier) are slightly higher than those obtained by t-SVD (wavelet), especially for the larger values.

4.3 Video Data

In this subsection, we use three video datasets (length × width × frames), Carphone, Galleon, and Announcer (https://media.xiph.org/video/derf/), to show the effectiveness of the proposed method for the robust tensor completion problem, where only the first channel of all frames in the original data is used. We choose only a subset of the frames of these videos to reduce the computational time.

We display the visual comparisons of the testing data in robust tensor completion by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) in Figure 4.2. We can see that the images recovered by t-SVD (data) are better than those recovered by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) in terms of visual quality. The t-SVD (data) can keep more details than SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) for the three testing videos.

Figure 4.3: Recovered images by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) in robust tensor completion for the extended Yale face database B with sampling ratio and corrupted entries. (a) Original images. (b) Observed images. (c) SNN. (d) TMac. (e) t-SVD (Fourier). (f) t-SVD (wavelet). (g) t-SVD (data).

We also show the PSNR values obtained by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) for the Carphone, Galleon and Announcer data with different sampling ratios and corruption levels in Table 4.2. It can be seen that the PSNR values obtained by t-SVD (data) are higher than those obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet). The PSNR values obtained by t-SVD (data) are around 2dB higher than those obtained by t-SVD (Fourier) for these data. For the Carphone and Galleon videos, the performance of t-SVD (wavelet) is better than that of t-SVD (Fourier) in terms of PSNR values, while the PSNR values obtained by t-SVD (Fourier) are slightly higher than those obtained by SNN, TMac, and t-SVD (wavelet) for the larger values.

4.4 Face Data

In this subsection, we use the extended Yale face database B (http://vision.ucsd.edu/iskwak/ExtYaleDatabase/ExtYaleB.html) to test the robust tensor completion problem. To reduce the computational time, we crop each original image to contain the face and resize it. Moreover, we only choose the first 30 subjects and 25 illuminations in our test.

SNN   TMac   t-SVD (Fourier)   t-SVD (wavelet)   t-SVD (data)
0.1 24.57 19.51 26.00 26.85 28.76
0.2 22.89 17.60 24.06 24.73 26.26
0.3 21.76 15.77 22.22 22.97 23.75
0.1 26.59 19.54 28.07 28.86 30.67
0.2 24.53 17.79 25.54 26.02 27.60
0.3 23.06 15.93 23.59 24.14 25.29
Table 4.3: The PSNR values of face data by SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data). The boldface number is the best and the underlined number is the second best.

The visual comparisons of SNN, TMac, t-SVD (Fourier), t-SVD (wavelet), and t-SVD (data) for the extended Yale face database B are shown in Figure 4.3. It can be seen that the images obtained by t-SVD (data) are clearer than those obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet).

In Table 4.3, we show the PSNR values for different sampling ratios and corruption levels for the extended Yale face database B in robust tensor completion. It can be seen that the PSNR values obtained by t-SVD (data) are higher than those obtained by SNN, TMac, t-SVD (Fourier), and t-SVD (wavelet) for these settings. The PSNR values of t-SVD (data) are around 2dB higher than those of t-SVD (Fourier).

5 Concluding Remarks

We have studied robust tensor completion problems by using the transformed tensor SVD, which employs unitary transform matrices other than the discrete Fourier transform matrix. The algebraic structure of the associated tensor product between two tensors does not need to be known in general, and the tensor product can be defined directly via the unitary transform. With this generalized tensor product, we have shown that one can recover a low transformed tubal rank tensor exactly with overwhelming probability provided that its rank is sufficiently small and its corrupted entries are reasonably sparse. Moreover, we have proposed an “optimal” data-dependent transform method for the robust tensor completion problem for third-order tensors. Numerical examples on several real-world tensors show the usefulness of the transformed tensor SVD method with wavelet and data-dependent transforms, and demonstrate that the performance of the proposed method is better than that of existing tensor completion methods.

Appendix A. Proof of Lemma 1

For convenience, we denote and . If the tensor spectral norm is less than or equal to 1, the conjugate of the transformed tubal multi-rank function on the unit ball of the tensor spectral norm can be defined as

Then, by von Neumann's trace theorem and the tensor inner product given in Definition 5, we can get

(5.25)

where $\sigma_i(\cdot)$ denotes the $i$-th largest singular value of the corresponding matrix. Let and . Since , we can choose and . Thus