A Fast Algorithm for Cosine Transform Based Tensor Singular Value Decomposition

Recently, there has been a lot of research into the tensor singular value decomposition (t-SVD) based on the discrete Fourier transform (DFT) matrix. The main aim of this paper is to propose and study a tensor singular value decomposition based on the discrete cosine transform (DCT) matrix. The advantages of using the DCT are that (i) complex arithmetic is not involved in the cosine transform based tensor singular value decomposition, so computational cost can be saved; (ii) an intrinsic reflexive boundary condition along the tubes in the third dimension of the tensor is employed, so its performance can be better than that obtained with the periodic boundary condition implied by the DFT. We demonstrate that the tensor product of two tensors under the DCT is equivalent to the multiplication of a block Toeplitz-plus-Hankel matrix with a block vector. Numerical examples of low-rank tensor completion are further given to illustrate that the DCT-based method is about two times faster than the DFT-based one, and that the errors of video and multispectral image completion by using the DCT are smaller than those by using the DFT.


1 Introduction

A tensor is a multi-dimensional array of numbers, which is a generalization of a matrix. Compared to a "flat" matrix, a tensor provides a richer and more natural representation for many kinds of data. In this paper, we focus on third-order tensors, which can be pictured as data cubes. This format of data is widely used in color image and gray-scale video inpainting (1; 2; 3; 4; 5; 6), hyperspectral image (HSI) data recovery (7; 8; 9; 10), personalized web search (11), high-order web link analysis (12), magnetic resonance imaging (MRI) data recovery (13), and seismic data reconstruction (14).

Like matrix decompositions, tensor decompositions are important multilinear algebra tools, and many different ones exist. The CANDECOMP/PARAFAC (CP) decomposition (15) and the Tucker decomposition (16) are the two most well-known. The CP decomposition can be considered as a higher-order generalization of the matrix singular value decomposition (SVD). It tries to decompose a tensor into a sum of rank-one tensors. Similar to a rank-one matrix, a third-order rank-one tensor can be written as the outer product of three vectors. The CP-rank of a tensor is defined as the minimum number of rank-one tensors whose sum generates the original tensor; this definition is an analog of the definition of matrix rank. The Tucker decomposition is a higher-order generalization of principal component analysis (PCA). It decomposes a tensor into a core tensor multiplied by a matrix along each mode. The Tucker rank based on the Tucker decomposition is a vector whose $i$-th element is the rank of the mode-$i$ unfolding matrix.

In recent years, Kilmer and Martin et al. (17; 18; 19) proposed a third-order tensor decomposition called the tensor singular value decomposition (t-SVD). This decomposition strategy is based on the definition of the tensor product (see Section 2). After performing a one-dimensional discrete Fourier transform (DFT) along the third dimension of the tensor, this tensor product makes the tensor decomposition an analog of the matrix decomposition. This strategy avoids the loss of structural information caused by matricizing the tensor. However, because a one-dimensional DFT is performed along the third dimension, the obtained tensor is complex. These complex numbers lead to higher computational cost and are not intrinsically required. A natural question is whether another transform can be used instead of the DFT to avoid this disadvantage. The discrete cosine transform (DCT) (20) is the first candidate; it expresses a finite sequence as a sum of cosine functions.

The DCT produces only real numbers for real input. This feature halves the amount of data processed in the t-SVD, thus saving a lot of time. There is another difference: the DFT implies periodic boundary conditions (BCs), while the DCT implies reflexive BCs, which yield a continuous extension at the boundaries (20). If the signal satisfies reflexive BCs (as real data often does), the new t-SVD based on the DCT can achieve better results than the one based on the DFT. We give the theoretical derivation of using the DCT for the t-SVD and verify its superiority compared with the DFT.

The rest of this paper is organized as follows. In Section 2, we introduce related notation and the background of the original t-SVD with the DFT. In Section 3, we give the theoretical derivation of the new t-SVD with the DCT. Based on the new t-SVD, we introduce a new tensor nuclear norm in Section 4. We conduct extensive experiments to demonstrate the effectiveness of the proposed method in Section 5. In Section 6, we give some concluding remarks.

2 Notations and Preliminaries

In this section, we introduce the basic notation and give the definitions related to the t-SVD. We use non-bold lowercase letters for scalars, e.g., $a$, boldface lowercase letters for vectors, e.g., $\mathbf{a}$, boldface capital letters for matrices, e.g., $\mathbf{A}$, and boldface calligraphic letters for tensors, e.g., $\mathcal{A}$. $\mathbb{R}$ and $\mathbb{C}$ represent the fields of real and complex numbers, respectively. For a third-order tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, we use the MATLAB notations $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,j,:)$, and $\mathcal{A}(:,:,k)$ to denote the horizontal, lateral, and frontal slices, respectively, and $\mathcal{A}(:,j,k)$, $\mathcal{A}(i,:,k)$, and $\mathcal{A}(i,j,:)$ to denote the columns, rows, and tubes, respectively. For convenience, we use $\mathbf{A}^{(k)}$ for the $k$-th frontal slice $\mathcal{A}(:,:,k)$ and $\mathbf{a}_{ij}$ for the $(i,j)$-th tube $\mathcal{A}(i,j,:)$. Both $a_{ijk}$ and $\mathcal{A}(i,j,k)$ represent the $(i,j,k)$-th element. The Frobenius norm of $\mathcal{A}$ is defined as $\|\mathcal{A}\|_F = \sqrt{\sum_{i,j,k} |a_{ijk}|^2}$. It is easy to see that $\|\mathcal{A}\|_F = \frac{1}{\sqrt{n_3}}\|\bar{\mathcal{A}}\|_F$, where $\bar{\mathcal{A}}$ is defined below.

Next, we introduce some definitions that are closely related to the t-SVD. We use $\bar{\mathcal{A}}$ to represent the discrete Fourier transform of $\mathcal{A}$ along each tube, i.e., $\bar{\mathcal{A}} = \mathtt{fft}(\mathcal{A},[\,],3)$ in MATLAB notation. The block circulant matrix (18; 19) of $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is defined as

$$\mathrm{bcirc}(\mathcal{A}) = \begin{bmatrix} \mathbf{A}^{(1)} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(2)} \\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{A}^{(n_3)} & \mathbf{A}^{(n_3-1)} & \cdots & \mathbf{A}^{(1)} \end{bmatrix} \in \mathbb{R}^{n_1 n_3 \times n_2 n_3}. \tag{1}$$

The block diagonal matrix and the corresponding inverse operator (18; 19) are defined as

$$\mathrm{Diag}(\bar{\mathcal{A}}) = \begin{bmatrix} \bar{\mathbf{A}}^{(1)} & & \\ & \ddots & \\ & & \bar{\mathbf{A}}^{(n_3)} \end{bmatrix}, \qquad \mathrm{Diag}^{-1}\big(\mathrm{Diag}(\bar{\mathcal{A}})\big) = \bar{\mathcal{A}}. \tag{2}$$

The unfold and fold operators in the t-SVD (18; 19) are defined as

$$\mathrm{unfold}(\mathcal{A}) = \begin{bmatrix} \mathbf{A}^{(1)} \\ \mathbf{A}^{(2)} \\ \vdots \\ \mathbf{A}^{(n_3)} \end{bmatrix}, \qquad \mathrm{fold}\big(\mathrm{unfold}(\mathcal{A})\big) = \mathcal{A}. \tag{3}$$

An important point is that the block circulant matrix can be block diagonalized.

Theorem ((17))

$$(\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_1}) \cdot \mathrm{bcirc}(\mathcal{A}) \cdot (\mathbf{F}_{n_3}^{-1} \otimes \mathbf{I}_{n_2}) = \mathrm{Diag}(\bar{\mathcal{A}}), \tag{4}$$

where $\otimes$ denotes the Kronecker product, $\mathbf{F}_{n_3}$ is the $n_3 \times n_3$ DFT matrix, and $\mathbf{I}_n$ is the $n \times n$ identity matrix.

Definition (t-product (19))

Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$, the t-product $\mathcal{A} * \mathcal{B}$ is the third-order tensor of size $n_1 \times n_4 \times n_3$ given by

$$\mathcal{A} * \mathcal{B} = \mathrm{fold}\big(\mathrm{bcirc}(\mathcal{A}) \cdot \mathrm{unfold}(\mathcal{B})\big). \tag{5}$$

This definition is the core of the t-SVD. It is like a one-dimensional circular convolution of two vectors, i.e., a convolution under periodic BCs, but with the frontal slices of the tensors playing the role of the vector elements. With Theorem 2, equation (5) can be rewritten as

$$\mathcal{A} * \mathcal{B} = \mathrm{fold}\big((\mathbf{F}_{n_3}^{-1} \otimes \mathbf{I}_{n_1})\, \mathrm{Diag}(\bar{\mathcal{A}})\, (\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_2})\, \mathrm{unfold}(\mathcal{B})\big). \tag{6}$$

Equation (6) means that the t-product in the spatial domain corresponds to the matrix multiplication of the frontal slices in the Fourier domain, i.e., $\overline{\mathcal{A} * \mathcal{B}}^{(k)} = \bar{\mathbf{A}}^{(k)} \bar{\mathbf{B}}^{(k)}$, which greatly simplifies the algorithm.
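For concreteness, the slice-wise computation in (6) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the function name t_product_fft is ours.

```python
import numpy as np

def t_product_fft(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3) via the Fourier domain."""
    n3 = A.shape[2]
    A_bar = np.fft.fft(A, axis=2)            # DFT along each tube
    B_bar = np.fft.fft(B, axis=2)
    C_bar = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):                      # frontal-slice products in the Fourier domain
        C_bar[:, :, k] = A_bar[:, :, k] @ B_bar[:, :, k]
    # for real inputs the result is real up to rounding errors
    return np.real(np.fft.ifft(C_bar, axis=2))

C = t_product_fft(np.random.rand(4, 3, 5), np.random.rand(3, 2, 5))  # 4 x 2 x 5
```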

Definition (identity tensor (19))

The identity tensor $\mathcal{I} \in \mathbb{R}^{n \times n \times n_3}$ is the tensor whose first frontal slice is the identity matrix of size $n \times n$, and whose other frontal slices are all zeros.

Definition (orthogonal tensor (19))

A tensor $\mathcal{Q} \in \mathbb{C}^{n \times n \times n_3}$ is orthogonal if it satisfies $\mathcal{Q}^H * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^H = \mathcal{I}$, where $\mathcal{Q}^H$ is the tensor conjugate transpose of $\mathcal{Q}$, which is obtained by conjugate transposing each frontal slice of $\mathcal{Q}$ and then reversing the order of the transposed frontal slices 2 through $n_3$.

Definition (f-diagonal tensor (19))

A tensor $\mathcal{S}$ is called f-diagonal if each of its frontal slices is a diagonal matrix.

Theorem (t-SVD (19; 17))

Given a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the t-SVD of $\mathcal{A}$ is given by

$$\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^H, \tag{7}$$

where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal tensors, and $\mathcal{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is an f-diagonal tensor.

Figure 1: the t-SVD of an $n_1 \times n_2 \times n_3$ tensor.
Definition (tensor multi-rank and tubal rank (21))

Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, its multi-rank is a vector $\mathbf{r} \in \mathbb{R}^{n_3}$ whose $k$-th element is the rank of the $k$-th frontal slice of $\bar{\mathcal{A}}$, i.e., $r_k = \mathrm{rank}(\bar{\mathbf{A}}^{(k)})$. Its tubal rank is defined as the number of nonzero singular tubes, where the singular tubes of $\mathcal{A}$ are the nonzero tubes $\mathcal{S}(i,i,:)$ of the f-diagonal tensor $\mathcal{S}$ in (7).

The tensor tubal rank is actually the largest element of the multi-rank.

Definition (tensor nuclear norm (22; 23))

Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, based on the tensor multi-rank, the tensor nuclear norm (TNN) of $\mathcal{A}$ is defined as

$$\|\mathcal{A}\|_{\mathrm{TNN}} = \sum_{k=1}^{n_3} \|\bar{\mathbf{A}}^{(k)}\|_*, \tag{8}$$

where $\|\cdot\|_*$ denotes the matrix nuclear norm, i.e., the sum of singular values. In order to avoid confusion with the new definition of the TNN proposed later, we call this definition TNN-F in this paper.

The computation of the t-SVD of an $n_1 \times n_2 \times n_3$ tensor needs two steps. The first step is to perform the DFT via the fast Fourier transform (FFT) along each tube; its time complexity is $O(n_1 n_2 n_3 \log n_3)$. After the DFT, the obtained tensor is complex and can be divided into a real part and an imaginary part, so performing the SVD on each frontal slice requires complex arithmetic, which costs roughly as much as operating on the real part and the imaginary part separately. The time complexity of the second step is therefore about $O(2\min(n_1,n_2)\, n_1 n_2 n_3)$, which dominates the computational cost of the first step.
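As an illustration of the two steps, the following NumPy sketch (our own, with assumed helper names; production implementations additionally exploit the conjugate symmetry of the FFT of real data) computes the t-SVD factors.

```python
import numpy as np

def t_svd_fft(A):
    """DFT-based t-SVD: FFT along tubes, then one SVD per frontal slice."""
    n1, n2, n3 = A.shape
    m = min(n1, n2)
    A_bar = np.fft.fft(A, axis=2)                    # step 1: O(n1 n2 n3 log n3)
    U_bar = np.empty((n1, n1, n3), dtype=complex)
    S_bar = np.zeros((n1, n2, n3), dtype=complex)
    V_bar = np.empty((n2, n2, n3), dtype=complex)
    for k in range(n3):                              # step 2: complex SVDs
        u, s, vh = np.linalg.svd(A_bar[:, :, k])
        U_bar[:, :, k] = u
        S_bar[:m, :m, k] = np.diag(s)
        V_bar[:, :, k] = vh.conj().T
    # back to the spatial domain; S is real up to rounding, while U and V may
    # keep small imaginary parts unless conjugate symmetry is enforced slice-wise
    return (np.fft.ifft(U_bar, axis=2),
            np.fft.ifft(S_bar, axis=2),
            np.fft.ifft(V_bar, axis=2))
```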

3 Cosine Transform Based Tensor Singular Value Decomposition

We discuss the DCT-based t-SVD and the resulting structure in this section. Since the corresponding block circulant matrices can be diagonalized by the DFT, the DFT-based t-SVD can be efficiently implemented via the fast Fourier transform (FFT). We will show that the corresponding structure of the DCT-based t-SVD can be diagonalized by the DCT.

We define the shift $\tilde{\mathcal{A}}$ of a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ by $\tilde{\mathbf{A}}^{(k)} = \mathbf{A}^{(k+1)}$ for $k = 1, \dots, n_3-1$ and $\tilde{\mathbf{A}}^{(n_3)} = \mathbf{0}$; the block Hankel matrix below is built from these shifted frontal slices. We use $\hat{\mathcal{A}}$ to represent the DCT along each tube of $\mathcal{A}$, i.e., $\hat{\mathcal{A}} = \mathtt{dct}(\mathcal{A},[\,],3)$ in MATLAB notation. We define the block Toeplitz matrix of $\mathcal{A}$ as

$$\mathrm{toep}(\mathcal{A}) = \begin{bmatrix} \mathbf{A}^{(1)} & \mathbf{A}^{(2)} & \cdots & \mathbf{A}^{(n_3)} \\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(n_3-1)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{A}^{(n_3)} & \mathbf{A}^{(n_3-1)} & \cdots & \mathbf{A}^{(1)} \end{bmatrix}. \tag{9}$$

The block Hankel matrix is defined as

$$\mathrm{hank}(\mathcal{A}) = \begin{bmatrix} \mathbf{A}^{(2)} & \mathbf{A}^{(3)} & \cdots & \mathbf{A}^{(n_3)} & \mathbf{0} \\ \mathbf{A}^{(3)} & \cdots & \mathbf{A}^{(n_3)} & \mathbf{0} & \mathbf{A}^{(n_3)} \\ \vdots & & & & \vdots \\ \mathbf{0} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(3)} & \mathbf{A}^{(2)} \end{bmatrix}, \tag{10}$$

i.e., its $(i,j)$-th block is the $(i+j-1)$-th term of the sequence $(\mathbf{A}^{(2)}, \dots, \mathbf{A}^{(n_3)}, \mathbf{0}, \mathbf{A}^{(n_3)}, \dots, \mathbf{A}^{(2)})$. The block Toeplitz-plus-Hankel matrix of $\mathcal{A}$ is defined as

$$\mathrm{th}(\mathcal{A}) = \mathrm{toep}(\mathcal{A}) + \mathrm{hank}(\mathcal{A}). \tag{11}$$

The block Toeplitz-plus-Hankel matrix can be block diagonalized. The following theorem can be established similarly to (20).

Theorem

$$(\mathbf{C}_{n_3} \otimes \mathbf{I}_{n_1}) \cdot \big(\mathrm{toep}(\mathcal{A}) + \mathrm{hank}(\mathcal{A})\big) \cdot (\mathbf{C}_{n_3}^{-1} \otimes \mathbf{I}_{n_2}) = \mathrm{Diag}(\hat{\mathcal{A}}), \tag{12}$$

where $\otimes$ denotes the Kronecker product and $\mathbf{C}_{n_3}$ is the $n_3 \times n_3$ DCT matrix.

The proof of Theorem 3 can be obtained by using a similar argument to that in (20). We briefly illustrate this theorem with an example.

Example

Consider the scalar-tube case $n_1 = n_2 = 1$ and $n_3 = 3$, with the tube $\mathbf{a} = (a_1, a_2, a_3)$. The Toeplitz matrix is

$$\mathrm{toep}(\mathbf{a}) = \begin{bmatrix} a_1 & a_2 & a_3 \\ a_2 & a_1 & a_2 \\ a_3 & a_2 & a_1 \end{bmatrix},$$

and the Hankel matrix is

$$\mathrm{hank}(\mathbf{a}) = \begin{bmatrix} a_2 & a_3 & 0 \\ a_3 & 0 & a_3 \\ 0 & a_3 & a_2 \end{bmatrix}.$$

Then the Toeplitz-plus-Hankel matrix is

$$\mathrm{toep}(\mathbf{a}) + \mathrm{hank}(\mathbf{a}) = \begin{bmatrix} a_1 + a_2 & a_2 + a_3 & a_3 \\ a_2 + a_3 & a_1 & a_2 + a_3 \\ a_3 & a_2 + a_3 & a_1 + a_2 \end{bmatrix}.$$

For instance, taking $\mathbf{a} = (0, 1, 0)$ gives the matrix with rows $(1, 1, 0)$, $(1, 0, 1)$, $(0, 1, 1)$, whose eigenvalues are $2\cos(k\pi/3)$, $k = 0, 1, 2$, and whose eigenvectors are exactly the rows of the $3 \times 3$ DCT matrix $\mathbf{C}_3$, so that $\mathbf{C}_3 \big(\mathrm{toep}(\mathbf{a}) + \mathrm{hank}(\mathbf{a})\big) \mathbf{C}_3^{-1}$ is diagonal. For $n_1, n_2 > 1$, by using stride permutations the block matrix can be reordered into a block matrix whose blocks are Toeplitz-plus-Hankel matrices of this type, and the same diagonalization applies blockwise, as in (20). Now, it is easy to verify Theorem 3 in this setting.
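The diagonalization can also be checked numerically. The following sketch (our own, under the assumption that the Toeplitz-plus-Hankel structure is the one written in (9)-(11)) builds the scalar-tube matrix from a random tube and verifies that the orthonormal DCT-II matrix diagonalizes it.

```python
import numpy as np
from scipy.linalg import toeplitz, hankel
from scipy.fft import dct

n = 8
a = np.random.default_rng(0).standard_normal(n)

T = toeplitz(a)                                   # symmetric Toeplitz part
# Hankel part: first column (a_2, ..., a_n, 0), last row (0, a_n, ..., a_2)
H = hankel(np.r_[a[1:], 0.0], np.r_[0.0, a[:0:-1]])

# orthonormal DCT-II matrix: its columns are the DCTs of the canonical basis vectors
C = dct(np.eye(n), type=2, norm="ortho", axis=0)

L = C @ (T + H) @ C.T
print(np.max(np.abs(L - np.diag(np.diag(L)))))    # ~1e-15: L is diagonal
# the eigenvalues can be read off cheaply from the first column of T + H
lam = (C @ (T + H)[:, 0]) / C[:, 0]
print(np.allclose(lam, np.diag(L)))               # True
```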

Definition (DCT-based t-product)

Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$, the t-product $\mathcal{A} \star_c \mathcal{B}$ is the third-order tensor of size $n_1 \times n_4 \times n_3$ given by

$$\mathcal{A} \star_c \mathcal{B} = \mathrm{fold}\big((\mathrm{toep}(\mathcal{A}) + \mathrm{hank}(\mathcal{A})) \cdot \mathrm{unfold}(\mathcal{B})\big), \tag{13}$$

where unfold and fold are defined as in (3).

Equation (13) can be rewritten as

$$\mathcal{A} \star_c \mathcal{B} = \mathrm{fold}\big((\mathbf{C}_{n_3}^{-1} \otimes \mathbf{I}_{n_1})\, \mathrm{Diag}(\hat{\mathcal{A}})\, (\mathbf{C}_{n_3} \otimes \mathbf{I}_{n_2})\, \mathrm{unfold}(\mathcal{B})\big), \tag{14}$$

so, as in the DFT case, the DCT-based t-product reduces to matrix multiplications of the frontal slices in the cosine transform domain.
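A sketch of the fast evaluation (14), checked against the direct definition (13), is given below. It is our own illustration: we compute the diagonal blocks as the eigenvalues of the tube-wise Toeplitz-plus-Hankel matrices via the standard first-column trick, and we assume this weighting is what the transformed tensor $\hat{\mathcal{A}}$ in (12) absorbs; the helper names are hypothetical.

```python
import numpy as np
from scipy.fft import dct, idct

def tube_eigenvalues(A):
    """Eigenvalues of toep + hank along each tube of A (n1 x n2 x n3)."""
    n3 = A.shape[2]
    first_col = A.copy()
    first_col[:, :, :-1] += A[:, :, 1:]          # a_k + a_{k+1}, with a_{n3+1} = 0
    e1 = np.zeros(n3); e1[0] = 1.0
    return dct(first_col, norm="ortho", axis=2) / dct(e1, norm="ortho")

def t_product_dct(A, B):
    """DCT-based t-product via slice-wise products in the transform domain."""
    Lam = tube_eigenvalues(A)                    # plays the role of Diag(A_hat)
    B_hat = dct(B, norm="ortho", axis=2)
    C_hat = np.einsum("ijk,jlk->ilk", Lam, B_hat)  # frontal-slice products
    return idct(C_hat, norm="ortho", axis=2)

def toep_plus_hank(A):
    """Direct block Toeplitz-plus-Hankel matrix of A, as in (9)-(11)."""
    n1, n2, n3 = A.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            blk = A[:, :, abs(i - j)].copy()     # Toeplitz block
            s = i + j + 1                        # Hankel anti-diagonal index
            if s <= n3 - 1:
                blk += A[:, :, s]
            elif s >= n3 + 1:
                blk += A[:, :, 2 * n3 - s]
            M[i*n1:(i+1)*n1, j*n2:(j+1)*n2] = blk
    return M

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 4, 5)), rng.standard_normal((4, 2, 5))
unfold_B = B.transpose(2, 0, 1).reshape(-1, B.shape[1])   # stack frontal slices
C_direct = (toep_plus_hank(A) @ unfold_B).reshape(5, 3, 2).transpose(1, 2, 0)
print(np.allclose(t_product_dct(A, B), C_direct))          # True
```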

Based on this new t-product, the DCT-based t-SVD can be defined as follows:

Theorem (DCT-based t-SVD)

Given a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the DCT-based t-SVD of $\mathcal{A}$ is given by

$$\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T, \tag{15}$$

where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal tensors, $\mathcal{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is an f-diagonal tensor, and $\mathcal{V}^T$ is the tensor transpose of $\mathcal{V}$, which is obtained by transposing each frontal slice of $\mathcal{V}$.

The proof of Theorem 4 can be obtained by using a similar argument to that in (19).

By exploiting this structure, the DCT-based t-SVD can be efficiently calculated by performing the matrix singular value decomposition on each frontal slice of the third-order tensor after the DCT along each tube. For an $n_1 \times n_2 \times n_3$ tensor, the time complexity of performing the DCT along each tube in the first step is $O(n_1 n_2 n_3 \log n_3)$, the same as for the DFT-based t-SVD. Since the DCT produces only real numbers, the time complexity of calculating the SVDs is $O(\min(n_1,n_2)\, n_1 n_2 n_3)$ for the DCT-based t-SVD, which is half that of the DFT-based t-SVD.

DFT                  $O(n_1 n_2 n_3 \log n_3)$
SVD after DFT        $O(2\min(n_1,n_2)\, n_1 n_2 n_3)$
t-SVD (total)        $O(n_1 n_2 n_3 \log n_3 + 2\min(n_1,n_2)\, n_1 n_2 n_3)$
DCT                  $O(n_1 n_2 n_3 \log n_3)$
SVD after DCT        $O(\min(n_1,n_2)\, n_1 n_2 n_3)$
new t-SVD (total)    $O(n_1 n_2 n_3 \log n_3 + \min(n_1,n_2)\, n_1 n_2 n_3)$
Table 1: The time complexity of the t-SVD and the DCT-based t-SVD on an $n_1 \times n_2 \times n_3$ tensor.

4 Low-rank Tensor Completion by TNN-C

Based on the DCT-based t-SVD, we propose the new definition of TNN called TNN-C in this section. Then, we establish the low-rank tensor completion model (6) based on TNN-C and develop the alternating direction method of multipliers (ADMM) to tackle the corresponding low-rank tensor completion model.

Definition (TNN-C)

Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the TNN-C of $\mathcal{A}$ is defined as

$$\|\mathcal{A}\|_{\mathrm{TNN\text{-}C}} = \sum_{k=1}^{n_3} \|\hat{\mathbf{A}}^{(k)}\|_*. \tag{16}$$

It is easy to see that the TNN-C of $\mathcal{A}$ is the sum of the singular values of all frontal slices of $\hat{\mathcal{A}}$. Meanwhile, the $k$-th element of the multi-rank is the rank of the $k$-th frontal slice of $\hat{\mathcal{A}}$. Thus, TNN-C is a convex surrogate of the $\ell_1$ norm of a third-order tensor's multi-rank.

The low-rank tensor completion model is defined as

$$\min_{\mathcal{X}} \ \|\mathcal{X}\|_{\mathrm{TNN\text{-}C}} \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{O}), \tag{17}$$

where $\mathcal{O}$ is the observed tensor, $\Omega$ is the index set of the observed entries, and $\mathcal{P}_{\Omega}$ is the projection that keeps the entries in $\Omega$ and sets the others to zero.

Letting

$$\iota_{\mathbb{S}}(\mathcal{X}) = \begin{cases} 0, & \mathcal{X} \in \mathbb{S}, \\ +\infty, & \text{otherwise}, \end{cases}$$

where $\mathbb{S} = \{\mathcal{X} : \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{O})\}$, (17) can be rewritten as the following unconstrained problem:

$$\min_{\mathcal{X}} \ \|\mathcal{X}\|_{\mathrm{TNN\text{-}C}} + \iota_{\mathbb{S}}(\mathcal{X}). \tag{18}$$

By introducing an auxiliary variable $\mathcal{Y}$, the augmented Lagrangian function of (18) is

$$L(\mathcal{X}, \mathcal{Y}, \mathcal{M}) = \|\mathcal{Y}\|_{\mathrm{TNN\text{-}C}} + \iota_{\mathbb{S}}(\mathcal{X}) + \langle \mathcal{M}, \mathcal{X} - \mathcal{Y} \rangle + \frac{\beta}{2}\|\mathcal{X} - \mathcal{Y}\|_F^2, \tag{19}$$

where $\mathcal{M}$ is the Lagrangian multiplier and $\beta > 0$ is the balance parameter. According to the framework of ADMM (24; 25; 26), $\mathcal{Y}$, $\mathcal{X}$, and $\mathcal{M}$ are iteratively updated as

$$\begin{cases} \mathcal{Y}^{p+1} = \arg\min_{\mathcal{Y}} L(\mathcal{X}^p, \mathcal{Y}, \mathcal{M}^p), \\ \mathcal{X}^{p+1} = \arg\min_{\mathcal{X}} L(\mathcal{X}, \mathcal{Y}^{p+1}, \mathcal{M}^p), \\ \mathcal{M}^{p+1} = \mathcal{M}^p + \beta(\mathcal{X}^{p+1} - \mathcal{Y}^{p+1}). \end{cases} \tag{20}$$

Now, we give the details for solving each subproblem.

In Step 1, the $\mathcal{Y}$-subproblem is

$$\mathcal{Y}^{p+1} = \arg\min_{\mathcal{Y}} \ \|\mathcal{Y}\|_{\mathrm{TNN\text{-}C}} + \frac{\beta}{2}\Big\|\mathcal{Y} - \Big(\mathcal{X}^p + \frac{\mathcal{M}^p}{\beta}\Big)\Big\|_F^2, \tag{21}$$

which can be solved by the following theorem (22; 23).

Theorem

Given $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ with DCT-based t-SVD $\mathcal{T} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$, a minimizer to

$$\min_{\mathcal{Y}} \ \tau \|\mathcal{Y}\|_{\mathrm{TNN\text{-}C}} + \frac{1}{2}\|\mathcal{Y} - \mathcal{T}\|_F^2 \tag{22}$$

is given by the tensor singular value thresholding

$$\mathcal{Y} = \mathcal{U} \star_c \mathcal{S}_{\tau} \star_c \mathcal{V}^T, \tag{23}$$

where $\mathcal{S}_{\tau}$ is the f-diagonal tensor whose each frontal slice in the discrete cosine domain is $\hat{\mathbf{S}}^{(k)}_{\tau} = \max(\hat{\mathbf{S}}^{(k)} - \tau, 0)$.
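A minimal sketch of this thresholding, using the orthonormal DCT along the tubes as the slice transform: with any orthogonal tube transform, slice-wise singular value thresholding in the transform domain is exactly the proximal operator of the corresponding transformed nuclear norm; the paper's transform may differ from the plain orthonormal DCT by the Toeplitz-plus-Hankel weighting discussed above.

```python
import numpy as np
from scipy.fft import dct, idct

def tsvt_dct(T, tau):
    """Slice-wise singular value thresholding in the DCT domain, cf. (23)."""
    T_hat = dct(T, norm="ortho", axis=2)
    Y_hat = np.empty_like(T_hat)
    for k in range(T.shape[2]):
        u, s, vh = np.linalg.svd(T_hat[:, :, k], full_matrices=False)
        Y_hat[:, :, k] = (u * np.maximum(s - tau, 0.0)) @ vh
    return idct(Y_hat, norm="ortho", axis=2)
```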

In Step 2, we solve the following problem:

$$\mathcal{X}^{p+1} = \arg\min_{\mathcal{X}} \ \iota_{\mathbb{S}}(\mathcal{X}) + \frac{\beta}{2}\Big\|\mathcal{X} - \Big(\mathcal{Y}^{p+1} - \frac{\mathcal{M}^p}{\beta}\Big)\Big\|_F^2, \tag{24}$$

which has the closed-form solution

$$\mathcal{X}^{p+1} = \mathcal{P}_{\Omega}(\mathcal{O}) + \mathcal{P}_{\Omega^c}\Big(\mathcal{Y}^{p+1} - \frac{\mathcal{M}^p}{\beta}\Big), \tag{25}$$

where $\Omega^c$ is the complementary set of the index set $\Omega$.

We summarize the proposed ADMM procedure in Algorithm 1. Every step of the ADMM has an explicit solution, so the proposed method can be implemented efficiently. The convergence of ADMM for convex problems with separable variables and linear constraints is guaranteed (27; 28).

Algorithm 1 ADMM for solving the proposed model (17).
Input: observed data $\mathcal{O}$, index set $\Omega$, parameter $\beta$.
Initialize: $\mathcal{X}^0 = \mathcal{P}_{\Omega}(\mathcal{O})$, $\mathcal{Y}^0 = \mathcal{X}^0$, $\mathcal{M}^0 = \mathbf{0}$, and $p = 0$.
1: while not converged and the maximum number of iterations is not reached do
2:  $\mathcal{T} = \mathcal{X}^p + \mathcal{M}^p / \beta$;
3:  $\hat{\mathcal{T}} = \mathtt{dct}(\mathcal{T},[\,],3)$;
4:  for $k = 1$ to $n_3$ do
5:   $[\hat{\mathbf{U}}^{(k)}, \hat{\mathbf{S}}^{(k)}, \hat{\mathbf{V}}^{(k)}] = \mathrm{SVD}(\hat{\mathbf{T}}^{(k)})$;
6:   $\hat{\mathbf{Y}}^{(k)} = \hat{\mathbf{U}}^{(k)} \max(\hat{\mathbf{S}}^{(k)} - 1/\beta, 0) (\hat{\mathbf{V}}^{(k)})^T$;
7:  end for
8:  $\mathcal{Y}^{p+1} = \mathtt{idct}(\hat{\mathcal{Y}},[\,],3)$;
9:  $\mathcal{X}^{p+1} = \mathcal{P}_{\Omega}(\mathcal{O}) + \mathcal{P}_{\Omega^c}(\mathcal{Y}^{p+1} - \mathcal{M}^p / \beta)$;
10:  $\mathcal{M}^{p+1} = \mathcal{M}^p + \beta(\mathcal{X}^{p+1} - \mathcal{Y}^{p+1})$; $p = p + 1$;
11: end while
Output: the recovered tensor $\mathcal{X}$.
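For completeness, a compact Python sketch of Algorithm 1 (our own, reusing the tsvt_dct sketch above; the plain orthonormal DCT is again our assumed tube transform, and the stopping tolerance below is an assumed illustrative value):

```python
import numpy as np

def lrtc_tnnc(O, mask, beta, max_iter=500, tol=1e-4):
    """ADMM for min ||X||_TNN-C s.t. P_Omega(X) = P_Omega(O); mask encodes Omega."""
    X = np.where(mask, O, 0.0)
    Y = X.copy()
    M = np.zeros_like(X)
    for _ in range(max_iter):
        Y = tsvt_dct(X + M / beta, 1.0 / beta)       # Y-step, eqs. (21)/(23)
        X_new = np.where(mask, O, Y - M / beta)      # X-step, eq. (25)
        M = M + beta * (X_new - Y)                   # multiplier update
        if np.linalg.norm(X_new - X) / max(np.linalg.norm(X), 1.0) < tol:
            X = X_new
            break
        X = X_new
    return X
```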

5 Numerical Examples

In this section, all experiments are performed in MATLAB (R2017a) on Windows 10 with an Intel(R) Core(TM) i7-7700K CPU at 4.20 GHz and 16 GB RAM.

5.1 The Computational Time

Saving time is the most important advantage of the DCT-based t-SVD. We illustrate this advantage by operating on random tensors. We generate four groups of random tensors of different sizes and perform 1000 runs to obtain the average time required. Tab. 2 shows the average time cost of performing the t-SVD and the DCT-based t-SVD, and confirms our point that the DCT-based t-SVD only needs about half the time of the t-SVD.

size             100×100×100  100×100×400  200×200×100  400×400×100
FFT              0.0041       0.0175       0.0176       0.0653
SVD after FFT    0.0818       0.3250       0.3641       1.9015
original t-SVD   0.0859       0.3425       0.3817       1.9668
DCT              0.0042       0.0150       0.0162       0.0601
SVD after DCT    0.0439       0.1649       0.1978       0.8922
new t-SVD        0.0481       0.1799       0.2140       0.9523
Table 2: The time cost of the t-SVD and the DCT-based t-SVD on random tensors of different sizes.
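In the same spirit, the claim can be checked with a rough, machine-dependent timing sketch (ours, not the authors' benchmark):

```python
import time
import numpy as np
from scipy.fft import dct

A = np.random.rand(100, 100, 100)

t0 = time.perf_counter()
A_bar = np.fft.fft(A, axis=2)
for k in range(A.shape[2]):                          # complex SVDs after the FFT
    np.linalg.svd(A_bar[:, :, k], full_matrices=False)
t_fft = time.perf_counter() - t0

t0 = time.perf_counter()
A_hat = dct(A, norm="ortho", axis=2)
for k in range(A.shape[2]):                          # real SVDs after the DCT
    np.linalg.svd(A_hat[:, :, k], full_matrices=False)
t_dct = time.perf_counter() - t0

print(f"FFT-based: {t_fft:.3f}s, DCT-based: {t_dct:.3f}s")
```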

5.2 Real Data

We conduct video and multispectral image (MSI) completion experiments and compare TNN-C with TNN-F (22). In our experiments, the quality of the recovered image is measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) averaged over all bands. The PSNR of a band is defined as

$$\mathrm{PSNR} = 10 \log_{10} \frac{n_1 n_2\, \mathrm{MAX}_{\mathbf{X}}^2}{\|\hat{\mathbf{X}} - \mathbf{X}\|_F^2},$$

where $\mathbf{X}$ is the original matrix, $\hat{\mathbf{X}}$ is the recovered matrix, and $\mathrm{MAX}_{\mathbf{X}}$ is the maximum pixel value of the original matrix $\mathbf{X}$. SSIM measures the similarity between the recovered image and the original image; it reflects the similarities in brightness, contrast, and structure of two images and is defined as

$$\mathrm{SSIM} = \frac{(2\mu_{\mathbf{X}}\mu_{\hat{\mathbf{X}}} + c_1)(2\sigma_{\mathbf{X}\hat{\mathbf{X}}} + c_2)}{(\mu_{\mathbf{X}}^2 + \mu_{\hat{\mathbf{X}}}^2 + c_1)(\sigma_{\mathbf{X}}^2 + \sigma_{\hat{\mathbf{X}}}^2 + c_2)},$$

where $\mu_{\mathbf{X}}$ and $\mu_{\hat{\mathbf{X}}}$ represent the average values of the original matrix and the estimated matrix, respectively, $\sigma_{\mathbf{X}}$ and $\sigma_{\hat{\mathbf{X}}}$ represent their standard deviations, $\sigma_{\mathbf{X}\hat{\mathbf{X}}}$ is their covariance, and $c_1$, $c_2$ are constants that stabilize the division.
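In practice these metrics can be computed band-wise with off-the-shelf routines; the following sketch (an illustration, not the authors' evaluation code) uses scikit-image:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mean_psnr_ssim(X_true, X_rec):
    """Band-wise PSNR/SSIM averaged over the third dimension."""
    psnrs, ssims = [], []
    for k in range(X_true.shape[2]):
        bt, br = X_true[:, :, k], X_rec[:, :, k]
        rng_val = bt.max() - bt.min()
        psnrs.append(peak_signal_noise_ratio(bt, br, data_range=rng_val))
        ssims.append(structural_similarity(bt, br, data_range=rng_val))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```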

For all the following experiments, we set the maximum number of iterations to 500 and use a fixed stopping tolerance on the relative change of the iterates. The algorithm only needs one parameter, the penalty parameter $\beta$, which is kept the same in all experiments.

Video completion. We test three standard videos: Akiyo, Suzie, and Salesman; Akiyo and Salesman have the same size, while Suzie is smaller. Tab. 3 shows the PSNR, SSIM, and time cost of TNN-F and TNN-C. TNN-C achieves better results and costs much less time than TNN-F in all experiments. Fig. 2 shows one selected tube. We can observe that the tube of the video recovered by TNN-C is closer to the true tube than that recovered by TNN-F, especially near the boundary. Fig. 3 shows the PSNR values of each frame of the videos recovered by TNN-F and TNN-C. We can observe that the PSNR values of TNN-C are higher than those of TNN-F, especially for the first and last few frames. This observation is consistent with our interpretation of the BCs. Fig. 4 shows frames recovered by TNN-F and TNN-C; TNN-C is visually better than TNN-F.

Figure 2: The pixel value of a selected tube of videos Akiyo, Suzie, and Salesman.
Figure 3: The PSNR values of each frame of the recovered videos Akiyo, Suzie, and Salesman obtained by TNN-F and TNN-C.
Figure 4: A frame of the recovered videos. From top to bottom: Akiyo, Suzie, and Salesman. From left to right: the original image, the masked image, the results by TNN-F, and the results by TNN-C.
video        Akiyo                Suzie                Salesman
SR    metric TNN-F     TNN-C     TNN-F     TNN-C     TNN-F     TNN-C
0.05  PSNR   32.00     32.57     25.50     26.02     30.12     30.22
      SSIM   0.934     0.941     0.681     0.700     0.895     0.897
      time   156.2     91.9      69.6      40.1      148.5     85.6
             (8.8+137.0) (6.2+70.9) (4.0+60.6) (2.9+30.6) (8.6+128.9) (6.0+65.3)
0.1   PSNR   34.20     34.75     27.73     27.93     32.13     32.29
      SSIM   0.958     0.963     0.759     0.766     0.928     0.931
      time   141.8     86.3      64.5      39.3      139.5     84.9
             (8.1+122.9) (5.8+66.6) (3.8+55.2) (2.8+30.2) (8.3+120.3) (5.8+64.9)
0.2   PSNR   37.44     38.11     30.29     30.51     35.01     35.20
      SSIM   0.979     0.983     0.838     0.844     0.960     0.961
      time   145.2     79.8      62.5      37.2      135.1     81.3
             (8.1+125.6) (5.4+60.3) (3.6+53.3) (2.8+28.6) (8.1+116.3) (5.5+61.6)
Table 3: PSNR, SSIM, and time of the two methods in video completion. The numbers in brackets are the time required for the transforms and the time required for the SVDs, respectively. The best results are highlighted in bold.

MSI completion. For MSI data, we add the spectral angle mapper (SAM) and the erreur relative globale adimensionnelle de synthèse (ERGAS), which are common quality metrics for MSI data. SAM measures spectral similarity by the angle in spectral space between the pixels of the recovered tensor and those of the reference tensor. ERGAS measures the fidelity of the recovered tensor based on the weighted sum of the mean squared errors (MSE) of all bands. The lower the values of these two indicators, the better the results. The MSI data are from the CAVE database, with 31 bands covering wavelengths from 400 nm to 700 nm at an interval of 10 nm. We display one selected tube in Fig. 5. We can observe that the tube of the tensor recovered by TNN-C is closer to the true tube than that recovered by TNN-F, especially near the boundary. Moreover, we plot the PSNR values of the tensors recovered by TNN-C and TNN-F in Fig. 6. In general, we can observe that the PSNR values of TNN-C are higher than those of TNN-F, especially for the first and last few bands. These observations verify that TNN-C can produce more natural results than TNN-F because more reasonable BCs are implied in TNN-C. In Fig. 7, we show the first band of the testing data recovered by the two methods. Obviously, TNN-C achieves better visual results than TNN-F. Tabs. 4-5 give more detailed results for the other testing images. We can see that TNN-C not only performs better in PSNR, SSIM, SAM, and ERGAS, but also significantly reduces the time cost compared with TNN-F.

Figure 5: The pixel values of a random tube of MSI Pompoms, Stuffed toys, Foods, and Peppers.
Figure 6: The PSNR values of each band of the recovered MSIs Pompoms, Stuffed toys, Foods, and Peppers obtained by TNN-F and TNN-C.
Figure 7: The first band of the recovered MSI images. From top to bottom: Pompoms, Stuffed toys, Foods, and Peppers. From left to right: the original image, the masked image, the results by TNN-F, and the results by TNN-C.
MSI          Pompoms              Stuffed toys
SR    metric TNN-F     TNN-C     TNN-F     TNN-C
0.05  PSNR   26.56     29.00     28.44     31.84
      SSIM   0.818     0.876     0.892     0.941
      SAM    0.22      0.16      0.30      0.22
      ERGAS  10.28     8.00      9.80      6.74
      time   309.4     161.0     320.6     183.4
             (11.0+285.7) (8.9+135.3) (11.4+296.0) (10.3+153.4)
0.1   PSNR   31.26     33.98     33.37     36.63
      SSIM   0.922     0.952     0.955     0.978
      SAM    0.13      0.09      0.19      0.14
      ERGAS  5.96      4.52      5.53      3.84
      time   271.7     171.1     320.2     164.5
             (9.6+251.5) (9.6+143.9) (11.2+295.8) (9.2+138.1)
0.2   PSNR   37.13     39.55     39.14     41.94
      SSIM   0.976     0.986     0.986     0.994
      SAM    0.07      0.05      0.11      0.09
      ERGAS  3.04      2.39      2.82      2.06
      time   308.1     184.0     278.9     165.8
             (10.9+284.4) (10.2+154.2) (10.2+256.4) (9.2+138.7)
Table 4: PSNR, SSIM, SAM, ERGAS, and time of the two methods in MSI completion. The numbers in brackets are the time required for the transforms and the time required for the SVDs, respectively. The best results are highlighted in bold.
MSI          Foods                Peppers
SR    metric TNN-F     TNN-C     TNN-F     TNN-C
0.05  PSNR   31.48     33.33     34.89     36.87
      SSIM   0.904     0.932     0.946     0.965
      SAM    0.27      0.21      0.21      0.15
      ERGAS  9.52      8.01      6.31      5.21
      time   281.0     164.8     284.9     155.0
             (10.3+258.7) (9.2+137.9) (10.4+255.2) (8.8+128.7)
0.1   PSNR   35.31     37.73     39.25     41.27
      SSIM   0.957     0.974     0.980     0.989
      SAM    0.18      0.13      0.13      0.09
      ERGAS  6.14      4.91      3.86      3.18
      time   291.4     167.7     278.3     146.8
             (10.7+267.9) (9.4+140.2) (10.0+256.6) (8.6+124.9)
0.2   PSNR   43.13     40.30     44.30     46.22
      SSIM   0.993     0.986     0.995     0.997
      SAM    0.11      0.08      0.07      0.05
      ERGAS  3.49      2.68      2.19      1.82
      time   289.7     164.0     286.2     153.6
             (10.6+266.7) (9.3+137.4) (10.4+264.2) (9.0+138.5)
Table 5: PSNR, SSIM, SAM, ERGAS, and time of the two methods in MSI completion. The numbers in brackets are the time required for the transforms and the time required for the SVDs, respectively. The best results are highlighted in bold.

Parameter analysis. We analyze the robustness of TNN-C with respect to its parameter using the MSI Stuffed toys. TNN-C only requires one parameter, $\beta$. As shown in Fig. 8, different values of $\beta$ lead to nearly the same final PSNR value, but $\beta$ affects the convergence speed. After testing, we choose one fixed value of $\beta$ for all experiments.

Figure 8: The PSNR values with respect to the iteration number for different values of the parameter $\beta$.

6 Concluding Remarks

We have introduced the DCT as an alternative to the DFT in the framework of the t-SVD. Based on the resulting t-SVD, the DCT-based tensor nuclear norm (TNN-C) is suggested for the low-rank tensor completion problem. We have developed an efficient alternating direction method of multipliers (ADMM) to tackle the corresponding model. Numerical experiments are reported to demonstrate the superiority of the DCT-based t-SVD. In future research, tensor singular value decompositions based on other transforms can be considered and studied; we expect that such decompositions can deal with data tensors from specific applications.

Acknowledgment

The research is supported by NSFC (61772003) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132), the HKRGC GRF 1202715, 12306616, 12200317 and HKBU RC-ICRS/16-17/03.

References


  • (1) M. Bertalmio, G. Sapiro, V. Caselles, C. Ballester, Image inpainting, Proceedings of International Conference on Computer Graphics and Interactive Techniques (2000) 417–424 (2000).
  • (2) N. Komodakis, Image completion using global optimization, Proceedings of Computer Vision and Pattern Recognition (2006) 442–452 (2006).

  • (3) J. Liu, P. Musialski, P. Wonka, J.-P. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (1) (2013) 208–220 (2013).
  • (4) T. Korah, C. Rasmussen, Spatiotemporal inpainting for recovering texture maps of occluded building facades, IEEE Transactions on Image Processing 16 (9) (2007) 2262–2271 (2007).
  • (5) S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, T. Q. Nguyen, An augmented Lagrangian method for total variation video restoration, IEEE Transactions on Image Processing 20 (11) (2011) 3097–3111 (2011).
  • (6) T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, Y. Wang, A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors, Proceedings of Computer Vision and Pattern Recognition (2017) 2818–2827 (2017).
  • (7) F. Li, M. K. Ng, R. J. Plemmons, Coupled segmentation and denoising/deblurring models for hyperspectral material identification, Numerical Linear Algebra With Applications 19 (1) (2012) 153–173 (2012).
  • (8) X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, R. J. Plemmons, Deblurring and sparse unmixing for hyperspectral images, IEEE Transactions on Geoscience and Remote Sensing 51 (7) (2013) 4045–4058 (2013).
  • (9) N. Li, B.-X. Li, Tensor completion for on-board compression of hyperspectral images, Proceedings of IEEE International Conference on Image Processing (2010) 517–520 (2010).
  • (10) Z.-M. Xing, M.-Y. Zhou, A. Castrodad, G. Sapiro, L. Carin, Dictionary learning for noisy and incomplete hyperspectral images, SIAM Journal on Imaging Sciences 5 (1) (2012) 33–56 (2012).
  • (11) J.-T. Sun, H.-J. Zeng, H. Liu, Y.-C. Lu, Z. Chen, CubeSVD: a novel approach to personalized web search, Proceedings of International World Wide Web Conferences (2005) 382–390 (2005).
  • (12) T. G. Kolda, B. W. Bader, J. P. Kenny, Higher-order web link analysis using multilinear algebra, Proceedings of IEEE International Conference on Data Mining (2005) 242–249 (2005).
  • (13) N. Varghees, M. Manikandan, R. G. John, Adaptive MRI image denoising using total-variation and local noise estimation, Proceedings of IEEE International Conference on Advances in Engineering, Science and Management (2012) 506–511 (2012).
  • (14) N. Kreimer, M. D. Sacchi, A tensor higher-order singular value decomposition for prestack seismic data noise reduction and interpolation, Geophysics 77 (3) (2012) 113–122 (2012).

  • (15) R. A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis, UCLA Working Papers in Phonetics (1970).
  • (16) L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31 (3) (1966) 279–311 (1966).
  • (17) M. E. Kilmer, C. D. M. Martin, Factorization strategies for third-order tensors, Linear Algebra and its Applications 435 (3) (2011) 641–658 (2011).
  • (18) C. D. Martin, R. Shafer, B. LaRue, An order-$p$ tensor factorization with applications in imaging, SIAM Journal on Scientific Computing 35 (2013) 474–490 (2013).
  • (19) M. E. Kilmer, K. S. Braman, N. Hao, R. C. Hoover, Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging, SIAM Journal on Matrix Analysis and Applications 34 (1) (2013) 148–172 (2013).
  • (20) M. K. Ng, R. H. Chan, W. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM Journal on Scientific Computing 21 (3) (1999) 851–866 (1999).
  • (21) Z.-M. Zhang, G. Ely, S. Aeron, H. Ning, M. E. Kilmer, Novel methods for multilinear data completion and de-noising based on tensor-SVD, Proceedings of Computer Vision and Pattern Recognition (2014) 3842–3849 (2014).
  • (22) C.-Y. Lu, J.-S. Feng, Y.-D. Chen, W. Liu, Z.-C. Lin, S.-C. Yan, Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization, Proceedings of Computer Vision and Pattern Recognition (2016) 5249–5257 (2016).
  • (23) O. Semerci, H. Ning, M. E. Kilmer, E. L. Miller, Tensor-based formulation and nuclear norm regularization for multienergy computed tomography, IEEE Transactions on Image Processing 23 (4) (2014) 1678–1693 (2014).
  • (24) S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. 3 (1) (2011) 1–122 (Jan. 2011).
  • (25) Z.-C. Lin, M.-M. Chen, Y. Ma, L.-Q. Wu, The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, ArXiv e-prints (Sep. 2010). arXiv:1009.5055.
  • (26) B.-S. He, M. Tao, X.-M. Yuan, Alternating direction method with Gaussian back substitution for separable convex programming, SIAM Journal on Optimization 22 (2) (2012) 313–340 (2012).
  • (27) M. V. Afonso, J. M. Bioucasdias, M. A. T. Figueiredo, An augmented lagrangian approach to the constrained optimization formulation of imaging inverse problems, IEEE Transactions on Image Processing 20 (3) (2011) 681–695 (2011).
  • (28) D.-R. Han, X.-M. Yuan, A note on the alternating direction method of multipliers, Journal of Optimization Theory and Applications 155 (1) (2012) 227–238 (2012).