# A Fast Algorithm for Cosine Transform Based Tensor Singular Value Decomposition

Recently, there has been much research into the tensor singular value decomposition (t-SVD) based on the discrete Fourier transform (DFT) matrix. The main aim of this paper is to propose and study a tensor singular value decomposition based on the discrete cosine transform (DCT) matrix. The advantages of using the DCT are that (i) no complex arithmetic is involved in the cosine transform based tensor singular value decomposition, which reduces the computational cost; (ii) the intrinsic reflexive boundary condition along the tubes in the third dimension of the tensor is exploited, so its performance is better than that obtained with the periodic boundary condition implied by the DFT. We show that the tensor product of two tensors under the DCT is equivalent to the multiplication of a block Toeplitz-plus-Hankel matrix with a block vector. Numerical examples of low-rank tensor completion further illustrate that the DCT-based method is about two times faster than the DFT-based one, and that the errors of video and multispectral image completion with the DCT are smaller than those with the DFT.


## 1 Introduction

A tensor is a multi-dimensional array of numbers and a generalization of a matrix. Compared with a "flat" matrix, a tensor provides a richer and more natural representation for many kinds of data. In this paper, we focus on third-order tensors, which look like a magic cube. This format of data is widely used in color image and gray-scale video inpainting (1; 2; 3; 4; 5; 6), hyperspectral image (HSI) data recovery (7; 8; 9; 10), personalized web search (11), high-order web link analysis (12), magnetic resonance imaging (MRI) data recovery (13), and seismic data reconstruction (14).

Like matrix decompositions, tensor decompositions are important multilinear algebra tools, and many different ones exist. The CANDECOMP/PARAFAC (CP) decomposition (15) and the Tucker decomposition (16) are the two most well-known. The CP decomposition can be considered the higher-order generalization of the matrix singular value decomposition (SVD): it decomposes a tensor into a sum of rank-one tensors, where a third-order rank-one tensor is the outer product of three vectors. The CP-rank of a tensor is defined as the minimum number of rank-one tensors whose sum generates the original tensor, an analog of the definition of matrix rank. The Tucker decomposition is the higher-order generalization of principal component analysis (PCA): it decomposes a tensor into a core tensor multiplied by a matrix along each mode. The Tucker rank is a vector whose $k$-th element is the rank of the mode-$k$ unfolding matrix.

In recent years, Kilmer and Martin (17; 18; 19) proposed a third-order tensor decomposition called the tensor singular value decomposition (t-SVD). This decomposition strategy is based on the definition of the tensor product (see Section 2). After performing a one-dimensional discrete Fourier transform (DFT) along the third dimension of the tensor, this tensor product makes the tensor decomposition an analog of matrix decomposition, avoiding the loss of structural information caused by matricization of the tensor. However, because a one-dimensional DFT is performed along the third dimension, the resulting tensor is complex. The complex arithmetic increases the computational cost and is not essential. Why not use another transform instead of the DFT to avoid this disadvantage? The discrete cosine transform (DCT) (20), which expresses a finite sequence as a sum of cosine functions, is a natural first alternative.

The DCT produces real output for real input. This feature roughly halves the amount of data processed in the t-SVD and thus saves a lot of time. There is another difference: the DFT implies periodic boundary conditions (BCs), while the DCT implies reflexive BCs, which yield a continuous extension at the boundaries (20). When the signal satisfies reflexive BCs, as real data often does, the new DCT-based t-SVD can achieve better results than the DFT-based one. We give the theoretical derivation of the DCT-based t-SVD and verify its superiority over the DFT-based t-SVD.

The rest of this paper is organized as follows. In Section 2, we introduce related notations and the original DFT-based t-SVD. In Section 3, we give the theoretical derivation of the new DCT-based t-SVD. Based on the new t-SVD, we introduce the new tensor nuclear norm in Section 4. In Section 5, we conduct extensive experiments to demonstrate the effectiveness of the proposed method. In Section 6, we give some concluding remarks.

## 2 Notations and Preliminaries

In this section, we introduce the basic notations and give the definitions related to the t-SVD. We use non-bold lowercase letters for scalars, e.g., $x$, boldface lowercase letters for vectors, e.g., $\mathbf{x}$, boldface capital letters for matrices, e.g., $\mathbf{X}$, and boldface calligraphic letters for tensors, e.g., $\mathcal{X}$. $\mathbb{R}$ and $\mathbb{C}$ represent the fields of real and complex numbers, respectively. For a third-order tensor $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$, we use the MATLAB notations $\mathcal{X}(i,:,:)$, $\mathcal{X}(:,j,:)$, and $\mathcal{X}(:,:,k)$ to denote the horizontal, lateral, and frontal slices, respectively, and $\mathcal{X}(:,j,k)$, $\mathcal{X}(i,:,k)$, and $\mathcal{X}(i,j,:)$ to denote the columns, rows, and tubes, respectively. For convenience, we write $\mathbf{X}^{(k)}$ for the $k$-th frontal slice $\mathcal{X}(:,:,k)$, and $\mathcal{X}(i,j,:)$ for the $(i,j)$-th tube. Both $x_{ijk}$ and $\mathcal{X}(i,j,k)$ represent the $(i,j,k)$-th element. The Frobenius norm of $\mathcal{X}$ is defined as $\|\mathcal{X}\|_F:=\sqrt{\sum_{i,j,k}|x_{ijk}|^2}$. It is easy to see that $\|\mathcal{X}\|_F^2=\sum_{k=1}^{m_3}\|\mathbf{X}^{(k)}\|_F^2$.

Next, we introduce some definitions closely related to the t-SVD. We use $\widetilde{\mathcal{X}}$ to represent the discrete Fourier transform of $\mathcal{X}$ along each tube, i.e., $\widetilde{\mathcal{X}}=\mathrm{fft}(\mathcal{X},[\,],3)$. The block circulant matrix (18; 19) is defined as

$$\mathrm{bcirc}(\mathcal{X}):=\begin{bmatrix}\mathbf{X}^{(1)} & \mathbf{X}^{(m_3)} & \cdots & \mathbf{X}^{(2)}\\ \mathbf{X}^{(2)} & \mathbf{X}^{(1)} & \cdots & \mathbf{X}^{(3)}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{X}^{(m_3)} & \mathbf{X}^{(m_3-1)} & \cdots & \mathbf{X}^{(1)}\end{bmatrix}. \quad (1)$$

The block diagonal matrix and the corresponding inverse operator (18; 19) are defined as

$$\mathrm{bdiag}(\mathcal{X}):=\begin{bmatrix}\mathbf{X}^{(1)} & & &\\ & \mathbf{X}^{(2)} & &\\ & & \ddots &\\ & & & \mathbf{X}^{(m_3)}\end{bmatrix}, \qquad \mathrm{unbdiag}(\mathrm{bdiag}(\mathcal{X}))=\mathcal{X}. \quad (2)$$

The unfold and fold operators in the t-SVD (18; 19) are defined as

$$\mathrm{unfold}(\mathcal{X}):=\begin{bmatrix}\mathbf{X}^{(1)}\\ \mathbf{X}^{(2)}\\ \vdots\\ \mathbf{X}^{(m_3)}\end{bmatrix}, \qquad \mathrm{fold}(\mathrm{unfold}(\mathcal{X}))=\mathcal{X}. \quad (3)$$

An important point is that the block circulant matrix can be block diagonalized.

###### Theorem ((17))
$$\mathrm{bdiag}(\widetilde{\mathcal{X}})=(\mathbf{F}_{m_3}\otimes\mathbf{I}_{m_1})\,\mathrm{bcirc}(\mathcal{X})\,(\mathbf{F}_{m_3}^{H}\otimes\mathbf{I}_{m_2}), \quad (4)$$

where $\otimes$ denotes the Kronecker product, $\mathbf{F}_{m_3}$ is the $m_3\times m_3$ DFT matrix, and $\mathbf{I}_{m_1}$ is the $m_1\times m_1$ identity matrix.

###### Definition (t-product (19))

Given $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$ and $\mathcal{Y}\in\mathbb{R}^{m_2\times m_4\times m_3}$, the t-product $\mathcal{X}*\mathcal{Y}$ is a third-order tensor of size $m_1\times m_4\times m_3$:

$$\mathcal{Z}=\mathcal{X}*\mathcal{Y}:=\mathrm{fold}\big(\mathrm{bcirc}(\mathcal{X})\,\mathrm{unfold}(\mathcal{Y})\big). \quad (5)$$

This definition is the core of the t-SVD. It is like a one-dimensional circular convolution of two vectors under periodic BCs, but with the frontal slices of the tensors playing the role of vector elements. With Theorem 2, equation (5) can be rewritten as

$$\widetilde{\mathcal{Z}}=\mathrm{fold}\big(\mathrm{bdiag}(\widetilde{\mathcal{X}})\,((\mathbf{F}_{m_3}\otimes\mathbf{I}_{m_2})\,\mathrm{unfold}(\mathcal{Y}))\big)=\mathrm{fold}\big(\mathrm{bdiag}(\widetilde{\mathcal{X}})\,\mathrm{unfold}(\widetilde{\mathcal{Y}})\big)=\mathrm{unbdiag}\big(\mathrm{bdiag}(\widetilde{\mathcal{X}})\,\mathrm{bdiag}(\widetilde{\mathcal{Y}})\big). \quad (6)$$

Equation (6) means that the t-product in the spatial domain corresponds to the matrix multiplication of the frontal slices in the Fourier domain, which greatly simplifies the process of the algorithm.
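Equation (6) can be checked numerically. The following sketch (Python with NumPy; our own illustration with hypothetical function names, not the authors' code) computes the t-product both through the block circulant matrix of (5) and through slice-wise products in the Fourier domain:

```python
import numpy as np

def bcirc(X):
    """Block circulant matrix (1) of an m1 x m2 x m3 tensor with frontal slices X[:, :, k]."""
    m1, m2, m3 = X.shape
    out = np.zeros((m1 * m3, m2 * m3))
    for i in range(m3):
        for j in range(m3):
            out[i * m1:(i + 1) * m1, j * m2:(j + 1) * m2] = X[:, :, (i - j) % m3]
    return out

def unfold(Y):
    """Stack the frontal slices vertically, eq. (3)."""
    m2, m4, m3 = Y.shape
    return Y.transpose(2, 0, 1).reshape(m3 * m2, m4)

def fold(M, m1, m3):
    """Inverse of unfold."""
    return M.reshape(m3, m1, -1).transpose(1, 2, 0)

def tprod_spatial(X, Y):
    """t-product via the block circulant matrix, eq. (5)."""
    return fold(bcirc(X) @ unfold(Y), X.shape[0], X.shape[2])

def tprod_fft(X, Y):
    """t-product via slice-wise matrix products in the Fourier domain, eq. (6)."""
    Zf = np.einsum('ijk,jlk->ilk', np.fft.fft(X, axis=2), np.fft.fft(Y, axis=2))
    return np.real(np.fft.ifft(Zf, axis=2))
```

For random tensors of compatible sizes, the two routines agree to machine precision.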

###### Definition (identity tensor (19))

The identity tensor $\mathcal{I}\in\mathbb{R}^{m_1\times m_1\times m_3}$ is the tensor whose first frontal slice is the $m_1\times m_1$ identity matrix and whose other frontal slices are all zeros.

###### Definition (orthogonal tensor (19))

A tensor $\mathcal{Q}\in\mathbb{C}^{m_1\times m_1\times m_3}$ is orthogonal if it satisfies $\mathcal{Q}^{H}*\mathcal{Q}=\mathcal{Q}*\mathcal{Q}^{H}=\mathcal{I}$, where $\mathcal{Q}^{H}$ is the tensor conjugate transpose of $\mathcal{Q}$, obtained by conjugate transposing each frontal slice of $\mathcal{Q}$.

###### Definition (f-diagonal tensor (19))

A tensor is called f-diagonal if each of its frontal slices is a diagonal matrix.

###### Theorem (t-SVD (19; 17))

Given a tensor $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$, the t-SVD of $\mathcal{X}$ is given by

$$\mathcal{X}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{H}, \quad (7)$$

where $\mathcal{U}\in\mathbb{R}^{m_1\times m_1\times m_3}$ and $\mathcal{V}\in\mathbb{R}^{m_2\times m_2\times m_3}$ are orthogonal tensors, and $\mathcal{S}\in\mathbb{R}^{m_1\times m_2\times m_3}$ is an f-diagonal tensor.

###### Definition (tensor multi-rank and tubal rank (21))

Given $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$, its multi-rank is a vector whose $k$-th element is the rank of the $k$-th frontal slice of $\widetilde{\mathcal{X}}$, i.e., $\mathrm{rank}_m(\mathcal{X})_k=\mathrm{rank}(\widetilde{\mathbf{X}}^{(k)})$. Its tubal rank is defined as the number of nonzero singular tubes, where the singular tubes of $\mathcal{X}$ are the nonzero tubes $\mathcal{S}(i,i,:)$ of $\mathcal{S}$.

The tensor tubal rank is actually the largest element of the multi-rank.

###### Definition (tensor nuclear norm (22; 23))

Given $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$, based on the tensor multi-rank, the tensor nuclear norm (TNN) of $\mathcal{X}$ is defined as

$$\|\mathcal{X}\|_*:=\frac{1}{m_3}\sum_{k=1}^{m_3}\big\|\widetilde{\mathbf{X}}^{(k)}\big\|_*. \quad (8)$$

To avoid confusion with the new definition of the TNN proposed later, we call this definition TNN-F in this paper.

The computation of the t-SVD of an $m_1\times m_2\times m_3$ tensor needs two steps. The first step is to perform the DFT by the fast Fourier transform (FFT) along each tube; its time complexity is $O(m_1 m_2 m_3\log m_3)$. After the DFT, the obtained tensor is complex and can be split into a real part and an imaginary part. Computing the SVD of each frontal slice of this complex tensor is then roughly as expensive as performing the slice-wise SVDs on the real part and the imaginary part separately. The time complexity of the second step is $O(m_1 m_2 m_3\min(m_1,m_2))$, which dominates the cost of the first step.
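The two steps can be sketched in code as follows (Python with NumPy; our own illustration, not the authors' implementation). Note that after the FFT every frontal slice is complex, so complex SVDs are required:

```python
import numpy as np

def tsvd_fft(X):
    """DFT-based t-SVD: FFT along each tube, then one SVD per (complex) frontal slice."""
    m1, m2, m3 = X.shape
    Xf = np.fft.fft(X, axis=2)               # step 1: O(m1 m2 m3 log m3)
    Uf = np.zeros((m1, m1, m3), dtype=complex)
    Sf = np.zeros((m1, m2, m3), dtype=complex)
    Vf = np.zeros((m2, m2, m3), dtype=complex)
    for k in range(m3):                      # step 2: the dominant cost, on complex slices
        U, s, Vh = np.linalg.svd(Xf[:, :, k])
        Uf[:, :, k] = U
        Vf[:, :, k] = Vh.conj().T
        Sf[:len(s), :len(s), k] = np.diag(s)
    # back to the spatial domain; S is real because singular values of
    # conjugate frequency slices coincide
    return (np.fft.ifft(Uf, axis=2),
            np.real(np.fft.ifft(Sf, axis=2)),
            np.fft.ifft(Vf, axis=2))
```

Multiplying the Fourier-domain slices of the three factors back together recovers the original tensor.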

## 3 Cosine Transform Based Tensor Singular Value Decomposition

We discuss the DCT-based t-SVD and the resulting structure in this section. Since block circulant matrices can be block diagonalized by the DFT, the DFT-based t-SVD can be efficiently implemented via the fast Fourier transform (FFT). We will show that the corresponding structure in the DCT-based t-SVD, a block Toeplitz-plus-Hankel matrix, can be block diagonalized by the DCT.

We define the shift $\sigma(\mathcal{A})$ of a tensor $\mathcal{A}\in\mathbb{R}^{m_1\times m_2\times m_3}$ as the tensor whose frontal slices are $\sigma(\mathcal{A})^{(k)}=\mathbf{A}^{(k+1)}$ for $k=1,\dots,m_3-1$ and $\sigma(\mathcal{A})^{(m_3)}=\mathbf{O}$. It is easy to prove that any tensor $\mathcal{X}$ can be uniquely decomposed as $\mathcal{X}=\mathcal{A}+\sigma(\mathcal{A})$. We use $\overline{\mathcal{X}}$ to represent the DCT along each tube of $\mathcal{X}$, i.e., $\overline{\mathcal{X}}=\mathrm{dct}(\mathcal{X},[\,],3)$. We define the block Toeplitz matrix of $\mathcal{A}$ as

$$\mathrm{bt}(\mathcal{A}):=\begin{bmatrix}\mathbf{A}^{(1)} & \mathbf{A}^{(2)} & \cdots & \mathbf{A}^{(m_3-1)} & \mathbf{A}^{(m_3)}\\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(m_3-2)} & \mathbf{A}^{(m_3-1)}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ \mathbf{A}^{(m_3-1)} & \mathbf{A}^{(m_3-2)} & \cdots & \mathbf{A}^{(1)} & \mathbf{A}^{(2)}\\ \mathbf{A}^{(m_3)} & \mathbf{A}^{(m_3-1)} & \cdots & \mathbf{A}^{(2)} & \mathbf{A}^{(1)}\end{bmatrix}. \quad (9)$$

The block Hankel matrix is defined as

$$\mathrm{bh}(\mathcal{A}):=\begin{bmatrix}\mathbf{A}^{(2)} & \mathbf{A}^{(3)} & \cdots & \mathbf{A}^{(m_3)} & \mathbf{O}\\ \mathbf{A}^{(3)} & \mathbf{A}^{(4)} & \cdots & \mathbf{O} & \mathbf{A}^{(m_3)}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ \mathbf{A}^{(m_3)} & \mathbf{O} & \cdots & \mathbf{A}^{(4)} & \mathbf{A}^{(3)}\\ \mathbf{O} & \mathbf{A}^{(m_3)} & \cdots & \mathbf{A}^{(3)} & \mathbf{A}^{(2)}\end{bmatrix}. \quad (10)$$

The block Toeplitz-plus-Hankel matrix of $\mathcal{A}$ is defined as

$$\mathrm{btph}(\mathcal{A}):=\mathrm{bt}(\mathcal{A})+\mathrm{bh}(\mathcal{A}). \quad (11)$$

The block Toeplitz-plus-Hankel matrix can be block diagonalized. The following theorem can be established similarly to (20).

###### Theorem
$$\mathrm{bdiag}(\overline{\mathcal{X}})=(\mathbf{C}_{m_3}\otimes\mathbf{I}_{m_1})\,\mathrm{btph}(\mathcal{A})\,(\mathbf{C}_{m_3}^{T}\otimes\mathbf{I}_{m_2}), \quad (12)$$

where $\mathcal{X}=\mathcal{A}+\sigma(\mathcal{A})$, $\otimes$ denotes the Kronecker product, and $\mathbf{C}_{m_3}$ is the $m_3\times m_3$ DCT matrix.

The proof of Theorem 3 can be obtained by a similar argument to that in (20). We briefly illustrate this theorem with an example.

###### Example

Let the frontal slices of $\mathcal{X}\in\mathbb{R}^{2\times 2\times 2}$ be

$$\mathbf{X}^{(1)}=\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}, \qquad \mathbf{X}^{(2)}=\begin{bmatrix}5 & 6\\ 7 & 8\end{bmatrix}.$$

From $\mathcal{X}=\mathcal{A}+\sigma(\mathcal{A})$, the component $\mathcal{A}$ has frontal slices

$$\mathbf{A}^{(1)}=\begin{bmatrix}-4 & -4\\ -4 & -4\end{bmatrix}, \qquad \mathbf{A}^{(2)}=\begin{bmatrix}5 & 6\\ 7 & 8\end{bmatrix}.$$

The block Toeplitz matrix is

$$\mathrm{bt}(\mathcal{A})=\begin{bmatrix}\mathbf{A}^{(1)} & \mathbf{A}^{(2)}\\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)}\end{bmatrix}=\begin{bmatrix}-4 & -4 & 5 & 6\\ -4 & -4 & 7 & 8\\ 5 & 6 & -4 & -4\\ 7 & 8 & -4 & -4\end{bmatrix},$$

and the block Hankel matrix is

$$\mathrm{bh}(\mathcal{A})=\begin{bmatrix}\mathbf{A}^{(2)} & \mathbf{O}\\ \mathbf{O} & \mathbf{A}^{(2)}\end{bmatrix}=\begin{bmatrix}5 & 6 & 0 & 0\\ 7 & 8 & 0 & 0\\ 0 & 0 & 5 & 6\\ 0 & 0 & 7 & 8\end{bmatrix}.$$

Then the block Toeplitz-plus-Hankel matrix is

$$\mathrm{btph}(\mathcal{A})=\mathrm{bt}(\mathcal{A})+\mathrm{bh}(\mathcal{A})=\begin{bmatrix}1 & 2 & 5 & 6\\ 3 & 4 & 7 & 8\\ 5 & 6 & 1 & 2\\ 7 & 8 & 3 & 4\end{bmatrix}.$$

By using the stride permutation $\mathbf{P}$, we get

$$\mathbf{P}\,\mathrm{btph}(\mathcal{A})\,\mathbf{P}=\begin{bmatrix}1 & 5 & 2 & 6\\ 5 & 1 & 6 & 2\\ 3 & 7 & 4 & 8\\ 7 & 3 & 8 & 4\end{bmatrix}=\begin{bmatrix}\mathbf{A} & \mathbf{B}\\ \mathbf{C} & \mathbf{D}\end{bmatrix},$$

where $\mathbf{P}$ is the symmetric stride permutation matrix ($\mathbf{P}^2=\mathbf{I}$), and $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ are Toeplitz-plus-Hankel matrices. So we have

$$(\mathbf{C}_2\otimes\mathbf{I}_2)\,\mathrm{btph}(\mathcal{A})\,(\mathbf{C}_2^{T}\otimes\mathbf{I}_2)=(\mathbf{C}_2\otimes\mathbf{I}_2)\mathbf{P}\mathbf{P}\,\mathrm{btph}(\mathcal{A})\,\mathbf{P}\mathbf{P}(\mathbf{C}_2^{T}\otimes\mathbf{I}_2),$$

where $\mathbf{C}_2$ is the $2\times 2$ DCT matrix. In this equation, it is easy to see that

$$\mathbf{P}(\mathbf{C}_2\otimes\mathbf{I}_2)\mathbf{P}=\begin{bmatrix}\mathbf{C}_2 & \mathbf{O}\\ \mathbf{O} & \mathbf{C}_2\end{bmatrix}.$$

Similarly,

$$\mathbf{P}(\mathbf{C}_2^{T}\otimes\mathbf{I}_2)\mathbf{P}=\begin{bmatrix}\mathbf{C}_2^{T} & \mathbf{O}\\ \mathbf{O} & \mathbf{C}_2^{T}\end{bmatrix}.$$

Hence, we have

$$(\mathbf{C}_2\otimes\mathbf{I}_2)\,\mathrm{btph}(\mathcal{A})\,(\mathbf{C}_2^{T}\otimes\mathbf{I}_2)=\mathbf{P}\begin{bmatrix}\mathbf{C}_2 & \mathbf{O}\\ \mathbf{O} & \mathbf{C}_2\end{bmatrix}\begin{bmatrix}\mathbf{A} & \mathbf{B}\\ \mathbf{C} & \mathbf{D}\end{bmatrix}\begin{bmatrix}\mathbf{C}_2^{T} & \mathbf{O}\\ \mathbf{O} & \mathbf{C}_2^{T}\end{bmatrix}\mathbf{P}=\mathbf{P}\begin{bmatrix}\mathbf{C}_2\mathbf{A}\mathbf{C}_2^{T} & \mathbf{C}_2\mathbf{B}\mathbf{C}_2^{T}\\ \mathbf{C}_2\mathbf{C}\mathbf{C}_2^{T} & \mathbf{C}_2\mathbf{D}\mathbf{C}_2^{T}\end{bmatrix}\mathbf{P}=\begin{bmatrix}6 & 8 & 0 & 0\\ 10 & 12 & 0 & 0\\ 0 & 0 & -4 & -4\\ 0 & 0 & -4 & -4\end{bmatrix}.$$

Now, it is easy to verify that

$$\mathrm{bdiag}(\overline{\mathcal{X}})=\mathrm{bdiag}\big(\mathrm{dct}(\mathcal{A}+\sigma(\mathcal{A}),[\,],3)\big)=(\mathbf{C}_2\otimes\mathbf{I}_2)\,\mathrm{btph}(\mathcal{A})\,(\mathbf{C}_2^{T}\otimes\mathbf{I}_2).$$
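The example, and Theorem 3 more generally, can be checked numerically. The sketch below (Python with NumPy/SciPy; our own illustration, assuming the orthonormal DCT-II matrix) builds $\mathcal{A}$ from $\mathcal{X}=\mathcal{A}+\sigma(\mathcal{A})$ by back-substitution, forms $\mathrm{btph}(\mathcal{A})$, and reproduces the block diagonal matrix of the example:

```python
import numpy as np
from scipy.fft import dct

def shift_decompose(X):
    """Solve X = A + sigma(A), where sigma(A)^(k) = A^(k+1) and sigma(A)^(m3) = O."""
    A = np.zeros_like(X)
    m3 = X.shape[2]
    A[:, :, m3 - 1] = X[:, :, m3 - 1]
    for k in range(m3 - 2, -1, -1):          # back-substitution from the last slice
        A[:, :, k] = X[:, :, k] - A[:, :, k + 1]
    return A

def btph(A):
    """Block Toeplitz-plus-Hankel matrix bt(A) + bh(A) of eqs. (9)-(11)."""
    m1, m2, m3 = A.shape
    M = np.zeros((m1 * m3, m2 * m3))
    for i in range(1, m3 + 1):
        for j in range(1, m3 + 1):
            blk = A[:, :, abs(i - j)].copy()     # Toeplitz part: A^(|i-j|+1)
            if i + j <= m3:                      # Hankel part: A^(i+j), O when i+j = m3+1
                blk += A[:, :, i + j - 1]
            elif i + j >= m3 + 2:                # Hankel part: A^(2 m3 + 2 - i - j)
                blk += A[:, :, 2 * m3 + 1 - i - j]
            M[(i - 1) * m1:i * m1, (j - 1) * m2:j * m2] = blk
    return M

# reproduce the 2 x 2 x 2 example above
X = np.zeros((2, 2, 2))
X[:, :, 0] = [[1, 2], [3, 4]]
X[:, :, 1] = [[5, 6], [7, 8]]
A = shift_decompose(X)
T = btph(A)

# orthonormal DCT-II matrix C_{m3}: its columns are the DCTs of the unit vectors
C = dct(np.eye(2), axis=0, norm='ortho')
D = np.kron(C, np.eye(2)) @ T @ np.kron(C.T, np.eye(2))
```

Here `D` equals the block diagonal matrix of the example, with blocks `[[6, 8], [10, 12]]` and `[[-4, -4], [-4, -4]]`; for random tensors the off-diagonal blocks vanish as well.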

###### Definition (DCT-based t-product)

Given $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$ and $\mathcal{Y}\in\mathbb{R}^{m_2\times m_4\times m_3}$, the DCT-based t-product $\mathcal{X}*\mathcal{Y}$ is a third-order tensor of size $m_1\times m_4\times m_3$:

$$\mathcal{Z}=\mathcal{X}*\mathcal{Y}:=\mathrm{fold}\big(\mathrm{btph}(\mathcal{A})\,\mathrm{unfold}(\mathcal{Y})\big), \quad (13)$$

where $\mathcal{X}=\mathcal{A}+\sigma(\mathcal{A})$.

Equation (13) can be rewritten as

$$\overline{\mathcal{Z}}=\mathrm{fold}\big(\mathrm{bdiag}(\overline{\mathcal{X}})\,((\mathbf{C}_{m_3}\otimes\mathbf{I}_{m_2})\,\mathrm{unfold}(\mathcal{Y}))\big)=\mathrm{fold}\big(\mathrm{bdiag}(\overline{\mathcal{X}})\,\mathrm{unfold}(\overline{\mathcal{Y}})\big). \quad (14)$$

Based on this new t-product, the DCT-based t-SVD can be defined as follows:

###### Theorem (DCT-based t-SVD)

Given a tensor $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$, the DCT-based t-SVD of $\mathcal{X}$ is given by

$$\mathcal{X}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{T}, \quad (15)$$

where $\mathcal{U}\in\mathbb{R}^{m_1\times m_1\times m_3}$ and $\mathcal{V}\in\mathbb{R}^{m_2\times m_2\times m_3}$ are orthogonal tensors, $\mathcal{S}\in\mathbb{R}^{m_1\times m_2\times m_3}$ is an f-diagonal tensor, and $\mathcal{V}^{T}$ is the tensor transpose of $\mathcal{V}$, obtained by transposing each frontal slice of $\mathcal{V}$.

The proof of Theorem 4 can be obtained by a similar argument to that in (19).

By exploiting this structure, the DCT-based t-SVD can be efficiently computed by performing a matrix singular value decomposition on each frontal slice of the third-order tensor after the DCT along each tube. For an $m_1\times m_2\times m_3$ tensor, the time complexity of performing the DCT along each tube in the first step is $O(m_1 m_2 m_3\log m_3)$, the same as for the DFT-based t-SVD. Since the DCT produces only real numbers, the time complexity of computing the SVDs in the second step is about half that of the DFT-based t-SVD.
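As a concrete illustration (our own Python/SciPy sketch assuming the orthonormal DCT, not the authors' code), the DCT-based t-SVD stays entirely in real arithmetic:

```python
import numpy as np
from scipy.fft import dct, idct

def tsvd_dct(X):
    """DCT-based t-SVD: DCT along each tube, one real SVD per frontal slice, inverse DCT."""
    m1, m2, m3 = X.shape
    Xc = dct(X, axis=2, norm='ortho')        # real output for real input
    U = np.zeros((m1, m1, m3))
    S = np.zeros((m1, m2, m3))
    V = np.zeros((m2, m2, m3))
    for k in range(m3):                      # real SVDs: about half the cost of complex ones
        Uk, s, Vhk = np.linalg.svd(Xc[:, :, k])
        U[:, :, k] = Uk
        S[:len(s), :len(s), k] = np.diag(s)
        V[:, :, k] = Vhk.T
    return (idct(U, axis=2, norm='ortho'),
            idct(S, axis=2, norm='ortho'),
            idct(V, axis=2, norm='ortho'))
```

All three factors are real tensors, and multiplying the DCT-domain slices back together recovers the input exactly.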

## 4 Low-rank Tensor Completion by TNN-C

Based on the DCT-based t-SVD, we propose a new definition of the TNN, called TNN-C, in this section. Then, we establish the low-rank tensor completion model based on TNN-C and develop the alternating direction method of multipliers (ADMM) to tackle the corresponding model.

###### Definition (TNN-C)

Given $\mathcal{X}\in\mathbb{R}^{m_1\times m_2\times m_3}$, the TNN-C of $\mathcal{X}$ is defined as

$$\|\mathcal{X}\|_*:=\frac{1}{m_3}\sum_{k=1}^{m_3}\big\|\overline{\mathbf{X}}^{(k)}\big\|_*. \quad (16)$$

It is easy to see that the TNN-C of $\mathcal{X}$ is $1/m_3$ times the sum of the singular values of all frontal slices of $\overline{\mathcal{X}}$. Meanwhile, the $k$-th element of the multi-rank is the rank of the $k$-th frontal slice of $\overline{\mathcal{X}}$. Thus, TNN-C is a convex surrogate of the $\ell_1$ norm of a third-order tensor's multi-rank.
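Definition (16) translates directly into code. A minimal sketch (Python/SciPy, assuming the orthonormal DCT; the helper name is ours):

```python
import numpy as np
from scipy.fft import dct

def tnn_c(X):
    """TNN-C of eq. (16): (1/m3) times the sum of the nuclear norms
    of the DCT-domain frontal slices."""
    Xc = dct(X, axis=2, norm='ortho')
    return sum(np.linalg.svd(Xc[:, :, k], compute_uv=False).sum()
               for k in range(X.shape[2])) / X.shape[2]
```

For a tensor whose tubes are constant, only the zeroth DCT coefficient of each tube is nonzero, which makes the value easy to check by hand.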

The low-rank tensor completion model is defined as

$$\min_{\mathcal{X}}\ \|\mathcal{X}\|_*, \qquad \text{s.t.}\quad \mathcal{X}_\Omega=\mathcal{B}_\Omega. \quad (17)$$

Letting

$$l_S(\mathcal{X})=\begin{cases}0, & \text{if } \mathcal{X}\in S,\\ \infty, & \text{otherwise},\end{cases}$$

where $S:=\{\mathcal{X}:\mathcal{X}_\Omega=\mathcal{B}_\Omega\}$, (17) can be rewritten as the following unconstrained problem:

$$\min_{\mathcal{X}}\ \|\mathcal{X}\|_*+l_S(\mathcal{X}). \quad (18)$$

By introducing an auxiliary variable $\mathcal{Y}$, the augmented Lagrangian function of (18) is

$$L(\mathcal{X},\mathcal{Y},\mathcal{M})=\|\mathcal{Y}\|_*+l_S(\mathcal{X})+\langle\mathcal{M},\mathcal{Y}-\mathcal{X}\rangle+\frac{\beta}{2}\|\mathcal{Y}-\mathcal{X}\|_F^2, \quad (19)$$

where $\mathcal{M}$ is the Lagrangian multiplier and $\beta>0$ is the balance parameter. According to the framework of ADMM (24; 25; 26), $\mathcal{Y}$, $\mathcal{X}$, and $\mathcal{M}$ are iteratively updated as

$$\begin{cases}\text{Step 1: }\ \mathcal{Y}^{l+1}\in\arg\min_{\mathcal{Y}} L(\mathcal{X}^l,\mathcal{Y},\mathcal{M}^l),\\ \text{Step 2: }\ \mathcal{X}^{l+1}\in\arg\min_{\mathcal{X}} L(\mathcal{X},\mathcal{Y}^{l+1},\mathcal{M}^l),\\ \text{Step 3: }\ \mathcal{M}^{l+1}=\mathcal{M}^l+\beta(\mathcal{Y}^{l+1}-\mathcal{X}^{l+1}).\end{cases} \quad (20)$$

Now, we give the details for solving each subproblem.

In Step 1, the $\mathcal{Y}$-subproblem is

$$\arg\min_{\mathcal{Y}}\ \|\mathcal{Y}\|_*+\frac{\beta}{2}\Big\|\mathcal{Y}-\mathcal{X}^l+\frac{1}{\beta}\mathcal{M}^l\Big\|_F^2, \quad (21)$$

which can be solved by the following theorem (22; 23).

###### Theorem

Given $\mathcal{Z}\in\mathbb{R}^{m_1\times m_2\times m_3}$ with DCT-based t-SVD $\mathcal{Z}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{T}$, a minimizer of

$$\min_{\mathcal{Y}}\ \|\mathcal{Y}\|_*+\frac{\beta}{2}\|\mathcal{Y}-\mathcal{Z}\|_F^2 \quad (22)$$

is given by the tensor singular value thresholding

$$\mathcal{Y}=\mathcal{U}*\mathcal{D}_{1/\beta}*\mathcal{V}^{T}, \quad (23)$$

where $\mathcal{D}_{1/\beta}$ is the f-diagonal tensor whose $k$-th frontal slice in the discrete cosine domain is $(\overline{\mathbf{S}}^{(k)}-\frac{1}{\beta})_+$.
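The thresholding operator can be sketched as a small routine (Python/SciPy; our own illustration with the orthonormal DCT and a hypothetical name `t_svt`), taking $\tau=1/\beta$:

```python
import numpy as np
from scipy.fft import dct, idct

def t_svt(Z, tau):
    """Tensor singular value thresholding, eq. (23): shrink the singular values
    of every DCT-domain frontal slice of Z by tau and transform back."""
    Zc = dct(Z, axis=2, norm='ortho')
    out = np.zeros_like(Zc)
    for k in range(Z.shape[2]):
        U, s, Vh = np.linalg.svd(Zc[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return idct(out, axis=2, norm='ortho')
```

By construction, the DCT-domain slices of the output have singular values $(\sigma-\tau)_+$.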

In Step 2, we solve the following problem:

$$\arg\min_{\mathcal{X}}\ l_S(\mathcal{X})+\frac{\beta}{2}\Big\|\mathcal{Y}^{l+1}-\mathcal{X}+\frac{1}{\beta}\mathcal{M}^l\Big\|_F^2, \quad (24)$$

which has the closed-form solution

$$\mathcal{X}^{l+1}=\Big(\mathcal{Y}^{l+1}+\frac{1}{\beta}\mathcal{M}^l\Big)_{\Omega^c}+\mathcal{B}, \quad (25)$$

where $\Omega^c$ is the complementary set of the index set $\Omega$.

We summarize the proposed ADMM procedure in Algorithm 1. Every step of ADMM has an explicit solution. Thus, the proposed method is efficiently implementable. The convergence of the ADMM method of convex functions of separable variables with linear constraints is guaranteed (27; 28).

**Algorithm 1** ADMM for solving the proposed model (17).

**Input:** observed data $\mathcal{B}$, index set $\Omega$, parameter $\beta$.
**Initialize:** $\mathcal{X}=\mathcal{B}$, $\mathcal{Y}=\mathbf{0}$, $\mathcal{M}=\mathbf{0}$, $\mathrm{tol}=10^{-5}$, and $L=500$.
1: **while** $l\le L$ and the relative change of $\mathcal{X}$ exceeds $\mathrm{tol}$ **do**
2: &nbsp;&nbsp;$\mathcal{Z}=\mathcal{X}^l-\frac{1}{\beta}\mathcal{M}^l$;
3: &nbsp;&nbsp;$\overline{\mathcal{Z}}=\mathrm{dct}(\mathcal{Z},[\,],3)$;
4: &nbsp;&nbsp;**for** $k=1$ to $m_3$ **do**
5: &nbsp;&nbsp;&nbsp;&nbsp;$[\mathbf{U}^{(k)},\overline{\mathbf{S}}^{(k)},\mathbf{V}^{(k)}]=\mathrm{svd}(\overline{\mathbf{Z}}^{(k)})$;
6: &nbsp;&nbsp;&nbsp;&nbsp;$\overline{\mathbf{D}}^{(k)}=(\overline{\mathbf{S}}^{(k)}-1/\beta)_+$;
7: &nbsp;&nbsp;&nbsp;&nbsp;$\overline{\mathbf{Z}}^{(k),l+1}=\mathbf{U}^{(k)}\overline{\mathbf{D}}^{(k)}\mathbf{V}^{(k)H}$;
8: &nbsp;&nbsp;**end for**
9: &nbsp;&nbsp;$\mathcal{Y}^{l+1}=\mathrm{idct}(\overline{\mathcal{Z}}^{l+1},[\,],3)$;
10: &nbsp;&nbsp;$\mathcal{X}^{l+1}=(\mathcal{Y}^{l+1}+\frac{1}{\beta}\mathcal{M}^l)_{\Omega^c}+\mathcal{B}$;
11: &nbsp;&nbsp;$\mathcal{M}^{l+1}=\mathcal{M}^l+\beta(\mathcal{Y}^{l+1}-\mathcal{X}^{l+1})$.
12: **end while**
**Output:** the recovered tensor $\mathcal{X}$.
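Algorithm 1 can be sketched as follows (Python/SciPy; our own illustration with hypothetical names, the orthonormal DCT, and a simple relative-change stopping rule):

```python
import numpy as np
from scipy.fft import dct, idct

def tnnc_complete(B, mask, beta=1.0, tol=1e-5, max_iter=500):
    """ADMM for model (17): min ||X||_* (TNN-C) s.t. X_Omega = B_Omega.
    `mask` is a boolean tensor marking the observed index set Omega."""
    X = B.copy()
    M = np.zeros_like(B)
    for _ in range(max_iter):
        # Step 1: tensor singular value thresholding of X - M/beta in the DCT domain
        Zc = dct(X - M / beta, axis=2, norm='ortho')
        for k in range(B.shape[2]):
            U, s, Vh = np.linalg.svd(Zc[:, :, k], full_matrices=False)
            Zc[:, :, k] = (U * np.maximum(s - 1.0 / beta, 0.0)) @ Vh
        Y = idct(Zc, axis=2, norm='ortho')
        # Step 2: keep the observed entries, fill the rest from Y + M/beta
        X_new = np.where(mask, B, Y + M / beta)
        # Step 3: multiplier update
        M = M + beta * (Y - X_new)
        if np.linalg.norm(X_new - X) / max(np.linalg.norm(X), 1e-12) < tol:
            X = X_new
            break
        X = X_new
    return X
```

On a synthetic tensor with low multi-rank in the DCT domain, this sketch fills in the missing entries while reproducing the observed entries exactly (by construction of Step 2).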

## 5 Numerical Examples

In this section, all experiments are implemented on Windows 10 and Matlab (R2017a) with an Intel(R) Core(TM) i7-7700k CPU at 4.20 GHz and 16 GB RAM.

### 5.1 The Computational Time

Saving time is the most important advantage of the DCT-based t-SVD. We illustrate this advantage by operating on random tensors. We set up 4 groups of random tensors of different sizes and performed 1000 runs to get the average time required. Tab. 2 shows the average time cost of performing the t-SVD and the DCT-based t-SVD, and confirms that the DCT-based t-SVD needs only about half the time of the DFT-based t-SVD.

### 5.2 Real Data

We conduct video and multispectral image (MSI) completion experiments and compare TNN-C with TNN-F (22). In our experiments, the quality of the recovered image is measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) averaged over all bands. The PSNR of a band is defined as

$$\mathrm{PSNR}=10\log_{10}\frac{m_1 m_2 X_{\max}^2}{\|\hat{\mathbf{X}}-\mathbf{X}\|_F^2},$$

where $\mathbf{X}$ is the original matrix, $\hat{\mathbf{X}}$ is the recovered matrix, and $X_{\max}$ is the maximum pixel value of the original matrix $\mathbf{X}$. SSIM measures the similarity between the recovered image and the original image, reflecting their similarity in brightness, contrast, and structure; it is defined as

$$\mathrm{SSIM}=\frac{(2\mu_x\mu_{\hat{x}}+c_1)(2\sigma_{x\hat{x}}+c_2)}{(\mu_x^2+\mu_{\hat{x}}^2+c_1)(\sigma_x^2+\sigma_{\hat{x}}^2+c_2)},$$

where $\mu_x$ and $\mu_{\hat{x}}$ represent the average values of the original matrix and the estimated matrix, respectively, $\sigma_x$ and $\sigma_{\hat{x}}$ represent their standard deviations, and $\sigma_{x\hat{x}}$ is their covariance.
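The two metrics can be sketched as follows (Python with NumPy; `ssim_global` is a single-window simplification of SSIM, and the constants `c1`, `c2` are illustrative defaults rather than values from the paper):

```python
import numpy as np

def psnr(X_hat, X):
    """PSNR of one band, following the formula above (X.max() plays the role of Xmax)."""
    m1, m2 = X.shape
    return 10.0 * np.log10(m1 * m2 * X.max() ** 2 / np.linalg.norm(X_hat - X) ** 2)

def ssim_global(x_hat, x, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two bands, per the formula above."""
    mu_x, mu_h = x.mean(), x_hat.mean()
    cov = ((x - mu_x) * (x_hat - mu_h)).mean()
    num = (2 * mu_x * mu_h + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_h ** 2 + c1) * (x.var() + x_hat.var() + c2)
    return num / den
```

For identical inputs `ssim_global` returns exactly 1, and a uniform error of 0.1 on a unit-range image gives a PSNR of 20 dB.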

For all the following experiments, we set the maximum number of iterations to 500 and the tolerance to $10^{-5}$. The algorithm needs only one parameter, $\beta$, whose choice is discussed in the parameter analysis below.

Video completion. We test 3 videos: Akiyo, Suzie, and Salesman. Tab. 3 shows the PSNR, SSIM, and time cost of TNN-F and TNN-C. TNN-C achieves better results and costs much less time than TNN-F in all experiments. Fig. 2 shows one selected tube: the tube of the video recovered by TNN-C is closer to the true tube than that recovered by TNN-F, especially near the boundary. Fig. 3 shows the PSNR values of each frame of the recovered videos. At the tested sampling rate (SR), the PSNR values of TNN-C are higher than those of TNN-F, especially for the first and last few frames. This observation is consistent with our interpretation of the BCs. Fig. 4 shows the frames recovered by TNN-F and TNN-C; TNN-C is visually better than TNN-F.

MSI completion. For MSI data, we add the spectral angle mapper (SAM) and the erreur relative globale adimensionnelle de synthèse (ERGAS), which are common quality metrics for MSI data. SAM computes the angle in spectral space between the pixels of the recovered tensor and the corresponding reference spectra; ERGAS measures the fidelity of the recovered tensor through a weighted sum of the mean squared errors (MSE) of all bands. The lower these two indicators, the better the results. The MSI data are from the CAVE database, with wavelengths sampled at an interval of 10 nm. We display one selected tube in Fig. 5: the tube of the tensor recovered by TNN-C is closer to the true tube than that recovered by TNN-F, especially near the boundary. Moreover, we plot the PSNR values of the recovered tensors in Fig. 6. In general, the PSNR values of TNN-C are higher than those of TNN-F, especially for the first and last few bands. These observations verify that TNN-C can produce more natural results than TNN-F because more reasonable BCs are implied in TNN-C. In Fig. 7, we show the first band of the testing data recovered by the two methods; TNN-C clearly achieves better visual results than TNN-F. Tabs. 4-5 give more detailed results for the other testing images. TNN-C not only performs better in PSNR, SSIM, SAM, and ERGAS, but also significantly reduces the time cost compared with TNN-F.

Parameter analysis. We analyze the robustness of TNN-C to its parameter using the MSI data Stuffed toys. TNN-C requires only one parameter, $\beta$. As shown in Fig. 8, different values of $\beta$ lead to nearly the same PSNR value, but $\beta$ affects the convergence speed. After testing, we fix one value of $\beta$ for all experiments.

## 6 Concluding Remarks

We have introduced the DCT as an alternative to the DFT in the framework of the t-SVD. Based on the resulting t-SVD, the DCT-based tensor nuclear norm (TNN-C) is suggested for the low-rank tensor completion problem, and an efficient alternating direction method of multipliers (ADMM) is developed to tackle the corresponding model. Numerical experiments demonstrate the superiority of the DCT-based t-SVD. In future work, tensor singular value decompositions based on other transforms can be considered and studied; we expect them to handle data tensors from specific applications.

## Acknowledgment

The research is supported by NSFC (61772003) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132), the HKRGC GRF 1202715, 12306616, 12200317 and HKBU RC-ICRS/16-17/03.

## References

• (1) M. Bertalmio, G. Sapiro, V. Caselles, C. Ballester, Image inpainting, Proceedings of International Conference on Computer Graphics and Interactive Techniques (2000) 417–424 (2000).
• (2) N. Komodakis, Image completion using global optimization, Proceedings of Computer Vision and Pattern Recognition (2006) 442–452 (2006).

• (3) J. Liu, P. Musialski, P. Wonka, J.-P. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (1) (2013) 208–220 (2013).
• (4) T. Korah, C. Rasmussen, Spatiotemporal inpainting for recovering texture maps of occluded building facades, IEEE Transactions on Image Processing 16 (9) (2007) 2262–2271 (2007).
• (5) S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, T. Q. Nguyen, An augmented lagrangian method for total variation video restoration, IEEE Transactions on Image Processing 20 (11) (2011) 3097–3111 (2011).
• (6) T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, Y. Wang, A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors, Proceedings of Computer Vision and Pattern Recognition (2017) 2818–2827 (07 2017).
• (7) F. Li, M. K. Ng, R. J. Plemmons, Coupled segmentation and denoising/deblurring models for hyperspectral material identification, Numerical Linear Algebra With Applications 19 (1) (2012) 153–173 (2012).
• (8) X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, R. J. Plemmons, Deblurring and sparse unmixing for hyperspectral images, IEEE Transactions on Geoscience and Remote Sensing 51 (7) (2013) 4045–4058 (2013).
• (9) N. Li, B.-X. Li, Tensor completion for on-board compression of hyperspectral images, Proceedings of IEEE International Conference on Image Processing (2010) 517–520 (2010).
• (10) Z.-M. Xing, M.-Y. Zhou, A. Castrodad, G. Sapiro, L. Carin, Dictionary learning for noisy and incomplete hyperspectral images, SIAM Journal on Imaging Sciences 5 (1) (2012) 33–56 (2012).
• (11) J.-T. Sun, H.-J. Zeng, H. Liu, Y.-C. Lu, Z. Chen, Cubesvd: a novel approach to personalized web search, Proceedings of International World Wide Web Conferences (2005) 382–390 (2005).
• (12) T. G. Kolda, B. W. Bader, J. P. Kenny, Higher-order web link analysis using multilinear algebra, Proceedings of IEEE International Conference on Data Mining (2005) 242–249 (2005).
• (13) N. Varghees, M. Manikandan, R. G. John, Adaptive mri image denoising using total-variation and local noise estimation, Proceedings of IEEE International Conference on Advances in Engineering, Science and Management (2012) 506–511 (01 2012).
• (14) N. Kreimer, M. D. Sacchi, A tensor higher-order singular value decomposition for prestack seismic data noise reduction and interpolation, Geophysics 77 (3) (2012) 113–122 (2012).

• (15) R. A. Harshman, Foundations of the parafac procedure: Models and conditions for an “explanatory” multi-modal factor analysis, UCLA Working Papers in Phonetics (1970).
• (16) L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31 (3) (1966) 279–311 (1966).
• (17) M. E. Kilmer, C. D. M. Martin, Factorization strategies for third-order tensors, Linear Algebra and its Applications 435 (3) (2011) 641–658 (2011).
• (18) C. D. Martin, R. Shafer, B. Larue, An order- tensor factorization with applications in imaging, SIAM Journal on Scientific Computing 35 (2013) 474–490 (2013).
• (19) M. E. Kilmer, K. S. Braman, N. Hao, R. C. Hoover, Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging, SIAM Journal on Matrix Analysis and Applications 34 (1) (2013) 148–172 (2013).
• (20) M. K. Ng, R. H. Chan, W. Tang, A fast algorithm for deblurring models with neumann boundary conditions, SIAM Journal on Scientific Computing 21 (3) (1999) 851–866 (1999).
• (21) Z.-M. Zhang, G. Ely, S. Aeron, H. Ning, M. E. Kilmer, Novel methods for multilinear data completion and de-noising based on tensor-svd, Proceedings of Computer Vision and Pattern Recognition (2014) 3842–3849 (2014).
• (22) C.-Y. Lu, J.-S. Feng, Y.-D. Chen, W. Liu, Z.-C. Lin, S.-C. Yan, Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization, Proceedings of Computer Vision and Pattern Recognition (2016) 5249–5257 (2016).
• (23) O. Semerci, H. Ning, M. E. Kilmer, E. L. Miller, Tensor-based formulation and nuclear norm regularization for multienergy computed tomography, IEEE Transactions on Image Processing 23 (4) (2014) 1678–1693 (2014).
• (24) S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. 3 (1) (2011) 1–122 (Jan. 2011).
• (25) Z.-C. Lin, M.-M. Chen, Y. Ma, L.-Q. Wu, The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, ArXiv e-prints (Sep. 2010). arXiv:1009.5055.
• (26) B.-S. He, M. Tao, X.-M. Yuan, Alternating direction method with gaussian back substitution for separable convex programming, SIAM Journal on Optimization 22 (2) (2012) 313–340 (2012).
• (27) M. V. Afonso, J. M. Bioucasdias, M. A. T. Figueiredo, An augmented lagrangian approach to the constrained optimization formulation of imaging inverse problems, IEEE Transactions on Image Processing 20 (3) (2011) 681–695 (2011).
• (28) D.-R. Han, X.-M. Yuan, A note on the alternating direction method of multipliers, Journal of Optimization Theory and Applications 155 (1) (2012) 227–238 (2012).