1 Introduction
A tensor is a multidimensional array of numbers and a generalization of a matrix. Compared with a "flat" matrix, a tensor provides a richer and more natural representation for many kinds of data. In this paper, we focus on the third-order tensor, which can be pictured as a data cube. This format of data is widely used in color image and grayscale video inpainting (1; 2; 3; 4; 5; 6), hyperspectral image (HSI) data recovery (7; 8; 9; 10), personalized web search (11), high-order web link analysis (12), magnetic resonance imaging (MRI) data recovery (13), and seismic data reconstruction (14).
Like the matrix decomposition, the tensor decomposition is an important multilinear algebra tool, and many different tensor decompositions exist. The CANDECOMP/PARAFAC (CP) decomposition (15) and the Tucker decomposition (16) are the two most well-known ones. The CP decomposition can be considered as the higher-order generalization of the matrix singular value decomposition (SVD): it decomposes a tensor into a sum of rank-one tensors. Similar to a rank-one matrix, a third-order rank-one tensor can be written as the outer product of three vectors. The CP-rank of a tensor is defined as the minimum number of rank-one tensors whose sum generates the original tensor; this definition is an analog of the definition of the matrix rank. The Tucker decomposition is the higher-order generalization of principal component analysis (PCA): it decomposes a tensor into a core tensor multiplied by a matrix along each mode. The Tucker rank based on the Tucker decomposition is a vector whose $i$th element is the rank of the mode-$i$ unfolding matrix.
In recent years, Kilmer and Martin et al. (17; 18; 19) proposed a third-order tensor decomposition called the tensor singular value decomposition (tSVD). This decomposition strategy is based on the definition of the tensor product (t-product; see Section 2). After performing a one-dimensional discrete Fourier transformation (DFT) along the third dimension of the tensor, the t-product makes the tensor decomposition an analog of the matrix decomposition. This strategy avoids the loss of structural information caused by matricization of the tensor. However, because a one-dimensional DFT is performed along the third dimension, the obtained tensor is complex. These complex numbers lead to higher computational cost and are not required. Why not use another transformation instead of the DFT to avoid this disadvantage? The discrete cosine transformation (DCT) (20), which expresses a finite sequence as a sum of cosine functions, is the first alternative.
The DCT of a real input is real. This feature roughly halves the amount of data processed in the SVD step of the tSVD, thus saving considerable time. There is another difference: the DFT implies periodic boundary conditions (BCs), whereas the DCT implies reflexive BCs, which yield a continuous extension at the boundaries (20). If the signal satisfies reflexive BCs (real data often does), the new DCT-based tSVD can achieve better results than the DFT-based one. We give the theoretical derivation of using the DCT for the tSVD and verify its superiority over the DFT.
The rest of this paper is organized as follows. In Section 2, we introduce related notations and the original DFT-based tSVD. In Section 3, we give the theoretical derivation of the new DCT-based tSVD. Based on the new tSVD, we introduce the new tensor nuclear norm in Section 4. We conduct extensive experiments to demonstrate the effectiveness of the proposed method in Section 5. In Section 6, we give some concluding remarks.
2 Notations and Preliminaries
In this section, we introduce the basic notations and give the definitions related to the tSVD. We use non-bold lowercase letters for scalars, e.g., $a$, boldface lowercase letters for vectors, e.g., $\mathbf{a}$, boldface capital letters for matrices, e.g., $\mathbf{A}$, and boldface calligraphic letters for tensors, e.g., $\mathcal{A}$. $\mathbb{R}$ and $\mathbb{C}$ represent the fields of real and complex numbers, respectively. For a third-order tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, we use the MATLAB notations $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,j,:)$, and $\mathcal{A}(:,:,k)$ to denote the horizontal, lateral, and frontal slices, respectively, and $\mathcal{A}(:,j,k)$, $\mathcal{A}(i,:,k)$, and $\mathcal{A}(i,j,:)$ to denote the columns, rows, and tubes, respectively. For convenience, we use $\mathbf{A}^{(k)}$ for the $k$th frontal slice $\mathcal{A}(:,:,k)$ and $\mathbf{a}_{ij}$ for the $(i,j)$th tube $\mathcal{A}(i,j,:)$. Both $a_{ijk}$ and $\mathcal{A}(i,j,k)$ represent the $(i,j,k)$th element. The Frobenius norm of $\mathcal{A}$ is defined as $\|\mathcal{A}\|_F = \sqrt{\sum_{i,j,k} |a_{ijk}|^2}$. It is easy to see that $\|\mathcal{A}\|_F^2 = \sum_{k=1}^{n_3} \|\mathbf{A}^{(k)}\|_F^2$.
Next, we introduce some definitions that are closely related to the tSVD. We use $\bar{\mathcal{A}}$ to represent the discrete Fourier transform of $\mathcal{A}$ along each tube, i.e., $\bar{\mathcal{A}} = \mathrm{fft}(\mathcal{A}, [\,], 3)$. The block circulant matrix (18; 19) is defined as

$\operatorname{bcirc}(\mathcal{A}) = \begin{pmatrix} \mathbf{A}^{(1)} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(2)} \\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{A}^{(n_3)} & \mathbf{A}^{(n_3-1)} & \cdots & \mathbf{A}^{(1)} \end{pmatrix} \in \mathbb{R}^{n_1 n_3 \times n_2 n_3}.$  (1)
The block diagonal matrix and the corresponding inverse operator (18; 19) are defined as

$\operatorname{bdiag}(\mathcal{A}) = \begin{pmatrix} \mathbf{A}^{(1)} & & \\ & \ddots & \\ & & \mathbf{A}^{(n_3)} \end{pmatrix}, \qquad \operatorname{bdiag}^{-1}\big(\operatorname{bdiag}(\mathcal{A})\big) = \mathcal{A}.$  (2)
The unfold and fold operators in the tSVD (18; 19) are defined as

$\operatorname{unfold}(\mathcal{A}) = \begin{pmatrix} \mathbf{A}^{(1)} \\ \mathbf{A}^{(2)} \\ \vdots \\ \mathbf{A}^{(n_3)} \end{pmatrix}, \qquad \operatorname{fold}\big(\operatorname{unfold}(\mathcal{A})\big) = \mathcal{A}.$  (3)
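To make these operators concrete, the following is a minimal NumPy sketch of bcirc, unfold, and fold for a tensor stored with frontal slices A[:, :, k]; the function names are ours, chosen for illustration, and the paper's experiments themselves use MATLAB.

```python
# A minimal NumPy sketch of the bcirc, unfold, and fold operators for a
# tensor with frontal slices A[:, :, k] (function names are ours).
import numpy as np

def bcirc(A):
    """Block circulant matrix (1): block (i, j) is A[:, :, (i - j) mod n3]."""
    n1, n2, n3 = A.shape
    return np.block([[A[:, :, (i - j) % n3] for j in range(n3)]
                     for i in range(n3)])

def unfold(A):
    """Stack the frontal slices vertically, as in (3)."""
    return np.vstack([A[:, :, k] for k in range(A.shape[2])])

def fold(M, shape):
    """Inverse of unfold: split an (n1*n3) x n2 matrix back into a tensor."""
    n1, n2, n3 = shape
    return np.dstack([M[k * n1:(k + 1) * n1, :] for k in range(n3)])

A = np.random.randn(4, 3, 5)
assert np.allclose(fold(unfold(A), A.shape), A)   # fold undoes unfold
```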
An important point is that the block circulant matrix can be block diagonalized.
Theorem ((17))

$(\mathbf{F}_{n_3} \otimes \mathbf{I}_{n_1}) \cdot \operatorname{bcirc}(\mathcal{A}) \cdot (\mathbf{F}_{n_3}^{-1} \otimes \mathbf{I}_{n_2}) = \operatorname{bdiag}(\bar{\mathcal{A}}),$  (4)

where $\otimes$ denotes the Kronecker product, $\mathbf{F}_{n_3}$ is an $n_3 \times n_3$ DFT matrix, and $\mathbf{I}_n$ is an $n \times n$ identity matrix.
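This identity is easy to verify numerically; a small self-contained sketch (bcirc repeated here so the snippet runs on its own) is given below, with F taken as the unnormalized DFT matrix so that F @ x equals fft(x).

```python
# A numerical check of the block diagonalization (4).
import numpy as np

def bcirc(A):
    n1, n2, n3 = A.shape
    return np.block([[A[:, :, (i - j) % n3] for j in range(n3)]
                     for i in range(n3)])

n1, n2, n3 = 4, 3, 5
A = np.random.randn(n1, n2, n3)
F = np.fft.fft(np.eye(n3))                       # n3 x n3 DFT matrix
L = np.kron(F, np.eye(n1))                       # F (x) I_{n1}
R = np.kron(np.linalg.inv(F), np.eye(n2))        # F^{-1} (x) I_{n2}
D = L @ bcirc(A) @ R                             # should be block diagonal

Abar = np.fft.fft(A, axis=2)                     # DFT along each tube
for k in range(n3):
    blk = D[k * n1:(k + 1) * n1, k * n2:(k + 1) * n2]
    assert np.allclose(blk, Abar[:, :, k])       # k-th diagonal block
```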
Definition (t-product (19))
Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$, the t-product $\mathcal{A} * \mathcal{B}$ is a third-order tensor of size $n_1 \times n_4 \times n_3$:

$\mathcal{A} * \mathcal{B} = \operatorname{fold}\big(\operatorname{bcirc}(\mathcal{A}) \cdot \operatorname{unfold}(\mathcal{B})\big).$  (5)
This definition is the core of the tSVD. It is like a one-dimensional circular convolution of two vectors under periodic BCs, but the elements of the vectors are the frontal slices of the tensors. With Theorem 2, equation (5) can be rewritten as
$\operatorname{bdiag}(\bar{\mathcal{C}}) = \operatorname{bdiag}(\bar{\mathcal{A}}) \cdot \operatorname{bdiag}(\bar{\mathcal{B}}), \quad \text{i.e.,} \quad \bar{\mathbf{C}}^{(i)} = \bar{\mathbf{A}}^{(i)} \bar{\mathbf{B}}^{(i)}, \ i = 1, \ldots, n_3,$  (6)

where $\mathcal{C} = \mathcal{A} * \mathcal{B}$.
Equation (6) means that the t-product in the spatial domain corresponds to the matrix multiplication of the frontal slices in the Fourier domain, which greatly simplifies the computation.
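The equivalence of the two computations is easy to demonstrate; the following sketch (with names of our choosing) evaluates the t-product both ways and compares the results.

```python
# The t-product computed two equivalent ways: the block-circulant form (5)
# in the spatial domain versus slice-wise products (6) in the Fourier domain.
import numpy as np

def tprod_spatial(A, B):
    """fold(bcirc(A) @ unfold(B)), written as a block-wise sum."""
    n1, n2, n3 = A.shape
    n4 = B.shape[1]
    C = np.zeros((n1, n4, n3))
    for i in range(n3):              # block row i of bcirc(A) ...
        for j in range(n3):          # ... times frontal slice j of B
            C[:, :, i] += A[:, :, (i - j) % n3] @ B[:, :, j]
    return C

def tprod_fourier(A, B):
    """ifft of the slice-wise products of fft(A) and fft(B), as in (6)."""
    Abar = np.fft.fft(A, axis=2)
    Bbar = np.fft.fft(B, axis=2)
    Cbar = np.einsum('ijk,jlk->ilk', Abar, Bbar)
    return np.real(np.fft.ifft(Cbar, axis=2))

A = np.random.randn(4, 3, 5)
B = np.random.randn(3, 2, 5)
assert np.allclose(tprod_spatial(A, B), tprod_fourier(A, B))
```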
Definition (identity tensor (19))
The identity tensor $\mathcal{I} \in \mathbb{R}^{n \times n \times n_3}$ is the tensor whose first frontal slice is the identity matrix of size $n \times n$, and whose other frontal slices are all zeros.
Definition (orthogonal tensor (19))
A tensor $\mathcal{Q} \in \mathbb{C}^{n \times n \times n_3}$ is orthogonal if it satisfies $\mathcal{Q}^H * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^H = \mathcal{I}$, where $\mathcal{Q}^H$ is the tensor conjugate transpose of $\mathcal{Q}$, which is obtained by conjugate transposing each frontal slice of $\mathcal{Q}$ and then reversing the order of the transposed frontal slices 2 through $n_3$.
Definition (f-diagonal tensor (19))
A tensor is called f-diagonal if each of its frontal slices is a diagonal matrix.
Theorem (tSVD (19; 17))
Given a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the tSVD of $\mathcal{A}$ is given by

$\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^H,$  (7)

where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal tensors, and $\mathcal{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is an f-diagonal tensor.
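In code, the tSVD therefore reduces to independent matrix SVDs in the Fourier domain. A sketch follows; for simplicity it keeps the factors in the Fourier domain (bringing them back with an inverse FFT, exploiting conjugate symmetry to keep them real, is omitted), and the names are ours.

```python
# A sketch of the tSVD computation: FFT along tubes, one matrix SVD per
# frontal slice.
import numpy as np

def tsvd_fourier(A):
    """Return (Ubar, Sbar, Vbar) with Abar^(k) = Ubar^(k) Sbar^(k) Vbar^(k)^H."""
    n1, n2, n3 = A.shape
    Abar = np.fft.fft(A, axis=2)
    Ubar = np.zeros((n1, n1, n3), dtype=complex)
    Sbar = np.zeros((n1, n2, n3), dtype=complex)
    Vbar = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3):
        u, s, vh = np.linalg.svd(Abar[:, :, k])
        Ubar[:, :, k], Vbar[:, :, k] = u, vh.conj().T
        np.fill_diagonal(Sbar[:, :, k], s)
    return Ubar, Sbar, Vbar

A = np.random.randn(4, 3, 5)
Ub, Sb, Vb = tsvd_fourier(A)
Cbar = np.einsum('ijk,jlk,mlk->imk', Ub, Sb, Vb.conj())  # U S V^H per slice
assert np.allclose(np.fft.ifft(Cbar, axis=2).real, A)    # reconstructs A
```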
Definition (tensor multi-rank and tubal rank (21))
Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, its multi-rank is a vector $\mathbf{r} \in \mathbb{R}^{n_3}$ whose $i$th element is the rank of the $i$th frontal slice of $\bar{\mathcal{A}}$, i.e., $r_i = \operatorname{rank}(\bar{\mathbf{A}}^{(i)})$. Its tubal rank is defined as the number of nonzero singular tubes, where the singular tubes of $\mathcal{A}$ are the tubes $\mathcal{S}(i,i,:)$ of $\mathcal{S}$ in (7).
The tensor tubal rank is actually the largest element of the multi-rank.
Definition (tensor nuclear norm (22; 23))
Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, based on the tensor multi-rank, the tensor nuclear norm (TNN) of $\mathcal{A}$ is defined as

$\|\mathcal{A}\|_{\mathrm{TNN}} = \sum_{i=1}^{n_3} \big\|\bar{\mathbf{A}}^{(i)}\big\|_*.$  (8)

In order to avoid confusion with the new definition of TNN proposed later, we call this definition TNNF in this paper.
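A direct sketch of this quantity (names are ours; some works also normalize the sum by $n_3$):

```python
# TNNF as in (8): the sum of the singular values of all frontal slices in
# the Fourier domain.
import numpy as np

def tnnf(A):
    Abar = np.fft.fft(A, axis=2)
    return sum(np.linalg.svd(Abar[:, :, k], compute_uv=False).sum()
               for k in range(A.shape[2]))
```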
The computation of the tSVD of an $n_1 \times n_2 \times n_3$ tensor needs two steps. The first step is to perform the DFT via the fast Fourier transform (FFT) along each tube; its time complexity is $O(n_1 n_2 n_3 \log n_3)$. After the DFT, the obtained tensor is complex and can be divided into a real part tensor and an imaginary part tensor, so computing the SVD of each frontal slice of the obtained tensor amounts to operating on the real part and the imaginary part, respectively. The time complexity of the second step is $O(2\, n_1 n_2 n_3 \min(n_1, n_2))$, which usually dominates the computational cost of the first step.
3 Cosine Transform Based Tensor Singular Value Decomposition
We discuss the DCT-based tSVD and the resulting structure in this section. Since the corresponding block circulant matrix can be block diagonalized by the DFT, the DFT-based tSVD can be efficiently implemented via the fast Fourier transform (FFT). We will show that the corresponding structure of the DCT-based tSVD can be block diagonalized by the DCT.
We define the shift $\vec{\mathcal{A}}$ of a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ by moving each frontal slice forward by one position and padding with a zero slice, i.e., $\vec{\mathcal{A}}(:,:,i) = \mathcal{A}(:,:,i+1)$ for $i = 1, \ldots, n_3 - 1$ and $\vec{\mathcal{A}}(:,:,n_3) = \mathbf{0}$. It is easy to prove that any matrix of the block Toeplitz-plus-Hankel form below can be uniquely divided into its block Toeplitz part and its block Hankel part. We use $\tilde{\mathcal{A}}$ to represent the DCT along each tube of $\mathcal{A}$, i.e., $\tilde{\mathcal{A}} = \mathrm{dct}(\mathcal{A}, [\,], 3)$. We define the block Toeplitz matrix of $\mathcal{A}$ as
$\operatorname{toep}(\mathcal{A}) = \begin{pmatrix} \mathbf{A}^{(1)} & \mathbf{A}^{(2)} & \cdots & \mathbf{A}^{(n_3)} \\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(n_3-1)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{A}^{(n_3)} & \mathbf{A}^{(n_3-1)} & \cdots & \mathbf{A}^{(1)} \end{pmatrix}.$  (9)

The block Hankel matrix of the shifted tensor $\vec{\mathcal{A}}$ is defined as

$\operatorname{hank}(\vec{\mathcal{A}}) = \begin{pmatrix} \mathbf{A}^{(2)} & \cdots & \mathbf{A}^{(n_3)} & \mathbf{0} \\ \vdots & \iddots & \iddots & \mathbf{A}^{(n_3)} \\ \mathbf{A}^{(n_3)} & \iddots & \iddots & \vdots \\ \mathbf{0} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(2)} \end{pmatrix}.$  (10)

The block Toeplitz-plus-Hankel matrix of $\mathcal{A}$ is defined as

$\operatorname{tph}(\mathcal{A}) = \operatorname{toep}(\mathcal{A}) + \operatorname{hank}(\vec{\mathcal{A}}).$  (11)
The block Toeplitz-plus-Hankel matrix can be block diagonalized. The following theorem can be established similarly to (20).
Theorem

$(\mathbf{C}_{n_3} \otimes \mathbf{I}_{n_1}) \cdot \operatorname{tph}(\mathcal{A}) \cdot (\mathbf{C}_{n_3}^{-1} \otimes \mathbf{I}_{n_2}) = \operatorname{bdiag}(\tilde{\mathcal{A}}),$  (12)

where $\otimes$ denotes the Kronecker product and $\mathbf{C}_{n_3}$ is an $n_3 \times n_3$ DCT matrix.
The proof of Theorem 3 can be obtained by using arguments similar to those in (20). We briefly illustrate this theorem with an example.
Example
Consider $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times 3}$ with frontal slices $\mathbf{A}^{(1)}$, $\mathbf{A}^{(2)}$, and $\mathbf{A}^{(3)}$. The shifted tensor $\vec{\mathcal{A}}$ has frontal slices $\mathbf{A}^{(2)}$, $\mathbf{A}^{(3)}$, and $\mathbf{0}$. The block Toeplitz matrix is

$\operatorname{toep}(\mathcal{A}) = \begin{pmatrix} \mathbf{A}^{(1)} & \mathbf{A}^{(2)} & \mathbf{A}^{(3)} \\ \mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \mathbf{A}^{(2)} \\ \mathbf{A}^{(3)} & \mathbf{A}^{(2)} & \mathbf{A}^{(1)} \end{pmatrix},$

and the block Hankel matrix is

$\operatorname{hank}(\vec{\mathcal{A}}) = \begin{pmatrix} \mathbf{A}^{(2)} & \mathbf{A}^{(3)} & \mathbf{0} \\ \mathbf{A}^{(3)} & \mathbf{0} & \mathbf{A}^{(3)} \\ \mathbf{0} & \mathbf{A}^{(3)} & \mathbf{A}^{(2)} \end{pmatrix}.$

Then the block Toeplitz-plus-Hankel matrix is

$\operatorname{tph}(\mathcal{A}) = \begin{pmatrix} \mathbf{A}^{(1)}+\mathbf{A}^{(2)} & \mathbf{A}^{(2)}+\mathbf{A}^{(3)} & \mathbf{A}^{(3)} \\ \mathbf{A}^{(2)}+\mathbf{A}^{(3)} & \mathbf{A}^{(1)} & \mathbf{A}^{(2)}+\mathbf{A}^{(3)} \\ \mathbf{A}^{(3)} & \mathbf{A}^{(2)}+\mathbf{A}^{(3)} & \mathbf{A}^{(1)}+\mathbf{A}^{(2)} \end{pmatrix}.$

By using stride permutations, i.e., symmetrically reordering the rows and columns so that, for each index pair $(i,j)$, the entries of the tube $\mathbf{a}_{ij}$ are grouped together, $\operatorname{tph}(\mathcal{A})$ becomes a block matrix whose $(i,j)$th block is the $3 \times 3$ Toeplitz-plus-Hankel matrix

$\mathbf{T}_{ij} = \begin{pmatrix} a_{ij1}+a_{ij2} & a_{ij2}+a_{ij3} & a_{ij3} \\ a_{ij2}+a_{ij3} & a_{ij1} & a_{ij2}+a_{ij3} \\ a_{ij3} & a_{ij2}+a_{ij3} & a_{ij1}+a_{ij2} \end{pmatrix}$

generated by the tube $\mathbf{a}_{ij}$. Each $\mathbf{T}_{ij}$ is of the Toeplitz-plus-Hankel form diagonalized by the DCT (20); that is, $\mathbf{C}_3 \mathbf{T}_{ij} \mathbf{C}_3^{-1}$ is diagonal, where $\mathbf{C}_3$ is a $3 \times 3$ DCT matrix, and its diagonal is the transformed tube $\tilde{\mathbf{a}}_{ij}$. Permuting back and assembling these diagonalizations over all $(i,j)$, it is now easy to verify that $(\mathbf{C}_3 \otimes \mathbf{I}_{n_1}) \operatorname{tph}(\mathcal{A}) (\mathbf{C}_3^{-1} \otimes \mathbf{I}_{n_2}) = \operatorname{bdiag}(\tilde{\mathcal{A}})$, as stated in Theorem 3.
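The diagonalization can also be checked numerically. The following sketch (our own construction, not code from the paper) builds $\operatorname{tph}(\mathcal{A})$ for a random tensor and verifies that conjugation by the Kronecker-structured DCT matrix annihilates every off-diagonal block; we use the orthonormal type-II DCT, for which $\mathbf{C}^{-1} = \mathbf{C}^T$, noting that the resulting diagonal blocks coincide with a tube-wise cosine transform up to the normalization convention fixed in Theorem 3.

```python
# A numerical sanity check of Theorem 3.
import numpy as np
from scipy.fft import dct

def tph(A):
    """Block Toeplitz-plus-Hankel matrix (11) of A (frontal slices A[:,:,k])."""
    n1, n2, n3 = A.shape
    zero = np.zeros((n1, n2))
    toep = np.block([[A[:, :, abs(i - j)] for j in range(n3)]
                     for i in range(n3)])
    def hank_block(i, j):
        k = i + j + 1                     # 0-based anti-diagonal index
        if k < n3:
            return A[:, :, k]
        if k > n3:
            return A[:, :, 2 * n3 - k]
        return zero                       # zero blocks on the anti-diagonal
    hank = np.block([[hank_block(i, j) for j in range(n3)]
                     for i in range(n3)])
    return toep + hank

n1, n2, n3 = 3, 2, 4
A = np.random.randn(n1, n2, n3)
C = dct(np.eye(n3), axis=0, norm='ortho')   # orthogonal DCT matrix, C^-1 = C.T
D = np.kron(C, np.eye(n1)) @ tph(A) @ np.kron(C.T, np.eye(n2))
for i in range(n3):
    for j in range(n3):
        if i != j:                          # off-diagonal blocks vanish
            blk = D[i * n1:(i + 1) * n1, j * n2:(j + 1) * n2]
            assert np.allclose(blk, 0, atol=1e-10)
# the n3 diagonal blocks of D are the frontal slices of the transformed
# tensor, up to the DCT normalization convention in Theorem 3
```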
Definition (DCT-based t-product)
Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$, the DCT-based t-product $\mathcal{A} *_c \mathcal{B}$ is a third-order tensor of size $n_1 \times n_4 \times n_3$:

$\mathcal{A} *_c \mathcal{B} = \operatorname{fold}\big(\operatorname{tph}(\mathcal{A}) \cdot \operatorname{unfold}(\mathcal{B})\big),$  (13)

where the fold and unfold operators are defined as in (3).
With Theorem 3, equation (13) can be rewritten as

$\operatorname{bdiag}(\tilde{\mathcal{C}}) = \operatorname{bdiag}(\tilde{\mathcal{A}}) \cdot \operatorname{bdiag}(\tilde{\mathcal{B}}), \quad \text{i.e.,} \quad \tilde{\mathbf{C}}^{(i)} = \tilde{\mathbf{A}}^{(i)} \tilde{\mathbf{B}}^{(i)}, \ i = 1, \ldots, n_3,$  (14)

where $\mathcal{C} = \mathcal{A} *_c \mathcal{B}$.
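In implementation terms, (14) says the DCT-based t-product never needs the large matrix $\operatorname{tph}(\mathcal{A})$ explicitly: transform along the third mode, multiply the frontal slices, and transform back. A minimal sketch, assuming an orthonormal type-II DCT (the paper's normalization is the one fixed by Theorem 3):

```python
# DCT-domain computation of the DCT-based t-product, following (14).
import numpy as np
from scipy.fft import dct, idct

def tprod_dct(A, B):
    At = dct(A, axis=2, norm='ortho')         # transform along tubes
    Bt = dct(B, axis=2, norm='ortho')
    Ct = np.einsum('ijk,jlk->ilk', At, Bt)    # slice-wise matrix products
    return idct(Ct, axis=2, norm='ortho')     # back to the spatial domain

A = np.random.randn(4, 3, 5)
B = np.random.randn(3, 2, 5)
C = tprod_dct(A, B)   # real 4 x 2 x 5 tensor; no complex arithmetic involved
```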
Based on this new t-product, the DCT-based tSVD can be defined as follows:
Theorem (DCT-based tSVD)
Given a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the DCT-based tSVD of $\mathcal{A}$ is given by

$\mathcal{A} = \mathcal{U} *_c \mathcal{S} *_c \mathcal{V}^T,$  (15)

where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal tensors, $\mathcal{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is an f-diagonal tensor, and $\mathcal{V}^T$ is the tensor transpose of $\mathcal{V}$, which is obtained by transposing each frontal slice of $\mathcal{V}$.
The proof of Theorem 4 can be obtained by using arguments similar to those in (19).
By exploiting this structure, the DCT-based tSVD can be efficiently calculated by performing a matrix singular value decomposition for each frontal slice of the third-order tensor after the DCT along each tube. For an $n_1 \times n_2 \times n_3$ tensor, the time complexity of performing the DCT along each tube in the first step is $O(n_1 n_2 n_3 \log n_3)$, the same as for the DFT-based tSVD. Since the DCT of real data is real, the time complexity of calculating the SVDs is $O(n_1 n_2 n_3 \min(n_1, n_2))$, which is half that of the DFT-based tSVD.
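The two-step computation described above can be sketched as follows (our own Python/NumPy rendering; the paper's experiments use MATLAB). All arithmetic stays real.

```python
# A sketch of the DCT-based tSVD: DCT along tubes, one *real* SVD per
# frontal slice, inverse DCT back.
import numpy as np
from scipy.fft import dct, idct

def tsvd_dct(A):
    n1, n2, n3 = A.shape
    At = dct(A, axis=2, norm='ortho')
    U = np.zeros((n1, n1, n3))
    S = np.zeros((n1, n2, n3))
    V = np.zeros((n2, n2, n3))
    for k in range(n3):
        u, s, vh = np.linalg.svd(At[:, :, k])
        U[:, :, k], V[:, :, k] = u, vh.T
        np.fill_diagonal(S[:, :, k], s)
    return (idct(U, axis=2, norm='ortho'),
            idct(S, axis=2, norm='ortho'),
            idct(V, axis=2, norm='ortho'))

A = np.random.randn(4, 3, 6)
U, S, V = tsvd_dct(A)
Ut, St, Vt = (dct(T, axis=2, norm='ortho') for T in (U, S, V))
Arec = idct(np.einsum('ijk,jlk,mlk->imk', Ut, St, Vt),
            axis=2, norm='ortho')                 # U *_c S *_c V^T
assert np.allclose(A, Arec)
```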
Table 1: Time complexity of the two steps of the DFT-based tSVD and the DCT-based tSVD for an $n_1 \times n_2 \times n_3$ tensor.
step  time complexity
DFT  $O(n_1 n_2 n_3 \log n_3)$
SVD after DFT  $O(2\, n_1 n_2 n_3 \min(n_1, n_2))$
tSVD  $O(n_1 n_2 n_3 \log n_3 + 2\, n_1 n_2 n_3 \min(n_1, n_2))$
DCT  $O(n_1 n_2 n_3 \log n_3)$
SVD after DCT  $O(n_1 n_2 n_3 \min(n_1, n_2))$
new tSVD  $O(n_1 n_2 n_3 \log n_3 + n_1 n_2 n_3 \min(n_1, n_2))$
4 Low-rank Tensor Completion by TNNC
Based on the DCT-based tSVD, we propose a new definition of the TNN, called TNNC, in this section. Then, we establish the TNNC-based low-rank tensor completion model (17) and develop the alternating direction method of multipliers (ADMM) to tackle the corresponding low-rank tensor completion model.
Definition (TNNC)
Given $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the TNNC of $\mathcal{A}$ is defined as

$\|\mathcal{A}\|_{\mathrm{TNNC}} = \sum_{i=1}^{n_3} \big\|\tilde{\mathbf{A}}^{(i)}\big\|_*.$  (16)
It is easy to see that the TNNC of $\mathcal{A}$ is the sum of the singular values of all frontal slices of $\tilde{\mathcal{A}}$. Meanwhile, the $i$th element of the multi-rank is the rank of the $i$th frontal slice of $\tilde{\mathcal{A}}$. Thus, TNNC is a convex surrogate of the $\ell_1$ norm of a third-order tensor's multi-rank.
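Under an orthonormal DCT this quantity is a one-liner to compute (a sketch, with names of our choosing):

```python
# TNNC as in (16), computed with an orthonormal DCT.
import numpy as np
from scipy.fft import dct

def tnnc(A):
    At = dct(A, axis=2, norm='ortho')
    return sum(np.linalg.svd(At[:, :, k], compute_uv=False).sum()
               for k in range(A.shape[2]))
```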
The low-rank tensor completion model is defined as

$\min_{\mathcal{X}} \ \|\mathcal{X}\|_{\mathrm{TNNC}} \quad \text{s.t.} \quad \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{O}),$  (17)

where $\mathcal{O}$ is the observed tensor, $\Omega$ is the index set of the observed entries, and $\mathcal{P}_\Omega$ keeps the entries in $\Omega$ and zeros out the others.
Letting

$\iota_{\mathbb{S}}(\mathcal{X}) = \begin{cases} 0, & \mathcal{X} \in \mathbb{S}, \\ +\infty, & \text{otherwise}, \end{cases}$

where $\mathbb{S} = \{\mathcal{X} : \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{O})\}$, (17) can be rewritten as the following unconstrained problem:

$\min_{\mathcal{X}} \ \|\mathcal{X}\|_{\mathrm{TNNC}} + \iota_{\mathbb{S}}(\mathcal{X}).$  (18)
By introducing an auxiliary variable $\mathcal{Y}$, the augmented Lagrangian function of (18) is

$L(\mathcal{X}, \mathcal{Y}, \mathcal{M}) = \|\mathcal{X}\|_{\mathrm{TNNC}} + \iota_{\mathbb{S}}(\mathcal{Y}) + \langle \mathcal{M}, \mathcal{X} - \mathcal{Y} \rangle + \frac{\beta}{2}\|\mathcal{X} - \mathcal{Y}\|_F^2,$  (19)

where $\mathcal{M}$ is the Lagrangian multiplier and $\beta > 0$ is the balance parameter. According to the framework of ADMM (24; 25; 26), $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{M}$ are iteratively updated as

$\begin{cases} \mathcal{X}^{k+1} = \arg\min_{\mathcal{X}} \ \|\mathcal{X}\|_{\mathrm{TNNC}} + \frac{\beta}{2}\big\|\mathcal{X} - \mathcal{Y}^k + \mathcal{M}^k/\beta\big\|_F^2, \\ \mathcal{Y}^{k+1} = \arg\min_{\mathcal{Y}} \ \iota_{\mathbb{S}}(\mathcal{Y}) + \frac{\beta}{2}\big\|\mathcal{X}^{k+1} - \mathcal{Y} + \mathcal{M}^k/\beta\big\|_F^2, \\ \mathcal{M}^{k+1} = \mathcal{M}^k + \beta(\mathcal{X}^{k+1} - \mathcal{Y}^{k+1}). \end{cases}$  (20)
Now, we give the details for solving each subproblem.
In Step 1, letting $\mathcal{T} = \mathcal{Y}^k - \mathcal{M}^k/\beta$ and $\tau = 1/\beta$, the $\mathcal{X}$-subproblem is solved by the following theorem.
Theorem
Given $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\tau > 0$, a minimizer to

$\min_{\mathcal{X}} \ \tau\|\mathcal{X}\|_{\mathrm{TNNC}} + \frac{1}{2}\|\mathcal{X} - \mathcal{T}\|_F^2$  (22)

is given by the tensor singular value thresholding

$\mathcal{X} = \mathcal{U} *_c \mathcal{S}_\tau *_c \mathcal{V}^T,$  (23)

where $\mathcal{T} = \mathcal{U} *_c \mathcal{S} *_c \mathcal{V}^T$ and $\mathcal{S}_\tau$ is an f-diagonal tensor whose each frontal slice in the discrete cosine domain is $\tilde{\mathbf{S}}_\tau^{(i)} = \max\big(\tilde{\mathbf{S}}^{(i)} - \tau, 0\big)$.
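Concretely, the thresholding acts slice by slice in the DCT domain. A sketch follows (our names; an orthonormal DCT is assumed, under which the Frobenius norm is preserved and (23) is the exact proximal mapping of $\tau \cdot \mathrm{TNNC}$):

```python
# Tensor singular value thresholding (23) in the DCT domain.
import numpy as np
from scipy.fft import dct, idct

def dct_svt(T, tau):
    Tt = dct(T, axis=2, norm='ortho')
    for k in range(T.shape[2]):
        u, s, vh = np.linalg.svd(Tt[:, :, k], full_matrices=False)
        Tt[:, :, k] = (u * np.maximum(s - tau, 0.0)) @ vh  # shrink singular values
    return idct(Tt, axis=2, norm='ortho')
```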
In Step 2, we solve the following problem:

$\mathcal{Y}^{k+1} = \arg\min_{\mathcal{Y}} \ \iota_{\mathbb{S}}(\mathcal{Y}) + \frac{\beta}{2}\big\|\mathcal{X}^{k+1} - \mathcal{Y} + \mathcal{M}^k/\beta\big\|_F^2,$  (24)

which has the closed-form solution

$\mathcal{Y}^{k+1} = \mathcal{P}_\Omega(\mathcal{O}) + \mathcal{P}_{\Omega^c}\big(\mathcal{X}^{k+1} + \mathcal{M}^k/\beta\big),$  (25)

where $\Omega^c$ is the complementary set of the index set $\Omega$.
We summarize the proposed ADMM procedure in Algorithm 1. Every step of the ADMM has an explicit solution, so the proposed method can be implemented efficiently. The convergence of the ADMM for convex functions of separable variables with linear constraints is guaranteed (27; 28).
Algorithm 1 ADMM for solving the proposed model (17).
Input: Observed data $\mathcal{O}$, index set $\Omega$, parameters $\beta$, $\epsilon$, $k_{\max}$.
Initialize: $\mathcal{X}^0 = \mathcal{P}_\Omega(\mathcal{O})$, $\mathcal{Y}^0 = \mathcal{X}^0$, $\mathcal{M}^0 = \mathbf{0}$, $k = 0$.
1: while $k < k_{\max}$ and $\|\mathcal{X}^{k} - \mathcal{X}^{k-1}\|_F / \|\mathcal{X}^{k-1}\|_F > \epsilon$ do
2: $\tilde{\mathcal{T}} = \mathrm{dct}(\mathcal{Y}^k - \mathcal{M}^k/\beta, [\,], 3)$;
3: for $i = 1$ to $n_3$ do
4: $[\mathbf{U}, \mathbf{S}, \mathbf{V}] = \operatorname{svd}\big(\tilde{\mathbf{T}}^{(i)}\big)$;
5: $\tilde{\mathbf{X}}^{(i)} = \mathbf{U} \max(\mathbf{S} - 1/\beta, 0) \mathbf{V}^T$;
6: end for
7: $\mathcal{X}^{k+1} = \mathrm{idct}(\tilde{\mathcal{X}}, [\,], 3)$;
8: $\mathcal{Y}^{k+1} = \mathcal{P}_\Omega(\mathcal{O}) + \mathcal{P}_{\Omega^c}(\mathcal{X}^{k+1} + \mathcal{M}^k/\beta)$;
9: $\mathcal{M}^{k+1} = \mathcal{M}^k + \beta(\mathcal{X}^{k+1} - \mathcal{Y}^{k+1})$;
10: $k = k + 1$;
11: end while
Output: The recovered tensor $\mathcal{X}$.
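Putting the pieces together, an end-to-end sketch of Algorithm 1 follows; the variable names, the toy data, the default $\beta$, and the stopping rule are illustrative choices of ours, not the paper's exact settings.

```python
# An end-to-end sketch of Algorithm 1.
import numpy as np
from scipy.fft import dct, idct

def dct_svt(T, tau):
    """Tensor singular value thresholding (23), repeated here so the
    snippet runs standalone."""
    Tt = dct(T, axis=2, norm='ortho')
    for k in range(T.shape[2]):
        u, s, vh = np.linalg.svd(Tt[:, :, k], full_matrices=False)
        Tt[:, :, k] = (u * np.maximum(s - tau, 0.0)) @ vh
    return idct(Tt, axis=2, norm='ortho')

def tnnc_complete(O, mask, beta=1.0, tol=1e-4, max_iter=500):
    """O: observed tensor (zeros at missing entries); mask: boolean tensor."""
    X = O.copy()
    Y = O.copy()
    M = np.zeros_like(O)
    for _ in range(max_iter):
        X_old = X
        X = dct_svt(Y - M / beta, 1.0 / beta)   # Step 1: TNNC prox (23)
        Y = X + M / beta                        # Step 2: closed form (25)
        Y[mask] = O[mask]                       #   enforce observed entries
        M = M + beta * (X - Y)                  # Step 3: multiplier update
        if np.linalg.norm(X - X_old) <= tol * max(np.linalg.norm(X_old), 1.0):
            break
    return Y

# toy usage: recover a tensor with low DCT-domain multi-rank from 40% of
# its entries
rng = np.random.default_rng(0)
Tt = np.einsum('ijk,jlk->ilk', rng.standard_normal((30, 2, 20)),
               rng.standard_normal((2, 30, 20)))   # slice ranks <= 2
T = idct(Tt, axis=2, norm='ortho')
T /= np.abs(T).max()
mask = rng.random(T.shape) < 0.4
rec = tnnc_complete(np.where(mask, T, 0.0), mask)
print("relative error:", np.linalg.norm(rec - T) / np.linalg.norm(T))
```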
5 Numerical Examples
In this section, all experiments are implemented on Windows 10 and MATLAB (R2017a) with an Intel(R) Core(TM) i7-7700K CPU at 4.20 GHz and 16 GB RAM.
5.1 The Computational Time
Saving time is the most important advantage of the DCT-based tSVD. We illustrate this advantage of the new tSVD by operating on random tensors. We set up four groups of random tensors of different sizes and performed 1000 runs to obtain the average time required. Tab. 2 shows the average time cost of performing the original tSVD and the DCT-based tSVD, and confirms our point that the DCT-based tSVD needs only about half the time of the original tSVD.
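For readers who want to reproduce the flavor of this comparison outside MATLAB, the following rough Python sketch times the dominant steps (transform plus slice-wise SVDs) of both approaches; absolute numbers will differ from Tab. 2, but the roughly factor-of-two gap in the SVD step should be visible.

```python
# A rough timing sketch (Python, not the paper's MATLAB setup).
import time
import numpy as np
from scipy.fft import dct

A = np.random.randn(200, 200, 100)

t0 = time.perf_counter()
Abar = np.fft.fft(A, axis=2)                         # DFT along tubes
for k in range(A.shape[2]):
    np.linalg.svd(Abar[:, :, k], compute_uv=False)   # complex SVDs
t_dft = time.perf_counter() - t0

t0 = time.perf_counter()
At = dct(A, axis=2, norm='ortho')                    # DCT along tubes
for k in range(A.shape[2]):
    np.linalg.svd(At[:, :, k], compute_uv=False)     # real SVDs
t_dct = time.perf_counter() - t0

print(f"DFT-based: {t_dft:.3f} s, DCT-based: {t_dct:.3f} s")
```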
size  100×100×100  100×100×400  200×200×100  400×400×100
FFT  0.0041  0.0175  0.0176  0.0653
SVD after FFT  0.0818  0.3250  0.3641  1.9015
original tSVD  0.0859  0.3425  0.3817  1.9668
DCT  0.0042  0.0150  0.0162  0.0601
SVD after DCT  0.0439  0.1649  0.1978  0.8922
new tSVD  0.0481  0.1799  0.2140  0.9523
5.2 Real Data
We conduct video and multispectral image (MSI) completion experiments and compare TNNC with TNNF (22). In our experiments, the quality of the recovered image is measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) averaged over all bands. The PSNR of a band is defined as

$\mathrm{PSNR} = 10 \log_{10} \frac{n_1 n_2 \mathbf{X}_{\max}^2}{\|\hat{\mathbf{X}} - \mathbf{X}\|_F^2},$

where $\mathbf{X} \in \mathbb{R}^{n_1 \times n_2}$ is the original matrix, $\hat{\mathbf{X}}$ is the recovered matrix, and $\mathbf{X}_{\max}$ is the maximum pixel value of the original matrix $\mathbf{X}$. SSIM measures the similarity between the recovered image and the original image; this indicator reflects the similarity in brightness, contrast, and structure of two images and is defined as

$\mathrm{SSIM} = \frac{(2\mu_{\mathbf{X}}\mu_{\hat{\mathbf{X}}} + c_1)(2\sigma_{\mathbf{X}\hat{\mathbf{X}}} + c_2)}{(\mu_{\mathbf{X}}^2 + \mu_{\hat{\mathbf{X}}}^2 + c_1)(\sigma_{\mathbf{X}}^2 + \sigma_{\hat{\mathbf{X}}}^2 + c_2)},$

where $\mu_{\mathbf{X}}$ and $\mu_{\hat{\mathbf{X}}}$ represent the average values of the original matrix $\mathbf{X}$ and the estimated matrix $\hat{\mathbf{X}}$, respectively, $\sigma_{\mathbf{X}}$ and $\sigma_{\hat{\mathbf{X}}}$ represent the standard deviations of $\mathbf{X}$ and $\hat{\mathbf{X}}$, respectively, $\sigma_{\mathbf{X}\hat{\mathbf{X}}}$ is their covariance, and $c_1$ and $c_2$ are small constants that stabilize the division. For all the following experiments, we set the maximum number of iterations to 500 and use a fixed stopping tolerance $\epsilon$. The algorithm needs only one parameter, $\beta$, whose choice is discussed in the parameter analysis at the end of this section.
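As a reference, a minimal implementation of the PSNR formula above (a sketch; X is the original band and Xhat the recovered band, names of our choosing):

```python
# PSNR of one band, matching the formula above.
import numpy as np

def psnr(X, Xhat):
    mse = np.mean((X - Xhat) ** 2)
    return 10.0 * np.log10(X.max() ** 2 / mse)
```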
Video completion. We test three videos: Akiyo, Suzie, and Salesman (Akiyo and Salesman have the same size). Tab. 3 shows the PSNR, SSIM, and time cost of TNNF and TNNC. TNNC achieves better results and costs much less time than TNNF in all experiments. Fig. 2 shows one selected tube. We can observe that the tube of the video recovered by TNNC is closer to the true tube than that recovered by TNNF, especially near the boundary. Fig. 3 shows the PSNR values of each frame of the videos recovered by TNNF and TNNC. We can observe that, at the tested sampling rates (SRs), the PSNR values of TNNC are higher than those of TNNF, especially for the first and last few frames. This observation is consistent with our interpretation of the BCs. Fig. 4 shows the results recovered by TNNF and TNNC; TNNC is visually better than TNNF.
video  Akiyo  Suzie  Salesman
SR  metric  TNNF  TNNC  TNNF  TNNC  TNNF  TNNC
0.05  PSNR  32.00  32.57  25.50  26.02  30.12  30.22
SSIM  0.934  0.941  0.681  0.700  0.895  0.897
time  156.2  91.9  69.6  40.1  148.5  85.6
(8.8+137.0)  (6.2+70.9)  (4.0+60.6)  (2.9+30.6)  (8.6+128.9)  (6.0+65.3)
0.1  PSNR  34.20  34.75  27.73  27.93  32.13  32.29
SSIM  0.958  0.963  0.759  0.766  0.928  0.931
time  141.8  86.3  64.5  39.3  139.5  84.9
(8.1+122.9)  (5.8+66.6)  (3.8+55.2)  (2.8+30.2)  (8.3+120.3)  (5.8+64.9)
0.2  PSNR  37.44  38.11  30.29  30.51  35.01  35.20
SSIM  0.979  0.983  0.838  0.844  0.960  0.961
time  145.2  79.8  62.5  37.2  135.1  81.3
(8.1+125.6)  (5.4+60.3)  (3.6+53.3)  (2.8+28.6)  (8.1+116.3)  (5.5+61.6)
MSI completion. For the MSI data, we add the spectral angle mapper (SAM) and the erreur relative globale adimensionnelle de synthèse (ERGAS), which are common quality metrics for MSI data. SAM calculates the angle in spectral space between a pixel of the recovered tensor and the corresponding reference spectrum, measuring spectral similarity. ERGAS measures the fidelity of the recovered tensor based on the weighted sum of the mean squared errors (MSEs) of all bands. The lower the values of these two indicators, the better the result. The size of the MSI data from the CAVE database is $512 \times 512 \times 31$, with wavelengths in the range of 400–700 nm at an interval of 10 nm. We display one selected tube in Fig. 5. We can observe that the tube of the tensor recovered by TNNC is closer to the true tube than that recovered by TNNF, especially near the boundary. Moreover, we plot the PSNR values of the tensors recovered by TNNC and TNNF in Fig. 6. In general, we can observe that the PSNR values of TNNC are higher than those of TNNF, especially for the first and last few bands. These observations verify that TNNC can produce more natural results than TNNF, since more reasonable BCs are implied in TNNC. In Fig. 7, we show the first band of the testing data recovered by the two methods. Obviously, TNNC achieves better visual results than TNNF. Tabs. 4–5 give more detailed data for the other testing images. We can see that TNNC not only has a better performance in PSNR, SSIM, SAM, and ERGAS, but also significantly reduces the time cost compared to TNNF.
MSI  Pompoms  Stuffed toys
SR  metric  TNNF  TNNC  TNNF  TNNC
0.05  PSNR  26.56  29.00  28.44  31.84
SSIM  0.818  0.876  0.892  0.941
SAM  0.22  0.16  0.30  0.22
ERGAS  10.28  8.00  9.80  6.74
time  309.4  161.0  320.6  183.4
(11.0+285.7)  (8.9+135.3)  (11.4+296.0)  (10.3+153.4)
0.1  PSNR  31.26  33.98  33.37  36.63
SSIM  0.922  0.952  0.955  0.978
SAM  0.13  0.09  0.19  0.14
ERGAS  5.96  4.52  5.53  3.84
time  271.7  171.1  320.2  164.5
(9.6+251.5)  (9.6+143.9)  (11.2+295.8)  (9.2+138.1)
0.2  PSNR  37.13  39.55  39.14  41.94
SSIM  0.976  0.986  0.986  0.994
SAM  0.07  0.05  0.11  0.09
ERGAS  3.04  2.39  2.82  2.06
time  308.1  184.0  278.9  165.8
(10.9+284.4)  (10.2+154.2)  (10.2+256.4)  (9.2+138.7)
MSI  Foods  Peppers
SR  metric  TNNF  TNNC  TNNF  TNNC
0.05  PSNR  31.48  33.33  34.89  36.87
SSIM  0.904  0.932  0.946  0.965
SAM  0.27  0.21  0.21  0.15
ERGAS  9.52  8.01  6.31  5.21
time  281.0  164.8  284.9  155.0
(10.3+258.7)  (9.2+137.9)  (10.4+255.2)  (8.8+128.7)
0.1  PSNR  35.31  37.73  39.25  41.27
SSIM  0.957  0.974  0.980  0.989
SAM  0.18  0.13  0.13  0.09
ERGAS  6.14  4.91  3.86  3.18
time  291.4  167.7  278.3  146.8
(10.7+267.9)  (9.4+140.2)  (10.0+256.6)  (8.6+124.9)
0.2  PSNR  43.13  40.30  44.30  46.22
SSIM  0.993  0.986  0.995  0.997
SAM  0.11  0.08  0.07  0.05
ERGAS  3.49  2.68  2.19  1.82
time  289.7  164.0  286.2  153.6
(10.6+266.7)  (9.3+137.4)  (10.4+264.2)  (9.0+138.5)
Parameter analysis. We analyze the robustness of TNNC with respect to its parameter using the MSI data Stuffed toys. TNNC requires only one parameter, $\beta$. As shown in Fig. 8, different values of $\beta$ lead to nearly the same PSNR value, but $\beta$ affects the convergence speed. After testing, we choose a fixed $\beta$ for all experiments.
6 Concluding Remarks
We have introduced the DCT as an alternative to the DFT in the framework of the tSVD. Based on the resulting tSVD, the DCT-based tensor nuclear norm (TNNC) is suggested for the low-rank tensor completion problem. We have developed an efficient alternating direction method of multipliers (ADMM) to tackle the corresponding model. Numerical experiments are reported to demonstrate the superiority of the DCT-based tSVD. In future research, tensor singular value decompositions based on other transforms can be considered and studied; we expect that they can deal with data tensors from specific applications.
Acknowledgment
This research is supported by NSFC (61772003), the Fundamental Research Funds for the Central Universities (ZYGX2016J132), HKRGC GRF 1202715, 12306616, and 12200317, and HKBU RC-ICRS/16-17/03.
References
 (1) M. Bertalmio, G. Sapiro, V. Caselles, C. Ballester, Image inpainting, Proceedings of the International Conference on Computer Graphics and Interactive Techniques (2000) 417–424.
 (2) N. Komodakis, Image completion using global optimization, Proceedings of Computer Vision and Pattern Recognition (2006) 442–452.
 (3) J. Liu, P. Musialski, P. Wonka, J. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (1) (2013) 208–220.
 (4) T. Korah, C. Rasmussen, Spatiotemporal inpainting for recovering texture maps of occluded building facades, IEEE Transactions on Image Processing 16 (9) (2007) 2262–2271.
 (5) S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, T. Q. Nguyen, An augmented Lagrangian method for total variation video restoration, IEEE Transactions on Image Processing 20 (11) (2011) 3097–3111.
 (6) T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, Y. Wang, A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors, Proceedings of Computer Vision and Pattern Recognition (2017) 2818–2827.
 (7) F. Li, M. K. Ng, R. J. Plemmons, Coupled segmentation and denoising/deblurring models for hyperspectral material identification, Numerical Linear Algebra with Applications 19 (1) (2012) 153–173.
 (8) X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, R. J. Plemmons, Deblurring and sparse unmixing for hyperspectral images, IEEE Transactions on Geoscience and Remote Sensing 51 (7) (2013) 4045–4058.
 (9) N. Li, B.-X. Li, Tensor completion for on-board compression of hyperspectral images, Proceedings of the IEEE International Conference on Image Processing (2010) 517–520.
 (10) Z.-M. Xing, M.-Y. Zhou, A. Castrodad, G. Sapiro, L. Carin, Dictionary learning for noisy and incomplete hyperspectral images, SIAM Journal on Imaging Sciences 5 (1) (2012) 33–56.
 (11) J.-T. Sun, H.-J. Zeng, H. Liu, Y.-C. Lu, Z. Chen, CubeSVD: a novel approach to personalized web search, Proceedings of the International World Wide Web Conference (2005) 382–390.
 (12) T. G. Kolda, B. W. Bader, J. P. Kenny, Higher-order web link analysis using multilinear algebra, Proceedings of the IEEE International Conference on Data Mining (2005) 242–249.
 (13) N. Varghees, M. Manikandan, R. G. John, Adaptive MRI image denoising using total-variation and local noise estimation, Proceedings of the IEEE International Conference on Advances in Engineering, Science and Management (2012) 506–511.
 (14) N. Kreimer, M. D. Sacchi, A tensor higher-order singular value decomposition for prestack seismic data noise reduction and interpolation, Geophysics 77 (3) (2012) 113–122.
 (15) R. A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis, UCLA Working Papers in Phonetics (1970).
 (16) L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31 (3) (1966) 279–311.
 (17) M. E. Kilmer, C. D. M. Martin, Factorization strategies for third-order tensors, Linear Algebra and its Applications 435 (3) (2011) 641–658.
 (18) C. D. Martin, R. Shafer, B. LaRue, An order-$p$ tensor factorization with applications in imaging, SIAM Journal on Scientific Computing 35 (2013) 474–490.
 (19) M. E. Kilmer, K. S. Braman, N. Hao, R. C. Hoover, Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging, SIAM Journal on Matrix Analysis and Applications 34 (1) (2013) 148–172.
 (20) M. K. Ng, R. H. Chan, W. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM Journal on Scientific Computing 21 (3) (1999) 851–866.
 (21) Z.-M. Zhang, G. Ely, S. Aeron, N. Hao, M. E. Kilmer, Novel methods for multilinear data completion and de-noising based on tensor-SVD, Proceedings of Computer Vision and Pattern Recognition (2014) 3842–3849.
 (22) C.-Y. Lu, J.-S. Feng, Y.-D. Chen, W. Liu, Z.-C. Lin, S.-C. Yan, Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization, Proceedings of Computer Vision and Pattern Recognition (2016) 5249–5257.
 (23) O. Semerci, N. Hao, M. E. Kilmer, E. L. Miller, Tensor-based formulation and nuclear norm regularization for multienergy computed tomography, IEEE Transactions on Image Processing 23 (4) (2014) 1678–1693.
 (24) S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning 3 (1) (2011) 1–122.
 (25) Z.-C. Lin, M.-M. Chen, Y. Ma, L.-Q. Wu, The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices, arXiv preprint arXiv:1009.5055 (2010).
 (26) B.-S. He, M. Tao, X.-M. Yuan, Alternating direction method with Gaussian back substitution for separable convex programming, SIAM Journal on Optimization 22 (2) (2012) 313–340.
 (27) M. V. Afonso, J. M. Bioucas-Dias, M. A. T. Figueiredo, An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems, IEEE Transactions on Image Processing 20 (3) (2011) 681–695.
 (28) D.-R. Han, X.-M. Yuan, A note on the alternating direction method of multipliers, Journal of Optimization Theory and Applications 155 (1) (2012) 227–238.