1 Introduction
†This work was supported in part by JSPS Grant-in-Aid for Scientific Research on Innovative Areas (Multidisciplinary Computational Anatomy): JSPS KAKENHI Grant Numbers 26108003 and 15K16067.

Matrix/tensor completion is a technique for recovering the missing elements in incomplete data, and it has become a very important method in recent years [2, 3, 1, 9, 16, 21, 4, 31, 30]. In general, completion is an ill-posed problem without any assumptions. However, if we have useful prior knowledge or assumptions regarding the data structure, completion can be treated as a well-posed optimization problem, such as a convex optimization problem. The assumption regarding the structure is also referred to as a “model.”
The methods for modeling matrices/tensors can be categorized into two classes. In the first class, the methods directly represent the data with the matrices/tensors themselves, and some structures of the matrices/tensors are assumed, such as low-rank [2, 3, 1, 9, 16] and smooth [32, 11] properties.
By contrast, the methods in the second class “embed” the data into a high-dimensional feature space, and it is assumed that the data can be represented by a low-rank structure or a smooth manifold in the embedded space [17, 25, 6, 18] (see Figure 1). Typically, a time series is represented by a “Hankel matrix” (see Section 2.1.1), and its low-rank property has been employed widely for modeling a linear time-invariant system of signals [25, 18]. For example, Li et al. [15] proposed a method for modeling damped sinusoidal signals based on a low-rank Hankel approximation. Ding et al. [6] proposed the use of rank minimization of a Hankel matrix for the video inpainting problem by assuming an autoregressive moving average model. Figure 2 shows an example of occlusion recovery for a noisy time series, which indicates that the total variation (TV) and quadratic variation (QV) regularization methods reconstruct a flat estimator, whereas low-rank modeling of the Hankel matrix (our proposed method) successfully reconstructs the signal.
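As a small illustration of why Hankelization helps, the following Python/NumPy sketch (our own example, not the authors' code; the helper name `hankelize` is ours) builds the Hankel matrix of a noise-free sinusoid and confirms that it has rank 2, which is exactly the kind of hidden low-rank structure that Hankel-based models exploit.

```python
import numpy as np

def hankelize(v, tau):
    """Standard delay embedding: stack the length-tau windows of v as columns."""
    L = len(v)
    return np.column_stack([v[i:i + tau] for i in range(L - tau + 1)])

# A noise-free sampled sinusoid: every length-tau window is a linear
# combination of a shifted sine and cosine, so the Hankel matrix has rank 2.
t = np.arange(100)
v = np.sin(0.3 * t)
H = hankelize(v, 16)             # 16 x 85 Hankel matrix
print(H.shape)                   # (16, 85)
print(np.linalg.matrix_rank(H))  # 2
```

A low-rank constraint on this matrix therefore encodes the sinusoidal model, which neither TV nor QV regularization can express.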
In the proposed method, the incomplete input data are not represented as a Hankel matrix, but instead as a “higher-order Hankel tensor” via multi-way embedding with delay/shift along the time/space axes, and we solve the low-rank tensor completion problem in the embedded space. The minimization of the rank of a matrix/tensor is NP-hard [10], and the problem is often relaxed to nuclear-norm minimization [20]. A disadvantage of the relaxation to nuclear-norm minimization is that it decreases not only the rank of the resultant matrix/tensor but also the total component values in the matrix/tensor. In particular, nuclear-norm minimization often produces “dark” signals in denoising tasks. Thus, we employ Tucker decomposition for low-rank modeling in the higher-order Hankel tensor completion problem.
Tucker-based tensor completion is a non-convex optimization problem, and the existing methods usually have difficulty in selecting the step-size parameter. In this study, we propose an auxiliary-function-based approach, which improves the convergence characteristics of the optimization process. Moreover, we propose a rank increment scheme for determining the appropriate multilinear tensor ranks. According to our extensive experiments, the proposed method is highly suitable for tensor completion (e.g., recovery of “Lena” from only 1% of randomly sampled voxels), and it outperforms state-of-the-art tensor completion methods.
1.1 Notations
A vector is denoted by a bold lowercase letter $\mathbf{v} \in \mathbb{R}^{I}$. A matrix is denoted by a bold uppercase letter $\mathbf{U} \in \mathbb{R}^{I \times J}$. A higher-order ($N \geq 3$) tensor is denoted by a bold calligraphic letter $\boldsymbol{\mathcal{X}} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$. The $i$th entry of a vector $\mathbf{v}$ is denoted by $v_i$, and the $(i,j)$th entry of a matrix $\mathbf{U}$ is denoted by $u_{ij}$. The $(i_1, \dots, i_N)$th entry of an $N$th-order tensor $\boldsymbol{\mathcal{X}}$ is denoted by $x_{i_1 \cdots i_N}$, where $i_n \in \{1, \dots, I_n\}$ and $n \in \{1, \dots, N\}$. The Frobenius norm of an $N$th-order tensor is defined by $\|\boldsymbol{\mathcal{X}}\|_F := \sqrt{\sum_{i_1, \dots, i_N} x_{i_1 \cdots i_N}^2}$. A mode-$n$ unfolding (matricization) of a tensor $\boldsymbol{\mathcal{X}}$ is denoted by $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times \prod_{k \neq n} I_k}$. A mode-$n$ multiplication between a tensor $\boldsymbol{\mathcal{X}}$ and a matrix $\mathbf{U} \in \mathbb{R}^{R \times I_n}$ is denoted by $\boldsymbol{\mathcal{Y}} = \boldsymbol{\mathcal{X}} \times_n \mathbf{U}$, where the entries are given by $y_{i_1 \cdots i_{n-1} r i_{n+1} \cdots i_N} = \sum_{i_n} x_{i_1 \cdots i_N} u_{r i_n}$, and we have $\mathbf{Y}_{(n)} = \mathbf{U}\mathbf{X}_{(n)}$.
If we consider matrices $\mathbf{U}^{(n)} \in \mathbb{R}^{I_n \times R_n}$ for $n \in \{1, \dots, N\}$ and an $N$th-order tensor $\boldsymbol{\mathcal{G}} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$, then the multilinear tensor product is defined as

$[[\boldsymbol{\mathcal{G}}; \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(N)}]] := \boldsymbol{\mathcal{G}} \times_1 \mathbf{U}^{(1)} \times_2 \cdots \times_N \mathbf{U}^{(N)}.$ (1)
Moreover, a multilinear tensor product excluding the $n$th mode is defined as

$[[\boldsymbol{\mathcal{G}}; \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(N)}]]_{-n} := \boldsymbol{\mathcal{G}} \times_1 \mathbf{U}^{(1)} \cdots \times_{n-1} \mathbf{U}^{(n-1)} \times_{n+1} \mathbf{U}^{(n+1)} \cdots \times_N \mathbf{U}^{(N)}.$ (2)
When we consider Tucker decomposition, $\boldsymbol{\mathcal{G}}$ and $\{\mathbf{U}^{(n)}\}_{n=1}^{N}$ in Eq. (1) are referred to as the core tensor and the factor matrices, respectively.
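For concreteness, the mode-$n$ product and the multilinear product in Eq. (1) can be sketched in Python/NumPy as follows (our own illustrative helpers, not the authors' code):

```python
import numpy as np

def mode_n_product(G, U, n):
    """Mode-n product G x_n U: contract mode n of G with the columns of U."""
    Gn = np.moveaxis(G, n, 0)                     # bring mode n to the front
    out = np.tensordot(U, Gn, axes=([1], [0]))    # multiply along that mode
    return np.moveaxis(out, 0, n)                 # move the new mode back

def multilinear_product(G, Us):
    """[[G; U^(1), ..., U^(N)]] = G x_1 U^(1) x_2 ... x_N U^(N)."""
    X = G
    for n, U in enumerate(Us):
        X = mode_n_product(X, U, n)
    return X

G = np.random.randn(2, 3, 4)                      # core tensor
Us = [np.random.randn(5, 2), np.random.randn(6, 3), np.random.randn(7, 4)]
X = multilinear_product(G, Us)
print(X.shape)  # (5, 6, 7)
```

Each factor matrix expands one mode of the core from $R_n$ to $I_n$, which is exactly the Tucker model used throughout the paper.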
2 Proposed method
In this study, we assume a low-rank structure of a higher-order Hankel tensor given by the multi-way delay embedding transform (MDT), which is defined in Section 2.1 and denoted by $H_{\boldsymbol{\tau}}$. The proposed method is conceptually simple: it comprises three steps, (1) MDT, (2) low-rank tensor approximation, and (3) inverse MDT.
Let $\boldsymbol{\mathcal{T}} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and $\boldsymbol{\mathcal{Q}} \in \{0, 1\}^{I_1 \times \cdots \times I_N}$ be the input incomplete tensor and its mask tensor, respectively; the first step is given by

$\boldsymbol{\mathcal{X}}_H = H_{\boldsymbol{\tau}}(\boldsymbol{\mathcal{T}}),$ (3)
$\boldsymbol{\mathcal{Q}}_H = H_{\boldsymbol{\tau}}(\boldsymbol{\mathcal{Q}}),$ (4)

where $\boldsymbol{\mathcal{X}}_H, \boldsymbol{\mathcal{Q}}_H \in \mathbb{R}^{\tau_1 \times (I_1 - \tau_1 + 1) \times \cdots \times \tau_N \times (I_N - \tau_N + 1)}$.
In the second step, we obtain a low-rank approximation of $\boldsymbol{\mathcal{X}}_H$ based on the Tucker decomposition model. For example, we first consider the following optimization problem:

$\min_{\boldsymbol{\mathcal{G}}, \{\mathbf{U}^{(n)}\}} \|\boldsymbol{\mathcal{Q}}_H \circledast (\boldsymbol{\mathcal{X}}_H - [[\boldsymbol{\mathcal{G}}; \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(2N)}]])\|_F^2$ (5)
s.t. $\boldsymbol{\mathcal{G}} \in \mathbb{R}^{R_1 \times \cdots \times R_{2N}}, \ \mathbf{U}^{(n)} \in \mathbb{R}^{J_n \times R_n},$

where $\circledast$ denotes the element-wise (Hadamard) product and $J_1 \times \cdots \times J_{2N}$ is the size of $\boldsymbol{\mathcal{X}}_H$.
Finally, the resultant tensor can be obtained by the inverse MDT of the Tucker decomposition:

$\hat{\boldsymbol{\mathcal{T}}} = H_{\boldsymbol{\tau}}^{-1}([[\boldsymbol{\mathcal{G}}; \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(2N)}]]).$ (6)
2.1 MDT
2.1.1 Standard delay embedding transform
In this section, we explain the delay embedding operation. For simplicity, we first define a standard delay embedding transform for a vector, which can be interpreted as a time-series signal. Let us consider a vector $\mathbf{v} = (v_1, v_2, \dots, v_L)^\top \in \mathbb{R}^L$; a standard delay embedding transform of $\mathbf{v}$ with $\tau$ is given by

$H_\tau(\mathbf{v}) := \begin{bmatrix} v_1 & v_2 & \cdots & v_{L-\tau+1} \\ v_2 & v_3 & \cdots & v_{L-\tau+2} \\ \vdots & \vdots & & \vdots \\ v_\tau & v_{\tau+1} & \cdots & v_L \end{bmatrix} \in \mathbb{R}^{\tau \times (L-\tau+1)}.$ (7)
Thus, a standard delay embedding transform produces a duplicated matrix from a vector; this is also referred to as “Hankelization” since $H_\tau(\mathbf{v})$ is a Hankel matrix. If $\mathbf{S} \in \{0, 1\}^{\tau(L-\tau+1) \times L}$ is a duplication matrix that satisfies

$\mathbf{S}\mathbf{v} = (v_1, \dots, v_\tau, v_2, \dots, v_{\tau+1}, \dots, v_{L-\tau+1}, \dots, v_L)^\top,$ (8)

then the standard delay embedding transform can be obtained by

$H_\tau(\mathbf{v}) = \mathrm{fold}_{(\tau, L-\tau+1)}(\mathbf{S}\mathbf{v}),$ (9)

where $\mathrm{fold}_{(\tau, L-\tau+1)} : \mathbb{R}^{\tau(L-\tau+1)} \to \mathbb{R}^{\tau \times (L-\tau+1)}$ is a folding operator from a vector to a matrix.
Next, we consider an inverse transform of the standard delay embedding. The forward transform can be decomposed into duplication and folding, so the inverse transform can also be decomposed into the corresponding individual inverse transforms: a vectorization operation $\mathrm{vec} = \mathrm{fold}_{(\tau, L-\tau+1)}^{-1}$ and the Moore–Penrose pseudoinverse $\mathbf{S}^\dagger = (\mathbf{S}^\top \mathbf{S})^{-1} \mathbf{S}^\top$. Thus, the inverse delay embedding transform for a Hankel matrix $\mathbf{V}_H$ can be given by

$H_\tau^{-1}(\mathbf{V}_H) = \mathbf{S}^\dagger \mathrm{vec}(\mathbf{V}_H).$ (10)
Figure 3 shows an example of the delay embedding transform for a vector, the duplication matrix, and the inverse transform. We can see that the duplication matrix comprises multiple identity matrices. It should be noted that the diagonal elements of $\mathbf{S}^\top \mathbf{S}$ are the numbers of duplications of the individual elements, which are usually $\tau$ but lower for marginal elements.
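The duplication/folding view of Eqs. (8)-(10) can be verified with a short Python/NumPy sketch (our own example; the helper name `duplication_matrix` is ours):

```python
import numpy as np

def duplication_matrix(L, tau):
    """Binary S such that S @ v concatenates the length-tau windows of v."""
    K = L - tau + 1                          # number of windows
    S = np.zeros((tau * K, L))
    for k in range(K):
        S[k * tau:(k + 1) * tau, k:k + tau] = np.eye(tau)
    return S

L, tau = 8, 3
S = duplication_matrix(L, tau)
v = np.arange(1.0, L + 1)

# Forward transform: duplicate, then fold into a tau x (L - tau + 1) matrix.
H = (S @ v).reshape(L - tau + 1, tau).T      # Hankel matrix of v
# Inverse transform: vectorize column-wise, then apply the pseudoinverse of S.
v_rec = np.linalg.pinv(S) @ H.T.reshape(-1)
print(np.allclose(v, v_rec))                 # True

# The diagonal of S^T S counts how often each element is duplicated:
# tau for interior elements, fewer at the margins.
print(np.diag(S.T @ S))                      # [1. 2. 3. 3. 3. 3. 2. 1.]
```

The pseudoinverse averages the duplicated copies of each element, which is why the inverse transform is well defined for any (not necessarily Hankel) matrix of the embedded size.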
2.1.2 Tensor extension
We now define the MDT for an $N$th-order tensor $\boldsymbol{\mathcal{T}} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$. The MDT of $\boldsymbol{\mathcal{T}}$ with $\boldsymbol{\tau} = (\tau_1, \dots, \tau_N)$ is defined by

$H_{\boldsymbol{\tau}}(\boldsymbol{\mathcal{T}}) := \mathrm{Fold}_{(\mathbf{I}, \boldsymbol{\tau})}(\boldsymbol{\mathcal{T}} \times_1 \mathbf{S}_1 \times_2 \cdots \times_N \mathbf{S}_N),$ (11)

where $\mathrm{Fold}_{(\mathbf{I}, \boldsymbol{\tau})}$ constructs a $2N$th-order tensor from the input $N$th-order tensor. In a similar manner to how the vector delay embedding is a combination of linear duplication and folding operations, the MDT is a combination of multilinear duplication and multi-way folding operations. Figure 4 shows flowcharts that illustrate single-way and multi-way delay embedding for a matrix. Finally, the inverse MDT for a Hankel tensor $\boldsymbol{\mathcal{T}}_H$ is given by

$H_{\boldsymbol{\tau}}^{-1}(\boldsymbol{\mathcal{T}}_H) = \mathrm{Fold}_{(\mathbf{I}, \boldsymbol{\tau})}^{-1}(\boldsymbol{\mathcal{T}}_H) \times_1 \mathbf{S}_1^\dagger \times_2 \cdots \times_N \mathbf{S}_N^\dagger,$ (12)

where $\mathbf{S}_n^\dagger = (\mathbf{S}_n^\top \mathbf{S}_n)^{-1} \mathbf{S}_n^\top$.
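The duplicate-then-fold construction of the MDT can be sketched for a matrix (second-order tensor) as follows. This is our own illustrative Python/NumPy implementation, not the released code; the resulting fourth-order tensor satisfies `XH[i1, k1, i2, k2] == X[k1 + i1, k2 + i2]`:

```python
import numpy as np

def duplication_matrix(L, tau):
    K = L - tau + 1
    S = np.zeros((tau * K, L))
    for k in range(K):
        S[k * tau:(k + 1) * tau, k:k + tau] = np.eye(tau)
    return S

def mdt(X, taus):
    """Multi-way delay embedding: duplicate along every mode via S_n, then
    fold each duplicated axis of length tau_n * K_n into a (tau_n, K_n) pair."""
    T = np.asarray(X, dtype=float)
    Ks = []
    for n, (L, tau) in enumerate(zip(X.shape, taus)):
        S = duplication_matrix(L, tau)
        T = np.moveaxis(np.tensordot(S, np.moveaxis(T, n, 0), axes=([1], [0])), 0, n)
        Ks.append(L - tau + 1)
    # Row index k * tau + i of S v factors as (window k, offset i) in C order,
    # so reshape each axis to a (K_n, tau_n) pair and swap to (tau_n, K_n).
    T = T.reshape([d for K, tau in zip(Ks, taus) for d in (K, tau)])
    order = [i for p in range(len(taus)) for i in (2 * p + 1, 2 * p)]
    return T.transpose(order)

X = np.random.randn(10, 12)
XH = mdt(X, (3, 4))
print(XH.shape)  # (3, 8, 4, 9)
```

Every pair of modes of the result consists of one "window" mode of size $\tau_n$ and one "shift" mode of size $I_n - \tau_n + 1$, matching Eq. (11).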
2.2 Tucker decomposition algorithms
In this section, we explain the algorithm for solving Problem (5). It should be noted that Problem (5) is not convex, its solution is not unique, and it is not easy to obtain its global solution [13]. In the case of Tucker decomposition without missing elements, it is known that the alternating least squares (ALS) algorithm [5] can efficiently obtain a stationary point. For the case with missing elements, algorithms based on gradient descent [8] and manifold optimization [14, 12] have been proposed in recent years. Gradient descent is usually slow to converge, and manifold optimization can accelerate it by correcting the update direction on the manifold. However, a common issue with both methods is step-size parameter selection, because the convergence time is sensitive to the step-size parameter.
Instead, we propose an “auxiliary function”-based approach for performing Tucker decomposition with missing elements. The proposed algorithm is very simple but efficient, because the ALS can be incorporated and there are no parameters to adjust. First, we define the original cost function and the auxiliary function by

$f(\theta) := \|\boldsymbol{\mathcal{Q}}_H \circledast (\boldsymbol{\mathcal{X}}_H - \boldsymbol{\mathcal{X}}(\theta))\|_F^2,$ (13)
$h(\theta \mid \bar{\theta}) := f(\theta) + \|\bar{\boldsymbol{\mathcal{Q}}}_H \circledast (\boldsymbol{\mathcal{X}}(\bar{\theta}) - \boldsymbol{\mathcal{X}}(\theta))\|_F^2,$ (14)

where $\theta = \{\boldsymbol{\mathcal{G}}, \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(2N)}\}$ is a set of parameters, $\bar{\boldsymbol{\mathcal{Q}}}_H$ represents the complement of $\boldsymbol{\mathcal{Q}}_H$, and $\boldsymbol{\mathcal{X}}(\theta) = [[\boldsymbol{\mathcal{G}}; \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(2N)}]]$ is a Tucker decomposition. Clearly, we have

$f(\theta) \leq h(\theta \mid \bar{\theta}) \ \ \forall \bar{\theta}, \quad \text{and} \quad f(\theta) = h(\theta \mid \theta).$ (15)
Let us consider the following algorithm:

$\theta^{t+1} = \operatorname{argmin}_{\theta} h(\theta \mid \theta^t),$ (16)

where the cost function is monotonically non-increasing since we have

$f(\theta^{t+1}) \leq h(\theta^{t+1} \mid \theta^t) \leq h(\theta^t \mid \theta^t) = f(\theta^t).$ (17)

It should be noted that $\theta^{t+1}$ only has to satisfy $h(\theta^{t+1} \mid \theta^t) \leq h(\theta^t \mid \theta^t)$ for the non-increasing property to hold. Furthermore, the auxiliary function can be transformed into

$h(\theta \mid \bar{\theta}) = \|\boldsymbol{\mathcal{Z}} - \boldsymbol{\mathcal{X}}(\theta)\|_F^2,$ (18)

where $\boldsymbol{\mathcal{Z}} := \boldsymbol{\mathcal{Q}}_H \circledast \boldsymbol{\mathcal{X}}_H + \bar{\boldsymbol{\mathcal{Q}}}_H \circledast \boldsymbol{\mathcal{X}}(\bar{\theta})$. Thus, the minimization of the auxiliary function itself can be regarded as a standard Tucker decomposition without missing elements, which can be solved efficiently using the ALS.
In practice, the proposed algorithm comprises the following two steps: (1) calculate the auxiliary tensor by

$\boldsymbol{\mathcal{Z}}^t = \boldsymbol{\mathcal{Q}}_H \circledast \boldsymbol{\mathcal{X}}_H + \bar{\boldsymbol{\mathcal{Q}}}_H \circledast \boldsymbol{\mathcal{X}}(\theta^t),$ (19)

and (2) update $\boldsymbol{\mathcal{G}}$ and $\{\mathbf{U}^{(n)}\}_{n=1}^{2N}$ using the ALS [5] to optimize

$\min_{\boldsymbol{\mathcal{G}}, \{\mathbf{U}^{(n)}\}} \|\boldsymbol{\mathcal{Z}}^t - [[\boldsymbol{\mathcal{G}}; \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(2N)}]]\|_F^2 \quad \text{s.t.} \quad \mathbf{U}^{(n)\top}\mathbf{U}^{(n)} = \mathbf{I} \ \ \forall n.$ (20)
The orthogonality constraint on each $\mathbf{U}^{(n)}$ supports the uniqueness of the solution of the Tucker decomposition, and it does not change the reconstructed tensor from that of the original optimization problem. Finally, the proposed Tucker-based tensor completion is summarized in Algorithm 1. Algorithm 1 updates $\boldsymbol{\mathcal{G}}$ and $\{\mathbf{U}^{(n)}\}$ for only one cycle of the ALS, so it does not achieve a strict minimization of the auxiliary function. However, it is guaranteed to decrease the auxiliary function, because each update obtains the global minimum of the sub-optimization problem of (20) with respect to the corresponding parameter. Thus, Algorithm 1 still has the monotonic convergence property.
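To make the two-step iteration concrete, here is a minimal Python/NumPy sketch (our own, not the authors' MATLAB implementation). One hedge: instead of an ALS sweep, it uses a single truncated-HOSVD pass to update the orthogonal factors and the core; this stand-in does not inherit the monotonicity guarantee of the ALS-based Algorithm 1, but it illustrates the fill-then-decompose structure of the iteration:

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: mode n becomes the rows."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def modeprod(T, U, n):
    """Mode-n product T x_n U."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, n, 0), axes=([1], [0])), 0, n)

def tucker_complete(X, Q, ranks, n_iter=200):
    """Step (1): fill missing entries with the current model (auxiliary tensor).
    Step (2): one truncated-HOSVD pass as the low-rank update (ALS stand-in)."""
    Xhat = np.where(Q, X, X[Q].mean())                    # crude initialization
    for _ in range(n_iter):
        Z = np.where(Q, X, Xhat)                          # auxiliary tensor, Eq. (19)
        Us = [np.linalg.svd(unfold(Z, n), full_matrices=False)[0][:, :R]
              for n, R in enumerate(ranks)]               # orthogonal factors
        G = Z
        for n, U in enumerate(Us):
            G = modeprod(G, U.T, n)                       # core: G = Z x_n U^T
        Xhat = G
        for n, U in enumerate(Us):
            Xhat = modeprod(Xhat, U, n)                   # low-rank reconstruction
    return Xhat

# Demo: complete a multilinear rank-(2, 2, 2) tensor from 70% of its entries.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2))
for n in range(3):
    T = modeprod(T, rng.standard_normal((10, 2)), n)
Q = rng.random(T.shape) < 0.7
Xhat = tucker_complete(np.where(Q, T, 0.0), Q, (2, 2, 2))
err = np.linalg.norm(~Q * (T - Xhat)) / np.linalg.norm(~Q * T)
print(err)
```

On the missing entries, the relative error becomes small because the exact multilinear rank is supplied; the rank increment scheme of Section 2.3 addresses the case where it is unknown.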
2.3 Tucker decomposition with rank increment
A difficult and important issue with the Tucker-based tensor completion method is determining an appropriate rank setting $(R_1, \dots, R_{2N})$. If we aim to obtain the lowest rank setting that gives a sufficient approximation, the rank estimation problem can be considered as

$\min_{(R_1, \dots, R_{2N})} \sum_{n=1}^{2N} R_n \quad \text{s.t.} \quad \|\boldsymbol{\mathcal{Q}}_H \circledast (\boldsymbol{\mathcal{X}}_H - \hat{\boldsymbol{\mathcal{X}}}_H)\|_F^2 \leq \epsilon,$ (21)
where $\epsilon$ is a noise threshold parameter. However, we do not know whether Problem (21) has a unique solution, and the solution will depend on $\epsilon$. Furthermore, even if the best rank setting is unique, the resultant low-rank tensor is not unique. To address this issue, we propose the use of a very important strategy called the “rank increment” method. Figure 5 provides a flowchart that illustrates the concept of Tucker decomposition with and without rank increment. The rank increment strategy has been discussed in several studies of matrix and tensor completion [19, 7, 22, 24, 27, 30], but the present study is its first application to Tucker-based completion, to the best of our knowledge. The main reason for using the rank increment method is the non-uniqueness of the solution for the low-rank tensor. Thus, the resultant tensor depends on its initialization. The main feature of the rank increment method is that the tensor should be initialized by a lower-rank approximation than its target rank. Based on this strategy, the proposed algorithm can be described as follows.

Step 1: Set small initial ranks $R_n$ (e.g., $R_n = 1$) for all $n$.

Step 2: Obtain $\boldsymbol{\mathcal{G}}$ and $\{\mathbf{U}^{(n)}\}$ with the current ranks $(R_1, \dots, R_{2N})$ using Algorithm 1, and obtain $\hat{\boldsymbol{\mathcal{X}}}_H = [[\boldsymbol{\mathcal{G}}; \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(2N)}]]$.

Step 3: Check the noise condition $\|\boldsymbol{\mathcal{Q}}_H \circledast (\boldsymbol{\mathcal{X}}_H - \hat{\boldsymbol{\mathcal{X}}}_H)\|_F^2 \leq \epsilon$; terminate the algorithm if it is satisfied; otherwise, go to the next step.

Step 4: Choose the incremental mode $n^\ast$ and increment $R_{n^\ast}$, and then go back to Step 2.
The problem is how to choose the mode $n^\ast$ and how to increase the rank $R_{n^\ast}$. We propose choosing $n^\ast$ using the “$n$th-mode residual” of the cost function, which is defined as the residual on the multilinear subspace spanned by all the factor matrices excluding the $n$th-mode factor. This is mathematically formulated as

$g_n := \|\boldsymbol{\mathcal{Z}} \times_{k \neq n} \{\mathbf{U}^{(k)\top}\} - \boldsymbol{\mathcal{G}} \times_n \mathbf{U}^{(n)}\|_F^2, \qquad n^\ast = \operatorname{argmax}_n g_n.$ (22)

We can interpret this as meaning that the selected $n^\ast$th mode has a high expectation of cost reduction when $R_{n^\ast}$ increases while the other-mode ranks remain fixed.
For the rank increment process, we consider rank sequences for the individual modes. The rank sequence for the $n$th mode is set so that the step size grows with the rank, because the contribution rates of the singular vectors usually decrease exponentially. Thus, a small rank increment is important in the low-rank phase of the approximation, whereas a small rank increment is not effective in a relatively high-rank phase. Large rank steps in the high-rank phase help to accelerate the algorithm, but they should be selected carefully because excessively large rank steps may lead to problems with non-unique solutions. The proposed method for Tucker-based tensor completion with rank increment is summarized in Algorithm 2.
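Steps 1-4 can be sketched as follows in Python/NumPy (self-contained, so it repeats the completion helpers). Two hedges: the low-rank step is the truncated-HOSVD stand-in for Algorithm 1 rather than the ALS, and the mode-selection rule uses the singular-value tail of the filled tensor as a simple proxy for the $n$th-mode residual in Eq. (22):

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def modeprod(T, U, n):
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, n, 0), axes=([1], [0])), 0, n)

def tucker_complete(X, Q, ranks, n_iter=100, init=None):
    Xhat = np.where(Q, X, X[Q].mean()) if init is None else init
    for _ in range(n_iter):
        Z = np.where(Q, X, Xhat)
        Us = [np.linalg.svd(unfold(Z, n), full_matrices=False)[0][:, :R]
              for n, R in enumerate(ranks)]
        G = Z
        for n, U in enumerate(Us):
            G = modeprod(G, U.T, n)
        Xhat = G
        for n, U in enumerate(Us):
            Xhat = modeprod(Xhat, U, n)
    return Xhat

def rank_increment_complete(X, Q, eps, max_rank):
    ranks = [1] * X.ndim                                     # Step 1
    Xhat = None
    while True:
        Xhat = tucker_complete(X, Q, ranks, init=Xhat)       # Step 2 (warm start)
        res = np.linalg.norm(Q * (X - Xhat)) / np.linalg.norm(Q * X)
        if res <= eps or all(r >= m for r, m in zip(ranks, max_rank)):
            return Xhat, ranks, res                          # Step 3
        # Step 4 (proxy): grow the mode whose unfolding of the filled tensor
        # has the largest singular-value tail beyond its current rank.
        Z = np.where(Q, X, Xhat)
        gains = [np.linalg.svd(unfold(Z, n), compute_uv=False)[ranks[n]:].sum()
                 if ranks[n] < min(unfold(Z, n).shape) else -1.0
                 for n in range(X.ndim)]
        ranks[int(np.argmax(gains))] += 1

# Demo: the true multilinear rank (2, 3, 2) should be reached automatically.
rng = np.random.default_rng(1)
T = rng.standard_normal((2, 3, 2))
for n, d in enumerate((8, 9, 10)):
    T = modeprod(T, rng.standard_normal((d, T.shape[n])), n)
Q = rng.random(T.shape) < 0.8
Xhat, ranks, res = rank_increment_complete(np.where(Q, T, 0.0), Q,
                                           eps=1e-2, max_rank=(4, 4, 4))
print(ranks, res)
```

The warm start (`init=Xhat`) is the essential feature: each higher-rank fit is initialized by the previous lower-rank solution, which mitigates the non-uniqueness discussed above.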
3 Experiments
3.1 Verification of the proposed method using a typical color image
First, in our experiments, we tried to fill in the missing slices of a typical color image using the MDT and fixed-rank Tucker decomposition. The input image is depicted in Figure 5. We set $\boldsymbol{\tau} = (\tau_1, \tau_2, 1)$, and the color image was converted into a sixth-order Hankel tensor. The mode corresponding to $\tau_3 = 1$ can be ignored, so this Hankel tensor was regarded as a fifth-order tensor. Figure 7 shows the images obtained with various settings of the rank parameter. Clearly, low-rank Tucker decomposition of the Hankel tensor successfully filled the missing area. However, an important issue is how to treat the difference between the ranks of the window modes (of size $\tau_n$) and those of the shift modes (of size $I_n - \tau_n + 1$). The fundamental difference between them is due to the window sizes $\tau_1$ and $\tau_2$. Thus, it should be noted that a lower rank for the window modes may contribute to the representation of the local structure (e.g., smoothness), whereas a lower rank for the shift modes may contribute to the representation of the global structure (e.g., recursive textures) of the image. In Figure 7, when we compare the two flows from the bottom right to the top right and from the bottom right to the bottom left, the low-rankness of the shift modes is clearly more important than that of the window modes for recovering the missing area.
Next, we tried to fill in missing areas of color images using the MDT and Tucker decomposition with the rank increment method. The rank sequences for the proposed method were set for each mode following the strategy described in Section 2.3. Figure 7 shows the main flow of the images processed using the proposed method. Using the rank increment method, we first obtained a Tucker decomposition with a very low-rank setting (e.g., a rank-one tensor), and a higher-rank decomposition was then obtained by using the previous lower-rank decomposition as its initialization. We repeatedly obtained higher-rank decompositions until the residual was sufficiently small. Thus, the rank increment method automatically selected an appropriate rank setting.
Table 1: (PSNR, SSIM) results for the color-image completion experiments.

(PSNR,SSIM)  HaLRTC  TV reg.  LR&TV reg.  STDC  SPCQV  Proposed

facade 1  (19.31,0.937)  (30.08,0.964)  (30.18,0.964)  (24.03,0.958)  (30.04,0.972)  (36.52,0.988) 
facade 2  (16.03,0.852)  (24.28,0.838)  (24.47,0.842)  (17.89,0.826)  (25.65,0.916)  (26.96,0.916) 
house  (8.81,0.271)  (26.75,0.909)  (26.70,0.913)  (20.65,0.624)  (27.04,0.909)  (27.50,0.908) 
peppers  (9.92,0.288)  (25.84,0.877)  (25.81,0.885)  (21.61,0.662)  (26.59,0.888)  (27.62,0.898) 
Lena (5%)  (9.63,0.145)  (20.86,0.626)  (20.89,0.621)  (22.07,0.596)  (23.52,0.700)  (23.57,0.738) 
Lena (1%)  (5.32,0.021)  (15.47,0.440)  (15.49,0.443)  (12.75,0.089)  (18.49,0.508)  (19.68,0.565) 
3.2 Comparison using color images
We compared the performance of the proposed method with those of state-of-the-art tensor completion algorithms: HaLRTC (nuclear-norm regularization) [16], TV regularization [32], nuclear-norm and TV regularization (LR&TV) [28], STDC (constrained Tucker decomposition) [4], and SPCQV (constrained PARAFAC tensor decomposition) [30]. We prepared six missing images for this experiment. The first image had 11 continuous missing slices along the vertical axis. Several horizontal and vertical slices and many voxels were missing from the second image. Random vertical and horizontal slices were missing from the third and fourth images. In addition, 95% and 99% of the voxels were randomly removed from the fifth and sixth images, respectively.
Figure 8 shows the experimental results obtained after the completion of the various incomplete images. Magnified regions are depicted at the bottom right of the first to fourth images. HaLRTC recovered the randomly missing voxels in the second image, but it failed to recover the missing slices in all of the images. The TV regularization and LR&TV regularization methods filled the missing areas, but the recovered areas were unnaturally flat. STDC failed to recover the missing slices, and SPCQV retained “shadows” of the missing slices. By contrast, the proposed method recovered most of the missing slices without shadows. For the image with 95% of the voxels missing, the proposed method and SPCQV obtained similar results. However, for the image with 99% of the voxels missing, the result obtained by the proposed method was much better than that obtained by SPCQV. Table 1 shows the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [26] for these comparisons, where the best PSNR and SSIM values were emphasized in bold font. According to this quantitative evaluation, the proposed method performed better than the state-of-the-art methods in terms of the PSNR, and its results in terms of the SSIM were very competitive with those of SPCQV for some of the images.
Table 2: (SNR, SSIM) results for the fMRI completion experiments.

# of missing slices  HaLRTC  TV reg.  LR&TV reg.  STDC  SPCQV  Proposed

1  (20.15,0.991)  (23.13,0.992)  (23.16,0.993)  (23.30,0.993)  (24.20,0.994)  (23.25,0.994) 
2  (14.71,0.982)  (16.20,0.982)  (16.29,0.984)  (17.09,0.985)  (17.80,0.985)  (18.45,0.987) 
3  (14.02,0.973)  (15.20,0.972)  (15.17,0.974)  (15.99,0.977)  (16.58,0.976)  (14.54,0.978) 
4  (12.17,0.964)  (12.45,0.962)  (12.46,0.964)  (12.43,0.966)  (12.98,0.966)  (14.08,0.970) 
5  (11.31,0.955)  (11.41,0.952)  (11.59,0.956)  (11.59,0.957)  (12.15,0.957)  (13.85,0.964) 
3.3 Comparison using fMRI images
In the next experiment, we tried to recover continuous missing time frames in fMRI images. Figure 10 shows the prepared images with missing data. There were 94 time frames of fMRI slices, and one to five continuous time frames were removed. We applied the proposed method to the resulting third-order tensor, with the rank sequences for each mode set following the strategy described in Section 2.3.
Figure 10 shows the results of this experiment. Similar to the color image experiments, the ordinary low-rank model could not recover the missing slices. TV regularization obtained flat results, and LR&TV regularization produced similar results. STDC and SPCQV obtained some improvements compared with the convex methods, but the results were still unclear. By contrast, the results obtained by the proposed method were very clear, although their accuracy was not high. Table 2 shows the signal-to-noise ratio (SNR) and mean SSIM results obtained for all of the completion methods, which demonstrate that the proposed method performed better than the state-of-the-art methods in terms of the mean SSIM measure and was also very competitive in terms of the SNR.
4 Discussion
4.1 Novelty and contributions
4.1.1 Lowrank model in embedded space
The idea of tensor completion using the MDT and its inverse transform is novel. Most of the existing methods for tensor completion are based on structural assumptions in the signal space, such as nuclear-norm regularization and TV regularization. By contrast, our method considers a structural assumption in the embedded space, which can be regarded as a novel strategy for the tensor completion problem. Furthermore, we employ delay embedding in this approach and extend it in a multi-way manner for tensors.
4.1.2 An auxiliary-function-based approach for the Tucker decomposition of a tensor with missing elements
An auxiliary-function-based algorithm for Tucker decomposition is employed efficiently. The existing algorithms for the Tucker decomposition of a tensor with missing elements [8, 14, 12] are based on gradient methods. However, the convergence speed of a gradient method is quite sensitive to the step-size parameter. By contrast, the proposed algorithm does not have any hyperparameters, and its monotonic convergence is guaranteed by the auxiliary function theory.
4.1.3 Model selection and uniqueness improvement by the rank increment method
The rank increment method is applied to the Tucker decomposition of incomplete tensors for the first time, to the best of our knowledge. Several methods for estimating multilinear tensor ranks have been studied, but only for complete tensors [23, 29]. Methods for estimating the ranks of matrix and PARAFAC decompositions have been proposed for incomplete data [22, 24, 27, 30], but these methods cannot be applied to our problem. Thus, we proposed a new method for estimating the multilinear ranks of an incomplete tensor, which can also handle the issue of non-unique solutions.
4.2 Computational bottleneck
The proposed method has an issue with data volume expansion due to the MDT. An $N$th-order tensor is converted into a $2N$th-order tensor by the MDT, and its data size increases roughly $\prod_{n=1}^{N} \tau_n$-fold. This issue makes it difficult to apply the proposed method to large-scale tensors, and this problem will be addressed in future research.
5 Conclusions
In this study, we proposed a novel method and algorithm for tensor completion problems that include missing slices. The recovery of missing slices is recognized as a difficult problem that ordinary tensor completion methods usually fail to solve. To address this problem, we introduced the concept of “delay embedding” from the study of dynamical systems and extended it for our problem. We showed that missing slices can be recovered by considering lowrank models in embedded spaces and that MDT is a good choice for this purpose.
At present, the proposed method is very basic, but it has many potential extensions, such as using different embedding transformations and constrained tensor decompositions (e.g., nonnegative, sparse, and smooth). The MATLAB code will be available via our website: https://sites.google.com/site/yokotatsuya/home/software.
References
 [1] E. J. Candes and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
 [2] E. J. Candes and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
 [3] E. J. Candes and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
 [4] Y.-L. Chen, C.-T. Hsu, and H.-Y. Liao. Simultaneous tensor decomposition and completion using factor priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(3):577–591, 2014.
 [5] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-$(R_1, R_2, \dots, R_N)$ approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.
 [6] T. Ding, M. Sznaier, and O. I. Camps. A rank minimization approach to video inpainting. In Proceedings of ICCV, pages 1–8. IEEE, 2007.
 [7] S. V. Dolgov and D. V. Savostyanov. Alternating minimal energy methods for linear systems in higher dimensions. Part I: SPD systems. arXiv preprint arXiv:1301.6068, 2013.
 [8] M. Filipovic and A. Jukic. Tucker factorization with missing data with application to low-n-rank tensor completion. Multidimensional Systems and Signal Processing, 26(3):677–692, 2015.
 [9] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2), 2011.
 [10] N. Gillis and F. Glineur. Low-rank matrix approximation with weights or missing data is NP-hard. SIAM Journal on Matrix Analysis and Applications, 32(4):1149–1165, 2011.
 [11] X. Guo and Y. Ma. Generalized tensor total variation minimization for visual data recovery. In Proceedings of CVPR, pages 3603–3611, 2015.
 [12] H. Kasai and B. Mishra. Lowrank tensor completion: a Riemannian manifold preconditioning approach. In Proceedings of ICML, pages 1012–1021, 2016.
 [13] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
 [14] D. Kressner, M. Steinlechner, and B. Vandereycken. Lowrank tensor completion by Riemannian optimization. BIT Numerical Mathematics, 54(2):447–468, 2014.
 [15] Y. Li, K. R. Liu, and J. Razavilar. A parameter estimation scheme for damped sinusoidal signals based on lowrank Hankel approximation. IEEE Transactions on Signal Processing, 45(2):481–486, 1997.
 [16] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):208–220, 2013.
 [17] E. N. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2):130–141, 1963.
 [18] I. Markovsky. Structured lowrank approximation and its applications. Automatica, 44(4):891–909, 2008.
 [19] B. Mishra, G. Meyer, F. Bach, and R. Sepulchre. Low-rank optimization with trace norm penalty. SIAM Journal on Optimization, 23(4):2124–2149, 2013.
 [20] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.

 [21] F. Shi, J. Cheng, L. Wang, P.-T. Yap, and D. Shen. Low-rank total variation for image super-resolution. In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 155–162. Springer, 2013.
 [22] M. Tan, I. W. Tsang, L. Wang, B. Vandereycken, and S. J. Pan. Riemannian pursuit for big matrix recovery. In Proceedings of ICML, pages 1539–1547, 2014.

 [23] M. E. Timmerman and H. A. Kiers. Three-mode principal components analysis: Choosing the numbers of components and sensitivity to local optima. British Journal of Mathematical and Statistical Psychology, 53(1):1–16, 2000.
 [24] A. Uschmajew and B. Vandereycken. Greedy rank updates combined with Riemannian descent methods for low-rank optimization. In Proceedings of International Conference on Sampling Theory and Applications, pages 420–424, 2015.
 [25] P. Van Overschee and B. De Moor. Subspace algorithms for the stochastic identification problem. In Proceedings of IEEE Conference on Decision and Control, pages 1321–1326. IEEE, 1991.
 [26] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
 [27] T. Yokota and A. Cichocki. A fast automatic rank determination algorithm for noisy lowrank matrix completion. In Proceedings of APSIPA ASC, pages 43–46, 2015.
 [28] T. Yokota and H. Hontani. Simultaneous visual data completion and denoising based on tensor rank and total variation minimization and its primaldual splitting algorithm. In Proceedings of CVPR, pages 3732–3740, 2017.

 [29] T. Yokota, N. Lee, and A. Cichocki. Robust multilinear tensor rank estimation using higher order singular value decomposition and information criteria. IEEE Transactions on Signal Processing, 65(5):1196–1206, 2017.
 [30] T. Yokota, Q. Zhao, and A. Cichocki. Smooth PARAFAC decomposition for tensor completion. IEEE Transactions on Signal Processing, 64(20):5423–5436, 2016.
 [31] Q. Zhao, L. Zhang, and A. Cichocki. Bayesian CP factorization of incomplete tensors with automatic rank determination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1751–1763, 2015.
 [32] M. Zhu and T. Chan. An efficient primal-dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report 08-34, 2008.
Supplemental Figures: Results for Various Colorimage Completion
In this supplemental document, we show the results of color-image completion using various methods, with a technical overview of the state-of-the-art methods. Table 3 summarizes the optimization concepts of the state-of-the-art tensor completion methods: high-accuracy low-rank tensor completion (HaLRTC) [16], total variation regularization (TV reg.) [32, 28], low-rank and TV regularization (LRTV reg.) [28], simultaneous tensor decomposition and completion (STDC) [4], smooth PARAFAC tensor completion with quadratic variation (SPCQV) [30], and the proposed method.
In HaLRTC, we minimize the tensor nuclear norm (tNN), which is defined as the sum of the matrix nuclear norms (NN) of all mode-$n$ matricizations of the tensor, under the constraint of data consistency. In TV reg., we minimize the tensor total variation (tTV), which is an extension of the matrix total variation (TV), under the constraint of data consistency. LRTV simultaneously minimizes the tNN and tTV under the constraint of data consistency. The above three methods are formulated as convex optimization problems and can be solved by the primal-dual splitting (PDS) algorithm [Condat, 2013]. Since the LRTV method proposed in [28] is a generalization of both the HaLRTC and TV reg. methods, and the PDS algorithm is employed for its optimization, we ran all three methods via the PDS algorithm with appropriate hyperparameter settings in the LRTV formulation. In STDC, a Tucker decomposition problem with regularizations for a core tensor and factor matrices is considered. The factor matrices in STDC are encouraged to have small nuclear norms to minimize the tensor rank, and the factor prior imposes a kind of similarity between the factor vectors. The optimization problem of STDC is non-convex, and it is solved using the augmented Lagrangian method. According to [4], the convergence of the STDC algorithm is not theoretically guaranteed. In SPCQV, a PARAFAC decomposition problem with regularizations for the factor vectors is considered. The PARAFAC decomposition consists of multiple rank-1 tensors, and each rank-1 tensor is constructed from factor vectors. In the SPCQV method, the individual factor vectors are constrained to vary smoothly, and the number of rank-1 tensors is minimized. The optimization problem of SPCQV is also non-convex, and it is solved using the hierarchical alternating least squares (HALS) algorithm. Unlike STDC, SPCQV has a monotonic non-increasing property of the cost function, and convergence to a stationary point is guaranteed. In the proposed method, a low-rank Tucker decomposition problem in the embedded space is simply considered.
In general, the optimization problem is non-convex; however, the monotonic convergence property is guaranteed by the auxiliary-function-based optimization algorithm. Figure 11 shows the eight benchmark images used in these experiments. For each image, we generated eight types of incomplete images: (a) 50%, (b) 70%, (c) 90%, (d) 95%, and (e) 99% of the random voxels missing; (f) 11 continuous vertical slices missing; (g) a cross-shaped occlusion with 50% of the random voxels missing; and (h) random vertical/horizontal slices missing. Thus, 64 incomplete images were generated in total. To each incomplete image, we applied six tensor completion methods: HaLRTC, TV reg., LRTV reg., STDC, SPCQV, and the proposed method. The hyperparameters of all the methods were tuned from several candidates, and the best settings were employed. Figures 12–19 show the results of these experiments. The first three convex methods recovered the images only in the cases with low missing ratios. STDC outperformed the three convex methods; however, it failed in the 95% and 99% missing cases and the slice missing cases, and sometimes the algorithm did not converge. SPCQV successfully recovered the random missing cases, but it failed to recover the slice missing cases. The proposed method obtained clearer results for the random missing cases compared with SPCQV, and it successfully recovered the slice missing cases.
Table 3: Optimization concepts of the state-of-the-art tensor completion methods and the proposed method.

name  minimization  constraints

HaLRTC [16]  tensor nuclear norm (tNN)  data consistency
TV reg. [32, 28]  tensor total variation (tTV)  data consistency
LRTV reg. [28]  (tNN) + (tTV)  data consistency
STDC [4]  (NN of factor matrices) + (l2-norm of core tensor) + (factor prior)  Tucker decomposition + data consistency
SPCQV [30]  (number of rank-1 tensors) + (quadratic variation of factor vectors)  PARAFAC decomposition + data consistency
Proposed  (sum of multilinear tensor ranks in the embedded space)  Tucker decomposition of the Hankel tensor + data consistency