Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space

04/05/2018, by Tatsuya Yokota, et al.

Let us consider a case where all of the elements in some continuous slices of tensor data are missing. In this case, the nuclear-norm and total variation regularization methods usually fail to recover the missing elements. The key to recovery is capturing the delay/shift-invariant structure of the data. In this study, we consider a low-rank model in an embedded space of a tensor. For this purpose, we extend the delay embedding for a time series to a "multi-way delay-embedding transform" (MDT) for a tensor, which takes a given incomplete tensor as the input and outputs a higher-order incomplete Hankel tensor. The higher-order tensor is then recovered by Tucker-based low-rank tensor factorization. Finally, an estimated tensor can be obtained by applying the inverse MDT to the recovered higher-order tensor. Our experiments showed that the proposed method successfully recovered missing slices in color images and functional magnetic resonance images.


1 Introduction

This work was supported in part by the JSPS Grant-in-Aid for Scientific Research on Innovative Areas (Multidisciplinary Computational Anatomy): JSPS KAKENHI Grant Numbers 26108003 and 15K16067.

Matrix/tensor completion is a technique for recovering the missing elements in incomplete data, and it has become increasingly important in recent years [2, 3, 1, 9, 16, 21, 4, 31, 30]. In general, completion is an ill-posed problem without any assumptions. However, if we have useful prior knowledge or assumptions regarding the data structure, completion can be treated as a well-posed optimization problem, such as convex optimization. Such a structural assumption is also referred to as a "model."

The methods for modeling matrices/tensors can be categorized into two classes. In the first class, the methods directly represent the data with the matrices/tensors themselves, and some structures of the matrices/tensors are assumed, such as low-rank [2, 3, 1, 9, 16] and smoothness [32, 11] properties.

By contrast, the methods in the second class "embed" the data into a high-dimensional feature space, and it is assumed that the data can be represented by a low-rank structure or a smooth manifold in the embedded space [17, 25, 6, 18] (see Figure 1). Typically, a time series is represented by a "Hankel matrix" (see Section 2.1.1), and its low-rank property has been widely employed for modeling linear time-invariant systems of signals [25, 18]. For example, Li et al. [15] proposed a method for modeling damped sinusoidal signals based on a low-rank Hankel approximation. Ding et al. [6] proposed rank minimization of a Hankel matrix for the video inpainting problem by assuming an autoregressive moving average model. Figure 2 shows an example of occlusion recovery for a noisy time series, which indicates that the total variation (TV) and quadratic variation (QV) regularization methods reconstruct a flat estimate, whereas low-rank modeling of the Hankel matrix (our proposed method) successfully reconstructs the signal.
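As an illustrative, non-authoritative sketch of this low-rank property in the delay-embedded space (the sinusoid frequency and window length here are arbitrary choices, not values from the paper), the following NumPy snippet checks that the Hankel matrix of a noiseless sinusoid has rank 2:

    import numpy as np

    # a sinusoid obeys a two-term linear recurrence, so its Hankel
    # (delay-embedded) matrix has rank 2
    t = np.arange(100)
    v = np.sin(0.3 * t)
    tau = 10
    L = len(v)
    H = np.stack([v[i:i + L - tau + 1] for i in range(tau)])  # tau x (L-tau+1)
    print(np.linalg.matrix_rank(H))  # -> 2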

In the proposed method, the incomplete input data are not represented as a Hankel matrix; instead, they are represented as a "higher-order Hankel tensor" via multi-way embedding with delays/shifts along the time/space axes, and we solve a low-rank tensor completion problem in the embedded space. Minimizing the rank of a matrix/tensor is NP-hard [10], and the problem is often relaxed to nuclear-norm minimization [20]. A disadvantage of the relaxation to nuclear-norm minimization is that it decreases not only the rank of the resultant matrix/tensor but also the magnitudes of its components. In particular, nuclear-norm minimization often produces "dark" signals in denoising tasks. Thus, we employ Tucker decomposition for low-rank modeling in the higher-order Hankel tensor completion.
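The shrinkage effect can be seen with a quick numerical sketch (an illustration of ours, not an experiment from the paper): singular-value soft-thresholding, the standard proximal step for the nuclear norm, lowers the mean intensity of a nonnegative matrix along with its rank:

    import numpy as np

    A = np.random.rand(50, 50)                  # nonnegative "image"
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # soft-threshold the singular values (proximal operator of the nuclear norm)
    A_shrunk = U @ np.diag(np.maximum(s - 1.0, 0.0)) @ Vt
    print(A.mean(), A_shrunk.mean())            # mean intensity drops ("darkening")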

Tucker-based tensor completion is a non-convex optimization problem, and existing methods usually have difficulty selecting the step-size parameter. In this study, we propose an auxiliary function-based approach, which improves the convergence characteristics of the optimization process. Moreover, we propose a rank increment scheme for determining appropriate multi-linear tensor ranks. According to our extensive experiments, the proposed method is highly suitable for tensor completion (e.g., recovery of "Lena" from only 1% of randomly sampled voxels), and it outperforms state-of-the-art tensor completion methods.

Figure 1: Lorenz system and its delay-embedded space: a time series signal can be embedded into a three-dimensional space with individual axes of $v_t$, $v_{t-\tau}$, and $v_{t-2\tau}$. Clearly, most of the points are located on some hyper-plane (i.e., a low-rank subspace) in the embedded space.
Figure 2: Recovering a dynamical signal using the proposed method, TV regularization, and QV regularization.

1.1 Notations

A vector is denoted by a bold small letter $\boldsymbol{a} \in \mathbb{R}^{I}$. A matrix is denoted by a bold capital letter $\boldsymbol{A} \in \mathbb{R}^{I \times J}$. A higher-order ($N \geq 3$) tensor is denoted by a bold calligraphic letter $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. The $i$-th entry of a vector $\boldsymbol{a}$ is denoted by $a_i$, and the $(i,j)$-th entry of a matrix $\boldsymbol{A}$ is denoted by $a_{ij}$. The $(i_1, i_2, \dots, i_N)$-th entry of an $N$-th order tensor $\mathcal{A}$ is denoted by $a_{i_1 i_2 \cdots i_N}$, where $i_n \in \{1, \dots, I_n\}$ and $n \in \{1, \dots, N\}$. The Frobenius norm of an $N$-th order tensor is defined by $||\mathcal{A}||_F := \sqrt{\sum_{i_1, \dots, i_N} a_{i_1 i_2 \cdots i_N}^2}$.

A mode-$k$ unfolding (matricization) of a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is denoted as $\boldsymbol{A}_{(k)} \in \mathbb{R}^{I_k \times \prod_{n \neq k} I_n}$. A mode-$k$ multiplication between a tensor $\mathcal{A}$ and a matrix $\boldsymbol{B} \in \mathbb{R}^{J \times I_k}$ (or a vector, taken as a single-row matrix) is denoted by $\mathcal{C} = \mathcal{A} \times_k \boldsymbol{B}$, where the entries are given by $c_{i_1 \cdots i_{k-1} j i_{k+1} \cdots i_N} = \sum_{i_k=1}^{I_k} a_{i_1 \cdots i_k \cdots i_N} b_{j i_k}$, and we have $\boldsymbol{C}_{(k)} = \boldsymbol{B} \boldsymbol{A}_{(k)}$.

If we consider matrices $\boldsymbol{U}^{(n)} \in \mathbb{R}^{J_n \times I_n}$ for $n \in \{1, \dots, N\}$ and an $N$-th order tensor $\mathcal{G} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, then the multi-linear tensor product is defined as

$[\![\mathcal{G}; \boldsymbol{U}^{(1)}, \boldsymbol{U}^{(2)}, \dots, \boldsymbol{U}^{(N)}]\!] := \mathcal{G} \times_1 \boldsymbol{U}^{(1)} \times_2 \boldsymbol{U}^{(2)} \cdots \times_N \boldsymbol{U}^{(N)}.$   (1)

Moreover, a multi-linear tensor product excluding the $n$-th mode is defined as

$[\![\mathcal{G}; \{\boldsymbol{U}\}^{-n}]\!] := \mathcal{G} \times_1 \boldsymbol{U}^{(1)} \cdots \times_{n-1} \boldsymbol{U}^{(n-1)} \times_{n+1} \boldsymbol{U}^{(n+1)} \cdots \times_N \boldsymbol{U}^{(N)}.$   (2)

When we consider the Tucker decomposition, $\mathcal{G}$ and $\{\boldsymbol{U}^{(n)}\}$ in Eq. (1) are referred to as the core tensor and the factor matrices, respectively.
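For readers who prefer code, here is a minimal NumPy sketch (not the authors' MATLAB implementation) of the mode-$k$ unfolding and mode-$k$ product; the column ordering of an unfolding is convention-dependent, and the identity $\boldsymbol{C}_{(k)} = \boldsymbol{B}\boldsymbol{A}_{(k)}$ holds for any consistent unfold/fold pair:

    import numpy as np

    def unfold(A, k):
        """Mode-k unfolding: move axis k to the front and flatten the rest."""
        return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

    def fold(M, k, shape):
        """Inverse of unfold for a target tensor shape."""
        rest = [s for i, s in enumerate(shape) if i != k]
        return np.moveaxis(M.reshape([shape[k]] + rest), 0, k)

    def mode_product(A, B, k):
        """Mode-k product A x_k B, via the identity C_(k) = B @ A_(k)."""
        shape = list(A.shape)
        shape[k] = B.shape[0]
        return fold(B @ unfold(A, k), k, shape)

    # check the unfolding identity on random data
    A = np.random.randn(3, 4, 5)
    B = np.random.randn(6, 4)
    C = mode_product(A, B, 1)
    assert np.allclose(unfold(C, 1), B @ unfold(A, 1))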

2 Proposed method

In this study, we assume a low-rank structure of the higher-order Hankel tensor given by the multi-way delay-embedding transform (MDT), which is defined in Section 2.1; we denote the MDT with window sizes $\boldsymbol{\tau}$ by $H_{\boldsymbol{\tau}}$. The proposed method is conceptually quite simple and comprises three steps: (1) MDT, (2) low-rank tensor approximation, and (3) inverse MDT.

Let $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and $\mathcal{Q} \in \{0, 1\}^{I_1 \times I_2 \times \cdots \times I_N}$ be the input incomplete tensor and its mask tensor, respectively; the first step is then given by

$\mathcal{T}_H = H_{\boldsymbol{\tau}}(\mathcal{T}),$   (3)
$\mathcal{Q}_H = H_{\boldsymbol{\tau}}(\mathcal{Q}),$   (4)

where $\mathcal{T}_H$ and $\mathcal{Q}_H$ are the resulting $2N$-th order Hankel tensors.

In the second step, we obtain a low-rank approximation of $\mathcal{T}_H$ based on the Tucker decomposition model. For example, we first consider the following optimization problem:

$\min_{\mathcal{G}, \{\boldsymbol{U}^{(n)}\}} \; || \mathcal{Q}_H \circledast (\mathcal{T}_H - \mathcal{X}) ||_F^2$   (5)
s.t. $\mathcal{X} = [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \boldsymbol{U}^{(2)}, \dots, \boldsymbol{U}^{(2N)}]\!]$,

where $\circledast$ denotes the element-wise (Hadamard) product.

Finally, the resultant tensor can be obtained by applying the inverse MDT to the Tucker decomposition:

$\hat{\mathcal{T}} = H_{\boldsymbol{\tau}}^{-1}([\![\mathcal{G}; \boldsymbol{U}^{(1)}, \boldsymbol{U}^{(2)}, \dots, \boldsymbol{U}^{(2N)}]\!]).$   (6)
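The three steps can be summarized in a short sketch (Python/NumPy; mdt, inverse_mdt, tucker_completion, and multilinear_product are hypothetical helper names sketched in the following subsections, not the authors' released code):

    def mdt_tucker_completion(T, Q, taus, ranks):
        """Three-step pipeline of the proposed method (sketch)."""
        T_H = mdt(T, taus)                          # step 1: MDT of the data (Eq. 3)
        Q_H = mdt(Q, taus)                          # ... and of the mask (Eq. 4)
        G, Us = tucker_completion(T_H, Q_H, ranks)  # step 2: low-rank Tucker fit (Eq. 5)
        X = multilinear_product(G, Us)              # reconstructed Hankel tensor
        return inverse_mdt(X, T.shape, taus)        # step 3: inverse MDT (Eq. 6)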

2.1 MDT

2.1.1 Standard delay embedding transform

In this section, we explain the delay embedding operation. For simplicity, we first define a standard delay embedding transform for a vector, which can be interpreted as a time-series signal. Let us consider a vector $\boldsymbol{v} = (v_1, v_2, \dots, v_L)^T \in \mathbb{R}^L$; a standard delay embedding transform of $\boldsymbol{v}$ with $\tau$ is given by

$H_\tau(\boldsymbol{v}) := \begin{bmatrix} v_1 & v_2 & \cdots & v_{L-\tau+1} \\ v_2 & v_3 & \cdots & v_{L-\tau+2} \\ \vdots & \vdots & & \vdots \\ v_\tau & v_{\tau+1} & \cdots & v_L \end{bmatrix} \in \mathbb{R}^{\tau \times (L-\tau+1)}.$   (7)

Thus, a standard delay embedding transform produces a duplicated matrix from a vector; this is also referred to as "Hankelization," since $H_\tau(\boldsymbol{v})$ is a Hankel matrix. If $\boldsymbol{S} \in \{0, 1\}^{\tau(L-\tau+1) \times L}$ is a duplication matrix that satisfies

$\boldsymbol{S}\boldsymbol{v} = \mathrm{vec}(H_\tau(\boldsymbol{v})),$   (8)

then the standard delay embedding transform can be obtained by

$H_\tau(\boldsymbol{v}) = \mathrm{fold}_{(L,\tau)}(\boldsymbol{S}\boldsymbol{v}),$   (9)

where $\mathrm{fold}_{(L,\tau)} : \mathbb{R}^{\tau(L-\tau+1)} \rightarrow \mathbb{R}^{\tau \times (L-\tau+1)}$ is a folding operator from a vector to a matrix.

Next, we consider the inverse transform of the standard delay embedding. The forward transform can be decomposed into duplication and folding, so the inverse transform can also be decomposed into the corresponding inverse operations: the vectorization operator $\mathrm{vec} = \mathrm{fold}_{(L,\tau)}^{-1}$ and the Moore–Penrose pseudo-inverse $\boldsymbol{S}^\dagger := (\boldsymbol{S}^T\boldsymbol{S})^{-1}\boldsymbol{S}^T$. Thus, the inverse delay embedding transform of a Hankel matrix $\boldsymbol{V}_H$ can be given by

$H_\tau^{-1}(\boldsymbol{V}_H) = \boldsymbol{S}^\dagger \, \mathrm{vec}(\boldsymbol{V}_H).$   (10)

Figure 3 shows an example of the delay embedding transform for a vector, its duplication matrix, and its inverse transform. We can see that the duplication matrix comprises multiple identity matrices. It should be noted that the diagonal elements of $\boldsymbol{S}^T\boldsymbol{S}$ are the numbers of duplications of the individual elements, which are usually $\tau$, but lower for the marginal elements.

Figure 3: A delay embedding transform for a vector.
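As a concrete sketch of Eqs. (8)-(10) (Python/NumPy, an illustration rather than the authors' code), the duplication matrix, Hankelization, and its pseudo-inverse can be written as:

    import numpy as np

    def duplication_matrix(L, tau):
        """S in {0,1}^{tau(L-tau+1) x L} with S @ v = vec(H_tau(v)), column-major vec."""
        K = L - tau + 1
        S = np.zeros((tau * K, L))
        for j in range(K):            # column j of the Hankel matrix
            for i in range(tau):      # row i of the Hankel matrix
                S[j * tau + i, i + j] = 1.0
        return S

    def hankelize(v, tau):
        """H_tau(v) = fold(S @ v): a tau x (L-tau+1) Hankel matrix (Eq. 9)."""
        L = len(v)
        S = duplication_matrix(L, tau)
        return (S @ v).reshape(tau, L - tau + 1, order="F")

    def inverse_hankelize(V_H, L, tau):
        """H_tau^{-1}(V_H) = pinv(S) @ vec(V_H): averages duplicated entries (Eq. 10)."""
        S = duplication_matrix(L, tau)
        return np.linalg.pinv(S) @ V_H.reshape(-1, order="F")

    v = np.arange(1.0, 8.0)          # L = 7
    V_H = hankelize(v, tau=3)        # 3 x 5 Hankel matrix of v
    assert np.allclose(inverse_hankelize(V_H, L=7, tau=3), v)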

2.1.2 Tensor extension

We now define the MDT for an $N$-th order tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$. The MDT of $\mathcal{T}$ with $\boldsymbol{\tau} = (\tau_1, \dots, \tau_N)$ is defined by

$H_{\boldsymbol{\tau}}(\mathcal{T}) := \mathrm{Fold}_{(\boldsymbol{I}, \boldsymbol{\tau})}(\mathcal{T} \times_1 \boldsymbol{S}_1 \times_2 \cdots \times_N \boldsymbol{S}_N),$   (11)

where $\boldsymbol{S}_n$ is the duplication matrix for the $n$-th mode and $\mathrm{Fold}_{(\boldsymbol{I}, \boldsymbol{\tau})}$ constructs a $2N$-th order tensor of size $\tau_1 \times (I_1 - \tau_1 + 1) \times \cdots \times \tau_N \times (I_N - \tau_N + 1)$ from the input $N$-th order tensor. In a similar manner to how the vector delay embedding is a combination of linear duplication and folding operations, the MDT is a combination of multi-linear duplication and multi-way folding operations. Figure 4 shows flowcharts that illustrate single-way and multi-way delay embedding for a matrix. Finally, the inverse MDT of a Hankel tensor $\mathcal{T}_H$ is given by

$H_{\boldsymbol{\tau}}^{-1}(\mathcal{T}_H) := \mathrm{Fold}_{(\boldsymbol{I}, \boldsymbol{\tau})}^{-1}(\mathcal{T}_H) \times_1 \boldsymbol{S}_1^\dagger \times_2 \cdots \times_N \boldsymbol{S}_N^\dagger,$   (12)

where $\boldsymbol{S}_n^\dagger = (\boldsymbol{S}_n^T \boldsymbol{S}_n)^{-1} \boldsymbol{S}_n^T$.

Figure 4: Multi-way delay embedding transform for a matrix. Single-way and multi-way delay embedding transforms convert a matrix into third and fourth order tensors, respectively.
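A minimal NumPy sketch of the MDT and its inverse follows; for simplicity it uses sliding windows, which is equivalent to the duplication-and-folding definition in Eq. (11) up to the ordering of modes, and the inverse averages all duplicated copies of each entry, which is what the pseudo-inverses $\boldsymbol{S}_n^\dagger$ compute (treat this as an illustration under those assumptions):

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def mdt(T, taus):
        """Multi-way delay embedding: N-th order tensor -> 2N-th order Hankel tensor
        of shape (tau_1, I_1 - tau_1 + 1, ..., tau_N, I_N - tau_N + 1)."""
        N = T.ndim
        W = sliding_window_view(T, taus)   # (K_1, ..., K_N, tau_1, ..., tau_N)
        order = [a for n in range(N) for a in (N + n, n)]
        return W.transpose(order)

    def inverse_mdt(T_H, orig_shape, taus):
        """Inverse MDT: average every duplicated copy of each original entry."""
        Ks = [L - t + 1 for L, t in zip(orig_shape, taus)]
        num = np.zeros(orig_shape)
        den = np.zeros(orig_shape)
        for offs in np.ndindex(*taus):                 # all within-window offsets
            idx = tuple(x for o in offs for x in (o, slice(None)))
            region = tuple(slice(o, o + K) for o, K in zip(offs, Ks))
            num[region] += T_H[idx]                    # copies of T at shifted positions
            den[region] += 1.0                         # duplication counts
        return num / den

    T = np.random.randn(20, 15)
    assert np.allclose(inverse_mdt(mdt(T, (5, 4)), T.shape, (5, 4)), T)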

2.2 Tucker decomposition algorithms

In this section, we explain the algorithm for solving Problem (5). It should be noted that Problem (5) is not convex, its solution is not unique, and it is not easy to obtain its global solution [13]. For Tucker decomposition without missing elements, it is known that the alternating least squares (ALS) algorithm [5] can efficiently find a stationary point. For the case with missing elements, algorithms based on gradient descent [8] and manifold optimization [14, 12] have been proposed in recent years. Gradient descent is usually slow to converge, and manifold optimization can accelerate it by correcting the update direction on the manifold. However, a common issue with both methods is step-size selection, because the convergence time is sensitive to the step-size parameter.

We propose instead an "auxiliary function"-based approach for performing Tucker decomposition with missing elements. The proposed algorithm is very simple but efficient because the ALS can be incorporated and there are no parameters to tune. First, we define the original cost function and the auxiliary function by

$f(\theta) := || \mathcal{Q}_H \circledast (\mathcal{T}_H - \mathcal{X}_\theta) ||_F^2,$   (13)
$h(\theta \,|\, \bar{\theta}) := f(\theta) + || \bar{\mathcal{Q}}_H \circledast (\mathcal{X}_{\bar{\theta}} - \mathcal{X}_\theta) ||_F^2,$   (14)

where $\theta = \{\mathcal{G}, \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}\}$ is the set of parameters, $\bar{\mathcal{Q}}_H$ represents the complement of the mask $\mathcal{Q}_H$, and $\mathcal{X}_\theta = [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}]\!]$ is a Tucker decomposition. Clearly, we have

$f(\theta) \leq h(\theta \,|\, \bar{\theta}) \quad \text{and} \quad f(\theta) = h(\theta \,|\, \theta).$   (15)

Let us consider the following algorithm:

$\theta^{(t+1)} = \mathop{\mathrm{argmin}}_{\theta} \; h(\theta \,|\, \theta^{(t)}),$   (16)

where the cost function is monotonically non-increasing since we have

$f(\theta^{(t+1)}) \leq h(\theta^{(t+1)} \,|\, \theta^{(t)}) \leq h(\theta^{(t)} \,|\, \theta^{(t)}) = f(\theta^{(t)}).$   (17)

It should be noted that $\theta^{(t+1)}$ only has to satisfy $h(\theta^{(t+1)} \,|\, \theta^{(t)}) \leq h(\theta^{(t)} \,|\, \theta^{(t)})$ for this non-increasing property to hold. Furthermore, because $\mathcal{Q}_H$ and $\bar{\mathcal{Q}}_H$ have disjoint supports, the auxiliary function can be transformed into

$h(\theta \,|\, \bar{\theta}) = || \mathcal{X}_{\mathrm{aux}} - \mathcal{X}_\theta ||_F^2,$   (18)

where $\mathcal{X}_{\mathrm{aux}} := \mathcal{Q}_H \circledast \mathcal{T}_H + \bar{\mathcal{Q}}_H \circledast \mathcal{X}_{\bar{\theta}}$. Thus, minimizing the auxiliary function itself can be regarded as a standard Tucker decomposition without missing elements, which can be solved efficiently using the ALS.

In practice, the proposed algorithm comprises the following two steps: (1) calculate the auxiliary tensor by

$\mathcal{X}_{\mathrm{aux}} \leftarrow \mathcal{Q}_H \circledast \mathcal{T}_H + \bar{\mathcal{Q}}_H \circledast [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}]\!],$   (19)

and (2) update $\mathcal{G}$ and $\{\boldsymbol{U}^{(n)}\}$ using the ALS [5] to optimize

$\min_{\mathcal{G}, \{\boldsymbol{U}^{(n)}\}} \; || \mathcal{X}_{\mathrm{aux}} - [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}]\!] ||_F^2 \quad$ s.t. $\quad \boldsymbol{U}^{(n)T}\boldsymbol{U}^{(n)} = \boldsymbol{I}$ for all $n$.   (20)

The orthogonality constraint on each $\boldsymbol{U}^{(n)}$ supports the uniqueness of the solution of the Tucker decomposition, and it does not change the reconstructed tensor relative to the original optimization problem. Finally, the proposed Tucker-based tensor completion is summarized in Algorithm 1. Algorithm 1 updates $\mathcal{G}$ and $\{\boldsymbol{U}^{(n)}\}$ for only one cycle of the ALS, so it does not achieve a strict minimization of the auxiliary function. However, it is guaranteed to decrease the auxiliary function because each update obtains the global minimum of the sub-problem of (20) with respect to the corresponding parameter. Thus, Algorithm 1 still has the monotonic convergence property.

1:  input: $\mathcal{T}_H$, $\mathcal{Q}_H$, and $(R_1, \dots, R_{2N})$.
2:  initialize: $\mathcal{G}$, and $\boldsymbol{U}^{(n)}$ for $n = 1, \dots, 2N$, randomly.
3:  repeat
4:     $\mathcal{X} \leftarrow [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}]\!]$;
5:     $\mathcal{X}_{\mathrm{aux}} \leftarrow \mathcal{Q}_H \circledast \mathcal{T}_H + \bar{\mathcal{Q}}_H \circledast \mathcal{X}$;
6:     for $n = 1, \dots, 2N$ do
7:        $\mathcal{Z} \leftarrow [\![\mathcal{X}_{\mathrm{aux}}; \{\boldsymbol{U}^T\}^{-n}]\!]$;
8:        $\boldsymbol{U}^{(n)} \leftarrow$ the $R_n$ leading left singular vectors of $\boldsymbol{Z}_{(n)}$;
9:     end for
10:     $\mathcal{G} \leftarrow [\![\mathcal{X}_{\mathrm{aux}}; \boldsymbol{U}^{(1)T}, \dots, \boldsymbol{U}^{(2N)T}]\!]$;
11:  until convergence
12:  output: $\mathcal{G}$, $\boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}$;
Algorithm 1 Tucker-based tensor completion
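A compact NumPy sketch of Algorithm 1 follows, reusing unfold and mode_product from the notation section; the initialization and stopping tests are simplified choices of ours, so treat it as an illustration rather than a faithful reimplementation:

    import numpy as np

    def multilinear_product(G, Us):
        """[[G; U^(1), ..., U^(N)]] as repeated mode products."""
        X = G
        for n, U in enumerate(Us):
            X = mode_product(X, U, n)
        return X

    def tucker_completion(T_H, Q_H, ranks, n_iters=100, Us_init=None, seed=0):
        """Auxiliary function-based Tucker completion (sketch of Algorithm 1)."""
        N = T_H.ndim
        rng = np.random.default_rng(seed)
        if Us_init is None:
            Us = [np.linalg.qr(rng.standard_normal((T_H.shape[n], ranks[n])))[0]
                  for n in range(N)]
        else:
            Us = [U.copy() for U in Us_init]
        G = multilinear_product(Q_H * T_H, [U.T for U in Us])  # crude initial core
        for _ in range(n_iters):
            X = multilinear_product(G, Us)               # current reconstruction
            X_aux = Q_H * T_H + (1.0 - Q_H) * X          # auxiliary tensor, Eq. (19)
            for n in range(N):                           # one ALS sweep for Eq. (20)
                Z = X_aux
                for k in range(N):
                    if k != n:
                        Z = mode_product(Z, Us[k].T, k)  # project the other modes
                U_left, _, _ = np.linalg.svd(unfold(Z, n), full_matrices=False)
                Us[n] = U_left[:, :ranks[n]]             # leading left singular vectors
            G = multilinear_product(X_aux, [U.T for U in Us])
        return G, Us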

2.3 Tucker decomposition with rank increment

A difficult and important issue with the Tucker-based tensor completion method is determining an appropriate rank setting $(R_1, \dots, R_{2N})$. If we aim to obtain the lowest rank setting that gives a sufficient approximation, the rank estimation problem can be considered as

$\min_{(R_1, \dots, R_{2N})} \; \sum_{n=1}^{2N} R_n \quad$ s.t. $\quad || \mathcal{Q}_H \circledast (\mathcal{T}_H - \mathcal{X}) ||_F \leq \delta,$   (21)

where $\delta \geq 0$ is a noise threshold parameter. However, we do not know whether a unique solution of Problem (21) exists, and it will depend on $\delta$. Furthermore, even if the best rank setting is unique, the resultant low-rank tensor is not unique. To address this issue, we propose the use of a very important strategy called the "rank increment" method. Figure 5 provides a flowchart to illustrate the concept of Tucker decomposition with and without rank increment. The rank increment strategy has been discussed in several studies of matrix and tensor completion [19, 7, 22, 24, 27, 30], but the present study is, to the best of our knowledge, its first application to Tucker-based completion. The main reason for using the rank increment method is the non-uniqueness of the solution for the tensor $\mathcal{X}$; the resultant tensor thus depends on its initialization. The main feature of the rank increment method is that the tensor is initialized by a lower-rank approximation than its target rank. Based on this strategy, the proposed algorithm can be described as follows.

  • Step 1: Set a small initial rank $R_n$ for all $n$.

  • Step 2: Obtain $\mathcal{G}$ and $\{\boldsymbol{U}^{(n)}\}$ with $(R_1, \dots, R_{2N})$ using Algorithm 1, and obtain $\mathcal{X} = [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}]\!]$.

  • Step 3: Check the noise condition $|| \mathcal{Q}_H \circledast (\mathcal{T}_H - \mathcal{X}) ||_F \leq \delta$; the algorithm is terminated if it is satisfied; otherwise, go to the next step.

  • Step 4: Choose the incremental mode $n^*$, increment $R_{n^*}$, and then go back to Step 2.

The problem is how to choose $n^*$ and how to increase the rank $R_{n^*}$. We propose choosing $n^*$ using the "$n$-th mode residual" of the cost function, which is defined as the residual on the multi-linear subspace spanned by all the factor matrices excluding the $n$-th mode factor. This can be formulated as

$n^* = \mathop{\mathrm{argmax}}_{n} \; || [\![\mathcal{X}_{\mathrm{aux}}; \{\boldsymbol{U}^T\}^{-n}]\!] - \mathcal{G} \times_n \boldsymbol{U}^{(n)} ||_F^2.$   (22)

We can interpret this as meaning that the selected $n^*$-th mode has a high expectation of cost reduction when $R_{n^*}$ increases while the other-mode ranks remain fixed.

Figure 5: Conceptual illustrations of the proposed methods.

For the rank increment process, we consider rank sequences for the individual modes. The rank sequence for the $n$-th mode is set with exponentially growing increments because the contribution rates of the singular vectors usually decrease exponentially. Thus, a small rank increment is important in the low-rank phase of the approximation, whereas a small rank increment is not effective in the relatively high-rank phase. Large rank steps in the high-rank phase help to accelerate the algorithm, but they should be selected carefully because excessively large rank steps may lead to problems with non-unique solutions. The proposed method for Tucker-based tensor completion with rank increment is summarized in Algorithm 2, and a code sketch follows the listing.

1:  input: $\mathcal{T}_H$, $\mathcal{Q}_H$, $\delta$, the rank sequences, and tol.
2:  initialize: $\mathcal{G}$, $\boldsymbol{U}^{(n)}$ for $n = 1, \dots, 2N$, and a low initial rank setting $(R_1, \dots, R_{2N})$;
3:  $\mathcal{X} \leftarrow [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}]\!]$;
4:  $f \leftarrow || \mathcal{Q}_H \circledast (\mathcal{T}_H - \mathcal{X}) ||_F$;
5:  repeat
6:     Do lines 5-10 in Algorithm 1;
7:     $\mathcal{X} \leftarrow [\![\mathcal{G}; \boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}]\!]$;
8:     $f_{\mathrm{new}} \leftarrow || \mathcal{Q}_H \circledast (\mathcal{T}_H - \mathcal{X}) ||_F$;
9:     if $|f - f_{\mathrm{new}}| / f \leq$ tol then
10:        choose $n^*$ by Eq. (22);
11:        increment $R_{n^*}$ along its rank sequence;
12:        expand $\boldsymbol{U}^{(n^*)}$, and $\mathcal{G}$;
13:     else
14:        $f \leftarrow f_{\mathrm{new}}$;
15:     end if
16:  until $f_{\mathrm{new}} \leq \delta$
17:  output: $\mathcal{G}$, $\boldsymbol{U}^{(1)}, \dots, \boldsymbol{U}^{(2N)}$;
Algorithm 2 Tucker-based tensor completion with rank increment
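A NumPy sketch of the rank-increment driver follows, reusing the helpers from the previous sketches; pad_factor is an assumed warm-start helper of ours (not from the paper), and the stalling test of Algorithm 2 is simplified to a fixed number of inner sweeps:

    import numpy as np

    def pad_factor(U, r_new, rng):
        """Append random columns and re-orthonormalize to reach r_new columns."""
        extra = rng.standard_normal((U.shape[0], r_new - U.shape[1]))
        return np.linalg.qr(np.hstack([U, extra]))[0]

    def rank_increment_completion(T_H, Q_H, delta, n_inner=20, max_steps=200, seed=0):
        """Sketch of Algorithm 2: grow one mode's rank at a time, warm-starting
        each fit from the previous factors; Eq. (22) selects the mode."""
        rng = np.random.default_rng(seed)
        N = T_H.ndim
        ranks = [1] * N
        Us = None
        for _ in range(max_steps):
            G, Us = tucker_completion(T_H, Q_H, ranks, n_iters=n_inner, Us_init=Us)
            X = multilinear_product(G, Us)
            if np.linalg.norm(Q_H * (T_H - X)) <= delta:   # noise condition (Step 3)
                break
            # mode-wise residuals (Eq. (22)): project onto all factors except mode n
            X_aux = Q_H * T_H + (1.0 - Q_H) * X
            scores = []
            for n in range(N):
                Z = X_aux
                for k in range(N):
                    if k != n:
                        Z = mode_product(Z, Us[k].T, k)
                scores.append(np.linalg.norm(Z - mode_product(G, Us[n], n)))
            n_star = int(np.argmax(scores))                # mode with largest residual
            ranks[n_star] = min(ranks[n_star] + 1, T_H.shape[n_star])
            Us[n_star] = pad_factor(Us[n_star], ranks[n_star], rng)
        return G, Us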

3 Experiments

3.1 Verification of the proposed method using a typical color image

First, in our experiments, we tried to fill in the missing slices of a typical color image by using the MDT and fixed-rank Tucker decomposition. The input image is depicted in Figure 5. We set $\boldsymbol{\tau} = (32, 32, 1)$, and a color image of size $(256, 256, 3)$ was converted into a $(32, 225, 32, 225, 1, 3)$-tensor. The fifth mode (of size one) can be ignored, so this Hankel tensor was regarded as a fifth-order tensor of size $(32, 225, 32, 225, 3)$. Figure 6 shows the images obtained with various settings of the rank parameter. Clearly, low-rank Tucker decomposition of the Hankel tensor successfully filled the missing area. However, an important issue is how to treat the difference between the meanings of the window-mode ranks (for the modes of size $\tau = 32$) and the shift-mode ranks (for the modes of size $256 - \tau + 1 = 225$). The fundamental difference between the two is due to the window sizes of 32 and 225. Thus, it should be noted that a lower window-mode rank may contribute to the representation of the local structure (e.g., smoothness) of the image, whereas a lower shift-mode rank may contribute to the representation of its global structure (e.g., recursive textures). In Figure 6, when we compare the two flows from the bottom right to the top right and from the bottom right to the bottom left, the low-rankness of the shift modes is clearly more important than that of the window modes for recovering the missing area.

Next, we tried to fill in the missing areas of color images using the MDT and Tucker decomposition with the rank increment method. The rank sequences for the proposed method were set with exponentially growing increments for the individual modes. Figure 7 shows the main flow of the processed images using the proposed method. Using the rank increment method, we first obtained a Tucker decomposition with a very low-rank setting (e.g., a rank-one tensor), and a higher-rank decomposition was then obtained by using the previous lower-rank decomposition as its initialization. We repeatedly obtained higher-rank decompositions until the residual was sufficiently small. Thus, the rank increment method automatically selected an appropriate rank setting.

Figure 6: Results obtained by fixed-rank MDT Tucker decomposition with $\boldsymbol{\tau} = (32, 32, 1)$ for various rank settings.
Figure 7: Results obtained by rank-increment MDT Tucker decomposition with $\boldsymbol{\tau} = (32, 32, 1)$.
Figure 8: Color images completed with various methods. Six incomplete color images were artificially generated, comprising: the slice missing case “facade 1,” the slice+voxel missing case “facade 2,” the random slice missing cases “house” and “peppers,” and the random voxel missing cases “Lena (95%)” and “Lena (99%).”
(PSNR,SSIM) HaLRTC TV reg. LR&TV reg. STDC SPCQV Proposed
facade 1 (19.31,0.937) (30.08,0.964) (30.18,0.964) (24.03,0.958) (30.04,0.972) (36.52,0.988)
facade 2 (16.03,0.852) (24.28,0.838) (24.47,0.842) (17.89,0.826) (25.65,0.916) (26.96,0.916)
house (8.81,0.271) (26.75,0.909) (26.70,0.913) (20.65,0.624) (27.04,0.909) (27.50,0.908)
peppers (9.92,0.288) (25.84,0.877) (25.81,0.885) (21.61,0.662) (26.59,0.888) (27.62,0.898)
Lena (95%) (9.63,0.145) (20.86,0.626) (20.89,0.621) (22.07,0.596) (23.52,0.700) (23.57,0.738)
Lena (99%) (5.32,0.021) (15.47,0.440) (15.49,0.443) (12.75,0.089) (18.49,0.508) (19.68,0.565)
Table 1: Comparisons of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) after color image completion.
Figure 9: Missing time frames in functional magnetic resonance images.
Figure 10: Results recovered for functional magnetic resonance image slices.

3.2 Comparison using color images

We compared the performance of the proposed method with those of state-of-the-art tensor completion algorithms: HaLRTC (nuclear-norm regularization) [16], TV regularization [32], nuclear-norm and TV regularization (LR&TV) [28], STDC (constrained Tucker decomposition) [4], and SPCQV (constrained PARAFAC tensor decomposition) [30]. We prepared six incomplete images for this experiment. The first image had 11 continuous missing slices along the vertical axis. Several horizontal and vertical slices and many voxels were missing from the second image. Random vertical and horizontal slices were missing from the third and fourth images. In addition, 95% and 99% of the voxels, selected at random, were missing from the fifth and sixth images, respectively.

Figure 8 shows the experimental results obtained after the completion of the various incomplete images. Magnified regions are depicted at the bottom right of the first to fourth images. HaLRTC recovered the random missing voxels in the second image, but it failed to recover the missing slices in all of the images. The TV regularization and LR&TV regularization methods filled the missing areas, but the recovered areas were unnaturally flat. STDC failed to recover the missing slices, and SPCQV retained "shadows" of the missing slices. By contrast, the proposed method recovered most of the missing slices without shadows. For the image with 95% of voxels missing, the proposed method and SPCQV obtained similar results. However, for the image with 99% of voxels missing, the result obtained by the proposed method was much better than that obtained by SPCQV. Table 1 shows the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [26] for these comparisons, where the best PSNR and SSIM values are emphasized in bold font. According to this quantitative evaluation, the proposed method performed better than the state-of-the-art methods in terms of the PSNR, and its results in terms of the SSIM were very competitive with those of SPCQV for some of the images.

# of missing slices HaLRTC TV reg. LR&TV reg. STDC SPCQV Proposed
1 (20.15,0.991) (23.13,0.992) (23.16,0.993) (23.30,0.993) (24.20,0.994) (23.25,0.994)
2 (14.71,0.982) (16.20,0.982) (16.29,0.984) (17.09,0.985) (17.80,0.985) (18.45,0.987)
3 (14.02,0.973) (15.20,0.972) (15.17,0.974) (15.99,0.977) (16.58,0.976) (14.54,0.978)
4 (12.17,0.964) (12.45,0.962) (12.46,0.964) (12.43,0.966) (12.98,0.966) (14.08,0.970)
5 (11.31,0.955) (11.41,0.952) (11.59,0.956) (11.59,0.957) (12.15,0.957) (13.85,0.964)
Table 2: SNR and mean SSIM after the completion of fMRI slices.

3.3 Comparison using fMRI images

In the next experiment, we tried to recover continuous missing time frames in fMRI images. Figure 9 shows the prepared data with missing time frames. There were 94 time frames of fMRI slices, and one to five continuous time frames were removed. We applied the proposed method to the given tensor, with the rank sequences for the individual modes set in the same exponentially growing manner as in the color image experiments.

Figure 10 shows the results of this experiment. Similar to the color image experiments, the ordinary low-rank model could not recover the missing slices. TV regularization produced flat results, and LR&TV regularization produced similar results. STDC and SPCQV obtained some improvements compared with the convex methods, but their results were still unclear. By contrast, the results obtained by the proposed method were very clear, although their accuracy was not high. Table 2 shows the signal-to-noise ratio (SNR) and mean SSIM obtained for all of the completion methods, demonstrating that the proposed method performed better than the state-of-the-art methods in terms of the mean SSIM and was also very competitive in terms of the SNR.

4 Discussion

4.1 Novelty and contributions

4.1.1 Low-rank model in embedded space

The idea of tensor completion using the MDT and its inverse transform is novel. Most existing methods for tensor completion are based on structural assumptions in the signal space, such as nuclear-norm regularization and TV regularization. By contrast, our method considers a structural assumption in the embedded space, which can be regarded as a novel strategy for the tensor completion problem. Furthermore, we employ delay embedding in this approach and extend it in a multi-way manner for tensors.

4.1.2 An auxiliary function-based approach for Tucker decomposition of a tensor with missing elements

The auxiliary function-based algorithm for Tucker decomposition is employed efficiently. The existing algorithms for the Tucker decomposition of a tensor with missing elements [8, 14, 12] are based on gradient methods. However, the convergence speed of a gradient method is quite sensitive to the step-size parameter. By contrast, the proposed algorithm does not have any hyper-parameters, and its monotonic convergence is guaranteed by the auxiliary function theory.

4.1.3 Model selection and uniqueness improvement by the rank increment method

To the best of our knowledge, this is the first application of the rank increment method to the Tucker decomposition of incomplete tensors. Several methods for estimating multi-linear tensor ranks have been studied, but only for complete tensors [23, 29]. Methods for estimating the ranks of matrix and PARAFAC decompositions have been proposed for incomplete data [22, 24, 27, 30], but these methods cannot be applied to our problem. Thus, we proposed a new method for estimating the multi-linear ranks of an incomplete tensor, which can also handle the issue of non-unique solutions.

4.2 Computational bottleneck

The proposed method has an issue with data volume expansion due to the MDT. An $N$-th order tensor is converted into a $2N$-th order tensor by the MDT, and its data size increases roughly $\prod_{n} \tau_n$-fold. This issue makes it difficult to apply the proposed method to large-scale tensors; this problem will be addressed in future research.

5 Conclusions

In this study, we proposed a novel method and algorithm for tensor completion problems that include missing slices. The recovery of missing slices is recognized as a difficult problem that ordinary tensor completion methods usually fail to solve. To address this problem, we introduced the concept of “delay embedding” from the study of dynamical systems and extended it for our problem. We showed that missing slices can be recovered by considering low-rank models in embedded spaces and that MDT is a good choice for this purpose.

At present, the proposed method is very basic, but it has many potential extensions, such as using different embedding transformations and constrained tensor decompositions (e.g., non-negative, sparse, and smooth). The MATLAB code will be made available via our website: https://sites.google.com/site/yokotatsuya/home/software.

References

  • [1] E. J. Candes and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
  • [2] E. J. Candes and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
  • [3] E. J. Candes and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
  • [4] Y.-L. Chen, C.-T. Hsu, and H.-Y. Liao. Simultaneous tensor decomposition and completion using factor priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(3):577–591, 2014.
  • [5] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-$(R_1, R_2, \dots, R_N)$ approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.
  • [6] T. Ding, M. Sznaier, and O. I. Camps. A rank minimization approach to video inpainting. In Proceedings of ICCV, pages 1–8. IEEE, 2007.
  • [7] S. V. Dolgov and D. V. Savostyanov. Alternating minimal energy methods for linear systems in higher dimensions. Part I: SPD systems. arXiv preprint arXiv:1301.6068, 2013.
  • [8] M. Filipovic and A. Jukic. Tucker factorization with missing data with application to low-n-rank tensor completion. Multidimensional Systems and Signal Processing, 26(3):677–692, 2015.
  • [9] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2), 2011.
  • [10] N. Gillis and F. Glineur. Low-rank matrix approximation with weights or missing data is NP-hard. SIAM Journal on Matrix Analysis and Applications, 32(4):1149–1165, 2011.
  • [11] X. Guo and Y. Ma. Generalized tensor total variation minimization for visual data recovery. In Proceedings of CVPR, pages 3603–3611, 2015.
  • [12] H. Kasai and B. Mishra. Low-rank tensor completion: a Riemannian manifold preconditioning approach. In Proceedings of ICML, pages 1012–1021, 2016.
  • [13] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
  • [14] D. Kressner, M. Steinlechner, and B. Vandereycken. Low-rank tensor completion by Riemannian optimization. BIT Numerical Mathematics, 54(2):447–468, 2014.
  • [15] Y. Li, K. R. Liu, and J. Razavilar. A parameter estimation scheme for damped sinusoidal signals based on low-rank Hankel approximation. IEEE Transactions on Signal Processing, 45(2):481–486, 1997.
  • [16] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):208–220, 2013.
  • [17] E. N. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2):130–141, 1963.
  • [18] I. Markovsky. Structured low-rank approximation and its applications. Automatica, 44(4):891–909, 2008.
  • [19] B. Mishra, G. Meyer, F. Bach, and R. Sepulchre. Low-rank optimization with trace norm penalty. SIAM Journal on Optimization, 23(4):2124–2149, 2013.
  • [20] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
  • [21] F. Shi, J. Cheng, L. Wang, P.-T. Yap, and D. Shen. Low-rank total variation for image super-resolution. In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 155–162. Springer, 2013.
  • [22] M. Tan, I. W. Tsang, L. Wang, B. Vandereycken, and S. J. Pan. Riemannian pursuit for big matrix recovery. In Proceedings of ICML, pages 1539–1547, 2014.
  • [23] M. E. Timmerman and H. A. Kiers. Three-mode principal components analysis: Choosing the numbers of components and sensitivity to local optima. British Journal of Mathematical and Statistical Psychology, 53(1):1–16, 2000.
  • [24] A. Uschmajew and B. Vandereycken. Greedy rank updates combined with Riemannian descent methods for low-rank optimization. In Proceedings of International Conference on Sampling Theory and Applications, pages 420–424, 2015.
  • [25] P. Van Overschee and B. De Moor. Subspace algorithms for the stochastic identification problem. In Proceedings of IEEE Conference on Decision and Control, pages 1321–1326. IEEE, 1991.
  • [26] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  • [27] T. Yokota and A. Cichocki. A fast automatic rank determination algorithm for noisy low-rank matrix completion. In Proceedings of APSIPA ASC, pages 43–46, 2015.
  • [28] T. Yokota and H. Hontani. Simultaneous visual data completion and denoising based on tensor rank and total variation minimization and its primal-dual splitting algorithm. In Proceedings of CVPR, pages 3732–3740, 2017.
  • [29] T. Yokota, N. Lee, and A. Cichocki. Robust multilinear tensor rank estimation using higher order singular value decomposition and information criteria. IEEE Transactions on Signal Processing, 65(5):1196–1206, 2017.
  • [30] T. Yokota, Q. Zhao, and A. Cichocki. Smooth PARAFAC decomposition for tensor completion. IEEE Transactions on Signal Processing, 64(20):5423–5436, 2016.
  • [31] Q. Zhao, L. Zhang, and A. Cichocki. Bayesian CP factorization of incomplete tensors with automatic rank determination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1751–1763, 2015.
  • [32] M. Zhu and T. Chan. An efficient primal-dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report, pages 08–34, 2008.

Supplemental Figures: Results for Various Color-image Completion

In this supplemental document, we show the results of color-image completion using various methods, together with a technical overview of the state-of-the-art methods. Table 3 summarizes the optimization concepts of the state-of-the-art tensor completion methods: high accuracy low-rank tensor completion (HaLRTC) [16], total variation regularization (TV reg.) [32, 28], low-rank and TV regularization (LRTV reg.) [28], simultaneous tensor decomposition and completion (STDC) [4], smooth PARAFAC tensor completion with quadratic variation (SPCQV) [30], and the proposed method.

In HaLRTC, we minimize the tensor nuclear norm (tNN), which is defined as the sum of the matrix nuclear norms (NN) of all mode-$n$ matricizations of the target tensor, under a data-consistency constraint on the observed entries. In TV reg., we minimize the tensor total variation (tTV), which is an extension of matrix total variation (TV), under the same data-consistency constraint. LRTV simultaneously minimizes the tNN and the tTV under the data-consistency constraint. The above three methods are formulated as convex optimization problems and can be solved by a primal-dual splitting (PDS) algorithm [Condat, 2013]. Since the LRTV method proposed in [28] generalizes both the HaLRTC and TV reg. methods and employs the PDS algorithm for optimization, we ran all three methods via the PDS algorithm with appropriate hyper-parameter settings in the LRTV formulation.

In STDC, a Tucker decomposition problem with regularization of the core tensor and the factor matrices is considered. The factor matrices in STDC are penalized with a nuclear norm to minimize the tensor rank, and the factor prior imposes a kind of similarity between the factor vectors. The optimization problem of STDC is non-convex and is solved using the augmented Lagrangian method. According to [4], the convergence of the STDC algorithm is not theoretically guaranteed.

In SPCQV, a PARAFAC decomposition problem with regularization of the factor vectors is considered. The PARAFAC decomposition consists of multiple rank-1 tensors, and each rank-1 tensor is constructed from factor vectors. In the SPCQV method, the individual factor vectors are constrained to vary smoothly, and the number of rank-1 tensors is minimized. The optimization problem of SPCQV is also non-convex and is solved using a hierarchical alternating least squares (HALS) algorithm. Unlike STDC, SPCQV has a monotonic non-increasing property of the cost function, and convergence to a stationary point is guaranteed.

In the proposed method, a low-rank Tucker decomposition problem in the embedded space is simply considered. In general, this optimization problem is non-convex; however, the monotonic convergence property is guaranteed by the auxiliary function-based optimization algorithm.

Figure 11 shows the eight benchmark images used in these experiments. For each image, we generated eight types of incomplete images: (a) 50%, (b) 70%, (c) 90%, (d) 95%, and (e) 99% of random voxels missing, (f) 11 continuous vertical slices missing, (g) a cross-shaped occlusion with 50% of random voxels missing, and (h) random vertical/horizontal slices missing. Thus, 64 incomplete images were generated in total. To each incomplete image, we applied six tensor completion methods: HaLRTC, TV reg., LRTV reg., STDC, SPCQV, and the proposed method. The hyper-parameters of all methods were tuned over several candidates, and the best settings were employed. Figures 12-19 show the results of these experiments. The first three convex methods recovered the images only in the low missing-ratio cases. STDC outperformed the three convex methods; however, it failed to recover the 95% and 99% missing cases and the slice missing cases, and its algorithm sometimes did not converge. SPCQV successfully recovered the random missing cases; however, it failed to recover the slice missing cases. The proposed method obtained clearer results for the random missing cases compared with SPCQV, and it successfully recovered the slice missing cases.

name | minimization | constraints
HaLRTC [16] | tensor nuclear norm (tNN): sum of the matrix nuclear norms over all mode-$n$ matricizations | data consistency
TV reg. [32, 28] | tensor total variation (tTV) | data consistency
LRTV reg. [28] | (tNN) + (tTV) | data consistency
STDC [4] | (NN of factor matrices) + (l2-norm of core tensor) + (factor prior) | Tucker decomposition + data consistency
SPCQV [30] | (number of rank-1 tensors) + (quadratic variation of factor vectors) | PARAFAC decomposition + data consistency
Proposed | (sum of multi-linear tensor ranks of the embedded tensor) | Tucker decomposition + data consistency in embedded space
Table 3: Optimization concepts of the compared methods
Figure 11: Eight benchmark images
Figure 12: Results for ‘Airplane’
Figure 13: Results for ‘Baboon’
Figure 14: Results for ‘Barbara’
Figure 15: Results for ‘Facade’
Figure 16: Results for ‘House’
Figure 17: Results for ‘Lena’
Figure 18: Results for ‘Peppers’
Figure 19: Results for ‘Sailboat’