Effective Tensor Completion via Element-wise Weighted Low-rank Tensor Train with Overlapping Ket Augmentation

09/13/2021
by   Yang Zhang, et al.

In recent years, tensor completion based on the tensor train (TT) format has seen an increasing number of applications because of its efficiency and effectiveness in dealing with higher-order tensor data. However, existing tensor completion methods using TT decomposition have two obvious drawbacks. One is that they only assign mode weights according to the degree of mode balance, even though some elements are recovered better in an unbalanced mode. The other is that serious blocking artifacts appear when the missing element rate is relatively large. To remedy these two issues, in this work we propose a novel tensor completion approach via an element-wise weighted technique. Accordingly, a novel formulation for tensor completion and an effective optimization algorithm, called tensor completion by parallel weighted matrix factorization via tensor train (TWMac-TT), is proposed. In addition, we specifically consider the recovery quality of edge elements from adjacent blocks. Different from traditional reshaping and ket augmentation, we utilize a new tensor augmentation technique called overlapping ket augmentation, which can further avoid blocking artifacts. We then conduct extensive performance evaluations on synthetic data and several real image data sets. Our experimental results demonstrate that the proposed algorithm TWMac-TT outperforms several other competing tensor completion methods.



I Introduction

Tensors are higher-order generalizations of matrices and vectors, which are represented as multidimensional arrays. Thus, compared with matrices and vectors, tensors possess a better ability to represent practical multidimensional data, such as RGB images, hyperspectral images and video sequences. Generally, although such tensors reside in extremely high-dimensional spaces, they often have low-dimensional structures that can naturally be characterized by so-called low-rankness. Consequently, low-rank tensor modeling is a powerful technique in practical multidimensional data analysis and has received much attention in recent years, e.g., [1, 2, 3].

As a generalization of low-rank matrix completion (LRMC) [4, 5, 6], low-rank tensor completion (LRTC) aims at recovering the missing entries of a higher-order tensor whose entries are only partially observed [7, 8, 9, 10]. It has achieved great success in the fields of computer vision, signal processing, and machine learning, among numerous others [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Due to their ability to maintain the intrinsic structure of the data, LRTC methods generally outperform LRMC methods on natural multidimensional image and video data, e.g., [11, 21, 22, 23, 24, 10, 25, 26, 27, 28]. The most popular LRTC methods are based on CANDECOMP/PARAFAC (CP) decomposition [29, 30] and Tucker decomposition [31]. However, for CP, the CP rank is still hard to compute due to its NP-hard nature, and the difficulty of CP rank estimation largely impedes wider application of CP decomposition. For Tucker, unfolding an $N$-way tensor into matrices yields comparably unbalanced matrices; in other words, there is a large gap between the numbers of rows and columns of the unfolding matrices. This imbalance makes Tucker rank minimization inefficient, which further limits its performance on tensor completion tasks in practice.

As recently stated in [23], tensor train (TT) decomposition [32] can overcome the aforementioned shortcomings of the CP and Tucker decompositions. Specifically, the seminal work [23] demonstrated that tensor completion algorithms based on TT decomposition outperform other popular algorithms on several image processing tasks. Since then, TT decomposition has been widely applied in computer vision [23, 33, 34, 35, 36, 19, 20]. However, such popular TT decomposition algorithms still need to address two critical issues:

  1. Weight assignment. The component matrices obtained by TT decomposition include both balanced and unbalanced matrices, where unbalanced refers to a large gap between the numbers of rows and columns. As stated in [23], balanced matrices generally perform better at tensor recovery tasks; hence, balanced component matrices are given larger weights when folding the matrices back into tensors. However, according to our observation, even in the most unbalanced mode some elements are recovered more precisely than in the balanced mode, which will be illustrated in detail in Subsection IV-A. That is, the current weight assignment strategy is coarse and inaccurate to some extent.

  2. Tensor augmentation. Low-TT-rank approximation requires augmenting the order of the tensor. As the order of the tensor increases, the number of factored matrices increases and more relatively balanced matrices can be obtained. In this way, TT-rank minimization becomes more effective, since we optimize the objective from multiple perspectives, i.e., sub-matrices of the original tensor. For instance, given a third-order tensor, the TT rank has only two components, which form a subset of the (three-component) Tucker rank; thus TT decomposition offers no advantage over Tucker decomposition in this case. To make better use of TT decomposition, it is vital to raise the order of the tensor above three. This necessitates tensor augmentation schemes, including reshaping [37] and ket augmentation (KA) [23, 33, 38]. Though KA possesses better physical meaning [38] than the reshaping technique, in the order-increasing procedure of KA the lower-order tensor is evenly divided into a number of blocks without utilizing any neighborhood information. Therefore, once the missing element rate is high, the tensors recovered with KA often show apparent blocking artifacts, as our experimental study in Section V shows.

To overcome the aforementioned drawbacks, we focus on both issues. Firstly, the current mode-wise weight assignment scheme cannot properly use the recovered information of each mode, as the recovery quality of an element does not strictly accord with the degree of balance of the mode matrix to which it belongs. To encourage well-recovered elements in the unbalanced modes and suppress poorly recovered elements in the balanced mode, we consider the weights of elements rather than of modes. The key idea of our element-wise weighting scheme is somewhat similar to that of the transformer [39]: both allow greater flexibility in order to gain better performance.

Secondly, to allow tensor augmentation to better maintain neighborhood information, we propose a new augmentation scheme that introduces an overlapping idea. The benefits of this new scheme are two-fold: 1) the order of the tensor can be increased further than with traditional schemes, which plays an important role in low-TT-rank optimization; and 2) the overlapping procedure enforces local compatibility and smoothness constraints [40], so that neighborhood information is exploited and blocking artifacts are avoided.

The main contributions of this work can be summarized as follows:

  • We propose an element-wise weighted low-rank tensor completion via tensor train (EWLRTC-TT) model for dealing with the LRTC task, in which each element can be estimated more precisely. Additionally, to prevent oversmoothing of the overlapping regions, the elements in the overlapping regions in each component matrix are assigned different weights.

  • We derive an effective algorithm to solve the EWLRTC-TT model called tensor completion by weighted parallel matrix factorization via tensor train (TWMac-TT). By parallel computing through multiple threads, high computation efficiency can be guaranteed.

  • We propose a novel tensor augmentation scheme called overlapping ket augmentation (OKA) and then incorporate it into the proposed EWLRTC-TT model. The experimental results demonstrate that the EWLRTC-TT model combined with OKA significantly outperforms several state-of-the-art methods.

The rest of the paper is organized as follows. In Section II, we introduce the notations and tensor basics used throughout the paper. We then provide a concise introduction to the current tensor completion model based on TT and KA in Section III. The EWLRTC-TT model and the OKA scheme are proposed in Section IV, together with an effective solving algorithm called TWMac-TT. In Section V, we present extensive experimental results to show how our proposed model outperforms other popular alternatives. The conclusion of our work is finally summarized in Section VI.

II Notations and basic definitions

Definition 1 (tensor). A tensor [41] is a higher-order generalization of vectors and matrices; its number of dimensions is called the order, and each dimension is called a mode. It can be understood as a multi-dimensional array. In this paper, scalars, vectors, and matrices are denoted by lowercase letters (e.g., $x$), boldface lowercase letters (e.g., $\mathbf{x}$), and capital letters (e.g., $X$), respectively. Higher-order tensors, i.e., tensors whose order is three or above, are denoted by calligraphic letters (e.g., $\mathcal{X}$).

Definition 2 (Frobenius norm). An $N$th-order tensor is represented as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with elements $x_{i_1 i_2 \cdots i_N}$, where $I_n$ is the dimension corresponding to mode $n$. The Frobenius norm [41] of $\mathcal{X}$ is defined as

$\|\mathcal{X}\|_F = \sqrt{\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N}^2}.$    (1)

Definition 3 (mode-$n$ unfolding). Mode-$n$ unfolding [42] (also known as mode-$n$ matricization or flattening) of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, denoted as $X_{(n)}$, is an operation that reshapes the tensor into a matrix by putting mode $n$ in the matrix rows and the remaining modes, in their original order, in the columns, such that

$X_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}.$    (2)

Definition 4 (mode-$n$ canonical matricization). Mode-$n$ canonical matricization [42] of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, denoted as $X_{[n]}$, is an operation that reshapes the tensor into a matrix by putting the first $n$ modes in the matrix rows and the remaining $N-n$ modes in the columns, i.e.,

$X_{[n]} \in \mathbb{R}^{(I_1 I_2 \cdots I_n) \times (I_{n+1} \cdots I_N)},$    (3)

such that

$X_{[n]}\left(\overline{i_1 i_2 \cdots i_n},\ \overline{i_{n+1} \cdots i_N}\right) = x_{i_1 i_2 \cdots i_N},$    (4)

where $\overline{i_1 i_2 \cdots i_n}$ denotes the multi-index formed by combining the indices $i_1, \ldots, i_n$.
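As a concrete illustration of Definitions 3 and 4, the following minimal numpy sketch (ours, not the authors' code) computes the mode-$n$ unfolding and the mode-$n$ canonical matricization of an array; note that the exact column ordering of an unfolding depends on the chosen convention.

```python
import numpy as np

def mode_n_unfolding(X, n):
    """Mode-n unfolding: mode n goes to the rows, the remaining modes to the columns."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def canonical_matricization(X, n):
    """Mode-n canonical matricization: the first n modes form the rows, the rest the columns."""
    row_dim = int(np.prod(X.shape[:n]))
    return X.reshape(row_dim, -1)

X = np.random.rand(3, 4, 5, 6)
print(mode_n_unfolding(X, 2).shape)         # (5, 72)
print(canonical_matricization(X, 2).shape)  # (12, 30)
```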

Definition 5 (tensor train). Let $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ be an $N$th-order tensor with dimension $I_n$ along mode $n$. Each element of the tensor can be represented in the tensor train (TT) format [32] as

$x_{i_1 i_2 \cdots i_N} = \sum_{r_0, r_1, \ldots, r_N} \mathcal{G}_1(r_0, i_1, r_1)\, \mathcal{G}_2(r_1, i_2, r_2) \cdots \mathcal{G}_N(r_{N-1}, i_N, r_N),$    (5)

where $\{\mathcal{G}_n \in \mathbb{R}^{r_{n-1} \times I_n \times r_n}\}_{n=1}^{N}$ is a set of third-order core tensors, and the vector $\mathbf{r} = (r_0, r_1, \ldots, r_N)$ with $r_0 = r_N = 1$ is the so-called tensor train rank (TT rank).
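The TT format in Eq. (5) can be made concrete with the following small sketch (an illustrative re-implementation, not the authors' code), where each core $\mathcal{G}_k$ is stored as a numpy array of shape $(r_{k-1}, I_k, r_k)$ with $r_0 = r_N = 1$.

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate one entry x_{i_1...i_N} as the product G_1[i_1] G_2[i_2] ... G_N[i_N]."""
    out = np.ones((1, 1))
    for G, i in zip(cores, idx):
        out = out @ G[:, i, :]           # multiply by the i-th lateral slice of each core
    return out.item()                    # the result is a 1x1 matrix, i.e., a scalar

def tt_to_full(cores):
    """Contract all cores into the full tensor (only sensible for small examples)."""
    full = cores[0]                      # shape (1, I_1, r_1)
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full[0, ..., 0]               # drop the boundary ranks r_0 = r_N = 1
```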

III Related work

III-A Tensor train

The tensor train is the simplest form of tensor network, and its rank is a generalization of the matrix, CP and Tucker ranks. Moreover, since a TT representation requires far fewer parameters than the corresponding CP and Tucker [43] representations, minimizing the TT rank makes it possible to complete more intricate images, which may not appear low-rank in the usual matrix sense.

The formulation of TT is given in Section II. TT can also be represented in graph form by a linear tensor network [44, 45] (see Fig. 1 for details). In the graph, the rectangles represent the factors of the TT, and the circles on the connecting edges represent the TT ranks. Using TT decomposition, a well-balanced matricization scheme can be obtained by canonically matricizing the tensor at each of its modes.

Fig. 1: A fifth-order tensor-train network. In this figure, each rectangle contains a spatial index $i_k$ and the auxiliary indices $r_{k-1}$ and $r_k$, corresponding to the factors in Eq. (5). Circles contain only the auxiliary indices and represent a link.

Low-TT-rank analysis is a well-studied classic tool in physics and quantum dynamics [46, 47]. In addition, it has been applied in many fields of mathematics, e.g., [48, 49, 50, 51, 52], among many others. Recently, a few works have applied TT to machine learning [53, 23]. These works all assume that the data lie in a low-TT-rank subspace but do not provide a mathematical explanation. In a recent work, Ye et al. [43] explicitly studied why the tensor train is more effective in dealing with real-world problems and gave some solid mathematical explanations.

III-B Tensor augmentation

As higher-order tensors offer more factored matrices by TT decomposition, tensor augmentation is a vital pre-processing step for utilizing the local structures. Basically, there are two main ways to transform a tensor into a higher-order one, namely the reshape operation[54, 37] and ket augmentation[38, 23].

The reshape operation is a MATLAB function. It can reshape a matrix into an arbitrary higher-order tensor whose elements are taken column-wise. However, since the local structure of the data is not considered, the recovery results obtained with the reshape operation are not satisfactory. The KA method, inspired by quantum entanglement, overcomes this drawback of the reshape operation and has demonstrated good performance on tensor completion tasks [23, 33].

KA was first introduced in [38] to cast the pixel values of an image into a real ket of a Hilbert space, and it was then generalized in [23] to deal with RGB images. Fig. 2 shows two examples of the ket augmentation procedure: casting a gray image into a real ket state, and the corresponding generalized version for an RGB image, which is achieved by adding a third channel index $j$. To be specific, for the gray image on the left of Fig. 2, the pixels are regrouped by first dividing the image into four sub-blocks indexed by $i_1$ and then dividing each sub-block into four smaller inner sub-blocks indexed by $i_2$. For the generalized RGB version on the right-hand side of Fig. 2, we denote the color channel by the index $j$, where $j = 1, 2, 3$ represents red, green and blue, respectively, and then execute exactly the same procedure as for the gray image. These two cases can be formulated mathematically as follows,

$|T\rangle = \sum_{i_1, i_2 = 1}^{4} c_{i_1 i_2}\, |i_1\rangle \otimes |i_2\rangle, \qquad |T\rangle = \sum_{i_1, i_2 = 1}^{4} \sum_{j = 1}^{3} c^{j}_{i_1 i_2}\, |i_1\rangle \otimes |i_2\rangle \otimes |j\rangle,$    (6)

where $c_{i_1 i_2}$ is the pixel value in the gray image and $c^{j}_{i_1 i_2}$ is the pixel value corresponding to color $j$ in the color image. $|i_k\rangle$ is the orthonormal base denoting the position of each pixel; in the example, the values $i_k = 1, 2, 3, 4$ represent the positions indexed from top to bottom and left to right, so we have four orthonormal bases, namely $|1\rangle, |2\rangle, |3\rangle, |4\rangle$. Similar to $|i_k\rangle$, $|j\rangle$ is also an orthonormal base, used to denote the different color channels; the red, green and blue channels are represented by $|1\rangle, |2\rangle, |3\rangle$.

Generally, KA can deal with tensors of size $2^n \times 2^n \times c$, where $c$ denotes the number of channels of the image and $n$ is a positive integer. By repeating the structured block addressing procedure $n$ times, we obtain an $(n+1)$th-order tensor. The obtained tensor can be formulated as follows,

$|T\rangle = \sum_{i_1, \ldots, i_n = 1}^{4} \sum_{j = 1}^{c} c^{j}_{i_1 i_2 \cdots i_n}\, |i_1\rangle \otimes |i_2\rangle \otimes \cdots \otimes |i_n\rangle \otimes |j\rangle.$    (7)

Thus, KA is able to represent a low-order tensor of size $2^n \times 2^n \times c$ by a higher-order tensor of size $4 \times 4 \times \cdots \times 4 \times c$.
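For a $2^n \times 2^n$ gray image, the block addressing above can be implemented by interleaving the binary digits of the row and column indices, as in the hedged sketch below (our illustrative assumption of how KA can be realized in code; the channel index of an RGB image would simply be appended as an extra mode).

```python
import numpy as np

def ket_augmentation(img):
    """Cast a 2**n x 2**n gray image into an n-th order tensor with modes of size 4."""
    n = int(np.log2(img.shape[0]))
    assert img.shape == (2 ** n, 2 ** n)
    t = img.reshape([2] * n + [2] * n)              # split rows/columns into n binary digits
    order = [ax for k in range(n) for ax in (k, n + k)]
    t = t.transpose(order)                          # pair row digit k with column digit k
    return t.reshape([4] * n)                       # each pair indexes one of 4 sub-blocks

img = np.arange(16, dtype=float).reshape(4, 4)
print(ket_augmentation(img).shape)                  # (4, 4): one size-4 mode per split level
```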

Fig. 2: The ket augmentation procedure [38, 23]. Left: a structured block addressing procedure to cast a gray image into a higher-order tensor. Right: the same procedure for an RGB image.

IV The proposed framework

IV-A EWLRTC-TT Model

The goal of matrix completion is to recover the missing entries of a matrix $M$ from its partially known entries, whose locations are given by a subset $\Omega$. This can be achieved via the well-known rank minimization technique [55, 56]:

$\min_{X} \ \operatorname{rank}(X) \quad \text{s.t.} \quad X_{\Omega} = M_{\Omega}.$    (8)

As a higher-order generalization of matrix completion, the optimization problem of tensor completion can be similarly written as follows:

$\min_{\mathcal{X}} \ \operatorname{rank}(\mathcal{X}) \quad \text{s.t.} \quad \mathcal{X}_{\Omega} = \mathcal{T}_{\Omega},$    (9)

where $\mathcal{T}$ is an $N$th-order tensor representing the true tensor, and the index set $\Omega$ gives the locations of the partially known entries. Based on tensor train decomposition and parallel matrix factorization, the optimization objective of the tensor completion problem can be written as

$\min_{\{U_n\}, \{V_n\}, \mathcal{X}} \ \sum_{n=1}^{N-1} \frac{\alpha_n}{2} \left\| U_n V_n - X_{[n]} \right\|_F^2 \quad \text{s.t.} \quad \mathcal{X}_{\Omega} = \mathcal{T}_{\Omega},$    (10)

which is called TMac-TT in [23]. In this model, $U_n V_n$ denotes the parallel matrix factorization of the canonical TT mode matrix $X_{[n]}$, where $U_n \in \mathbb{R}^{(\prod_{k=1}^{n} I_k) \times r_n}$ and $V_n \in \mathbb{R}^{r_n \times \prod_{k=n+1}^{N} I_k}$, and $\alpha_n$ denotes the weight of the matrix $X_{[n]}$ with $\sum_{n=1}^{N-1} \alpha_n = 1$. The value of the weight is calculated according to the degree of balance of the canonical TT mode matrices; in other words, we assign larger weights to the more balanced factored matrices and smaller weights to the less balanced factored matrices. The balance degree of a matrix is defined by

$\alpha_n = \frac{\delta_n}{\sum_{k=1}^{N-1} \delta_k},$    (11)

where $\delta_n = \min\left(\prod_{k=1}^{n} I_k,\ \prod_{k=n+1}^{N} I_k\right)$. It has been shown that this model outperforms other tensor completion models using TT decomposition [23].
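A hedged sketch of the mode-balance weights of Eq. (11) as we have reconstructed it above (function and variable names are ours):

```python
import numpy as np

def balance_weights(dims):
    """dims = (I_1, ..., I_N); return the weights alpha_1, ..., alpha_{N-1} of Eq. (11)."""
    dims = np.asarray(dims)
    delta = np.array([min(np.prod(dims[:k + 1]), np.prod(dims[k + 1:]))
                      for k in range(len(dims) - 1)])
    return delta / delta.sum()

print(balance_weights((4, 4, 4, 4)))   # [1/6, 2/3, 1/6]: the most balanced mode gets the largest weight
```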

Even though the most balanced mode is given the largest weight, based on the assumption that it performs the best among the different modes, the unbalanced modes are also used to obtain the final recovery result rather than relying on the most balanced mode alone. This implies that the unbalanced modes also provide useful information. In fact, some elements are recovered even more accurately in the unbalanced modes than in the most balanced mode.

Fig. 3: Absolute error of 50 randomly selected missing elements after recovery in different modes: (1) mode 1; (2) mode 4; (3) mode 7.

To further demonstrate this conjecture, we take a real gray image, Lena, as an example, to examine whether all the elements are restored best by the most balanced mode. To better exploit the ability of TT, before the image completion process we use KA to increase the order of the image and obtain an eighth-order tensor. Thus, in this example, we have 7 mode matrices obtained by the canonical matricization in Eq. (3). Fig. 3 shows the absolute error between the estimated value and the corresponding true value for 50 randomly selected elements. For better visualization, we sort the elements in ascending order of the absolute error of the most balanced mode, i.e., the mode-4 matrix. As shown in this figure, although mode 4 performs best among all the modes for most of the elements, a certain percentage of the elements are still recovered better in the unbalanced modes. Moreover, this observation holds regardless of the number of iterations. Interestingly, the absolute error from the unbalanced modes has a large variance in the first few iterations and converges as the algorithm iterates further. This phenomenon is in line with intuition: the balanced mode provides a stronger rank constraint initially, yet the unbalanced modes reveal information more stably once most elements have been appropriately recovered.

As a matter of fact, we consider different weights for the elements not only when conducting the low-TT-rank approximation but also when folding all mode matrices back into one higher-order tensor. Thus, accurate element weights can be calculated iteratively by our model.

A new approach to the LRTC problem in (9) is to address it by element-wise weighted tensor completion via TT decomposition. In each mode, the matrix decomposition can be represented as $X_{[n]} = U_n V_n$ for $n = 1, \ldots, N-1$, where $U_n \in \mathbb{R}^{(\prod_{k=1}^{n} I_k) \times r_n}$ and $V_n \in \mathbb{R}^{r_n \times \prod_{k=n+1}^{N} I_k}$. Any rank-$r_n$ matrix can be decomposed in such a way and, conversely, multiplying any such pair of smaller factors yields a matrix of rank at most $r_n$. Therefore, this problem can be seen as an unconstrained minimization over pairs of matrices with the objective

$\min_{U, V} \ \frac{1}{2} \left\| W \odot (X - U V) \right\|_F^2,$    (12)

where $X$ is the given matrix; $W$ is a binary matrix marking the missing values in $X$, with $w_{ij} = 1$ if $x_{ij}$ is available and $w_{ij} = 0$ if it is missing; and $\odot$ is the Hadamard product.
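The masked objective of Eq. (12) is straightforward to evaluate; the following one-liner (ours, with hypothetical argument names) makes the roles of the mask and the factors explicit.

```python
import numpy as np

def weighted_factorization_loss(X, W, U, V):
    """X: m x n data, W: m x n binary mask (1 = observed), U: m x r, V: r x n."""
    return 0.5 * np.linalg.norm(W * (X - U @ V), 'fro') ** 2
```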

Therefore, the proposed EWLRTC-TT model can be formulated as

(13)

IV-B Overlapping ket augmentation

To eliminate the blocking artifacts, we consider a new manner of tensor augmentation, called overlapping ket augmentation (OKA), to represent a low-order tensor by a higher-order one. Fig. 4 shows the processes of OKA and KA. Specifically, we assume that the color is indexed by $j$, where $j = 1, 2, 3$ represents the channels of an RGB image. Compared with KA, the matrices from the three channels (red, green, and blue) are divided by OKA into four blocks that share some overlapping elements. The top-level sub-blocks are indexed by $i_1$. Next, taking one block (marked with colors) obtained in step 1 as an example, OKA further divides the colored block into four smaller overlapping sub-blocks marked with different colors and indexes them by $i_2$. A higher-order tensor can be constructed by repeating such a division step.

Fig. 4: Process of OKA and KA: an example of augmenting a third-order RGB image into a fifth-order tensor.

Fig. 5 further illustrates the element reallocation procedures of OKA and KA when a matrix is transformed into a higher-order tensor. The small squares marked with colors and numbers in Fig. 5 represent different elements of the matrix. Fig. 5 (a) shows the procedure of OKA. The matrix is divided with overlapping elements to obtain four sub-matrices, which are stacked into a third-order tensor in the first step. Then, OKA further divides this third-order tensor with overlapping elements to form four third-order tensors and stacks them into a fourth-order tensor. KA applies the same procedure except that no elements overlap when dividing the matrix; thus, KA yields a third-order tensor, as shown in Fig. 5 (b).

Different from the KA formulation in Section III-B, the process in Fig. 5 (a) can be formulated mathematically as follows,

(14)

where $c_{i_1 i_2}$ is the pixel value in the matrix indexed by the process shown in Fig. 5 (a), and $|i_k\rangle$ is the orthonormal base, with exactly the same meaning as in the KA formulation.

In addition to the above-mentioned elimination of blocking artifacts, another advantage of OKA is that the input tensors are no longer restricted to having $2^n$ rows and columns. Furthermore, we can deal with non-square cases, i.e., cases in which the numbers of rows and columns are not equal. In other words, we can augment an arbitrary tensor to a higher-order one. We also design a small automatic rule for computing the number of overlapping elements in every step.

We then extend this recursive procedure to the general case,

(15)

where the block size is determined by the input size and the number of overlapping elements. In Fig. 5 (a), the number of overlapping elements is set to 1 or 2, where the overlap is determined by the size of the current tensor to be processed. To be specific, if the number of rows (or columns) of the frontal slice is odd, the current overlap is set to 2, and if it is even, the overlap is set to 1. We formulate this automatic rule as follows,

(16)

Then, the augmented order of the input tensor can be determined by iteratively checking whether the resulting frontal slice of the tensor is still large enough to be divided. The Nway initialization of the OKA algorithm keeps increasing the order of the tensor until it cannot be divided further. After obtaining the desired Nway, the OKA algorithm can be performed as described in Algorithm 1.

Algorithm 1: OKA Procedure
Input: the observed data
1:  Initialization: the target Nway vector, the starting position index sets, and the shape of each augmentation step
2:  for N-1 loops do
3:     for each sub-block up to Nway do
4:        
5:        
6:        
7:     end for
8:  end for
Output: higher-order tensor
Fig. 5: A structured block addressing procedure that casts a second-order matrix of size 4×4 into a higher-order tensor. (a) Tensor augmentation by OKA. Two steps are taken to increase the tensor order: in the first step, a third-order tensor is obtained; in the second step, a fourth-order tensor is formed from the third-order tensor. (b) Tensor augmentation by KA. Without any element overlaps, only one step is taken, yielding a third-order tensor.
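To make the OKA division of Fig. 5 (a) and Algorithm 1 concrete, the following hedged Python sketch performs the recursive overlapping split. The exact overlap rule is fixed by Eq. (16) in the paper; as an illustrative assumption we use the smallest overlap that makes the two halves of each axis equal in length (1 when the side length is odd, 2 when it is even), which reproduces the 4×4 example of Fig. 5 (a).

```python
import numpy as np

def split_axis(length):
    """Two overlapping index ranges of equal length covering one axis (an assumed rule)."""
    half = (length + 1) // 2 if length % 2 else length // 2 + 1
    return slice(0, half), slice(length - half, length)

def oka_step(t):
    """One OKA step: stack the four overlapping quadrants along a new size-4 mode."""
    r0, r1 = split_axis(t.shape[0])
    c0, c1 = split_axis(t.shape[1])
    blocks = [t[r0, c0], t[r0, c1], t[r1, c0], t[r1, c1]]
    return np.stack(blocks, axis=-1)

def oka(img, min_size=2):
    """Repeat the division until the leading slice is too small to split again."""
    t = img
    while t.shape[0] > min_size and t.shape[1] > min_size:
        t = oka_step(t)
    return t

t = oka(np.arange(16.0).reshape(4, 4))
print(t.shape)   # (2, 2, 4, 4): the 4x4 matrix becomes a fourth-order tensor, as in Fig. 5 (a)
```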

IV-C TWMac-TT-OKA Algorithm

To solve the weighted model EWLRTC-TT in (13), we take its partial derivatives with respect to the factor matrices $U_n$ and $V_n$, that is,

(17)
(18)

To ensure that the solution is identifiable, a norm penalty on both $U_n$ and $V_n$ is introduced into the above formulation:

(19)

Thus, it is easy to obtain closed-form update formulas by setting the partial derivatives to zero, i.e.,

(20)
(21)

where $D_j$ is a diagonal matrix formed from the elements of the $j$th column of the weight matrix and $D_i$ is a diagonal matrix formed from the elements of the $i$th row of the weight matrix.

Since the norm penalty is used in (19), we update the weights via a convex function, following the formulation used in [57]:

(22)

where the two parameters are positive constants. As a result, by iteratively updating the factor matrices and the weights, we can guarantee a (locally) optimal solution [58].
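A hedged sketch of the closed-form updates in Eqs. (20)-(21) as we read them: each column of $V_n$ and each row of $U_n$ solves a small ridge-regularized weighted least-squares problem with the corresponding diagonal weight matrix (the penalty parameter `lam` and the function names are ours; the weight update of Eq. (22) is not shown).

```python
import numpy as np

def update_V(X, W, U, lam):
    """Column-wise update: V[:, j] = (U^T D_j U + lam I)^{-1} U^T D_j X[:, j]."""
    r, n = U.shape[1], X.shape[1]
    V = np.zeros((r, n))
    for j in range(n):
        Dj = np.diag(W[:, j])                        # weights of the j-th column
        A = U.T @ Dj @ U + lam * np.eye(r)
        V[:, j] = np.linalg.solve(A, U.T @ Dj @ X[:, j])
    return V

def update_U(X, W, V, lam):
    """Row-wise update: U[i, :] = X[i, :] D_i V^T (V D_i V^T + lam I)^{-1}."""
    m, r = X.shape[0], V.shape[0]
    U = np.zeros((m, r))
    for i in range(m):
        Di = np.diag(W[i, :])                        # weights of the i-th row
        A = V @ Di @ V.T + lam * np.eye(r)
        U[i, :] = np.linalg.solve(A, V @ Di @ X[i, :])
    return U
```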

To solve the EWLRTC-TT model proposed in Section IV-A, we apply the block coordinate descent (BCD) algorithm, following TMac [17] and TC-MLFM [16]. More precisely, after updating the factor matrices and the weights for all modes, we compute the elements of the tensor as follows:

(23)

where the fold operation is defined as folding all the mode matrices back into a tensor according to their element-wise weights. This algorithm is referred to as tensor completion by parallel weighted matrix factorization based on tensor train with overlapping ket augmentation (TWMac-TT-OKA). The detailed procedure is presented in Algorithm 2.

Algorithm 2: TWMac-TT-OKA
Input: the observed data and the index set of known entries
Pre-processing: augment the input tensor by the OKA algorithm (details in Algorithm 1)
Parameters: as described in Section V-B
1: Initialization of the factor matrices and the element weights
While not converged do:
2: for each mode n = 1 to N-1 do
3:       Unfold the tensor to get the mode-n canonical matricization
4:       Update the factor matrices by Eqs. (20) and (21)
5:       Update the element-wise weights by Eq. (22)
6:       Form the completed mode matrix from the updated factors
7: end for
8: Update the tensor using Eq. (23)
End while
Fig. 6: The pipeline of the proposed algorithm, illustrated with a toy example in which the input is a matrix. The proposed algorithm mainly includes two stages, namely OKA and TWMac-TT. In the tensor augmentation stage of this example, OKA uses two steps to obtain a higher-order tensor. In the TWMac-TT stage, the algorithm iteratively repeats the weighted low-rank matrix decomposition until it converges.

To make the process clearer, we display the pipeline of the proposed algorithm in Fig. 6 using an example. We assume that there is a matrix with partial observations, where the missing elements are represented by black squares containing a question mark. To recover the original matrix, TWMac-TT-OKA mainly carries out two stages, which are boxed with rounded rectangles in orange and green, respectively. The first stage is pre-processing, which aims to augment the input via the OKA scheme. In particular, the example matrix does not meet the conditions of KA or reshaping, as 5 is a prime number and can be divided only by 1 and itself; only OKA has the ability to increase the order of this matrix. In this specific case, two steps are taken, similar to the procedure in Fig. 5. In the second stage, TWMac-TT is applied to the augmented tensor. By unfolding the tensor using mode-$n$ canonical matricization, two mode matrices are formed. These two matrices are then completed using the weighted LRMC to acquire the corresponding weight matrices. A weight matrix is a gray-scale matrix whose known elements are 1 and whose estimated elements range from 0 to 1. By multiplying the weight matrices and the mode matrices with the Hadamard product and folding them according to the weights, two third-order tensors are obtained, which are added together to get a recovered tensor. The process in the second stage is repeated until the algorithm converges. By conducting the inverse operation of OKA, we recover the matrix as the output of the proposed algorithm.
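The following high-level sketch summarizes one outer iteration of the second stage (cf. Algorithm 2 and Fig. 6). It is an assumption-laden illustration rather than the authors' implementation: the exact element-wise fold rule is given by Eq. (23), and here we simply use a weight-normalized average of the per-mode reconstructions and re-impose the observed entries; `update_U` and `update_V` are the sketches above, and the weight update of Eq. (22) is omitted.

```python
import numpy as np

def twmac_tt_step(X, mask, U, V, Wmat, lam=1e-3):
    """One outer iteration. U[k], V[k], Wmat[k] belong to the (k+1)-th canonical mode."""
    dims, N = X.shape, X.ndim
    num = np.zeros(dims)
    den = np.zeros(dims)
    for n in range(1, N):                               # canonical modes 1 .. N-1
        Xn = X.reshape(int(np.prod(dims[:n])), -1)      # mode-n canonical matricization
        k = n - 1
        V[k] = update_V(Xn, Wmat[k], U[k], lam)
        U[k] = update_U(Xn, Wmat[k], V[k], lam)
        rec = (U[k] @ V[k]).reshape(dims)               # fold the completed mode matrix back
        num += Wmat[k].reshape(dims) * rec              # assumed weighted fold (stand-in for Eq. (23))
        den += Wmat[k].reshape(dims)
    X_new = num / np.maximum(den, 1e-12)
    return np.where(mask > 0, X, X_new), U, V           # keep the observed entries fixed
```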

V Experiments

V-A Experimental settings and accuracy metrics

We conduct extensive experiments on synthetic data, real color images and magnetic resonance imaging (MRI) data to demonstrate the effectiveness of our model and algorithm. We compare our TWMac-TT algorithm with several classic and state-of-the-art tensor completion methods, including TMac [17], SiLRTC [11], FBCP [59], STDC [60], and TMac-TT [23].

The proposed methods are TMac-TT+OKA, TWMac-TT and TWMac-TT+OKA, among which TWMac-TT+OKA is our main model.

All the evaluated methods are summarized as follows:

  1. Tensor completion by parallel matrix factorization via tensor train enhanced by overlapping ket augmentation (TMac-TT+OKA) is used to verify our novel tensor augmentation technique in real-world data experiments.

  2. Tensor completion by parallel weighted matrix factorization via tensor train (TWMac-TT) is used to verify our novel weighting technique in synthetic experiments.

  3. Tensor completion by parallel weighted matrix factorization via tensor train enhanced by overlapping ket augmentation (TWMac-TT+OKA) is the final model considered in this paper.

  4. Tensor completion by parallel matrix factorization (TMac).

  5. Tensor completion by parallel matrix factorization by utilizing only the most balanced mode (TMac-Square).

  6. Simple low-rank tensor completion (SiLRTC).

  7. Bayesian CP factorization of incomplete tensors (FBCP).

  8. Simultaneous tensor decomposition and completion using factor priors (STDC).

  9. Tensor completion by parallel matrix factorization via tensor train (TMac-TT).

The algorithms are evaluated under different missing ratios (mr), defined as

$\text{mr} = \frac{m}{\prod_{n=1}^{N} I_n},$    (24)

where $m$ is the total number of missing elements, which are sampled uniformly at random from the original tensor $\mathcal{T}$.

To quantitatively evaluate the recovery quality, the relative squared error (RSE) is used, which is defined as

$\text{RSE} = \frac{\|\hat{\mathcal{X}} - \mathcal{T}\|_F}{\|\mathcal{T}\|_F},$    (25)

where $\hat{\mathcal{X}}$ is the recovered tensor and $\mathcal{T}$ is the true tensor. Moreover, the widely used peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are also employed as quantitative measures for image data.
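For reproducibility, the missing ratio of Eq. (24), the RSE of Eq. (25) and the PSNR can be computed as follows (a simple sketch using our own function names; SSIM is best taken from an existing library such as scikit-image).

```python
import numpy as np

def missing_ratio(mask):
    """mask: 1 at observed entries, 0 at missing entries."""
    return 1.0 - mask.sum() / mask.size

def rse(X_hat, X_true):
    return np.linalg.norm((X_hat - X_true).ravel()) / np.linalg.norm(X_true.ravel())

def psnr(X_hat, X_true, peak=255.0):
    mse = np.mean((X_hat - X_true) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```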

V-B Parameter setting

First, we determine the two parameters in (22), which are designed to control the distribution of the weights. In all the following experiments, we normalize the input tensor and then set these two parameters empirically, as such choices yield very good and stable results.

Second, as one of the most important parameters, the TT ranks are initialized by retaining the relatively important singular values of each mode matrix. For the $n$-th mode, the rank $r_n$ is given by the number of singular values satisfying the following inequality:

(26)

where the singular values are sorted in descending order, and the threshold $th$ is chosen from a candidate set in practice. Empirical guidelines for tuning the threshold parameter are as follows. For simple images such as Sailboat or Airplane, a larger $th$ generally works better; conversely, for intricate images like Lena, a smaller $th$ yields better performance. The reason behind this rule of thumb may lie in the fact that images with fewer variations are essentially of lower rank, so neglecting more singular values does not affect the retained information and helps recover the missing entries, and vice versa. In practice, we use a default value of $th$. By preserving the largest singular values via this threshold method, we retain most of the information and obtain relatively small ranks for the unfolding matrices.
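A hedged sketch of this initialization: for each canonical mode matrix we compute the singular values and keep those selected by the threshold. The paper's exact inequality is Eq. (26); as an assumption, the sketch keeps the singular values whose ratio to the largest one is at least `th`.

```python
import numpy as np

def init_tt_ranks(X, th=0.01):
    """Initialize (r_1, ..., r_{N-1}) from the singular values of each canonical matricization."""
    dims, N = X.shape, X.ndim
    ranks = []
    for n in range(1, N):
        Xn = X.reshape(int(np.prod(dims[:n])), -1)      # mode-n canonical matricization
        s = np.linalg.svd(Xn, compute_uv=False)         # singular values in descending order
        ranks.append(int(np.sum(s / s[0] >= th)))       # assumed thresholding rule
    return ranks
```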

V-C Synthetic data completion

We conduct a series of simulations to achieve two main goals. The first goal is to validate the effectiveness of the weight estimation of the proposed model and its solving algorithm. The second goal is to demonstrate the superiority of our method over the compared methods. The simulated low-TT-rank tensor is generated simply by using the TT representation formula [32]:

$x_{i_1 i_2 \cdots i_N} = \sum_{r_0, r_1, \ldots, r_N} \mathcal{G}_1(r_0, i_1, r_1)\, \mathcal{G}_2(r_1, i_2, r_2) \cdots \mathcal{G}_N(r_{N-1}, i_N, r_N),$    (27)

where the decomposition components $\mathcal{G}_n$ are generated randomly according to a standard Gaussian distribution. For convenience, the dimensions of the TT modes are set to be equal, and so are the corresponding TT ranks.

In general, we conduct four sets of experiments with different sizes to cover different tensor orders and ranks, including a fourth-order tensor, a fifth-order tensor, a sixth-order tensor and a seventh-order tensor. The corresponding TT ranks are set to (10,10,10) (fourth-order), (5,5,5,5) (fifth-order), (4,4,4,4,4) (sixth-order), and (4,4,4,4,4,4) (seventh-order), respectively.
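The synthetic tensors can be generated, for example, as in the sketch below (our illustrative code): each TT core is drawn from a standard Gaussian distribution and the cores are contracted with the TT formula of Eq. (27) using the `tt_to_full` helper from Section II; the dimensions used here are placeholders, not the paper's exact sizes.

```python
import numpy as np

def random_tt_tensor(dims, ranks, seed=0):
    """dims = (I_1, ..., I_N); ranks = (r_1, ..., r_{N-1}) with r_0 = r_N = 1."""
    rng = np.random.default_rng(seed)
    r = [1] + list(ranks) + [1]
    cores = [rng.standard_normal((r[k], dims[k], r[k + 1])) for k in range(len(dims))]
    return tt_to_full(cores)

T = random_tt_tensor((20, 20, 20, 20), (10, 10, 10))   # a fourth-order example with TT ranks (10,10,10)
print(T.shape)
```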

To validate the quality of the weight estimation procedure for the proposed TWMac-TT algorithm, we plot the scatter diagrams of the true errors and the estimated weights obtained by the proposed algorithm for a fourth-order synthesized tensor with different missing rates, namely, 50% and 70%. More details are shown in Fig. 7 and Fig. 8.

Fig. 7: Scatter diagrams of the true errors and the corresponding estimated weights with a missing rate of 50 percent, shown at iterations 2, 4, 6 and 8 (panels (a)-(d)).
Fig. 8: Scatter diagrams of the true errors and the corresponding estimated weights with a missing rate of 70 percent, shown at iterations 8, 10, 12 and 14 (panels (a)-(d)).

In these two instances, we randomly choose 1000 of the missing elements for illustration. By observing the relation between the estimated weights and the true recovery errors, we can make an intuitive assessment. As seen, the scatter plots under both missing rates show an inverse relation between the weights and the recovery errors, i.e., larger recovery errors correspond to smaller estimated weights, and vice versa, which evidences that the weight estimation in the proposed algorithm is accurate. Furthermore, over the iterations of this experiment, as the errors decrease, the estimated weights become correspondingly larger, and the profiles of the weight curves simultaneously become "thinner", which means that the estimation errors are getting smaller. These findings not only verify the effectiveness of the element-wise weighted estimation, which is the basis of the validity of our model, but also demonstrate the convergence of the proposed algorithm.

We then compare our algorithm with the others in terms of the RSE in Fig. 9, where different settings of the input tensors are evaluated. From top to bottom, then left to right, the plots correspond to the 4D, 5D, 6D and 7D tensors, respectively. It can be seen from these plots that TWMac-TT performs the best in most cases, especially when the missing rates are large. Among all the compared algorithms, FBCP has the worst performance. Compared with the baseline TMac-TT, TWMac-TT improves the performance by a large margin.

Fig. 9: The RSE comparison between different LRTC algorithms for different sizes of synthetic tensors. From top to bottom, then left to right, the plots correspond to tensor dimensions 4D, 5D, 6D and 7D, respectively.

In the following real-data experiments, we apply our method to the image completion task on three different kinds of data: color images, face images and MRI images. As real-world images are far more complicated than synthetic ones, transferring our model to realistic images better demonstrates the practicality and robustness of the proposed model. Moreover, there are plenty of incomplete photos in practice, caused by issues such as poor communication transmission. Therefore, applying our method to realistic images is natural, direct, and meaningful.

V-D Color image completion

Five color images are employed for the evaluation, namely Lena, Peppers, Sailboat, Baboon and Airplane, each represented as a third-order tensor. Since FBCP, SiLRTC, STDC and TMac are based on the original tensor form, the input of these methods is the third-order RGB tensor. For TMac-TT+KA, TMac-TT+RE and the proposed TWMac-TT+OKA, the tensor needs to be initialized into a higher order. KA increases the order of the tensors to nine, and so does the reshape operation. For our proposed OKA procedure, the order can be increased further owing to the overlap: we set the number of overlapping pixels to 2 and 3 and use the automatic size calculation described in Section IV-B, so the tensor reformed by OKA is a tenth-order tensor.

Fig. 10: Completion of five color images: Lena, Peppers, Sailboat, Baboon, Airplane with 90% missing elements. (a) Missing, (b) FBCP, (c) SiLRTC, (d) STDC, (e) TMac, (f) TMac-TT+KA, (g) TMac-TT+RE, (h) TMac-TT+OKA, and (i) TWMac-TT+OKA.
mr Metric FBCP SiLRTC STDC TMac TMac-TT+KA TMac-TT+RE TMac-TT+OKA TWMac-TT+OKA
50% RSE 0.0753 0.0745 0.0717 0.1086 0.0791 0.0643 0.0487 0.0434
PSNR 27.3263 27.3671 27.7361 23.8824 26.6803 29.0196 31.6058 32.5480
SSIM 0.8541 0.8795 0.8728 0.7091 0.8973 0.8834 0.9502 0.9594
60% RSE 0.0908 0.0927 0.0797 0.1164 0.0905 0.0817 0.0579 0.0526
PSNR 25.6578 25.4150 26.8793 23.2763 25.6455 26.7737 30.0204 30.7904
SSIM 0.7934 0.8173 0.8457 0.6714 0.8453 0.8238 0.9277 0.9387
70% RSE 0.1106 0.1179 0.0934 0.1273 0.1001 0.1004 0.0731 0.0688
PSNR 23.8795 23.2944 25.5762 22.4985 24.6459 24.8384 27.9240 28.4587
SSIM 0.7098 0.7262 0.8067 0.6210 0.8190 0.7444 0.8817 0.8910
80% RSE 0.1405 0.1548 0.1179 0.1481 0.1150 0.1243 0.0893 0.0846
PSNR 21.7590 20.8928 23.5539 21.2186 23.4593 22.8495 25.9261 26.4070
SSIM 0.5895 0.5981 0.7491 0.5366 0.7377 0.6498 0.8304 0.8391
90% RSE 0.1927 0.2261 0.2119 0.2856 0.1541 0.1654 0.1202 0.1173
PSNR 18.9587 17.6125 18.0841 15.6648 21.0432 20.3709 23.1679 23.3595
SSIM 0.4127 0.4115 0.5453 0.2713 0.6124 0.4993 0.7252 0.7259
TABLE I: The average recovery performance (RSE, PSNR, SSIM) on five images with missing ratios of 50, 60, 70, 80 and 90 percent.

V-E Face images under varying illuminations

Fig. 11: Recovery of the Extended Yale B faces with 90% of missing elements using different algorithms. (a) Original, (b) missing, (c) FBCP, (d) SiLRTC, (e) STDC, (f) TMac, (g) TMac-TT+RE, (h) TMac-TT+OKA, and (i) TWMac-TT+OKA.

Fig. 10 displays the visual recovery results on the five images with 90 percent of the elements missing. The images recovered by FBCP, SiLRTC, and TMac are so blurred that we can barely observe any details. For STDC, there is obvious accumulated noise, which largely degrades the image quality. Though TMac-TT is able to recover the images decently, serious blocking artifacts largely degrade the visual effect. In contrast, our proposed methods TMac-TT+OKA and TWMac-TT+OKA not only prevail against all the other methods but also remove the blocking artifacts. TMac-TT+KA performs second best thanks to its higher order and the strong low-rankness of its balanced modes. While TMac-TT+RE also has a higher-order input, its tensor augmentation method does not have any physical meaning; in other words, reshaping cannot utilize the correlation between different qubits [38], which corresponds to the TT rank. Therefore, compared with reshaping, it is better to feed a higher-order tensor augmented by KA into the same state-of-the-art TMac-TT algorithm.

Meanwhile, by comparing the visual effects in Fig. 10 (f), (g), and (h), we can conclude that the proposed augmentation method OKA performs best among the tensor augmentation pre-processing methods: OKA overcomes the visual flaws caused by reshaping and eliminates the blocking artifacts introduced by KA. Finally, integrated with the OKA scheme, our element-weighting algorithm TWMac-TT+OKA further improves the image recovery quality. For example, the first row in Fig. 10 exhibits the completion results for Lena by the different algorithms; if we zoom in on the first row, there is an apparent difference between TMac-TT+OKA and TWMac-TT+OKA. In addition to the efficiency of the overlapping idea, the element-wise weighting scheme further suppresses the local noise and results in a more realistic recovery.

Table I presents the average quantitative results on the five images. In this table, the best values are shown in bold and the second-best values are underlined. In all cases, our TWMac-TT+OKA algorithm outperforms all the other compared algorithms in terms of all the evaluation measures, which is consistent with the conclusion drawn from the visual results in Fig. 10. The second-best result is achieved by TMac-TT+OKA, which is significantly superior to the other algorithms. We also observe that the superiority of TWMac-TT+OKA over TMac-TT+OKA with respect to the quantitative metrics is relatively slight. This observation may raise some doubts about the effectiveness of our weighting scheme. However, we find that a more pleasing visual effect (as shown in Fig. 10) is gained by using the element weights, and the effectiveness of the element-wise weighting idea has also been demonstrated on the synthetic data in Section V-C.

Taking the recovery of the Lena image with a 90 percent missing rate as an example, TWMac-TT+OKA achieves the best result among all the algorithms. We obtain an approximately 17 percent improvement over the best existing algorithm in terms of the RSE, a 7 percent increase in terms of the PSNR, and an 11 percent increase in the SSIM. When it comes to the average gain obtained by TWMac-TT+OKA over the baseline TMac-TT+KA on all the evaluated images under the 90 percent missing rate, we acquire a 24 percent gain in the RSE, an 11 percent increase in the PSNR, and a 19 percent increase in the SSIM. This considerable improvement in the evaluation indicators proves the superiority of our algorithm on real-world RGB images.

We also test the algorithms on the Extended Yale Face Database B, which includes 38 subjects in 9 poses under 64 illumination conditions. This data set differs from RGB images in that the channels correspond to multiple illuminations instead of three colors. To reduce the computation, we down-sample the original images into small cropped images, and only the frontal pose is used for the test. In this case, KA fails to increase the order of the tensors, as it is designed only for tensors whose spatial size is a power of two in each dimension; thus, we only compare reshaping and OKA. Reshaping gives a sixth-order tensor, and OKA outputs a ninth-order tensor.

Fig. 11 shows the performance of the algorithms on the face image completion task. SiLRTC and STDC can barely recover the corrupted faces. By contrast, FBCP and TMac perform much better than SiLRTC and STDC, but the imputation of the missing entries is still not accurate. Although TMac-TT+RE utilizes the power of both the higher order and the effective tensor train ranks, it does not improve much over the classic algorithms. However, as soon as we replace the reshaping with the proposed OKA technique, i.e., TMac-TT+OKA, the recovery quality is significantly improved, which again demonstrates the effectiveness of OKA. On the other hand, incorporating the element-wise weighting technique further improves the overall performance. For example, the image recovered by TMac-TT+OKA in the fourth column exhibits apparent striped noise due to the inflexible mode weights; the proposed TWMac-TT+OKA fixes this. The reflection on the nose is distinct, and the shadows on the eyes and cheeks are also evident. Generally, the Yale faces under all the different illuminations are well recovered by the proposed algorithm, and TWMac-TT+OKA achieves the best visual result among all the algorithms.

Method RSE PSNR SSIM
FBCP 0.1696 23.7424 0.7986
SiLRTC 0.3624 17.1486 0.4856
STDC 0.3462 17.5440 0.6178
TMac 0.1815 23.1565 0.8289
TMac-TT+RE 0.1830 23.0809 0.7146
TMac-TT+OKA 0.1381 25.5300 0.7806
TWMac-TT+OKA 0.1333 25.8320 0.7908
TABLE II: The average recovery performance (RSE, PSNR, SSIM) on the selected Yale faces under varying illuminations with a missing ratio of 90 percent.

The quantitative results in Table II show that the proposed method performs best with regard to the RSE and PSNR. The results demonstrate the superiority of the element-wise method in modeling the errors and the weights of the recovered elements, as TWMac-TT+OKA outperforms TMac-TT+OKA both visually and quantitatively. Although the SSIM of the proposed algorithm is not the highest, it is comparable to the best SSIM achieved by TMac and is fairly good. Intuitively, the TT-based methods achieve comparably poor SSIM because rearranging the elements destroys the structural characteristics to a certain extent, especially for the cropped small-size images. Besides, the face images are themselves structured, and in this case the order-increasing step instead weakens their natural low-rankness, which is reflected most clearly in the SSIM metric. Comparing the three TT-based algorithms, namely TMac-TT+RE, TMac-TT+OKA and TWMac-TT+OKA, we find that replacing the reshaping with OKA improves all the quantitative measures to a large extent, including the SSIM, and introducing the weighting strategy further improves all three evaluation indices. Therefore, we conclude that integrating the element-wise weighting and OKA makes a great contribution to the TT-based algorithm. As TWMac-TT+OKA is based on TT, TMac outperforms the proposed method by a narrow margin in terms of the SSIM. Nevertheless, we can see from the visual results in Fig. 11 that the faces recovered by TMac are somewhat distorted, especially in the third column (illumination), while the faces recovered by our proposed method are not.

V-F MRI data completion

Magnetic resonance imaging (MRI) data are a natural third-order tensor in which the first two indices correspond to the spatial variables and the third indexes the object slices. We choose a three-dimensional brain MRI tensor for comparison. In this case, reshaping turns the tensor into a seventh-order tensor, and OKA turns it into an eighth-order tensor. We show the qualitative results in Fig. 12 and the quantitative results in Table III.

Fig. 12: Completion of MRI data with 90% missing elements. The figures from left to right are: (a) Original, (b) Missing, (c) FBCP, (d) SiLRTC, (e) STDC, (f) TMac, (g) TMac-TT+RE, (h) TMac-TT+OKA, and (i) our TWMac-TT+OKA. From top to bottom, the rows show the 1st to 4th of the 25 selected slices.
Method RSE PSNR SSIM
FBCP 0.2461 30.5639 0.8756
SiLRTC 0.5487 23.5973 0.6483
STDC 0.4632 25.0689 0.7358
TMac 0.4595 25.1388 0.7700
TMac-TT+RE 0.2134 31.7999 0.8950
TMac-TT+OKA 0.1373 35.6295 0.9564
TWMac-TT+OKA 0.1308 36.0551 0.9576
TABLE III: The average recovery performance (RSE, PSNR, SSIM) on the MRI data with a missing ratio of 90 percent.

From Fig. 12, FBCP, TMac-TT+RE, TMac-TT+OKA and TWMac-TT+OKA generally show relatively good performance. Looking at the details, we observe that in the first slice (first row in Fig. 12), only the proposed methods TMac-TT+OKA and TWMac-TT+OKA recover the original image without any blur or distortion. The numerical evaluation in Table III is consistent with this analysis: the proposed TWMac-TT+OKA outperforms all the algorithms in terms of the RSE, PSNR and SSIM, and TMac-TT+OKA performs second best on all these evaluation indices. The RSE achieved by the proposed TWMac-TT+OKA is 39 percent lower than that achieved by the baseline algorithm TMac-TT+RE, which is quite a prominent improvement, and the PSNR and SSIM of the proposed TWMac-TT+OKA are also 13 percent and 7 percent higher than those of TMac-TT+RE, respectively. The difference between TMac-TT+OKA and TWMac-TT+OKA is not distinguishable by the naked eye in the qualitative results; the difficulty of judging which one is better may be attributed to the low input resolution. Still, the quantitative results indicate that our final model TWMac-TT+OKA performs better.

V-G Algorithm convergence

We now show the convergence of the proposed Algorithm 2. Fig. 13 and Fig. 14 show the changes in the relative error, RSE and PSNR during the iterations for the color image Lena and the MRI data, respectively. In these figures, the color image and MRI data are the same as those introduced in Subsections V-D and V-F, with a missing rate of 70 percent. It is easy to see that the relative error drops and then converges rapidly, as does the RSE, while the PSNR has the opposite tendency, i.e., it rises quickly and then converges. These findings demonstrate the good convergence behavior of Algorithm 2.

Fig. 13: The relative error, RSE and PSNR of the recovery process of RGB image Lena.
Fig. 14: The relative error, RSE and PSNR of the recovery process of MRI.

VI Conclusion

This work proposes a novel model, named EWLRTC-TT, to deal with the LRTC problem based on TT decomposition. To effectively solve this model, the proposed algorithm, named TWMac-TT-OKA, uses a weighted multilinear matrix factorization technique. To the best of our knowledge, this is the first work that incorporates an element-wise weighting procedure into multilinear matrix factorization. The proposed algorithm is then applied to both synthetic and real-world data represented by higher-order tensors. Extensive experimental results demonstrate that our algorithm is superior to other competing ones and also scales well to tensors of various sizes. In the future, we shall incorporate our weighted matrix factorization procedure to enhance the performance of the recently proposed tensor completion based on tensor ring decomposition [61].

References

  • [1] M. A. O. Vasilescu and D. Terzopoulos, “Multilinear subspace analysis of image ensembles,” in Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, vol. 2.   IEEE, 2003, pp. II–93.
  • [2] J.-T. Sun, H.-J. Zeng, H. Liu, Y. Lu, and Z. Chen, “Cubesvd: a novel approach to personalized web search,” in Proceedings of the 14th international conference on World Wide Web.   ACM, 2005, pp. 382–390.
  • [3] T. Franz, A. Schultz, S. Sizov, and S. Staab, “Triplerank: Ranking semantic web data by tensor decomposition,” in International semantic web conference.   Springer, 2009, pp. 213–228.
  • [4] J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
  • [5] S. Ma, D. Goldfarb, and L. Chen, “Fixed point and bregman iterative methods for matrix rank minimization,” Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.
  • [6] B. Recht, M. Fazel, and P. A. Parrilo, “Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization,” SIAM review, vol. 52, no. 3, pp. 471–501, 2010.
  • [7] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proceedings of the 27th annual conference on Computer graphics and interactive techniques.   ACM Press/Addison-Wesley Publishing Co., 2000, pp. 417–424.
  • [8] N. Komodakis, “Image completion using global optimization,” in Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, vol. 1.   IEEE, 2006, pp. 442–452.
  • [9] T. Korah and C. Rasmussen, “Spatiotemporal inpainting for recovering texture maps of occluded building facades,” IEEE Transactions on Image Processing, vol. 16, no. 9, pp. 2262–2271, 2007.
  • [10] T. Xie, S. Li, L. Fang, and L. Liu, “Tensor completion via nonlocal low-rank regularization,” IEEE transactions on cybernetics, vol. 49, no. 6, pp. 2344–2354, 2018.
  • [11] J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 1, pp. 208–220, 2013.
  • [12] M. Signoretto, L. De Lathauwer, and J. A. Suykens, “Nuclear norms for tensors and their use for convex multilinear estimation,” Submitted to Linear Algebra and Its Applications, vol. 43, 2010.
  • [13] M. Signoretto, R. Van de Plas, B. De Moor, and J. A. Suykens, “Tensor versus matrix completion: a comparison with application to spectral data,” IEEE Signal Processing Letters, vol. 18, no. 7, pp. 403–406, 2011.
  • [14] S. Gandy, B. Recht, and I. Yamada, “Tensor completion and low-n-rank tensor recovery via convex optimization,” Inverse Problems, vol. 27, no. 2, p. 025010, 2011.
  • [15] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima, “Statistical performance of convex tensor decomposition,” in Advances in Neural Information Processing Systems, 2011, pp. 972–980.
  • [16] H. Tan, B. Cheng, W. Wang, Y.-J. Zhang, and B. Ran, “Tensor completion via a multi-linear low-n-rank factorization model,” Neurocomputing, vol. 133, pp. 161–169, 2014.
  • [17] Y. Xu, R. Hao, W. Yin, and Z. Su, “Parallel matrix factorization for low-rank tensor completion,” arXiv preprint arXiv:1312.1254, 2013.
  • [18] R. Xu, Y. Xu, and Y. Quan, “Factorized tensor dictionary learning for visual tensor data completion,” IEEE Transactions on Multimedia, 2020.
  • [19] L. Yuan, Q. Zhao, L. Gui, and J. Cao, “High-dimension tensor completion via gradient-based optimization under tensor-train format,” arXiv preprint arXiv:1804.01983, 2018.
  • [20] C.-Y. Ko, K. Batselier, L. Daniel, W. Yu, and N. Wong, “Fast and accurate tensor completion with total variation regularized tensor trains,” IEEE Transactions on Image Processing, vol. 29, pp. 6918–6931, 2020.
  • [21] Z. Zhang and S. Aeron, “Exact tensor completion using t-svd,” IEEE Transactions on Signal Processing, vol. 65, no. 6, pp. 1511–1526, 2016.
  • [22] T. Yokota, Q. Zhao, and A. Cichocki, “Smooth parafac decomposition for tensor completion,” IEEE Transactions on Signal Processing, vol. 64, no. 20, pp. 5423–5436, 2016.
  • [23] J. A. Bengua, H. N. Phien, H. D. Tuan, and M. N. Do, “Efficient tensor completion for color image and video recovery: Low-rank tensor train,” IEEE Transactions on Image Processing, vol. 26, no. 5, pp. 2466–2479, 2017.
  • [24] P. Zhou, C. Lu, Z. Lin, and C. Zhang, “Tensor factorization for low-rank tensor completion,” IEEE Transactions on Image Processing, vol. 27, no. 3, pp. 1152–1163, 2017.
  • [25] L. Zhang, L. Song, B. Du, and Y. Zhang, “Nonlocal low-rank tensor completion for visual data,” IEEE transactions on cybernetics, 2019.
  • [26] Y. Liu, F. Shang, L. Jiao, J. Cheng, and H. Cheng, “Trace norm regularized candecomp/parafac decomposition with missing data,” IEEE transactions on cybernetics, vol. 45, no. 11, pp. 2437–2448, 2014.
  • [27] Y. Chang, L. Yan, X.-L. Zhao, H. Fang, Z. Zhang, and S. Zhong, “Weighted low-rank tensor recovery for hyperspectral image restoration,” IEEE transactions on cybernetics, vol. 50, no. 11, pp. 4558–4572, 2020.
  • [28] Y. Du, G. Han, Y. Quan, Z. Yu, H.-S. Wong, C. P. Chen, and J. Zhang, “Exploiting global low-rank structure and local sparsity nature for tensor completion,” IEEE transactions on cybernetics, vol. 49, no. 11, pp. 3898–3910, 2018.
  • [29] J. D. Carroll and J.-J. Chang, “Analysis of individual differences in multidimensional scaling via an n-way generalization of “eckart-young” decomposition,” Psychometrika, vol. 35, no. 3, pp. 283–319, 1970.
  • [30] R. A. Harshman, “Foundations of the parafac procedure: Models and conditions for an “explanatory” multimodal factor analysis,” 1970.
  • [31] L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
  • [32] I. V. Oseledets, “Tensor-train decomposition,” SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2295–2317, 2011.
  • [33] J. A. Bengua, H. D. Tuan, H. N. Phien, and M. N. Do, “Concatenated image completion via tensor augmentation and completion,” in Signal Processing and Communication Systems (ICSPCS), 2016 10th International Conference on.   IEEE, 2016, pp. 1–7.
  • [34] R. Dian, S. Li, and L. Fang, “Learning a low tensor-train rank representation for hyperspectral image super-resolution,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2672–2683, Sep. 2019.
  • [35] A. Phan, A. Cichocki, A. Uschmajew, P. Tichavský, G. Luta, and D. P. Mandic, “Tensor networks for latent variable analysis: Novel algorithms for tensor train approximation,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–15, 2020.
  • [36] Y. Liu, J. Liu, and C. Zhu, “Low-rank tensor train coefficient array estimation for tensor-on-tensor regression,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–10, 2020.
  • [37] W. Wang, V. Aggarwal, and S. Aeron, “Efficient low rank tensor ring completion,” Rn, vol. 1, no. r1, p. 1, 2017.
  • [38] J. I. Latorre, “Image compression and entanglement,” arXiv preprint quant-ph/0510031, 2005.
  • [39] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” arXiv preprint arXiv:1706.03762, 2017.
  • [40] H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, vol. 1.   IEEE, 2004, pp. I–I.
  • [41] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM review, vol. 51, no. 3, pp. 455–500, 2009.
  • [42] A. Cichocki, “Tensor networks for big data analytics and large-scale optimization problems,” arXiv preprint arXiv:1407.3124, 2014.
  • [43] K. Ye and L.-H. Lim, “Tensor network ranks,” arXiv preprint arXiv:1801.02662, 2018.
  • [44] R. Hübener, V. Nebendahl, and W. Dür, “Concatenated tensor network states,” New Journal of Physics, 2010.
  • [45] R. Orús, “A practical introduction to tensor networks: Matrix product states and projected entangled pair states,” Annals of Physics, vol. 349, pp. 117–158, 2014.
  • [46] G. Vidal, “Efficient classical simulation of slightly entangled quantum computations,” Physical review letters, vol. 91, no. 14, p. 147902, 2003.
  • [47] ——, “Efficient simulation of one-dimensional quantum many-body systems,” Physical review letters, vol. 93, no. 4, p. 040502, 2004.
  • [48] I. Oseledets, E. Tyrtyshnikov, and N. Zamarashkin, “Tensor-train ranks for matrices and their inverses,” Computational Methods in Applied Mathematics, vol. 11, no. 3, pp. 394–403, 2011.
  • [49] E. Corona, A. Rahimian, and D. Zorin, “A tensor-train accelerated solver for integral equations in complex geometries,” Journal of Computational Physics, vol. 334, pp. 145–169, 2017.
  • [50] I. V. Oseledets and S. Dolgov, “Solution of linear systems and matrix inversion in the tt-format,” SIAM Journal on Scientific Computing, vol. 34, no. 5, pp. A2718–A2739, 2012.
  • [51] T. Mach, “Computing inner eigenvalues of matrices in tensor train matrix format,” in Numerical Mathematics and Advanced Applications 2011.   Springer, 2013, pp. 781–788.
  • [52] N. Lee and A. Cichocki, “Estimating a few extreme singular values and vectors for large-scale matrices in tensor train format,” SIAM Journal on Matrix Analysis and Applications, vol. 36, no. 3, pp. 994–1014, 2015.
  • [53] J. A. Bengua, H. N. Phien, and H. D. Tuan, “Optimal feature extraction and classification of tensors via matrix product state decomposition,” in Big Data (BigData Congress), 2015 IEEE International Congress on.   IEEE, 2015, pp. 669–672.
  • [54] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov, “Tensorizing neural networks,” in Advances in Neural Information Processing Systems, 2015, pp. 442–450.
  • [55] M. Fazel, “Matrix rank minimization with applications,” Ph.D. dissertation, PhD thesis, Stanford University, 2002.
  • [56] M. Kurucz, A. A. Benczúr, and K. Csalogány, “Methods for large scale svd with missing values,” in Proceedings of KDD cup and workshop, vol. 12.   Citeseer, 2007, pp. 31–38.
  • [57] X. Guo and Y. Ma, “Generalized tensor total variation minimization for visual data recovery,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3603–3611.
  • [58] N. Srebro and T. Jaakkola, “Weighted low-rank approximations,” in Proceedings of the Twentieth International Conference on International Conference on Machine Learning, ser. ICML’03.   AAAI Press, 2003, p. 720–727.
  • [59] Q. Zhao, L. Zhang, and A. Cichocki, “Bayesian cp factorization of incomplete tensors with automatic rank determination,” IEEE transactions on pattern analysis and machine intelligence, vol. 37, no. 9, pp. 1751–1763, 2015.
  • [60] Y.-L. Chen, C.-T. Hsu, and H.-Y. M. Liao, “Simultaneous tensor decomposition and completion using factor priors,” IEEE Transactions on Pattern Analysis & Machine Intelligence, 2013.
  • [61] Q. Zhao, G. Zhou, S. Xie, L. Zhang, and A. Cichocki, “Tensor ring decomposition,” arXiv preprint arXiv:1606.05535, 2016.