Tensor decomposition aims to find the latent factors of tensor-valued data (i.e., multi-dimensional arrays, the generalization of matrices), thereby casting large-scale tensors into a low-dimensional multilinear tensor space with very few degrees of freedom, designated by the rank. The tensor factors can then be considered latent features of the data; in this way they represent the data economically and can predict missing entries when the data is incomplete. The specific form of, and operations among, the latent factors define the type of tensor decomposition. A variety of tensor decomposition models have been applied in diverse fields such as machine learning [20, 2, 16] and signal processing [30, 7]. Tucker decomposition and CANDECOMP/PARAFAC (CP) decomposition are the classical tensor decomposition models, which have been studied for nearly half a century [18, 24, 13].
In recent years, the concept of tensor networks has been proposed and has become a powerful and promising branch of tensor methodology [5, 6]. One of the most recent and popular tensor networks, named the matrix product state/tensor-train (MPS/TT), is studied across disciplines owing to its extreme compression and computational efficiency [21, 20]. The most significant property of TT decomposition is that its space complexity does not grow exponentially with the tensor order, thus providing a natural remedy for the ‘curse of dimensionality’, whereas the number of parameters of Tucker decomposition is exponential in the tensor order. Although the CP decomposition is a highly compact representation with the desirable property of being linear in the tensor order, it has difficulty finding the optimal latent tensor factors. To address these issues, recent studies propose a generalization of TT decomposition, termed the tensor ring (TR) decomposition, which relaxes the rank constraint of TT and thus offers enhanced representation ability, latent-factor permutation flexibility (i.e., a circular permutation of the tensor modes corresponds directly to a permutation of the tensor factors) and structural interpretability (i.e., each tensor factor can represent a specific feature of the original tensor) [29, 27].
Tensor completion aims to recover an incomplete tensor from partially observed entries. The theoretical linchpin in matrix and tensor completion problems is the low-rank assumption, and tensor completion has been applied in various applications such as image/video completion [19, 28], recommendation systems, link prediction and compressed sensing, to name but a few. Since determining the tensor rank is an NP-hard problem [14, 18], many low-rank surrogates have been proposed for tensor completion. One such surrogate is the Schatten norm (a.k.a. nuclear norm, or trace norm), defined as the sum of the singular values of a matrix, which is the most popular convex surrogate for rank regularization. Unlike in matrix completion problems, the Schatten norm of a tensor is hard to formulate. Recent studies mainly focus on two convex relaxation models of the tensor Schatten norm: the ‘overlapped’ model [19, 23, 4, 22, 15] and the ‘latent’ model [23, 12].
The ‘latent’ norm model was proposed first in the literature, where it was shown that the mean squared error of a ‘latent’ norm method scales no worse than that of an ‘overlapped’ norm method. Under the low-rank regularization of the latent model, the tensor does not need to be low-rank in every mode, which is a more flexible constraint. Neither model needs the rank of the decomposition to be specified: the tensor rank is optimized to be minimal subject to agreement with the observed elements. However, both methods need to perform multiple SVD operations on matricizations of the tensor, and the computational complexity grows exponentially with the tensor order. Other tensor completion algorithms, such as alternating least squares (ALS) [11, 25] and gradient-based algorithms [26, 1], need the rank of the decomposition to be specified beforehand, which leads to tedious parameter-tuning problems. In addition, the completion performance of tensor completion algorithms is mainly affected by rank selection, the number of observed entries and the tensor order.
In this paper, in order to tackle the high computational cost and the sensitivity to rank selection problems that most proposed algorithms experience, we propose a new tensor completion model based on the tensor ring decomposition. Our main contributions are listed below:
The relation between the low-rank assumption on a tensor and on its latent factors is theoretically explained, and a low-rank surrogate on the latent factors of TR decomposition is introduced.
We formulate the TR overlapped low-rank factor (TR-OLRF) model and the TR latent low-rank factor (TR-LLRF) model, both of which are solved efficiently by the ADMM algorithm.
We conduct several experiments which demonstrate the high performance and high efficiency of our algorithms. In addition, the experimental results also show that our algorithms are robust to rank selection and data dimensionality.
2.1 Tensor ring decomposition
Tensor ring (TR) decomposition is a more general decomposition than tensor-train (TT) decomposition; it represents a high-order tensor by circular multilinear products over a sequence of low-dimensional cores. All of the cores of a TR decomposition are order-three tensors. The decomposition diagram is shown in Fig. 1. In the same way as TT, TR decomposition scales linearly with the order of the tensor, and thus it can overcome the ‘curse of dimensionality’. For simplicity, we use a single symbol to denote the set of tensor cores. The TR-rank controls the model complexity of the TR decomposition. TR decomposition relaxes the rather stringent rank constraint that TT places on the first and last cores, which requires the boundary TT-ranks to equal one, by only requiring them to be equal to each other. TR applies a trace operation, and all the core tensors are equivalently constrained to be third-order. In this case, TR can be considered as a linear combination of TT decompositions and thus offers a more powerful and general representation ability than TT. The element-wise and global relations between a TR decomposition and the original tensor are given by equations (1) and (2):
where the trace operator denotes the matrix trace, the slice matrices are the mode-2 slices of the corresponding cores, the subchain tensor is obtained by merging all cores except the th core tensor, and two types of mode-n matricization operators of a tensor are used, which differ in how the remaining modes are ordered and flattened.
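To make the element-wise rule concrete, here is a minimal NumPy sketch (the function name `tr_to_tensor` and the loop-based contraction are illustrative, not the authors' implementation) that reconstructs a full tensor from a list of TR cores of shape (R_k, I_k, R_{k+1}), with the boundary rank wrapping around circularly:

```python
import numpy as np

def tr_to_tensor(cores):
    """Reconstruct the full tensor from TR cores.

    Each core has shape (R_k, I_k, R_{k+1}) with ranks wrapping around
    circularly (R_{N+1} = R_1).  Element-wise rule:
    X[i1, ..., iN] = Tr(G1[:, i1, :] @ G2[:, i2, :] @ ... @ GN[:, iN, :]).
    """
    shape = tuple(core.shape[1] for core in cores)
    X = np.empty(shape)
    for idx in np.ndindex(*shape):
        M = cores[0][:, idx[0], :]
        for k in range(1, len(cores)):
            M = M @ cores[k][:, idx[k], :]
        X[idx] = np.trace(M)  # the trace closes the ring
    return X
```

A useful sanity check is the permutation property mentioned above: cyclically shifting the cores yields a cyclically permuted tensor, since the trace is invariant under cyclic permutation of a matrix product.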
2.2 Tensor completion by Schatten norm regularization
The low-rank tensor completion problem can be formulated as:
and the model can be written in an unconstrained form as:
where the optimization variable is the low-rank approximation tensor, the first term is a rank regularizer, and the second term penalizes, in the Frobenius norm, the deviation on all observed entries w.r.t. the set of indices of observed entries. For the low-rank tensor completion problem, determining the rank of a tensor is NP-hard. Prior works extend the concept of low-rank matrix completion and define the tensor rank as the sum of the ranks of the mode-n matricizations of the tensor. This surrogate is named the ‘overlapped’ model; it simultaneously regularizes all the mode-n matricizations of a tensor to be low-rank via the Schatten norm. In this way, we can define the rank of a tensor as:
where denotes the Schatten norm.
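As a concrete illustration, the ‘overlapped’ surrogate can be evaluated directly in NumPy; the helper names below are illustrative, and `np.linalg.norm(..., ord='nuc')` computes the nuclear (Schatten-1) norm of each mode-n unfolding:

```python
import numpy as np

def mode_unfold(X, n):
    """Mode-n matricization: mode n becomes the rows, all other
    modes are flattened into the columns."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def overlapped_schatten(X):
    """'Overlapped' surrogate: the sum of the nuclear norms
    (Schatten-1 norms) of every mode-n unfolding of X."""
    return sum(np.linalg.norm(mode_unfold(X, n), ord='nuc')
               for n in range(X.ndim))
```

For a rank-one tensor built from unit-norm factors, every unfolding has a single unit singular value, so the surrogate equals the tensor order.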
Another surrogate of the tensor rank, named the ‘latent’ low-rank model, has been proposed and studied recently. The ‘latent’ model considers the original tensor as a summation of several latent tensors and assumes that each latent tensor is low-rank in one specific mode:
This convex surrogate is more flexible, as it can fit a tensor well even when the tensor is not low-rank in all modes. Completion algorithms based on these two models are shown to have fast convergence and good performance when the data size is small. However, for large-scale data the multiple SVD operations become intractable due to their high computational cost.
2.3 Tensor completion by tensor decomposition
Some other existing tensor completion algorithms do not impose a low-rank constraint on the tensor and thus do not find the low-rank tensor directly. Instead, they seek a low-rank representation (i.e., tensor factors) of the incomplete data from the observed entries; the obtained latent factors are then used to predict the missing entries. The completion problem is cast as a weighted least squares model; e.g., the tensor completion model based on TR decomposition is formulated below:
where the first operation is the Hadamard (element-wise) product of two tensors of the same size, and the approximation is the tensor generated by the tensor factors. The weight tensor has the same size as the data tensor and records the indices of the observed entries: each of its entries is one if the corresponding entry is observed and zero otherwise.
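For an order-three tensor, the weighted least squares objective above can be sketched as follows (a toy illustration, not the paper's solver; `tr_full_3d` contracts the three TR cores in a single `einsum` call):

```python
import numpy as np

def tr_full_3d(G1, G2, G3):
    """Order-3 TR reconstruction in one contraction:
    X[i, j, k] = Tr(G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :])."""
    return np.einsum('aib,bjc,cka->ijk', G1, G2, G3)

def masked_objective(T, W, G1, G2, G3):
    """Weighted least squares fit: only entries where W == 1
    (the observed entries) contribute to the loss."""
    residual = W * (T - tr_full_3d(G1, G2, G3))
    return np.sum(residual ** 2)
```

When the data tensor is exactly generated by the cores, the objective vanishes on any observation mask; perturbing a core makes it positive.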
Many tensor completion algorithms have been proposed based on solving the tensor factors of different tensor decompositions, e.g., weighted CP, weighted Tucker, weighted TT, and TR-ALS. However, these algorithms are usually solved by gradient-based or alternating least squares methods, which are shown to suffer from slow convergence and high computational cost. In addition, their completion performance is sensitive to rank selection.
In this paper, we take advantage of both the ‘overlapped’ and the ‘latent’ structured Schatten norm approaches to formulate a new tensor completion model. The main idea is to impose a low-rank constraint on the latent factors of a tensor. In this way, we only need to compute SVDs of the tensor factors instead of the full-scale data. At the same time, the low-rank constraint regularizes the tensor factors toward minimal rank, thereby alleviating the rank-selection problem. The next section presents our proposed method based on both the ‘overlapped’ and ‘latent’ tensor low-rank models.
3 Low-rankness on tensor factors
We propose a new definition of tensor low-rankness, which imposes the low-rank constraint on the decomposition factors of a tensor. For TR decomposition, the low-rank model is formulated as:
We first need to deduce the relation between the rank of a tensor and the ranks of its factors, which is explained by the theorem below:
Theorem: For a tensor in TR format, the rank of each mode-n matricization of the tensor is bounded by the rank of the corresponding matricized core tensor.
Proof: From equation (2), the mode-n matricization of the tensor factorizes into a product involving the matricized nth core, from which the rank bound follows directly.
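The rank bound can also be checked numerically; the following sketch (function names are illustrative, not the authors' code) builds a random TR tensor and verifies that each mode-n matricization has rank at most the product of the two adjacent TR-ranks:

```python
import numpy as np

def tr_full(cores):
    """Contract a chain of TR cores, then trace out the boundary rank."""
    X = cores[0]
    for core in cores[1:]:
        X = np.tensordot(X, core, axes=([-1], [0]))
    return np.trace(X, axis1=0, axis2=-1)

def mode_unfold(X, n):
    """Mode-n matricization of X."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

# random TR tensor with circular ranks (R1, R2, R3) = (2, 3, 4)
rng = np.random.default_rng(0)
ranks = (2, 3, 4)
cores = [rng.standard_normal((2, 7, 3)),
         rng.standard_normal((3, 8, 4)),
         rng.standard_normal((4, 9, 2))]
X = tr_full(cores)
# rank of the mode-n unfolding is at most R_n * R_{n+1}
bounds = [ranks[n] * ranks[(n + 1) % 3] for n in range(3)]
```

Because the cores are drawn at random, the unfolding ranks generically attain the bound when the tensor dimensions allow it.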
The above theorem establishes the relation between the rank of the tensor and the ranks of its core tensors: the rank of each matricized core is an upper bound on the rank of the corresponding mode-n matricization of the tensor, so imposing low-rank structure on the cores induces low-rank structure on the tensor. This largely decreases the computational complexity compared to algorithms that place the low-rank assumption on the overlapped or latent tensors themselves. In a similar way, we can deduce that the sum of the latent ranks of the tensor factors is an upper bound on the latent rank of the original tensor. More specifically, our tensor ring overlapped low-rank factor (TR-OLRF) model is formulated as follows:
The TR latent low-rank factor (TR-LLRF) model is outlined below:
The two models have two distinctive advantages. Firstly, the low-rank assumption is placed on the tensor factors instead of on the original tensor, which largely reduces the computational complexity of the SVD operations. Secondly, imposing low-rankness on the tensor factors enhances robustness to rank selection.
3.1 Solving scheme
To solve equations (9) and (10), we apply the alternating direction method of multipliers (ADMM), which is efficient and widely used. Because the variables of the TR-OLRF model are interdependent, we introduce auxiliary variables, and the augmented Lagrangian function of the TR-OLRF model is:
where the first group of variables are the auxiliary copies of the core tensors, the second group are the Lagrangian multipliers, the bracket denotes the inner product, and the scalar is a penalty parameter.
To update each core tensor and its auxiliary variable, the augmented Lagrangian function is formulated as:
For each core tensor, the kth-iteration ADMM update scheme of the TR-OLRF model is listed below:
where the folding operator is the inverse of mode-n matricization, transforming a mode-n matricized tensor back to the original tensor; the SVT operator is the singular value thresholding operator, which computes the singular value decomposition of its matrix argument and soft-thresholds the singular values; and the last index set is the set of indices of the missing entries.
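The SVT operator admits a compact NumPy implementation; this sketch (not the authors' code) soft-thresholds the singular values, which is the proximal operator of the nuclear norm:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* .  Shrinks every singular value of M by tau,
    clipping at zero, and rebuilds the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt  # scale columns of U by thresholded values
```

In the update scheme above, SVT is applied to the (much smaller) matricized cores rather than to unfoldings of the full tensor, which is the source of the computational savings.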
Similarly, the augmented Lagrangian function of TR-LLRF model can be written as:
To update the core tensors and auxiliary variables, the augmented Lagrangian function is formulated as:
The corresponding update scheme of TR-LLRF model is listed below:
The ADMM solution is updated iteratively based on the above models and update schemes. The implementation process and hyper-parameter selection of the two algorithms are summarized in Alg. 1 and Alg. 2.
|Alg. 1 TR overlapped low-rank factors (TR-OLRF)||Alg. 2 TR latent low-rank factors (TR-LLRF)|
|1: Input: , initial TR-rank .||1: Input: , initial TR-rank ,.|
|2: Initialization: , , , , , , element of s.t. , , , , .||2: Initialization: , , , , , , element of s.t. , , , , .|
|3: While the stopping condition is not satisfied do||3: While the stopping condition is not satisfied do|
|4: k=k+1;||4: k=k+1;|
|5: Update variables by equation (14).||5: Update variables by equation (18).|
|6: If , break||6: If , break|
|7: End while||7: End while|
|8: Output: and , .||8: Output: and , .|
3.2 Computational complexity
We next compare the computational complexity of our TR-OLRF and TR-LLRF to the state-of-the-art algorithms TR-ALS, SiLRTC-TT, SiLRTC and FBCP, which are the most similar to ours. The complexities are summarized in Tab. 1, where all the TT-ranks, TR-ranks and CP-ranks are set to the same value. From Tab. 1 we can see that, compared to the Schatten-norm-based algorithms, the computational complexity of our algorithms is linear in the tensor order. Compared to TR-ALS and FBCP, the complexity of our algorithms is independent of the number of observed entries. The computational complexity of our algorithms increases quickly as the rank increases; however, due to the linear scalability of TR decomposition, the rank is often small in the model selection of the proposed algorithms. In addition, most of the stated algorithms are rank adaptive, i.e., robust to rank selection.
4 Experiment results
4.1 Synthetic data
To verify the performance of our two proposed algorithms, we tested two tensors, one of order four and one of order six. The tensors were generated from TR factors with two different TR-ranks, and the values of the TR factors were drawn from a standard normal distribution. We define the sum of the square roots of the TR-ranks (SSR) as an index of model complexity. Entries of the tensors were then randomly removed according to the missing rate. We verified the performance of the two proposed algorithms in several scenarios, reporting the mean RSE values over 10 independent runs as the final results. All the hyper-parameters of the two algorithms were set according to Alg. 1 and Alg. 2.
For the first experiment, we tested the completion performance of our two algorithms and four other state-of-the-art algorithms under different missing rates, from 0.1 to 0.99. For our algorithms, we set the TR-rank to be the same as the real rank of the synthetic data, and the other hyper-parameters were set to their defaults. For the compared algorithms, we tuned the hyper-parameters of each to obtain its best results. Fig. 2 shows the experimental results for the order-four tensor and the order-six tensor, respectively.
For the second experiment, we tested the completion performance of our two algorithms under various SSR values with a fixed missing rate, again using the two different tensors. The results in the first panel of Fig. 3 show that our two algorithms obtained the lowest RSE values when the SSR was near the real SSR, and that as the SSR value increased further, the RSE value remained stable. This indicates that our algorithms are robust to rank selection.
For the third and fourth experiments, we tested the performance of our algorithms over three different values of a tuning parameter, with the missing rate fixed and the TR-rank chosen as the real rank of the two tensors. Fig. 3 shows the robustness for the three different values and verifies that our two algorithms are robust to this selection.
4.2 Hyperspectral image
A hyperspectral image of an urban landscape collected by a satellite was next considered. We compare our TR-OLRF and TR-LLRF to TR-ALS, SiLRTC-TT, SiLRTC and FBCP. We examined order-three, order-five, order-seven and order-eight tensorizations of the image, respectively. The missing rate was set to 0.9 and the hyper-parameters were set to their defaults. For each tensor, we chose all the TR-ranks to be the same value. The tensor sizes and TR-ranks are recorded in the first column of Tab. 2, and the RSE values of each tensor for each algorithm are listed in Tab. 2.
From the results we can see that our algorithms significantly outperform SiLRTC-TT, SiLRTC and FBCP. Though the results of TR-ALS are comparable to ours, it should be noted that the computational time of TR-ALS is more than double that of TR-OLRF and TR-LLRF (1891 seconds versus 756 and 988 seconds, respectively) to obtain similar results.
In order to address the large-scale SVD computation and rank-selection problems of most tensor completion methods, we proposed two algorithms that impose a low-rank assumption on tensor factors. Based on tensor ring decomposition, we formulated two optimization models, named TR-OLRF and TR-LLRF, both of which can be solved efficiently by the ADMM algorithm. We tested the algorithms in various situations on synthetic data and real-world data. The experimental results demonstrate the high performance and high efficiency of our algorithms, and also show that the proposed algorithms are robust to tensor rank and other model parameters. The proposed approach applies to model-based low-rank tensor completion and decomposition in general, and it can be combined with various tensor decompositions to create more efficient and robust algorithms.
-  Evrim Acar, Daniel M Dunlavy, Tamara G Kolda, and Morten Mørup. Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 106(1):41–56, 2011.
-  Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832, 2014.
-  Johann A Bengua, Ho N Phien, Hoang Duong Tuan, and Minh N Do. Efficient tensor completion for color image and video recovery: Low-rank tensor train. IEEE Transactions on Image Processing, 26(5):2466–2479, 2017.
-  Hao Cheng, Yaoliang Yu, Xinhua Zhang, Eric Xing, and Dale Schuurmans. Scalable and sound low-rank tensor learning. In Artificial Intelligence and Statistics, pages 1114–1123, 2016.
-  Andrzej Cichocki, Namgil Lee, Ivan Oseledets, Anh-Huy Phan, Qibin Zhao, and Danilo P Mandic. Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions. Foundations and Trends® in Machine Learning, 9(4-5):249–429, 2016.
-  Andrzej Cichocki, Anh-Huy Phan, Qibin Zhao, Namgil Lee, Ivan Oseledets, Masashi Sugiyama, and Danilo P Mandic. Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives. Foundations and Trends® in Machine Learning, 9(6):431–673, 2017.
-  Fengyu Cong, Qiu-Hua Lin, Li-Dan Kuang, Xiao-Feng Gong, Piia Astikainen, and Tapani Ristaniemi. Tensor decomposition of EEG signals: a brief review. Journal of Neuroscience Methods, 248:59–69, 2015.
-  Beyza Ermiş, Evrim Acar, and A Taylan Cemgil. Link prediction in heterogeneous data via generalized coupled tensor factorization. Data Mining and Knowledge Discovery, 29(1):203–236, 2015.
-  Marko Filipović and Ante Jukić. Tucker factorization with missing data with application to low-n-rank tensor completion. Multidimensional Systems and Signal Processing, 26(3):677–692, 2015.
-  Silvia Gandy, Benjamin Recht, and Isao Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
-  Lars Grasedyck, Melanie Kluge, and Sebastian Kramer. Variants of alternating least squares tensor completion in the tensor train format. SIAM Journal on Scientific Computing, 37(5):A2424–A2450, 2015.
-  Xiawei Guo, Quanming Yao, and James Tin-Yau Kwok. Efficient sparse low-rank tensor completion using the Frank-Wolfe algorithm. In AAAI, pages 1948–1954, 2017.
-  RA Harshman. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-mode factor analysis. UCLA Working Papers in Phonetics, 16:1–84, 1970.
-  Christopher J Hillar and Lek-Heng Lim. Most tensor problems are NP-hard. Journal of the ACM (JACM), 60(6):45, 2013.
-  Masaaki Imaizumi, Takanori Maehara, and Kohei Hayashi. On tensor train rank minimization: Statistical efficiency and scalable algorithm. In Advances in Neural Information Processing Systems, pages 3933–3942, 2017.
-  Heishiro Kanagawa, Taiji Suzuki, Hayato Kobayashi, Nobuyuki Shimizu, et al. Gaussian process nonparametric tensor estimator and its minimax optimality. In International Conference on Machine Learning, pages 1632–1641, 2016.
-  Alexandros Karatzoglou, Xavier Amatriain, Linas Baltrunas, and Nuria Oliver. Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering. In Proceedings of the fourth ACM conference on Recommender systems, pages 79–86. ACM, 2010.
-  Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM review, 51(3):455–500, 2009.
-  Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye. Tensor completion for estimating missing values in visual data. IEEE transactions on pattern analysis and machine intelligence, 35(1):208–220, 2013.
-  Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pages 442–450, 2015.
-  Ivan V Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.
-  Marco Signoretto, Quoc Tran Dinh, Lieven De Lathauwer, and Johan AK Suykens. Learning with tensors: a framework based on convex optimization and spectral regularization. Machine Learning, 94(3):303–351, 2014.
-  Ryota Tomioka and Taiji Suzuki. Convex tensor decomposition via structured schatten norm regularization. In Advances in neural information processing systems, pages 1331–1339, 2013.
-  Ledyard R Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
-  Wenqi Wang, Vaneet Aggarwal, and Shuchin Aeron. Efficient low rank tensor ring completion. 2017.
-  Longhao Yuan, Qibin Zhao, and Jianting Cao. Completion of high order tensor data with missing entries via tensor-train decomposition. In International Conference on Neural Information Processing, pages 222–229. Springer, 2017.
-  Qibin Zhao, Masashi Sugiyama, Longhao Yuan, and Andrzej Cichocki. Learning efficient tensor representations with ring structure networks, 2018.
-  Qibin Zhao, Liqing Zhang, and Andrzej Cichocki. Bayesian cp factorization of incomplete tensors with automatic rank determination. IEEE transactions on pattern analysis and machine intelligence, 37(9):1751–1763, 2015.
-  Qibin Zhao, Guoxu Zhou, Shengli Xie, Liqing Zhang, and Andrzej Cichocki. Tensor ring decomposition. arXiv preprint arXiv:1606.05535, 2016.
-  Guoxu Zhou, Qibin Zhao, Yu Zhang, Tülay Adalı, Shengli Xie, and Andrzej Cichocki. Linked component analysis from matrices to high-order tensors: Applications to biomedical data. Proceedings of the IEEE, 104(2):310–331, 2016.