NeCPD: An Online Tensor Decomposition with Optimal Stochastic Gradient Descent

by   Ali Anaissi, et al.
The University of Sydney

Multi-way data analysis has become an essential tool for capturing underlying structures in higher-order datasets stored in a tensor X ∈ R^{I_1 × ... × I_N}. CANDECOMP/PARAFAC (CP) decomposition has been extensively studied and applied to approximate X by N loading matrices A^(1), ..., A^(N), where N represents the order of the tensor. We propose a new efficient CP decomposition solver named NeCPD for the non-convex problem of multi-way online data, based on the stochastic gradient descent (SGD) algorithm. SGD is very useful in the online setting since it allows us to update the model for a new datum X^(t+1) in a single step. In terms of global convergence, it is well known that SGD can get stuck in saddle points when dealing with non-convex problems. We study the Hessian matrix to identify these saddle points, and then try to escape them using a perturbation approach which adds a small amount of noise to the gradient update step. We further apply Nesterov's Accelerated Gradient (NAG) method within the SGD algorithm to optimally accelerate the convergence rate and compensate for the Hessian computation delay per epoch. Experimental evaluation in the field of structural health monitoring using laboratory-based and real-life structural datasets shows that our method provides more accurate results than existing online tensor analysis methods.





I Introduction

The concept of multi-way data analysis was introduced by Tucker in 1964 as an extension of standard two-way data analysis to analyze multidimensional data known as tensors. It is often used when traditional two-way data analysis methods such as Non-negative Matrix Factorization (NMF), Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) are not enough to capture the underlying structures inherent in multi-way data. In the realm of multi-way data, tensor decomposition methods such as

Tucker and CP [14] have been extensively studied and applied in various fields including signal processing, civil engineering and time series analysis. The CP decomposition has gained much popularity for analyzing multi-way data due to its ease of interpretation: given a tensor X ∈ R^{I_1 × ... × I_N}, the CP method decomposes X into N loading matrices A^(1), ..., A^(N), where N is the tensor order and each matrix represents one mode explicitly. This is in contrast to the Tucker method, where the modes can interact with each other, making it difficult to interpret the resultant matrices.

The CP decomposition often uses the Alternating Least Squares (ALS) method to find the solution for a given tensor. The ALS method follows the batch-mode training process, which iteratively solves each component matrix by fixing all the other components, then repeats the procedure until it converges [12]. However, ALS can lead to sensitive solutions [8][3]. Moreover, in the realm of big and non-stationary data, the ALS method raises many challenges in dealing with data that is continuously measured at high velocity from different sources/locations and dynamically changing over time. For example, structural health monitoring (SHM) data can be represented in a three-way form of sensor × feature × time, representing a large number of vibration responses measured over time by many sensors attached to a structure at different locations. This type of data can be found in many other application domains [1, 26, 15, 4]. In such applications, a naive approach would be to recompute the CP decomposition from scratch for each new incoming datum X^(t+1). This would be impractical and computationally expensive, as each new incoming datum has only a minimal effect on the current tensor.

Zhou et al. [31] proposed a method called onlineCP to address the problem of online CP decomposition using the ALS algorithm. The method was able to incrementally update the temporal mode in multi-way data but failed for non-temporal modes [12]. In recent years, several studies have proposed solving the CP decomposition using the stochastic gradient descent (SGD) algorithm, which has the capability to deal with big data and online learning. However, such methods are inefficient and impractical due to slow convergence, numerical uncertainty and non-convergence [9, 5, 18, 23]. To address the aforementioned problems, we propose an efficient solver method called NeCPD (Nesterov CP Decomposition) for analyzing large-scale high-order data. The novelty of our proposed method is summarized in the following contributions:

  1. Online CP Decomposition. Our method is capable of updating the decomposition for a new datum X^(t+1) in a single step. We realize this by employing the stochastic gradient descent (SGD) algorithm to solve the CP decomposition as a baseline method.

  2. Global convergence guarantee. We follow the perturbation approach, which adds a small amount of noise to the gradient update step so that the next update step starts moving away from a saddle point in the correct direction.

  3. Optimal convergence rate. Our method incorporates Nesterov's Accelerated Gradient (NAG) method into the SGD algorithm to optimally accelerate the convergence rate [27]. It achieves a global convergence rate of O(1/k²), compared to O(1/k) for traditional SGD.

  4. Empirical analysis on structural datasets. We conduct experimental analysis using laboratory-based and real-life datasets in the field of SHM. The experimental analysis shows that our method achieves more stable and faster tensor decomposition compared to other existing online and offline methods.

The remainder of this paper is organized as follows. Section II introduces background knowledge and review of the related work. Section III describes our novel NeCPD algorithm for CP decomposition based on SGD algorithm augmented with NAG method and perturbation approach. Section IV presents the motivation of this work. Section V shows the performance of NeCPD on structural datasets and presents our experimental results on both laboratory-based and real-life datasets. The conclusion and discussion of future research work are presented in Section VI.

II Background and Related Work

II-A CP Decomposition

Given a three-way tensor X ∈ R^{I×J×K}, CP decomposes X into three matrices A ∈ R^{I×R}, B ∈ R^{J×R} and C ∈ R^{K×R}, where R is the number of latent factors. It can be written as follows:

X ≈ Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r    (1)

where "∘" is the vector outer product, and a_r, b_r and c_r are the r-th columns of the component matrices A, B and C, so that each entry x_{ijk} ≈ Σ_{r=1}^{R} a_{ir} b_{jr} c_{kr}. The main goal of CP decomposition is to decrease the sum square error between the model and the given tensor X. Equation 2 shows the loss function that needs to be optimized:

L(X, A, B, C) = min_{A,B,C} (1/2) ‖ X − Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r ‖_F²    (2)

where ‖·‖_F² is the sum of squares of its argument and the subscript F denotes the Frobenius norm. The loss function in Equation 2 is a non-convex problem with many local minima, since it aims to optimize the sum of squares of three matrices. Several algorithms have been proposed to solve CP decomposition [28, 17, 22]. Among these, ALS has been heavily employed; it repeatedly solves each component matrix while locking all the other components until it converges [21]. The rational idea of the least squares algorithm is to set the partial derivative of the loss function to zero with respect to the parameter we need to minimize. Algorithm 1 presents the detailed steps of ALS.

Alternating Least Squares
Input: Tensor X ∈ R^{I×J×K}, number of components R
Output: Matrices A, B and C
  1. Initialize A, B, C

  2. Repeat

  3. A ← X_(1)(C ⊙ B)(C^T C ∗ B^T B)^†, and similarly for B and C (X_(i) is the unfolded matrix of X in mode i, ⊙ is the Khatri-Rao product, ∗ the Hadamard product and † the Moore-Penrose pseudo-inverse)

  4. until convergence

Algorithm 1 Alternating Least Squares for CP
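For concreteness, the ALS steps above can be sketched in a few lines of Python/NumPy. The unfolding convention (mode-n fibers laid out in C order) and the helper names are our own illustration, not part of the original algorithm description.

```python
import numpy as np

def unfold(X, mode):
    """Matricize tensor X along the given mode (C-order layout of the remaining axes)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of U (I x R) and V (J x R) -> (I*J x R)."""
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def cp_als(X, R, n_iter=100):
    """ALS for three-way CP: solve each factor in turn with the others fixed."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    for _ in range(n_iter):
        # Each update is the least-squares solution with the other two factors locked.
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

Note that the Khatri-Rao ordering must match the unfolding layout; with the C-order unfolding used here, the second factor in each product varies fastest.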

II-B Stochastic Gradient Descent

The stochastic gradient descent algorithm is a key tool for optimization problems. Let's say our aim is to optimize a loss function L(w) = E_x[ℓ(x, w)], where x is a data point drawn from a distribution D and w is the variable to learn. The stochastic optimization problem can be defined as follows:

w* = argmin_w E_x[ℓ(x, w)]    (3)

The stochastic gradient descent method solves the problem defined in Equation 3 by repeatedly updating w to minimize L(w). It starts with some initial value w^(0) and then repeatedly performs the update:

w^(t+1) := w^(t) − η ∇ℓ(x^(t), w^(t))    (4)

where η is the learning rate and x^(t) is a random sample drawn from the given distribution D. This method guarantees convergence of the loss function to the global minimum when it is convex. However, it can be susceptible to the many local minima and saddle points that arise when the loss function is non-convex, and the problem then becomes NP-hard in general. In fact, the main bottleneck here is the existence of many saddle points rather than the local minima [9]. This is because the gradient algorithm depends only on the gradient information, and a point may have ∇L(w) = 0 even though it is not a minimum.
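As a toy illustration of the generic SGD update above, the loop below fits a least-squares model from a stream of samples; the loss and the parameter names are chosen purely for the example.

```python
import numpy as np

def sgd(grad, w0, samples, eta=0.01):
    """Generic SGD: w <- w - eta * grad(x, w) for each incoming sample x."""
    w = np.asarray(w0, dtype=float)
    for x in samples:
        w = w - eta * grad(x, w)
    return w

# Example: stochastic least squares, l(x, w) = 0.5 * (w . a - y)^2 for x = (a, y).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
data = []
for _ in range(5000):
    a = rng.standard_normal(2)
    data.append((a, a @ w_true))          # noiseless stream of (feature, target)

grad = lambda x, w: (w @ x[0] - x[1]) * x[0]  # gradient of the per-sample loss
w_hat = sgd(grad, np.zeros(2), data, eta=0.05)
```

Each sample triggers exactly one cheap update, which is what makes SGD attractive in the online setting discussed above.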

Previous studies have used SGD for matrix factorization and tensor decomposition, with extensions to handle the aforementioned issues. Naiyang et al. [11] applied Nesterov's optimal gradient method to SGD for non-negative matrix factorization, accelerating the NMF process with less computational time. Similarly, Shuxin et al. [30] used an SGD algorithm for matrix factorization based on a Taylor expansion and Hessian information; they proposed a new asynchronous SGD algorithm to compensate for the delay resulting from Hessian computation.

Recently, SGD has attracted several researchers working on tensor decomposition. For instance, Ge et al. [9] proposed a perturbed SGD (PSGD) algorithm for orthogonal tensor optimization. They presented several theoretical analyses that ensure convergence; however, the method is not applicable to non-orthogonal tensors, and it does not address the problem of slow convergence. Similarly, Maehara et al. [18] proposed a new algorithm for CP decomposition based on a combination of the SGD and ALS methods (SALS). The authors claim the algorithm works well in terms of accuracy, yet its theoretical properties have not been completely proven and the saddle point problem is not addressed. Rendle and Schmidt-Thieme [23] proposed a pairwise interaction tensor factorization method based on Bayesian personalized ranking; the algorithm was designed to work only on three-way tensor data. To the best of our knowledge, ours is the first work that applies an SGD algorithm augmented with Nesterov's optimal gradient and perturbation methods for optimal CP decomposition of multi-way tensor data.

III Nesterov CP Decomposition (NeCPD)

Given an N-order tensor X ∈ R^{I_1 × ... × I_N}, NeCPD initially divides the CP problem into convex sub-problems, since its loss function is a non-convex problem which may have many local minima. For simplicity, we present our method for three-way tensor data; however, it naturally extends to higher-order tensors.

For a three-way tensor X ∈ R^{I×J×K}, X can be decomposed into three matrices A ∈ R^{I×R}, B ∈ R^{J×R} and C ∈ R^{K×R}, where R is the number of latent factors. Following SGD, we need to calculate the partial derivative of the loss function defined in Equation 2 with respect to the three modes A, B and C alternately, as follows:

∂L/∂A = (A(C ⊙ B)^T − X_(1))(C ⊙ B)
∂L/∂B = (B(C ⊙ A)^T − X_(2))(C ⊙ A)    (5)
∂L/∂C = (C(B ⊙ A)^T − X_(3))(B ⊙ A)

where X_(i) is the unfolding matrix of tensor X in mode i. The gradient update steps for A, B and C are:

A^(t+1) := A^(t) − η^(t) ∂L/∂A,   B^(t+1) := B^(t) − η^(t) ∂L/∂B,   C^(t+1) := C^(t) − η^(t) ∂L/∂C    (6)

The rational idea of the SGD algorithm is to depend only on the gradient information of the loss function. In such a non-convex setting, this partial derivative may encounter points with ∇L = 0 even though they are not at a global minimum. These points are known as saddle points, and they may detain the optimization process from reaching the desired local minimum if not escaped [9]. Saddle points can be identified by studying the second-order derivative (aka the Hessian) ∇²L. Theoretically, when ∇²L(w) is positive definite, w must be a local minimum; if ∇²L(w) is negative definite, then we are at a local maximum; if ∇²L(w) has both positive and negative eigenvalues, the point is a saddle point. Second-order methods guarantee convergence, but the cost of computing the Hessian matrix is high, which makes such methods infeasible for high-dimensional data and online learning. Ge et al. [9] show that saddle points are very unstable and can be escaped if we slightly perturb them with some noise. Based on this, we use the perturbation approach, which adds Gaussian noise to the gradient; this reinforces the next update step to start moving away from the saddle point in the correct direction. After a random perturbation, it is highly unlikely that the point remains in the same band, and hence the saddle point can be efficiently escaped. Since we are interested in a fast optimization process due to the online setting, we further incorporate Nesterov's method into the perturbed-SGD algorithm to accelerate the convergence rate. Recently, Nesterov's Accelerated Gradient (NAG) [19] has received much attention for solving convex optimization problems [11, 20, 10]. It introduces a smart variation of momentum that works slightly better than standard momentum. This technique modifies the traditional SGD by introducing a velocity v and a friction (momentum) coefficient μ, which controls the velocity and prevents overshooting the valley while allowing faster descent. The idea behind Nesterov's method is to calculate the gradient at the position the momentum is about to take us to, instead of at the current position: it performs a simple step of gradient descent and then shifts slightly further in the direction given by the previous velocity. In this setting, we model the gradient update step for mode A with NAG as follows:

A^(t+1) := A^(t) + v^(t+1)    (7)

v^(t+1) := μ v^(t) − η^(t) ( ∂L/∂A (A^(t) + μ v^(t)) + λ A^(t) + ε )    (8)

where ε is Gaussian noise, η^(t) is the step size, and λ is the regularization and penalization parameter on the norms, which yields smoother representations of the outcome and thus helps bypass the perturbation surrounding the local-minimum problem. The updates for B and C are similar to the ones above. With NAG, our method achieves a global convergence rate of O(1/k²), compared to O(1/k) for traditional gradient descent. Based on the above model, we present our NeCPD method in Algorithm 2.

Input: Tensor X ∈ R^{I×J×K}, number of components R
Output: Matrices A, B and C
  1. Initialize A, B, C

  2. Repeat

  3. Compute the partial derivatives of the loss with respect to A, B and C

  4. Compute the velocities v of A, B and C using Equation 8

  5. Update the matrices A, B and C using Equation 7

  6. until convergence

Algorithm 2 NeCPD algorithm
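A minimal NumPy sketch of one NeCPD-style update for the factor A is given below (the updates for B and C are analogous). The parameter names and the exact placement of the regularization and noise terms are our illustrative assumptions, not a verbatim transcription of the paper's update rule.

```python
import numpy as np

def necpd_step(X, A, B, C, vA, eta=0.01, mu=0.9, lam=1e-3, noise=1e-4, rng=None):
    """One perturbed Nesterov update of factor A with B and C held fixed.

    X  : full three-way tensor (I x J x K); its mode-1 unfolding is used
    vA : velocity (momentum buffer) for A
    """
    rng = rng or np.random.default_rng()
    X1 = X.reshape(X.shape[0], -1)                              # mode-1 unfolding (C order)
    KR = np.einsum('jr,kr->jkr', B, C).reshape(-1, A.shape[1])  # Khatri-Rao of B and C
    A_look = A + mu * vA                                        # Nesterov look-ahead point
    grad = (A_look @ KR.T - X1) @ KR + lam * A_look             # dL/dA at look-ahead + L2 term
    grad += noise * rng.standard_normal(grad.shape)             # perturbation to escape saddles
    vA = mu * vA - eta * grad                                   # velocity update
    return A + vA, vA                                           # new factor and velocity
```

In an online setting this step would be invoked once per incoming event, cycling over the three factor matrices.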

IV Motivation

Numerous types of data are naturally structured as multi-way data. For instance, structural health monitoring (SHM) data can be represented in a three-way form of feature × location × time. Arranging and analyzing SHM data in this multidimensional form allows us to capture correlations between sensors at different locations at the same time, which is not possible using a standard two-way matrix. Furthermore, in SHM only positive data instances, i.e., the healthy state, are available; thus the problem becomes one of anomaly detection in higher-order datasets. Rytter [24] affirms that damage identification also requires damage localization and severity assessment, which are considered much more complex than damage detection since they require a supervised learning approach [29].

Given positive three-way SHM data X ∈ R^{feature × location × time}, NeCPD decomposes X into three matrices A, B and C. The matrix C represents the temporal mode, where each row contains information about the vibration responses related to an event at time t. The analysis of this component matrix can help to detect damage to the monitored structure. Therefore, we use the matrix C to build a one-class anomaly detection model using only the positive training events. For each new incoming datum X^(t+1), we update the three matrices A, B and C incrementally as described in Algorithm 2. The constructed model then estimates the agreement between the new event and the trained data.

For damage localization, we analyze the data in the location matrix B, where each row captures meaningful information for each sensor location. When the matrix B is updated due to the arrival of a new event, we study the variation of the values in each row of B by computing the average distance from that row to its k nearest neighboring locations, and use it as an anomaly score for damage localization. For severity assessment in damage identification, we study the decision values returned from the one-class model, since a structure with more severe damage will behave much more differently from a normal one.
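The k-nn anomaly score described above can be sketched as follows; this is a small illustration assuming Euclidean distance between the rows of the location matrix B, with the function name being our own.

```python
import numpy as np

def knn_location_scores(B, k=3):
    """Anomaly score per sensor location: average Euclidean distance from
    each row of the location matrix B to its k nearest neighboring rows."""
    # Pairwise distance matrix between all rows of B.
    D = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1)
    D_sorted = np.sort(D, axis=1)        # column 0 is the zero self-distance
    return D_sorted[:, 1:k + 1].mean(axis=1)
```

A row whose score is much larger than the others corresponds to a sensor behaving differently from its neighbors, i.e., a candidate damage location.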

V Experimental Results

We conduct all our experiments using an Intel(R) Core(TM) i7 CPU 3.60GHz with 16GB memory. We use R language to implement our algorithms with the help of the two packages rTensor and e1071 for tensor tools and one-class model.

V-A Comparison on synthetic data

Our initial experiment compares our NeCPD method to the SGD, PSGD, and SALS algorithms in terms of robustness and convergence. We generated a synthetic tensor with a long time dimension from 12 random loading matrices whose entries were drawn from a uniform distribution. We evaluated the performance of each method by plotting the number of samples versus the root mean square error (RMSE), using the same learning rate for all experiments. It can be clearly seen from Figure 1 that NeCPD outperformed the SGD and PSGD algorithms in terms of convergence and robustness. Both SALS and NeCPD converged to a small RMSE, but NeCPD's was lower and reached faster.

Fig. 1: Comparison of algorithms in terms of robustness and convergence.

V-B Comparison on SHM data

We evaluate our NeCPD algorithm on laboratory-based and real-life structural datasets in the area of SHM. The two datasets are naturally in multi-way form. The first dataset is collected from a cable-stayed bridge in Western Sydney, Australia. The second one is a specimen building structure obtained from Los Alamos National Laboratory (LANL) [16].

V-B1 The Cable-Stayed Bridge:

We used 24 uniaxial accelerometers and 28 strain gauges to measure the vibration and strain responses of the bridge at different locations; however, only the acceleration data were used in this study. Figure 2 illustrates the locations of the 24 sensors on the bridge deck. Since the bridge is in healthy condition, and in order to evaluate the performance of our method in damage identification tasks, we placed two different stationary vehicles with different masses on the bridge to emulate two levels of damage severity [13, 7]. Three categories of data were collected in this experiment: "Healthy-Data" when the bridge is free of vehicles; "Car-Damage" when a light car is placed on the bridge close to one sensor location; and "Bus-Damage" when a heavy bus is placed at another location on the bridge. This experiment generated 262 samples (i.e., events) separated into three categories: "Healthy-Data" (125 samples), "Car-Damage" (107 samples) and "Bus-Damage" (30 samples). Each event consists of acceleration data for a period of 2 seconds sampled at a rate of 600 Hz; the resultant feature vector for each event is composed of 1200 frequency values.

Fig. 2: The locations of the 24 accelerometers on the bridge deck used in this study. The cross girder of the bridge is also displayed [4].

V-B2 Building Data:

This dataset is based on experiments conducted by LANL [16] using a specimen three-story building structure, as shown in Figure 3. Each joint in the building was instrumented with two accelerometers. The excitation data was generated using a shaker placed at a corner of the structure. Again for the sake of damage detection evaluation, damage was simulated by detaching or loosening the bolts at joints so that the aluminum floor plate could move freely relative to the Unistrut column. Three categories of data were collected in this experiment: "Healthy-Data" when all the bolts were firmly tightened; "Damage-3C" when the bolt at location 3C was loosened; and "Damage-1A3C" when the bolts at locations 1A and 3C were loosened simultaneously. This experiment generated 240 samples (i.e., events), also separated into three categories: "Healthy-Data" (150 samples), "Damage-3C" (60 samples) and "Damage-1A3C" (30 samples). The acceleration data was sampled at 1600 Hz, and each event was measured for a period of 5.12 seconds, resulting in a vector of 8192 frequency values.

Fig. 3: Three-story building and floor layout [16].

V-B3 Feature Extraction:

The raw sensing signals collected in the aforementioned experiments exist in the time domain. In practice, time-domain features may not capture the physical meaning of the structure, so it is worthwhile to convert the generated data to the frequency domain. For all datasets, we initially normalize the time-domain features to have zero mean and unit standard deviation; we then use the fast Fourier transform to convert them into the frequency domain. The resultant three-way data collected from the Cable-Stayed Bridge has a structure of 600 features × 24 sensors × 262 events. For the Building data, we compute the difference between the signals of each two adjacent sensors, which leads to 12 different joints across the three stories as in [16]. We then select the first 150 frequencies as a feature vector, resulting in three-way data with a structure of 768 features × 12 locations × 240 events.
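The normalization and FFT step can be sketched as below; the array layout (events × sensors × time) and the helper name are our own assumptions for illustration.

```python
import numpy as np

def frequency_features(signals, n_keep):
    """signals: (events, sensors, time) raw acceleration in the time domain.

    Normalize each time series to zero mean / unit standard deviation,
    then keep the magnitudes of the first n_keep FFT frequency bins."""
    mu = signals.mean(axis=-1, keepdims=True)
    sd = signals.std(axis=-1, keepdims=True)
    z = (signals - mu) / sd
    spec = np.abs(np.fft.rfft(z, axis=-1))   # one-sided magnitude spectrum
    return spec[..., :n_keep]                # shape: (events, sensors, n_keep)
```

Transposing the result gives the feature × sensor × event tensor used as input to the decomposition.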

V-B4 Experimental Setup:

For all case studies we apply the following procedures:

  • The bootstrap technique is used to randomly select 80% of the healthy samples for training and the remaining 20%, together with the damage samples, for testing. All the reported accuracies are based on the average results over ten trials of the bootstrap experiment.

  • The core consistency diagnostic (CORCONDIA) technique described in [6] is used to decide the number of rank-one tensors in the NeCPD.

  • We use a one-class support vector machine (OCSVM) [25] as our one-class model for anomaly detection. The Gaussian kernel parameter in OCSVM is tuned using the Edged Support Vector (ESV) algorithm [2], and the rate of anomalies was set to 0.05.

  • The F-score measure is used to compute the accuracy values. It is defined as F-score = 2 · (precision × recall)/(precision + recall), where precision = TP/(TP + FP) and recall = TP/(TP + FN) (the numbers of true positives, false positives and false negatives are abbreviated TP, FP and FN, respectively).

  • The results of the competitive method SALS proposed in [18] are compared against our NeCPD method.
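The F-score used in the setup above is straightforward to compute from the confusion counts:

```python
def f_score(tp, fp, fn):
    """F-score = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```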

V-B5 Results and Discussions:

V-B6 The Cable-Stayed Bridge:

Our NeCPD method with one-class SVM was initially validated using the vibration data collected from the cable-stayed bridge described in Section V-B1. The healthy training three-way tensor data (i.e., the training set) was in the form of 600 features × 24 sensors × 100 events. The 137 samples related to the two damage cases were added to the remaining 20% of the healthy data to form a testing set, which was later used for model evaluation. In this experiment, NeCPD achieved a higher damage detection F-score on the testing data than one-class SVM using SALS.

As we anticipated, tensor analysis with NeCPD is able to capture the underlying structure in multi-way data with better convergence. This is demonstrated by plotting the decision values returned from one-class SVM based on NeCPD (as shown in Figure 4). We can clearly separate the two damage cases ("Car-Damage" and "Bus-Damage") in this dataset, where the decision values decrease further for the samples related to the more severe damage case (i.e., "Bus-Damage"). These results suggest using the decision values obtained by NeCPD and one-class SVM as structural health scores to identify damage severity in a one-class setting. The resultant decision values of one-class SVM based on SALS are also able to track the progress of the damage severity in the structure, but with only a slight decreasing trend in decision values for "Bus-Damage" (see Figure 4).

(a) NeCPD.
(b) SALS.
Fig. 4: Damage estimation applied on Bridge data using decision values obtained by one-class SVM.
(a) NeCPD.
(b) SALS.
Fig. 5: Damage localization for the Bridge data: NeCPD successfully localized damage locations.

The last step was to analyze the location matrix B obtained by NeCPD to locate the detected damage. Each row in this matrix captures meaningful information for each sensor location, so we calculate the average distance from each row in B to its k nearest neighboring rows. Figure 5 shows the obtained k-nn score for each sensor. The first 25 events (depicted on the x-axis) represent healthy data, followed by 107 events related to "Car-Damage" and 30 events related to "Bus-Damage". It can be clearly observed that the NeCPD method successfully localized the damage in the structure: the sensors closest to the "Car-Damage" and "Bus-Damage" positions behave significantly differently from all the other sensors once the respective damage is introduced. We also observe that the sensors adjacent to the damage location react differently due to the arrival pattern of the damage events. The SALS method, however, is not able to successfully locate the damage since it fails to update the location matrix incrementally.

V-B7 Building Data:

Our second experiment is conducted using the acceleration data acquired from the 24 sensors instrumented on the three-story building described in Section V-B2. The healthy training three-way data (training set) was in the form of 768 features × 12 locations × 120 events. The remaining 20% of the healthy data and the data obtained from the two damage cases were used for testing (i.e., the testing set). NeCPD with one-class SVM achieved a higher F-score on the testing data than one-class SVM based on SALS.

Similar to the bridge dataset, we further analyzed the resultant decision values, which were also able to characterize damage severity. Figure 6 demonstrates that the more severe the damage in the test data (the "Damage-1A3C" case relative to "Damage-3C"), the greater the deviation from the training data and the lower the decision values.

(a) NeCPD.
(b) SALS.
Fig. 6: Damage estimation applied on Building data using decision values obtained by one-class SVM.

The last experiment computes the k-nn score for each sensor based on the average distance from each row of the matrix B to its k nearest neighboring rows. Figure 7 shows the resultant k-nn score for each sensor. The first 30 events (depicted on the x-axis) represent the healthy data, followed by 60 events describing when the damage was introduced at location 3C. The last 30 events represent the damage that occurred at both locations 1A and 3C. It can be clearly observed that the NeCPD method successfully localized the structural damage: the sensors at locations 1A and 3C behave significantly differently from all the other sensors once the corresponding damage is introduced. However, the SALS method is not able to successfully locate the damage since it fails to update the location matrix incrementally.

(a) NeCPD.
(b) SALS.
Fig. 7: Damage localization for the Building data: NeCPD successfully localized damage locations.

VI Conclusion

This paper investigated CP decomposition with a stochastic gradient descent algorithm for multi-way data analysis, leading to a new method named NeCPD for online tensor decomposition. Our new method guarantees convergence for a given non-convex problem by modeling the second-order derivative of the loss function and incorporating a small amount of noise into the gradient update. Furthermore, NeCPD employs Nesterov's method to compensate for the delays of the optimization process and accelerate the convergence rate. On laboratory and real datasets from the area of SHM, NeCPD with a one-class SVM model for anomaly detection achieves accurate results in damage detection, localization, and severity assessment in online and one-class settings. Key future work includes parallelizing the tensor decomposition with NeCPD and applying NeCPD to datasets from other domains.

VII Acknowledgement

The authors wish to thank the Roads and Maritime Services (RMS) in New South Wales, New South Wales Government in Australia and Data61 (CSIRO) for provision of the support and testing facilities for this research work. Thanks are also extended to Western Sydney University for facilitating the experiments on the cable-stayed bridge.


  • [1] E. Acar and B. Yener (2009) Unsupervised multiway data analysis: a literature survey. IEEE transactions on knowledge and data engineering 21 (1), pp. 6–20. Cited by: §I.
  • [2] A. Anaissi, A. Braytee, and M. Naji (2018) Gaussian kernel parameter optimization in one-class support vector machines. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. Cited by: 3rd item.
  • [3] A. Anaissi, Y. Lee, and M. Naji (2018) Regularized tensor learning with adaptive one-class support vector machines. In International Conference on Neural Information Processing, pp. 612–624. Cited by: §I.
  • [4] A. Anaissi, M. Makki Alamdari, T. Rakotoarivelo, and N. Khoa (2018) A tensor-based structural damage identification and severity assessment. Sensors 18 (1), pp. 111. Cited by: §I, Fig. 2.
  • [5] A. Anandkumar and R. Ge (2016) Efficient approaches for escaping higher order saddle points in non-convex optimization. In Conference on learning theory, pp. 81–102. Cited by: §I.
  • [6] R. Bro and H. A. Kiers (2003) A new efficient method for determining the number of components in parafac models. Journal of chemometrics 17 (5), pp. 274–286. Cited by: 2nd item.
  • [7] F. Cerda, J. Garrett, J. Bielak, P. Rizzo, J. Barrera, Z. Zhang, S. Chen, M. T. McCann, and J. Kovacevic (2012) Indirect structural health monitoring in bridges: scale experiments. In Proc. Int. Conf. Bridge Maint., Safety Manag., Lago di Como, pp. 346–353. Cited by: §V-B1.
  • [8] L. Eldén (1980) Perturbation theory for the least squares problem with linear equality constraints. SIAM Journal on Numerical Analysis 17 (3), pp. 338–350. Cited by: §I.
  • [9] R. Ge, F. Huang, C. Jin, and Y. Yuan (2015) Escaping from saddle points—online stochastic gradient for tensor decomposition. In Conference on Learning Theory, pp. 797–842. Cited by: §I, §II-B, §II-B, §III.
  • [10] S. Ghadimi and G. Lan (2016) Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming 156 (1-2), pp. 59–99. Cited by: §III.
  • [11] N. Guan, D. Tao, Z. Luo, and B. Yuan (2012) NeNMF: an optimal gradient method for nonnegative matrix factorization. IEEE Transactions on Signal Processing 60 (6), pp. 2882–2898. Cited by: §II-B, §III.
  • [12] N. L. D. Khoa, A. Anaissi, and Y. Wang (2017) Smart infrastructure maintenance using incremental tensor analysis. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 959–967. Cited by: §I, §I.
  • [13] A. Kody, X. Li, and B. Moaveni (2013) Identification of physically simulated damage on a footbridge based on ambient vibration data. In Structures Congress 2013: Bridging Your Passion with Your Profession, pp. 352–362. Cited by: §V-B1.
  • [14] T. G. Kolda and B. W. Bader (2009) Tensor decompositions and applications. SIAM review 51 (3), pp. 455–500. Cited by: §I.
  • [15] T. G. Kolda and J. Sun (2008) Scalable tensor decompositions for multi-aspect data mining. In 2008 Eighth IEEE international conference on data mining, pp. 363–372. Cited by: §I.
  • [16] A. C. Larson and R. Von Dreele (1987) Los Alamos National Laboratory Report No. LA-UR-86-748. Technical report. Cited by: Fig. 3, §V-B2, §V-B3, §V-B.
  • [17] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky (2014) Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553. Cited by: §II-A.
  • [18] T. Maehara, K. Hayashi, and K. Kawarabayashi (2016) Expected tensor decomposition with stochastic gradient descent. In Thirtieth AAAI Conference on Artificial Intelligence. Cited by: §I, §II-B, 5th item.
  • [19] Y. Nesterov (2013) Introductory lectures on convex optimization: a basic course. Vol. 87, Springer Science & Business Media. Cited by: §III.
  • [20] A. Nitanda (2014) Stochastic proximal gradient descent with acceleration techniques. In Advances in Neural Information Processing Systems, pp. 1574–1582. Cited by: §III.
  • [21] E. E. Papalexakis, C. Faloutsos, and N. D. Sidiropoulos (2017) Tensors for data mining and data fusion: models, applications, and scalable algorithms. ACM Transactions on Intelligent Systems and Technology (TIST) 8 (2), pp. 16. Cited by: §II-A.
  • [22] S. Rendle, L. Balby Marinho, A. Nanopoulos, and L. Schmidt-Thieme (2009) Learning optimal ranking with tensor factorization for tag recommendation. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 727–736. Cited by: §II-A.
  • [23] S. Rendle and L. Schmidt-Thieme (2010) Pairwise interaction tensor factorization for personalized tag recommendation. In Proceedings of the third ACM international conference on Web search and data mining, pp. 81–90. Cited by: §I, §II-B.
  • [24] A. Rytter (1993) Vibrational based inspection of civil engineering structures. Ph.D. Thesis, Dept. of Building Technology and Structural Engineering, Aalborg University. Cited by: §IV.
  • [25] B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, and J. C. Platt (2000) Support vector method for novelty detection. In Advances in Neural Information Processing Systems, pp. 582–588. Cited by: 3rd item.
  • [26] J. Sun, D. Tao, S. Papadimitriou, P. S. Yu, and C. Faloutsos (2008) Incremental tensor analysis: theory and applications. ACM Transactions on Knowledge Discovery from Data (TKDD) 2 (3), pp. 11. Cited by: §I.
  • [27] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton (2013) On the importance of initialization and momentum in deep learning. ICML (3) 28 (1139-1147), pp. 5. Cited by: item 3.
  • [28] P. Symeonidis, A. Nanopoulos, and Y. Manolopoulos (2008) Tag recommendations based on tensor dimensionality reduction. In Proceedings of the 2008 ACM conference on Recommender systems, pp. 43–50. Cited by: §II-A.
  • [29] K. Worden and G. Manson (2006) The application of machine learning to structural health monitoring. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 365 (1851), pp. 515–537. Cited by: §IV.
  • [30] S. Zheng, Q. Meng, T. Wang, W. Chen, N. Yu, Z. Ma, and T. Liu (2017) Asynchronous stochastic gradient descent with delay compensation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 4120–4129. Cited by: §II-B.
  • [31] S. Zhou, N. X. Vinh, J. Bailey, Y. Jia, and I. Davidson (2016) Accelerating online cp decompositions for higher order tensors. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1375–1384. Cited by: §I.