1 Introduction
Tensor decomposition has a rich multidisciplinary history, but it has only recently become ubiquitous due to a surge of applications in data mining, machine learning, and signal processing (Kolda and Bader, 2009; Rabanser et al., 2017; Sidiropoulos et al., 2017). The most prevalent tensor decompositions are the CP and Tucker decompositions, which are often thought of as generalized singular value decompositions. Consequently, there are natural notions of low-rank tensor decomposition. Unlike matrix factorization, however, even computing the analogs of rank for a tensor is NP-hard (Hillar and Lim, 2013). Therefore, most low-rank tensor decomposition algorithms fix the rank structure in advance and then optimize the variables of the decomposition to fit the data. While conceptually simple, this technique can be extremely effective for many real-world applications since high-dimensional data is often inherently low-dimensional.

One of the cornerstones of low-rank tensor decomposition is the alternating least squares (ALS) algorithm. For CP and Tucker decompositions, ALS cyclically optimizes disjoint blocks of variables while keeping all others fixed. If no additional constraints or regularization are imposed, then each step of ALS is an ordinary least squares problem. For Tucker decompositions, the higher-order singular value decomposition (HOSVD) and higher-order orthogonal iteration (HOOI) algorithms (Kolda and Bader, 2009) are popular alternatives, but (1) they do not scale as easily since they compute SVDs of matricizations of the data tensor in each step, and (2) they do not work well for data with missing entries. Therefore, we focus on ALS for Tucker decompositions and incorporate a new technique from randomized numerical linear algebra called ridge leverage score sampling to speed up its bottleneck step. Recently, Cheng et al. (2016) and Larsen and Kolda (2020) showed that (statistical) leverage score sampling is useful for accelerating ALS for CP decompositions. Leverage scores measure the importance of each observation in a least squares problem, and were recently generalized by Alaoui and Mahoney (2015) to account for Tikhonov regularization, hence the name ridge leverage scores. The CP decomposition algorithms in Cheng et al. (2016); Larsen and Kolda (2020), as well as the work of Diao et al. (2019) on Kronecker product regression, exploit the fact that the necessary leverage score distribution can be closely approximated by a related product distribution. This ultimately leads to efficient sampling subroutines. We follow a similar approach to give a fast sampling-based ALS algorithm for Tucker decompositions, and we show how to seamlessly extend leverage score sampling methods to account for L2 regularization (i.e., approximate ridge regression).

1.1 Our Contributions
This work gives several new results for ridge leverage score sampling and its applications in lowrank tensor decomposition. Below is a summary of our contributions:

Our first result is a method for augmenting approximate ridge leverage score distributions such that (1) we can sample from the augmented distribution in the same amount of time as the original distribution, and (2) if we sample rows according to this augmented distribution, then we can construct a sketched version of any ridge regression problem such that the solution vector for the sketch gives a $(1+\varepsilon)$-approximation to the original instance. We then show that the statistical leverage scores of the design matrix for any ridge regression problem are a useful overestimate of the ridge leverage scores when augmented. Moreover, we quantify how the sample complexity of sketching algorithms decreases as the value of $\lambda$ increases (i.e., as the effective dimensionality of the ridge regression problem shrinks).

Our second key result explores how ridge leverage score sampling can be used to compute lowrank Tucker decompositions of tensors. We consider the ubiquitous alternating least squares algorithm and speed up its core tensor update step—a notorious bottleneck for Tucker decomposition algorithms. We use our approximate ridge regression subroutine and exploit the fact that the design matrix in every core tensor update is a Kronecker product of the factor matrices. In particular, this means that the leverage score distribution of the design matrix is a product distribution of the leverage score distributions for the factor matrices, hence we can sample rows from the augmented distribution in time sublinear in the number of its rows. Our core tensor update is designed to be fast both in theory and in practice, since its time complexity is predominantly a function of the rank and sketching parameters.

Next, as a step towards better understanding alternating least squares for tensor completion, we derive upper bounds for the ridge leverage scores when rows are removed from the design matrix (i.e., if the tensor has missing entries). While these bounds hold for general matrices and can be pessimistic if a large fraction of rows are removed, the proofs provide useful insight into exactly how ridge leverage scores generalize classical leverage scores.

Lastly, we demonstrate how our approximate core tensor update based on fast (factored) leverage score sampling leads to massive improvements in the running time for lowrank Tucker decompositions while preserving the original solution quality of ALS. Specifically, we profile this algorithm using large, dense synthetic tensors and the movie data of Malik and Becker (2018), which explores sketching Tucker decompositions in a data stream model.
1.2 Related Works
Tensor Decomposition.
The algorithms of Cheng et al. (2016) and Larsen and Kolda (2020) that use leverage score sampling with ALS to compute (unregularized) CP decompositions are most directly related. Avoiding degeneracies in a CP decomposition using ALS has been carefully studied in Comon et al. (2009). For first-order methods, a step of gradient descent typically takes as long as an iteration of ALS since both involve computing the same quantities (Sidiropoulos et al., 2017, Remark 3). SGD-based methods, however, are known to be efficient for certain structured CP decompositions Ge et al. (2015). For Tucker decompositions, Frandsen and Ge (2020) recently showed that if the tensor being learned has an exact Tucker decomposition, then all local minima are globally optimal. Streaming algorithms for Tucker decompositions based on sketching and SGD have also recently been explored by Malik and Becker (2018) and Traore et al. (2019), respectively. The more general problem of low-rank tensor completion is a fundamental approach for estimating the values of missing data (Acar et al., 2011; Jain et al., 2013; Jain and Oh, 2014; Filipović and Jukić, 2015), and has led to notable breakthroughs in computer vision (Liu et al., 2012). Popular approaches for tensor completion are based on Riemannian optimization (Kressner et al., 2014; Kasai and Mishra, 2016; Nimishakavi et al., 2018), alternating least squares (Zhou et al., 2013; Grasedyck et al., 2015; Liu and Moitra, 2020), and projected gradient methods (Yu and Liu, 2016).

Ridge Leverage Scores.
Alaoui and Mahoney (2015) recently extended the idea of statistical leverage scores to the setting of ridge regression, and used these scores to derive a sampling distribution that reduces the sketch size (i.e., the number of sampled rows) to the effective dimension of the problem. Since then, sampling from approximate ridge leverage score distributions has played a critical role in fundamental works for sparse low-rank matrix approximation Cohen et al. (2017), an improved Nyström method via recursive sampling Musco and Musco (2017), bounding the statistical risk of ridge regression McCurdy (2018), a new sketching-based iterative method for ridge regression Chowdhury et al. (2018), and improved bounds for the number of random Fourier features needed for ridge regression as a function of the effective dimension Li et al. (2019). Closely related are fast recursive algorithms for computing approximate leverage scores Cohen et al. (2015) and for solving overconstrained least squares Li et al. (2013). Recent works have also explored sketching for sparse Kronecker product regression, which exploit a similar product distribution property of leverage scores Diao et al. (2018, 2019).
2 Preliminaries
Notation and Terminology.
The order of a tensor is the number of its dimensions, also known as ways or modes. Scalars (zeroth-order tensors) are denoted by normal lowercase letters (e.g., $x$), vectors (first-order tensors) by boldface lowercase letters (e.g., $\mathbf{x}$), and matrices (second-order tensors) by boldface uppercase letters (e.g., $\mathbf{X}$). Higher-order tensors are denoted by boldface script letters (e.g., $\mathcal{X}$). For higher-order tensors, we use normal uppercase letters to denote the size of an index set (e.g., $I_n$). The $i$-th entry of a vector $\mathbf{x}$ is denoted by $x_i$, the $(i, j)$-th entry of a matrix $\mathbf{X}$ by $x_{ij}$, and the $(i, j, k)$-th entry of a third-order tensor $\mathcal{X}$ by $x_{ijk}$.
Linear Algebra.
Let $\mathbf{I}_n$ denote the $n \times n$ identity matrix and $\mathbf{0}_{m \times n}$ denote the $m \times n$ zero matrix. Denote the transpose of a matrix $\mathbf{A}$ by $\mathbf{A}^\top$ and the Moore–Penrose inverse by $\mathbf{A}^\dagger$. The singular value decomposition (SVD) of $\mathbf{A} \in \mathbb{R}^{m \times n}$ is a factorization of the form $\mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^\top$, where $\mathbf{U} \in \mathbb{R}^{m \times m}$ and $\mathbf{V} \in \mathbb{R}^{n \times n}$ are orthogonal matrices, and $\boldsymbol{\Sigma} \in \mathbb{R}^{m \times n}$ is a diagonal matrix with nonnegative real numbers on its diagonal. The entries of $\boldsymbol{\Sigma}$ are the singular values of $\mathbf{A}$, and the number of nonzero singular values is equal to $\operatorname{rank}(\mathbf{A})$. The compact SVD is a similar decomposition where $\boldsymbol{\Sigma}$ is a diagonal matrix containing only the nonzero singular values. Lastly, we denote the Kronecker product of two matrices $\mathbf{A}$ and $\mathbf{B}$ by $\mathbf{A} \otimes \mathbf{B}$.
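As a quick sanity check of these conventions, the following NumPy snippet (illustrative only, not from the paper) verifies the compact SVD, the Moore–Penrose inverse, and the mixed-product property of the Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))  # 6x4 matrix of rank 3

# Compact SVD: keep only the r nonzero singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())
assert r == np.linalg.matrix_rank(A) == 3

# The Moore–Penrose inverse follows from the compact SVD: A^+ = V diag(1/sigma) U^T.
A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T
assert np.allclose(A_pinv, np.linalg.pinv(A))

# Mixed-product property of the Kronecker product: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
B, C, D = (rng.standard_normal(sh) for sh in [(2, 3), (4, 2), (3, 5)])
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```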
Tensor Products.
The fibers of a tensor are vectors created by fixing all but one index (e.g., for a third-order tensor $\mathcal{X}$, the column, row, and tube fibers are denoted by $\mathbf{x}_{:jk}$, $\mathbf{x}_{i:k}$, and $\mathbf{x}_{ij:}$, respectively). The mode-$n$ unfolding of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is the matrix $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times \prod_{m \ne n} I_m}$ that arranges the mode-$n$ fibers of $\mathcal{X}$ as the columns of $\mathbf{X}_{(n)}$ ordered lexicographically by index. Going one step further, the vectorization of $\mathcal{X}$ is the vector $\operatorname{vec}(\mathcal{X})$ formed by vertically stacking the entries of $\mathcal{X}$ ordered lexicographically by index (e.g., this transforms a matrix into a tall vector by stacking its columns).
The mode-$n$ product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and matrix $\mathbf{M} \in \mathbb{R}^{J \times I_n}$ is denoted by $\mathcal{Y} = \mathcal{X} \times_n \mathbf{M}$ with $\mathcal{Y} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}$. Intuitively, this operation multiplies each mode-$n$ fiber of $\mathcal{X}$ by the matrix $\mathbf{M}$. Elementwise, this operation is expressed as follows:
$$y_{i_1 \cdots i_{n-1} \, j \, i_{n+1} \cdots i_N} = \sum_{i_n = 1}^{I_n} x_{i_1 i_2 \cdots i_N} \, m_{j i_n}.$$
The Frobenius norm $\|\mathcal{X}\|_{\text{F}}$ of a tensor is the square root of the sum of the squares of all its entries.
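The unfolding and mode-$n$ product conventions above can be sketched in NumPy. Note that NumPy's row-major `reshape` fixes one particular lexicographic column ordering, which may differ from other conventions but does not affect the products themselves; the helper names below are our own:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: arrange the mode-n fibers of X as the columns of a matrix."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: rebuild the tensor of the given shape from its mode-n unfolding."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(X, M, n):
    """Multiply every mode-n fiber of X by the matrix M (the mode-n product)."""
    shape = list(X.shape)
    shape[n] = M.shape[0]
    return fold(M @ unfold(X, n), n, shape)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
M = rng.standard_normal((6, 4))

# Folding inverts unfolding.
assert np.allclose(fold(unfold(X, 1), 1, X.shape), X)

# The mode-1 product agrees with the elementwise definition (here via einsum).
Y = mode_n_product(X, M, 1)
assert Y.shape == (3, 6, 5)
assert np.allclose(Y, np.einsum('ja,iak->ijk', M, X))
```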
Tucker Decomposition.
The Tucker decomposition decomposes a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ into a core tensor $\mathcal{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$ and multiple factor matrices $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times R_n}$. We can express the problem of finding a Tucker decomposition $\widehat{\mathcal{X}} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \cdots \times_N \mathbf{A}^{(N)}$ of $\mathcal{X}$ as minimizing the loss function
$$L\big(\mathcal{G}, \mathbf{A}^{(1)}, \dots, \mathbf{A}^{(N)}; \mathcal{X}\big) = \big\|\mathcal{X} - \widehat{\mathcal{X}}\big\|_{\text{F}}^2 + \lambda \Big( \|\mathcal{G}\|_{\text{F}}^2 + \sum_{n=1}^{N} \|\mathbf{A}^{(n)}\|_{\text{F}}^2 \Big),$$
where $\lambda \ge 0$ is a regularization parameter. The elements of $\widehat{\mathcal{X}}$ are
$$\hat{x}_{i_1 i_2 \cdots i_N} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_N=1}^{R_N} g_{r_1 r_2 \cdots r_N} \, a^{(1)}_{i_1 r_1} a^{(2)}_{i_2 r_2} \cdots a^{(N)}_{i_N r_N}. \quad (1)$$
Equation (1) shows that $\widehat{\mathcal{X}}$ is the sum of $R_1 R_2 \cdots R_N$ rank-1 tensors. The tuple $(R_1, R_2, \dots, R_N)$ is the multilinear rank of the decomposition and is chosen to be much smaller than the dimensions of $\mathcal{X}$. Sometimes columnwise orthogonality constraints are enforced on the factor matrices, and hence the Tucker decomposition can be thought of as a higher-order SVD, but such constraints are not required.
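Equation (1) can be checked directly on a small random third-order example; the reconstruction, index choices, and einsum contraction below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
I, R = (5, 6, 7), (2, 3, 2)
G = rng.standard_normal(R)                                   # core tensor
A1, A2, A3 = (rng.standard_normal((I[n], R[n])) for n in range(3))

# Full Tucker reconstruction: contract the core with every factor matrix.
Xhat = np.einsum('abc,ia,jb,kc->ijk', G, A1, A2, A3)

# Equation (1) evaluated by hand at a single entry.
i, j, k = 4, 0, 6
val = sum(G[a, b, c] * A1[i, a] * A2[j, b] * A3[k, c]
          for a in range(2) for b in range(3) for c in range(2))
assert np.isclose(Xhat[i, j, k], val)

# Multilinear rank: the mode-1 unfolding has rank at most R_1 = 2.
assert np.linalg.matrix_rank(Xhat.reshape(5, -1)) == 2
```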
Ridge Leverage Scores.
The $\lambda$-ridge leverage score of the $i$-th row $\mathbf{a}_i^\top$ of a matrix $\mathbf{A} \in \mathbb{R}^{n \times d}$ is
$$\tau_i^{(\lambda)}(\mathbf{A}) = \mathbf{a}_i^\top \big(\mathbf{A}^\top \mathbf{A} + \lambda \mathbf{I}_d\big)^{\dagger} \mathbf{a}_i. \quad (2)$$
We also define the related cross ridge leverage score as $\tau_{ij}^{(\lambda)}(\mathbf{A}) = \mathbf{a}_i^\top (\mathbf{A}^\top \mathbf{A} + \lambda \mathbf{I}_d)^{\dagger} \mathbf{a}_j$. The matrix of cross ridge leverage scores is $\mathbf{T}^{(\lambda)} = \mathbf{A} (\mathbf{A}^\top \mathbf{A} + \lambda \mathbf{I}_d)^{\dagger} \mathbf{A}^\top$, and we denote its diagonal by the vector $\boldsymbol{\tau}^{(\lambda)}(\mathbf{A})$ since this vector contains the ridge leverage scores of $\mathbf{A}$. Ridge leverage scores generalize the statistical leverage scores of a matrix, in that setting $\lambda = 0$ recovers the leverage scores of $\mathbf{A}$, which we denote by the vector $\boldsymbol{\ell}(\mathbf{A})$. If we let $\mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^\top$ be the compact SVD of $\mathbf{A}$, it can be shown that
$$\tau_i^{(\lambda)}(\mathbf{A}) = \sum_{j=1}^{\operatorname{rank}(\mathbf{A})} \frac{\sigma_j^2}{\sigma_j^2 + \lambda} \, u_{ij}^2, \quad (3)$$
where $\mathbf{u}_i^\top$ is the $i$-th row of $\mathbf{U}$. Therefore, it follows that each $\tau_i^{(\lambda)}(\mathbf{A}) \le 1$ since $\mathbf{U}$ has orthonormal columns. The effective dimension of the ridge regression problem is the sum of ridge leverage scores:
$$d_\lambda = \sum_{i=1}^{n} \tau_i^{(\lambda)}(\mathbf{A}). \quad (4)$$
The regularization parameter $\lambda$ shrinks the dimensionality of the problem, so $d_\lambda \le \operatorname{rank}(\mathbf{A})$ with $d_\lambda = \operatorname{rank}(\mathbf{A})$ iff $\lambda = 0$. The ridge leverage score of a row measures its importance when constructing the row space of $\mathbf{A}$ in the context of ridge regression. See Alaoui and Mahoney (2015); Cohen et al. (2015) for further details and intuition about ridge leverage scores.
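The following short sketch (variable names are ours) computes ridge leverage scores from the pseudoinverse-based definition and from the compact SVD expression, and checks that the effective dimension shrinks with regularization:

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """tau_i = a_i^T (A^T A + lam I)^+ a_i, the ridge leverage score definition."""
    G = np.linalg.pinv(A.T @ A + lam * np.eye(A.shape[1]))
    return np.einsum('ij,jk,ik->i', A, G, A)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
lam = 2.0

tau = ridge_leverage_scores(A, lam)
# Same scores from the compact SVD: sum_j sigma_j^2/(sigma_j^2 + lam) * u_ij^2.
U, s, _ = np.linalg.svd(A, full_matrices=False)
tau_svd = (U**2 * (s**2 / (s**2 + lam))).sum(axis=1)
assert np.allclose(tau, tau_svd)

# lam = 0 recovers the classical leverage scores, which sum to rank(A) = 4.
lev = ridge_leverage_scores(A, 0.0)
assert np.allclose(lev, (U**2).sum(axis=1))

# The effective dimension d_lam shrinks as lam grows.
assert tau.sum() <= lev.sum() <= 4 + 1e-8
```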
3 Approximate Ridge Regression by Leverage Score Sampling
We start by introducing a row sampling-based approach for approximately solving any ridge regression problem via ridge leverage scores. Computing ridge leverage scores is often as expensive as solving the ridge regression problem itself, and thus is not immediately useful for constructing a smaller, sketched instance to use as a proxy. However, in this section, we show how to use ridge leverage score overestimates to construct a feasible sampling distribution over the rows of an augmented design matrix, and then we use this new distribution to efficiently sketch an ordinary least squares problem whose solution vector is a $(1+\varepsilon)$-approximation to the input ridge regression problem. Furthermore, we show that classical leverage scores are a sufficient overestimate of ridge leverage scores, for any regularization strength $\lambda \ge 0$, and always result in efficient sketching subroutines.
It will be useful to first give some context into the derivation of ridge leverage scores introduced by Alaoui and Mahoney (2015). Let us rewrite the ridge regression objective $\min_{\mathbf{x}} \|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2 + \lambda \|\mathbf{x}\|_2^2$ as an ordinary least squares problem in terms of an augmented design matrix and response vector:
$$\min_{\mathbf{x} \in \mathbb{R}^d} \big\|\bar{\mathbf{A}}\mathbf{x} - \bar{\mathbf{b}}\big\|_2^2, \quad \text{where } \bar{\mathbf{A}} = \begin{bmatrix} \mathbf{A} \\ \sqrt{\lambda}\,\mathbf{I}_d \end{bmatrix} \text{ and } \bar{\mathbf{b}} = \begin{bmatrix} \mathbf{b} \\ \mathbf{0}_d \end{bmatrix}. \quad (5)$$
Lemma 3.1.
Let $\mathbf{A} \in \mathbb{R}^{n \times d}$ be any matrix and let $\bar{\mathbf{A}}$ be defined as in Equation (5). For each row index $i \in [n]$, the ridge leverage scores of $\mathbf{A}$ are equal to the corresponding leverage scores of $\bar{\mathbf{A}}$. Concretely, we have $\tau_i^{(\lambda)}(\mathbf{A}) = \ell_i(\bar{\mathbf{A}})$.
Proof.
The lemma follows from $\bar{\mathbf{A}}^\top \bar{\mathbf{A}} = \mathbf{A}^\top \mathbf{A} + \lambda \mathbf{I}_d$ and the definition of ridge leverage scores. ∎
Lemma 3.1 gives an alternate explanation for why the effective dimension shrinks as $\lambda$ increases, since the gap $d - d_\lambda$ is the sum of leverage scores for the $d$ augmented rows corresponding to the regularization terms.
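Lemma 3.1 and the leverage mass absorbed by the regularization rows are easy to verify numerically; the augmented matrix below follows Equation (5):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 15, 3, 0.5
A = rng.standard_normal((n, d))

# Augmented design matrix from Equation (5): stack sqrt(lam) * I under A.
A_bar = np.vstack([A, np.sqrt(lam) * np.eye(d)])

lev_bar = np.einsum('ij,jk,ik->i', A_bar, np.linalg.pinv(A_bar.T @ A_bar), A_bar)
tau = np.einsum('ij,jk,ik->i', A, np.linalg.pinv(A.T @ A + lam * np.eye(d)), A)

# Lemma 3.1: ridge leverage scores of A equal leverage scores of the first n rows of A_bar.
assert np.allclose(lev_bar[:n], tau)

# The leverage scores of A_bar sum to rank(A_bar) = d, so the gap d - d_lam
# is exactly the leverage mass on the d augmented rows.
assert np.isclose(lev_bar.sum(), d)
assert np.isclose(lev_bar[n:].sum(), d - tau.sum())
```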
Next we quantify approximate ridge leverage scores from a probability distribution point of view.
Definition 3.2.
The vector $\mathbf{p} \in \mathbb{R}_{\ge 0}^{n}$ is a $\beta$-overestimate for the ridge leverage score distribution of $\mathbf{A}$ if, for all $i \in [n]$, it satisfies
$$p_i \ge \beta \, \frac{\tau_i^{(\lambda)}(\mathbf{A})}{d_\lambda}.$$
Whenever $\beta$-overestimates are used in a leverage score-based sketching method, the sample complexity of the sketching algorithm increases by a factor of $1/\beta$. Therefore, we want to construct approximate distributions that minimize the maximum relative decrease in probability. For ridge leverage score sampling (in contrast with classical leverage scores), it is possible to have $\beta > 1$ due to a decrease in the effective dimension $d_\lambda$, which is beneficial since it means fewer samples are needed.

Now we define an augmented probability distribution, which will allow us to seamlessly apply sketching tools for ordinary least squares to ridge regression. The augmented distribution is constructed from a ridge leverage score overestimate, and one of its key properties is that we can sample from it in the same amount of time that we can sample from the overestimate distribution.
Definition 3.3.
Let $q$ denote the augmented distribution of the $\beta$-overestimate $\mathbf{p}$, where $\hat{d}_\lambda \le d_\lambda$ is a lower bound for the effective dimension of $\mathbf{A}$. The sample space of this distribution is the index set $[n + d]$, and its probability mass function is defined as
$$q(i) = \begin{cases} \dfrac{\hat{d}_\lambda}{\hat{d}_\lambda + d} \, p_i & \text{if } i \in [n], \\[4pt] \dfrac{1}{\hat{d}_\lambda + d} & \text{if } i \in [n + d] \setminus [n]. \end{cases}$$
Note that sampling from this augmented distribution does not require any information about the leverage scores for the $d$ additional rows in the augmented design matrix in Equation (5). To efficiently generate a sample from this distribution, we first flip a coin to branch on the two subsets of indices, and then we sample from each conditional distribution accordingly.
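A minimal sampler for this two-branch scheme might look as follows; the specific mixing weights between the original and augmented indices are an illustrative choice, and the helper name is ours:

```python
import numpy as np

def sample_augmented(p, n, d, d_hat, size, rng):
    """Draw indices from an augmented distribution: with probability
    d_hat / (d_hat + d) draw an original row index from p; otherwise draw one
    of the d augmented (regularization) rows uniformly at random."""
    coins = rng.random(size) < d_hat / (d_hat + d)
    orig = rng.choice(n, size=size, p=p)          # conditional draw over [n]
    aug = n + rng.integers(0, d, size=size)       # uniform draw over augmented rows
    return np.where(coins, orig, aug)

rng = np.random.default_rng(0)
n, d, d_hat = 100, 5, 3.0
p = np.ones(n) / n                                # any overestimate distribution works here

idx = sample_augmented(p, n, d, d_hat, 10000, rng)
assert 0 <= idx.min() and idx.max() < n + d
# Empirical mass on the augmented rows is close to d / (d_hat + d) = 0.625.
assert abs((idx >= n).mean() - d / (d_hat + d)) < 0.03
```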
Next we claim that the augmented distribution of a $\beta$-overestimate for a ridge leverage score distribution is a $\beta'$-overestimate for the leverage score distribution of the augmented design matrix $\bar{\mathbf{A}}$ in Equation (5). The proofs for all of the remaining results in this section are deferred to Appendix A.
Lemma 3.4.
If $\mathbf{p}$ is a $\beta$-overestimate for the ridge leverage score distribution of $\mathbf{A}$, then the probability vector for the distribution $q$ is a $\beta'$-overestimate for the leverage score distribution of $\bar{\mathbf{A}}$, where
$$\beta' = \min\left\{ \beta \, \frac{\hat{d}_\lambda}{d_\lambda}, \, 1 \right\} \cdot \frac{d}{\hat{d}_\lambda + d}. \quad (6)$$

This lemma is a simple consequence of the definitions of a $\beta$-overestimate and the effective dimension. Although Equation (6) may initially seem unwieldy since $d_\lambda$ could be difficult to compute, we can cleanly lower bound $\beta'$ using the fact that $\hat{d}_\lambda \le d_\lambda \le d$.
The next result follows immediately from the proof of Lemma 3.4. Observe that there is no explicit dependency on the effective dimension in this statement, as it gets cancelled out when using classical leverage scores as the overestimates.
Corollary 3.5.
The leverage scores of $\mathbf{A}$ are a $(d_\lambda / d)$-overestimate for the ridge leverage scores of $\mathbf{A}$. Therefore, the corresponding augmented distribution $q$ is a $\frac{\hat{d}_\lambda}{\hat{d}_\lambda + d}$-overestimate for the leverage scores of $\bar{\mathbf{A}}$.
We can always guarantee a constant-factor overestimate by using the statistical leverage scores as the overestimate for ridge leverage scores, since $\ell_i(\mathbf{A}) \ge \tau_i^{(\lambda)}(\mathbf{A})$ for every row, and by letting $\hat{d}_\lambda$ be any easily computable lower bound on $d_\lambda$. We choose to write the statements above, however, in terms of $d_\lambda$ since they perfectly recover leverage score overestimates for ordinary least squares when $\lambda = 0$ and the effective dimension is $d_\lambda = \operatorname{rank}(\mathbf{A})$.
We are now ready to present our approximate ridge regression algorithm and its guarantees. This algorithm constructs an augmented distribution from the provided overestimate and uses Lemma 3.4 together with several well-known sketching ideas from randomized numerical linear algebra Drineas et al. (2006, 2011); Woodruff (2014) to bound the number of row samples needed from the augmented design matrix in Equation (5). We explain how all of the leverage score sampling building blocks interact in Appendix A. In particular, we show that the number of samples defined on Line 3 of Algorithm 1 is sufficient for the sketch matrix to be a subspace embedding of the augmented least squares problem.
Theorem 3.6.
Algorithm 1 samples the number of rows defined on Line 3 and returns a vector $\tilde{\mathbf{x}} \in \mathbb{R}^d$ such that, with probability at least $1 - \delta$, we have
$$\|\mathbf{A}\tilde{\mathbf{x}} - \mathbf{b}\|_2^2 + \lambda \|\tilde{\mathbf{x}}\|_2^2 \le (1 + \varepsilon) \min_{\mathbf{x} \in \mathbb{R}^d} \Big( \|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2 + \lambda \|\mathbf{x}\|_2^2 \Big).$$
Let $T_{\text{sample}}$ denote the time complexity of sampling an index from $q$, and let $T_{\text{solve}}(s, d)$ be the time needed to solve a least squares problem with $s$ observations and $d$ variables. The running time of Algorithm 1 is $O(s \cdot T_{\text{sample}} + T_{\text{solve}}(s, d))$, where $s$ is the number of sampled rows.
One of the main takeaways from this result is that if we use leverage scores as the input overestimate, Corollary 3.5 ensures that a modest number of row samples suffices since the resulting $\beta'$ is bounded away from zero. This holds for any regularization strength $\lambda \ge 0$ since $\lambda = 0$ corresponds to full effective dimension and is in some sense the hardest type of ridge regression problem. Finally, we note that the running times in Theorem 3.6 are templated so that the result can easily be combined with fast sampling routines and modern least squares algorithms.
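To make the overall recipe concrete, here is a minimal sketch-and-solve routine in the spirit of (but not identical to) Algorithm 1: it samples rows of the augmented system using classical leverage scores as the overestimate, reweights them, and solves the small least squares problem. All names and parameter choices are illustrative:

```python
import numpy as np

def approx_ridge(A, b, lam, s, rng):
    """Sketch-and-solve ridge regression: sample s rows of [A; sqrt(lam) I]
    with probabilities derived from the classical leverage scores of A (an
    overestimate of the ridge scores), reweight by 1/sqrt(s*q), and solve the
    sketched ordinary least squares problem. Illustrative sketch only."""
    n, d = A.shape
    A_bar = np.vstack([A, np.sqrt(lam) * np.eye(d)])
    b_bar = np.concatenate([b, np.zeros(d)])

    U, _, _ = np.linalg.svd(A, full_matrices=False)
    lev = (U**2).sum(axis=1)                    # classical leverage scores of A
    q = np.concatenate([lev, np.ones(d)])       # uniform mass on the d augmented rows
    q /= q.sum()

    idx = rng.choice(n + d, size=s, p=q)
    w = 1.0 / np.sqrt(s * q[idx])               # importance-sampling reweighting
    x, *_ = np.linalg.lstsq(w[:, None] * A_bar[idx], w * b_bar[idx], rcond=None)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(5000)
lam = 1.0

x_exact = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
x_hat = approx_ridge(A, b, lam, s=1000, rng=rng)

cost = lambda x: np.sum((A @ x - b)**2) + lam * np.sum(x**2)
assert cost(x_hat) <= 1.1 * cost(x_exact)       # close to the optimal ridge objective
```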
4 Fast LowRank Tucker Decomposition
Now we use the ApproximateRidgeRegression algorithm to accelerate the core tensor update steps in the alternating least squares (ALS) algorithm for Tucker decompositions. To achieve this, we exploit the Kronecker product structure of the design matrix in the core tensor subproblem and use the leverage scores of the design matrix as an overestimate for the true ridge leverage scores. The leverage scores of a Kronecker product matrix factor cleanly into the product of leverage scores of its factor matrices, which allows us to sample from the augmented leverage score distribution in time that is sublinear in the number of rows of this design matrix. A similar technique was recently used in the context of ALS for tensor CP decompositions, where the leverage scores of a Kronecker product matrix were used as an overestimate for the leverage scores of a Khatri–Rao product design matrix Cheng et al. (2016); Larsen and Kolda (2020).
We start by presenting the ALS algorithm for Tucker decompositions and briefly analyze the amount of work in each step. To update factor matrix $\mathbf{A}^{(n)}$, it solves $I_n$ ridge regression problems, all of which share the same design matrix of size $\prod_{m \ne n} I_m \times R_n$. For each core tensor update, we solve a ridge regression problem whose design matrix has dimensions $\prod_{n=1}^{N} I_n \times \prod_{n=1}^{N} R_n$. Updating the core tensor is by far the most expensive step in an iteration of ALS, hence our motivation for making it faster via an approximate ridge regression subroutine.
4.1 Approximate Core Tensor Update
Next we explore the structure of ridge leverage scores for Kronecker product matrices and describe how to efficiently sample from the augmented leverage score distribution. The following result shows how the ridge leverage scores of a Kronecker product matrix decompose according to the SVDs of its factor matrices. In the special case of leverage scores (i.e., $\lambda = 0$), the expression completely factors, which we can exploit since it induces a product distribution. The proof of this result repeatedly uses the mixed-product property for Kronecker products with the pseudoinverse-based definition of ridge leverage scores in Equation (2). We defer the proofs of all results in this section to Appendix B.
Lemma 4.1.
Suppose $\mathbf{A} = \mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)} \otimes \cdots \otimes \mathbf{A}^{(N)}$, where each factor matrix $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times R_n}$, and let $i = (i_1, i_2, \dots, i_N)$ denote the canonical row indexing of $\mathbf{A}$ according to its factors. If $\mathbf{A}^{(n)} = \mathbf{U}^{(n)} \boldsymbol{\Sigma}^{(n)} \mathbf{V}^{(n)\top}$ is the SVD of $\mathbf{A}^{(n)}$, then for $\lambda \ge 0$, the cross ridge leverage scores of $\mathbf{A}$ are
$$\tau_{ij}^{(\lambda)}(\mathbf{A}) = \sum_{(r_1, \dots, r_N)} \frac{\prod_{n=1}^{N} \big(\sigma^{(n)}_{r_n}\big)^2}{\prod_{n=1}^{N} \big(\sigma^{(n)}_{r_n}\big)^2 + \lambda} \prod_{n=1}^{N} u^{(n)}_{i_n r_n} u^{(n)}_{j_n r_n},$$
where the sum is over the row index set $[R_1] \times [R_2] \times \cdots \times [R_N]$. Therefore, given the SVDs of the factor matrices, we can compute each cross ridge leverage score of $\mathbf{A}$ in $O(N \prod_{n=1}^{N} R_n)$ time. Furthermore, for (statistical) cross leverage scores, we have
$$\ell_{ij}(\mathbf{A}) = \prod_{n=1}^{N} \ell_{i_n j_n}\big(\mathbf{A}^{(n)}\big).$$
Now we present our fast sampling-based core tensor update. The algorithm first computes the leverage scores for each factor matrix, and then it initializes a data structure to query entries in the Kronecker product design matrix without explicitly constructing it (to avoid creating a memory bottleneck). Similarly, the algorithm then initializes a data structure to sample from the leverage score distribution of the design matrix by independently sampling the index for each dimension from the corresponding factor matrix leverage score distribution. This ensures that the time complexity for sampling all row indices is sublinear in the number of rows of the design matrix. Finally, the algorithm calls the approximate ridge regression subroutine with these data structures as implicit inputs and updates the core tensor accordingly.
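The product structure in the unregularized case, and the resulting ability to sample a row without ever forming the Kronecker product, can be illustrated as follows (variable names are ours):

```python
import numpy as np

def leverage(M):
    """Classical leverage scores: squared row norms of an orthonormal column basis."""
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return (U**2).sum(axis=1)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((6, 2)), rng.standard_normal((8, 3))
K = np.kron(A, B)                                  # 48 x 6 design matrix

# Leverage scores of a Kronecker product factor into a product distribution.
lA, lB = leverage(A), leverage(B)
assert np.allclose(leverage(K), np.kron(lA, lB))

# Sample a row of K in sublinear time: draw each factor index independently,
# then materialize only the single sampled row.
i = rng.choice(6, p=lA / lA.sum())
j = rng.choice(8, p=lB / lB.sum())
row = np.kron(A[i], B[j])
assert np.allclose(row, K[i * 8 + j])
```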
Theorem 4.2.
Let $\varepsilon, \delta \in (0, 1)$. Algorithm 3 gives a $(1+\varepsilon)$-approximation to the optimal core tensor weights with probability at least $1 - \delta$, in time that is polynomial in the multilinear rank $(R_1, \dots, R_N)$ and the sketching parameters $1/\varepsilon$ and $\log(1/\delta)$ (with the leading term governed by the matrix multiplication exponent $\omega$), and uses space proportional to the sketched subproblem rather than the full design matrix.
The approximation and running time guarantees in Theorem 4.2 follow primarily from Corollary 3.5 and Theorem 3.6, which show that leverage scores are a sample-efficient overestimate for any ridge regression problem. One appealing consequence of an approximate core tensor update whose running time is predominantly a function of the multilinear rank and sketching parameters (as opposed to the size of the input tensor $\mathcal{X}$) is that the max block improvement (MBI) algorithm Chen et al. (2012) becomes feasible for a much larger set of problems, since ALS is a special case of block coordinate descent.
4.2 Missing Data and Ridge Leverage Score Upper Bounds
Now we take a step towards the more general tensor completion problem, where the goal is to learn a tensor decomposition that fits the observed entries well and also generalizes to unseen data. For Tucker decompositions and the ALS algorithm, this corresponds to removing rows of the Kronecker product design matrix (i.e., observations in the input tensor) before updating the core. To apply samplingbased sketching techniques in this setting and accelerate the core tensor update, we need to understand how the ridge leverage scores of a matrix change as its rows are removed. In particular, we need to give accurate upper bounds for the ridge leverage scores of the matrix with removed rows. Our first result shows how ridge leverage scores increase as the rows of the matrix are removed.
Theorem 4.3.
Let $\mathbf{T}^{(\lambda)} = \mathbf{A}(\mathbf{A}^\top \mathbf{A} + \lambda \mathbf{I})^{\dagger} \mathbf{A}^\top$ be the cross ridge leverage score matrix of $\mathbf{A} \in \mathbb{R}^{n \times d}$. For any $S \subseteq [n]$, let $\mathbf{A}_S$ denote the matrix containing only the rows of $\mathbf{A}$ indexed by $S$. Let $\bar{S} = [n] \setminus S$ denote the set of missing row indices, and let $\mathbf{T}^{(\lambda)}_{\bar{S}}$ be the principal submatrix of $\mathbf{T}^{(\lambda)}$ containing only the entries indexed by $\bar{S} \times \bar{S}$. Then, for any $\lambda > 0$ and $i, j \in S$, we have
$$\tau_{ij}^{(\lambda)}(\mathbf{A}_S) \le \tau_{ij}^{(\lambda)}(\mathbf{A}) + \frac{\sqrt{\sum_{k \in \bar{S}} \tau_{ik}^{(\lambda)}(\mathbf{A})^2} \, \sqrt{\sum_{k \in \bar{S}} \tau_{jk}^{(\lambda)}(\mathbf{A})^2}}{1 - \lambda_{\max}\big(\mathbf{T}^{(\lambda)}_{\bar{S}}\big)},$$
where $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue of a matrix.

The proof relies on the Woodbury matrix identity and the Cauchy interlacing theorem, and provides useful insight into the role that the cross ridge leverage score $\tau_{ij}^{(\lambda)}(\mathbf{A})$ plays when row $j$ is removed but row $i$ remains. We also generalize a well-known property of cross leverage scores, which allows us to sample from a product distribution in the tensor completion setting when combined with Theorem 4.3.
Lemma 4.4.
For any $\mathbf{A} \in \mathbb{R}^{n \times d}$ and $i \in [n]$, the sum of the squared cross ridge leverage scores is at most the ridge leverage score itself, with equality iff $\lambda = 0$. Concretely,
$$\sum_{j=1}^{n} \tau_{ij}^{(\lambda)}(\mathbf{A})^2 \le \tau_i^{(\lambda)}(\mathbf{A}).$$
These results imply that the ridge leverage scores of $\mathbf{A}$ are a $\beta$-overestimate for the ridge leverage scores of $\mathbf{A}_S$, where $\beta$ is controlled by the spectral gap $1 - \lambda_{\max}(\mathbf{T}^{(\lambda)}_{\bar{S}})$. We discuss how this affects the sample complexity for sketching in Appendix B (e.g., if the factor matrices are semi-orthogonal, then $\lambda_{\max}(\mathbf{T}^{(\lambda)}_{\bar{S}})$ can be bounded directly).
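Lemma 4.4 and the fact that ridge leverage scores only grow under row removal are simple to check numerically (an illustrative sketch, with $\lambda = 0$ giving equality in Lemma 4.4):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 4))

for lam in (0.0, 0.5, 3.0):
    # Cross ridge leverage score matrix T; its diagonal holds the ridge scores.
    T = A @ np.linalg.pinv(A.T @ A + lam * np.eye(4)) @ A.T
    row_sq = (T**2).sum(axis=1)                 # sum_j tau_ij^2 = diag(T @ T)
    assert np.all(row_sq <= np.diag(T) + 1e-9)  # Lemma 4.4
    if lam == 0.0:
        # T is a projection when lam = 0, so equality holds.
        assert np.allclose(row_sq, np.diag(T))

# After the loop, T corresponds to lam = 3.0. Removing rows makes A_S^T A_S
# smaller in the PSD order, so the remaining ridge leverage scores only increase.
S = np.arange(8)
tau_full = np.diag(T)[S]
T_S = A[S] @ np.linalg.pinv(A[S].T @ A[S] + 3.0 * np.eye(4)) @ A[S].T
assert np.all(np.diag(T_S) >= tau_full - 1e-9)
```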
5 Experiments
We compare the performance of ALS with and without the FastCoreUpdate algorithm by computing low-rank Tucker decompositions of large, dense synthetic tensors and real-world movie data from the TensorSketch experiments of Malik and Becker (2018). Our experiments build on TensorLy Kossaifi et al. (2019) and were run on an Intel Xeon W-2135 processor (8.25 MB cache and 3.70 GHz) using 128 GB of RAM. We refer to the ALS algorithm that uses ridge leverage score sampling for the core as ALS-RS.
Synthetic Benchmarks.
First we stress test ALS and ALS-RS on large, dense tensors. We consider various low-rank Tucker decompositions and show that the running time to update the core in ALS-RS remains fixed as a function of the core size, without any loss in solution accuracy. These profiles further illustrate that the core tensor update is a severe bottleneck in an unmodified ALS algorithm.
Specifically, for each input shape we generate a random Tucker decomposition with an (8, 8, 8) core tensor. All of the entries in the factor matrices and core tensor are i.i.d. uniform random variables from $[0, 1]$. We add Gaussian noise from $\mathcal{N}(0, 1)$ to one percent of the entries in this Tucker tensor, and call the resulting tensor $\mathcal{X}$. The noise gives a lower bound of 0.10 for the RMSE when learning $\mathcal{X}$. Next we initialize a random Tucker decomposition with a smaller core size and fit it to $\mathcal{X}$ using ALS and ALS-RS, both starting from the same initial state. For each input shape and multilinear rank of the learned Tucker decomposition, we run ALS and ALS-RS for 10 iterations and report the mean running time of each step type in Table 1. All of the experiments use the same regularization strength $\lambda$, and ALS-RS uses fixed sketching parameters $\varepsilon$ and $\delta$ throughout. The columns labeled F1, F2, and F3 correspond to factor matrix updates.

Input Shape  Rank  F1 (s)  F2 (s)  F3 (s)  ALS Core (s)  ALS RMSE  ALS-RS Core (s)  ALS-RS RMSE
(512, 512, 512)  (2, 2, 2)  2.0  2.0  0.3  7.5  0.364  3.9  0.364 
(4, 2, 2)  2.1  2.0  0.3  13.3  0.363  9.4  0.363  
(4, 4, 2)  2.1  2.1  0.3  24.8  0.328  23.1  0.328  
(4, 4, 4)  2.1  2.1  0.4  48.1  0.284  54.7  0.284  
(1024, 512, 512)  (2, 2, 2)  4.5  4.4  0.6  15.4  0.392  3.9  0.392 
(4, 2, 2)  4.6  4.4  0.7  26.9  0.387  9.3  0.387  
(4, 4, 2)  4.7  4.6  0.7  50.3  0.342  22.7  0.342  
(4, 4, 4)  4.7  4.6  0.9  96.8  0.300  53.9  0.300  
(1024, 1024, 512)  (2, 2, 2)  13.3  13.2  1.3  35.0  0.425  3.9  0.425 
(4, 2, 2)  13.4  13.0  1.3  58.0  0.413  9.3  0.413  
(4, 4, 2)  13.7  13.7  1.4  104.0  0.382  22.7  0.382  
(4, 4, 4)  13.7  13.7  1.9  196.9  0.321  54.0  0.321  
(1024, 1024, 1024)  (2, 2, 2)  27.7  27.6  2.7  67.9  0.406  3.9  0.406 
(4, 2, 2)  28.8  27.6  2.7  110.9  0.403  9.3  0.403  
(4, 4, 2)  28.9  28.8  2.8  196.3  0.355  22.8  0.356  
(4, 4, 4)  28.5  28.3  3.8  367.1  0.294  54.0  0.294 
We draw several conclusions from Table 1. First, by using ridge leverage score sampling, ALS-RS guarantees fast, high-quality core tensor updates while solving a substantially smaller ridge regression problem in each iteration. Comparing the core columns of ALS and ALS-RS at fixed ranks as the input shape increases clearly demonstrates this. Second, ALS and ALS-RS converge to the same local optimum for this data, which is evident from the RMSE columns since both start from the same seed.
Movie Experiments.
Now we use ALS and ALS-RS to compute low-rank Tucker decompositions of a video of a nature scene. Malik and Becker (2018) used their TensorSketch algorithm for Tucker decompositions on this data since most frames are very similar, except for a couple of disturbances when a person walks by. Thus, the video is expected to be compressible. It consists of 2493 frames, each an image of size (1080, 1920, 3) corresponding to height, width, and the RGB channels. Similar to the experiments in Malik and Becker (2018), we construct tensors of shape (100, 1080, 1920, 3) from 100-frame slices of the video for our comparison. We again use the same regularization strength $\lambda$ and the same sketching parameters $\varepsilon$ and $\delta$ for ALS-RS as in the synthetic benchmarks.
We use a Tucker decomposition with a core of shape (2, 4, 4, 2), and we present the mean running times for each update step type over 5 iterations, after which both algorithms have converged. The update steps of ALS took (8.8, 3.1, 8.8, 163.1, 9935.5) seconds, corresponding to F1, F2, F3, F4, and the core update. In contrast, ALS-RS took (9.2, 2.7, 9.3, 221.6, 85.8) seconds per step type. Both algorithms converge to 0.188 RMSE, and at each step of every iteration, the RMSEs of the two solutions differed in absolute value by at most 2.98e-5. Hence, they converged along the same paths. Using ALS-RS in this experiment sped up the core tensor update by more than a factor of 100 and shifted the performance bottleneck to the factor matrix update corresponding to the RGB channel.
6 Conclusion
This work accelerates ALS for regularized low-rank Tucker decompositions by using ridge leverage score sampling in the core tensor update. Specifically, it introduces a new approximate ridge regression algorithm that augments classical leverage score distributions, and it begins to explore properties of ridge leverage scores in a dynamic setting. These results also immediately extend the leverage score sampling-based CP decomposition algorithms in Cheng et al. (2016); Larsen and Kolda (2020) to support L2 regularization. Lastly, our synthetic and real-world experiments demonstrate that this approximate core tensor update leads to substantial improvements in the running time of ALS for low-rank Tucker decompositions while preserving the original solution quality of ALS.
Acknowledgements
MG was supported in part by the Institute for Data Engineering and Science (IDEaS) and Transdisciplinary Research Institute for Advancing Data Science (TRIAD) Research Scholarships for Ph.D. students and postdocs. Part of this work was done while he was a summer intern at Google Research.
References
 Acar et al. (2011) Acar, E., Dunlavy, D. M., Kolda, T. G., and Mørup, M. (2011). Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 106(1):41–56.
 Alaoui and Mahoney (2015) Alaoui, A. and Mahoney, M. W. (2015). Fast randomized kernel ridge regression with statistical guarantees. In Advances in Neural Information Processing Systems, volume 28.
 Alman and Williams (2021) Alman, J. and Williams, V. V. (2021). A refined laser method and faster matrix multiplication. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 522–539. SIAM.
 Chen et al. (2012) Chen, B., He, S., Li, Z., and Zhang, S. (2012). Maximum block improvement and polynomial optimization. SIAM Journal on Optimization, 22(1):87–107.
 Cheng et al. (2016) Cheng, D., Peng, R., Liu, Y., and Perros, I. (2016). SPALS: Fast alternating least squares via implicit leverage scores sampling. Advances in Neural Information Processing Systems, 29:721–729.
 Chowdhury et al. (2018) Chowdhury, A., Yang, J., and Drineas, P. (2018). An iterative, sketchingbased framework for ridge regression. In International Conference on Machine Learning, pages 989–998. PMLR.
 Cohen et al. (2015) Cohen, M. B., Lee, Y. T., Musco, C., Musco, C., Peng, R., and Sidford, A. (2015). Uniform sampling for matrix approximation. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 181–190.
 Cohen et al. (2017) Cohen, M. B., Musco, C., and Musco, C. (2017). Input sparsity time low-rank approximation via ridge leverage score sampling. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1758–1777. SIAM.
 Comon et al. (2009) Comon, P., Luciani, X., and De Almeida, A. L. (2009). Tensor decompositions, alternating least squares and other tales. Journal of Chemometrics: A Journal of the Chemometrics Society, 23(78):393–405.
 Diao et al. (2019) Diao, H., Jayaram, R., Song, Z., Sun, W., and Woodruff, D. (2019). Optimal sketching for kronecker product regression and low rank approximation. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Diao et al. (2018) Diao, H., Song, Z., Sun, W., and Woodruff, D. (2018). Sketching for Kronecker product regression and P-splines. In International Conference on Artificial Intelligence and Statistics, pages 1299–1308. PMLR.
 Drineas et al. (2006) Drineas, P., Kannan, R., and Mahoney, M. W. (2006). Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1):132–157.
 Drineas et al. (2011) Drineas, P., Mahoney, M. W., Muthukrishnan, S., and Sarlós, T. (2011). Faster least squares approximation. Numerische Mathematik, 117(2):219–249.
 Filipović and Jukić (2015) Filipović, M. and Jukić, A. (2015). Tucker factorization with missing data with application to low-rank tensor completion. Multidimensional Systems and Signal Processing, 26(3):677–692.
 Frandsen and Ge (2020) Frandsen, A. and Ge, R. (2020). Optimization landscape of Tucker decomposition. Mathematical Programming, pages 1–26.
 Ge et al. (2015) Ge, R., Huang, F., Jin, C., and Yuan, Y. (2015). Escaping from saddle points—Online stochastic gradient for tensor decomposition. In Conference on Learning Theory, pages 797–842. PMLR.
 Grasedyck et al. (2015) Grasedyck, L., Kluge, M., and Kramer, S. (2015). Variants of alternating least squares tensor completion in the tensor train format. SIAM Journal on Scientific Computing, 37(5):A2424–A2450.
 Hillar and Lim (2013) Hillar, C. J. and Lim, L.-H. (2013). Most tensor problems are NP-hard. Journal of the ACM (JACM), 60(6):1–39.

Jain et al. (2013) Jain, P., Netrapalli, P., and Sanghavi, S. (2013). Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 665–674.
 Jain and Oh (2014) Jain, P. and Oh, S. (2014). Provable tensor factorization with missing data. arXiv preprint arXiv:1406.2784.
 Kasai and Mishra (2016) Kasai, H. and Mishra, B. (2016). Low-rank tensor completion: a Riemannian manifold preconditioning approach. In International Conference on Machine Learning, pages 1012–1021. PMLR.
 Kolda and Bader (2009) Kolda, T. G. and Bader, B. W. (2009). Tensor decompositions and applications. SIAM Review, 51(3):455–500.
 Kossaifi et al. (2019) Kossaifi, J., Panagakis, Y., Anandkumar, A., and Pantic, M. (2019). Tensorly: Tensor learning in Python. Journal of Machine Learning Research (JMLR), 20(26).
 Kressner et al. (2014) Kressner, D., Steinlechner, M., and Vandereycken, B. (2014). Lowrank tensor completion by Riemannian optimization. BIT Numerical Mathematics, 54(2):447–468.
 Larsen and Kolda (2020) Larsen, B. W. and Kolda, T. G. (2020). Practical leveragebased sampling for lowrank tensor decomposition. arXiv preprint arXiv:2006.16438.
 Li et al. (2013) Li, M., Miller, G. L., and Peng, R. (2013). Iterative row sampling. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pages 127–136. IEEE.
 Li et al. (2019) Li, Z., Ton, J.F., Oglic, D., and Sejdinovic, D. (2019). Towards a unified analysis of random Fourier features. In International Conference on Machine Learning, pages 3905–3914. PMLR.
 Liu and Moitra (2020) Liu, A. and Moitra, A. (2020). Tensor completion made practical. In Advances in Neural Information Processing Systems, volume 33, pages 18905–18916.
 Liu et al. (2012) Liu, J., Musialski, P., Wonka, P., and Ye, J. (2012). Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):208–220.
 Malik and Becker (2018) Malik, O. A. and Becker, S. (2018). Low-rank Tucker decomposition of large tensors using TensorSketch. Advances in Neural Information Processing Systems, 31:10096–10106.
 McCurdy (2018) McCurdy, S. (2018). Ridge regression and provable deterministic ridge leverage score sampling. In Advances in Neural Information Processing Systems, volume 31.
 Musco and Musco (2017) Musco, C. and Musco, C. (2017). Recursive sampling for the Nyström method. In Advances in Neural Information Processing Systems, volume 30.
 Nimishakavi et al. (2018) Nimishakavi, M., Jawanpuria, P. K., and Mishra, B. (2018). A dual framework for low-rank tensor completion. In Advances in Neural Information Processing Systems, volume 31.
 Rabanser et al. (2017) Rabanser, S., Shchur, O., and Günnemann, S. (2017). Introduction to tensor decompositions and their applications in machine learning. arXiv preprint arXiv:1711.10781.
 Sidiropoulos et al. (2017) Sidiropoulos, N. D., De Lathauwer, L., Fu, X., Huang, K., Papalexakis, E. E., and Faloutsos, C. (2017). Tensor decomposition for signal processing and machine learning. IEEE Transactions on Signal Processing, 65(13):3551–3582.
 Traore et al. (2019) Traore, A., Berar, M., and Rakotomamonjy, A. (2019). Singleshot: A scalable Tucker tensor decomposition. In Advances in Neural Information Processing Systems, volume 32.
 Woodruff (2014) Woodruff, D. P. (2014). Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1–2):1–157.
 Yu and Liu (2016) Yu, R. and Liu, Y. (2016). Learning from multiway data: Simple and efficient tensor regression. In International Conference on Machine Learning, pages 373–381. PMLR.
 Zhou et al. (2013) Zhou, H., Li, L., and Zhu, H. (2013). Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502):540–552.
Appendix A Missing Analysis from Section 3
Here we show how to use overestimates for the ridge leverage scores of a design matrix to create a substantially smaller ordinary least squares problem whose solution vector gives a $(1+\varepsilon)$-approximation to the original ridge regression problem. Our proof relies on several sketching and leverage score sampling results from randomized numerical linear algebra Drineas et al. (2006, 2011); Woodruff (2014). The leverage score sampling part of our argument was recently organized all in one place in (Larsen and Kolda, 2020, Appendix B), as these prerequisite results are well-understood but scattered across many references.
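To make the reduction concrete, the following minimal sketch (illustrative sizes, seed, and sample count; not the paper's ApproximateRidgeRegression algorithm) samples rows of a design matrix proportionally to its exact ridge leverage scores and compares the subsampled ridge solution against the exact one. The subsampled problem is itself a small ordinary least squares problem on the augmented matrix $[SA;\,\sqrt{\lambda} I]$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 2000, 10, 0.1
A = rng.standard_normal((n, d)) @ np.diag(np.logspace(0, -2, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Ridge leverage scores: tau_i = a_i^T (A^T A + lam * I)^{-1} a_i.
G = A.T @ A + lam * np.eye(d)
tau = np.einsum("ij,jk,ik->i", A, np.linalg.inv(G), A)
p = tau / tau.sum()  # sampling distribution induced by the scores

s = 400  # number of sampled rows (illustrative)
idx = rng.choice(n, size=s, p=p)
scale = 1.0 / np.sqrt(s * p[idx])  # rescale so E[(SA)^T (SA)] = A^T A
SA, Sb = scale[:, None] * A[idx], scale * b[idx]

# Small ridge problem on (s x d) data vs. the exact ridge solution.
x_exact = np.linalg.solve(G, A.T @ b)
x_approx = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

obj = lambda x: np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x) ** 2
ratio = obj(x_approx) / obj(x_exact)
print(f"objective ratio: {ratio:.4f}")  # close to 1 when s >> d
```

Since the exact solution minimizes the ridge objective, the ratio is at least 1, and it approaches 1 as the number of sampled rows grows.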
A.1 Fast Approximate Least Squares
We start by considering the overdetermined least squares problem defined by a matrix $A \in \mathbb{R}^{n \times d}$ and response vector $b \in \mathbb{R}^{n}$, where $n > d$. Define the optimal sum of squared residuals to be
$$\mathcal{Z}^2 = \min_{x \in \mathbb{R}^d} \|Ax - b\|_2^2. \tag{7}$$
Assume for now that $A$ is full rank, and let the compact SVD of the design matrix be $A = U \Sigma V^\top$, where $U \in \mathbb{R}^{n \times d}$, $\Sigma \in \mathbb{R}^{d \times d}$, and $V \in \mathbb{R}^{d \times d}$. By definition, $U$ is an orthonormal basis for the column space of $A$. Let $U^\perp \in \mathbb{R}^{n \times (n-d)}$ be an orthonormal basis for the $(n-d)$-dimensional subspace that is orthogonal to the column space of $A$. For notational simplicity, let $b^\perp = U^\perp (U^\perp)^\top b$ denote the projection of $b$ onto the orthogonal subspace. The vector $b^\perp$ is important because its norm is equal to the norm of the residual vector. To see this, observe that $x$ can be chosen so that $Ax$ perfectly matches the part of $b$ in the column space of $A$, but $Ax$ cannot (by definition) match anything in the range of $U^\perp$:
$$\mathcal{Z}^2 = \min_{x \in \mathbb{R}^d} \left\|Ax - \left(UU^\top b + b^\perp\right)\right\|_2^2 = \|b^\perp\|_2^2.$$
We denote the solution to the least squares problem by $x^* = \operatorname*{argmin}_{x \in \mathbb{R}^d} \|Ax - b\|_2$, hence we have $\mathcal{Z} = \|Ax^* - b\|_2 = \|b^\perp\|_2$.
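As a quick numerical check of this identity (the matrix and response below are arbitrary illustrative data), the norm of the projection of the response onto the orthogonal complement of the column space matches the optimal residual norm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 4
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Columns of U form an orthonormal basis for the column space of A.
U, _, _ = np.linalg.svd(A, full_matrices=False)

# b_perp = (I - U U^T) b is the part of b that no vector Ax can reach.
b_perp = b - U @ (U.T @ b)

# The optimal residual norm equals the norm of b_perp.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_star - b), np.linalg.norm(b_perp))
```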
Now we build on a structural result of Drineas et al. (2011, Lemma 1) that establishes sufficient conditions on any sketching matrix $S \in \mathbb{R}^{m \times n}$ such that the solution $\tilde{x}$ to the approximate least squares problem
$$\tilde{x} = \operatorname*{argmin}_{x \in \mathbb{R}^d} \|S(Ax - b)\|_2 \tag{8}$$
gives a relative-error approximation to the original least squares problem. The two conditions that we require of the matrix $S$ are:
$$\sigma_{\min}^2(SU) \geq 1/\sqrt{2}; \tag{9}$$
$$\|(SU)^\top S b^\perp\|_2^2 \leq \tfrac{\varepsilon}{2}\,\mathcal{Z}^2 \tag{10}$$
for some $\varepsilon \in (0,1)$. Note that while the ApproximateRidgeRegression algorithm is randomized, the following lemma is a deterministic statement. Failure probabilities will enter our analysis later when we show that this algorithm satisfies conditions Equation 9 and Equation 10 with sufficiently high probability.
Lemma A.1 (Drineas et al. (2011)).
Consider the overconstrained least squares approximation problem in Equation 7, and let the matrix $U \in \mathbb{R}^{n \times d}$ contain the top $d$ left singular vectors of $A$. Assume the matrix $S$ satisfies conditions Equation 9 and Equation 10 above, for some $\varepsilon \in (0,1)$. Then, the solution $\tilde{x}$ to the approximate least squares problem Equation 8 satisfies:
$$\|A\tilde{x} - b\|_2^2 \leq (1+\varepsilon)\,\mathcal{Z}^2; \tag{11}$$
$$\|x^* - \tilde{x}\|_2^2 \leq \frac{\varepsilon}{\sigma_{\min}^2(A)}\,\mathcal{Z}^2. \tag{12}$$
Proof.
Let us first rewrite the sketched least squares problem induced by $S$ as
$$\min_{x \in \mathbb{R}^d} \|S(Ax - b)\|_2^2 = \min_{y \in \mathbb{R}^d} \left\|S\big(A(x^* + V\Sigma^{-1}y) - b\big)\right\|_2^2 \tag{13}$$
$$= \min_{y \in \mathbb{R}^d} \|SUy - Sb^\perp\|_2^2. \tag{14}$$
Equation 13 follows since $V\Sigma^{-1}$ is invertible, so $x = x^* + V\Sigma^{-1}y$ ranges over all of $\mathbb{R}^d$, and Equation 14 follows because the columns of $A$ span the same subspace as the columns of $U$ and $Ax^* - b = -b^\perp$. Now, let $y^* \in \mathbb{R}^d$ be such that $Uy^* = A(\tilde{x} - x^*)$, and note that $y^*$ minimizes Equation 14. This fact follows from
$$\|SUy^* - Sb^\perp\|_2^2 = \left\|S\big(A\tilde{x} - Ax^* - b^\perp\big)\right\|_2^2 = \|S(A\tilde{x} - b)\|_2^2.$$
Thus, by the normal equations, we have
$$(SU)^\top SU y^* = (SU)^\top S b^\perp.$$
Taking the norm of both sides and observing that under condition Equation 9 we have $\sigma_i^2(SU) \geq 1/\sqrt{2}$, for all $i \in [d]$, it follows that
$$\|y^*\|_2^2\,/\,2 \leq \sigma_{\min}^4(SU)\,\|y^*\|_2^2 \leq \|(SU)^\top SU y^*\|_2^2. \tag{15}$$
Using condition Equation 10, we observe that
$$\|y^*\|_2^2 \leq 2\,\|(SU)^\top S b^\perp\|_2^2 \leq \varepsilon\,\mathcal{Z}^2. \tag{16}$$
To establish the first claim of the lemma, let us rewrite the squared norm of the residual vector as
$$\|A\tilde{x} - b\|_2^2 = \|A\tilde{x} - Ax^*\|_2^2 + \|Ax^* - b\|_2^2 \tag{17}$$
$$= \|y^*\|_2^2 + \mathcal{Z}^2 \tag{18}$$
$$\leq (1+\varepsilon)\,\mathcal{Z}^2, \tag{19}$$
where Equation 17 follows from the Pythagorean theorem since $Ax^* - b = -b^\perp$, which is orthogonal to the column space of $A$, and consequently to $A\tilde{x} - Ax^*$; Equation 18 follows from the definition of $y^*$ and $\mathcal{Z}$, together with the orthonormality of the columns of $U$; and Equation 19 follows from Equation 16.
To establish the second claim of the lemma, recall that $x^* - \tilde{x} = -V\Sigma^{-1}y^*$. Taking the norm of both sides of this expression, we have that
$$\|x^* - \tilde{x}\|_2^2 \leq \frac{\|y^*\|_2^2}{\sigma_{\min}^2(A)} \tag{20}$$
$$\leq \frac{\varepsilon}{\sigma_{\min}^2(A)}\,\mathcal{Z}^2, \tag{21}$$
where Equation 20 follows since $\sigma_{\min}(A)$ is the smallest singular value of $A$ and the columns of $V$ are orthonormal; and Equation 21 follows from Equation 16. ∎
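The lemma can also be checked numerically. In the sketch below (illustrative sizes and seed; exact leverage score sampling, defined formally in the next subsection, plays the role of $S$), the sampled matrix is well-conditioned on the column space of the design matrix, in the spirit of condition Equation 9, and the sketched solution's residual is close to optimal, as in the conclusion Equation 11.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, s = 5000, 8, 2000
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

U, _, _ = np.linalg.svd(A, full_matrices=False)
lev = np.sum(U ** 2, axis=1)  # exact leverage scores, summing to d
p = lev / lev.sum()

idx = rng.choice(n, size=s, p=p)
w = 1.0 / np.sqrt(s * p[idx])  # rescaling so E[(SU)^T (SU)] = I

# Smallest singular value of SU stays bounded away from zero.
smin = np.linalg.svd(w[:, None] * U[idx], compute_uv=False).min()

# The sketched solution nearly minimizes the original residual.
SA, Sb = w[:, None] * A[idx], w * b[idx]
x_tilde, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
Z = np.linalg.norm(A @ x_star - b)
r2 = (np.linalg.norm(A @ x_tilde - b) / Z) ** 2
print(f"sigma_min^2(SU) = {smin ** 2:.3f}, residual ratio^2 = {r2:.4f}")
```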
A.2 Tools for Satisfying the Structural Conditions
Next we present two results that will be useful for proving that the sketching matrix $S$ satisfies the structural conditions in Equations 9 and 10. The first result is (Woodruff, 2014, Theorem 17), which states that $S$ is a subspace embedding for the column space of $A$. This result can be thought of as an approximate isometry and is noticeably stronger than the desired condition $\sigma_{\min}^2(SU) \geq 1/\sqrt{2}$.
Theorem A.2 (Woodruff (2014)).
Consider $A \in \mathbb{R}^{n \times d}$ and its compact SVD $A = U\Sigma V^\top$. For any $\beta \in (0,1]$, let $p \in \mathbb{R}^n$ be a $\beta$-overestimate for the leverage score distribution of $A$, i.e., $p_i \geq \beta\,\ell_i(A)/d$ for all $i \in [n]$. Let $s = \Theta(\beta^{-1}\varepsilon^{-2}\,d\log(d/\delta))$. Construct a sampling matrix $\Omega \in \mathbb{R}^{n \times s}$ and rescaling matrix $D \in \mathbb{R}^{s \times s}$ as follows. Initially, let $\Omega = 0^{n \times s}$ and $D = 0^{s \times s}$. For each column $j \in [s]$ of $\Omega$ and $D$, independently, and with replacement, choose a row index $i \in [n]$ with probability $p_i$, and set $\Omega_{ij} = 1$ and $D_{jj} = 1/\sqrt{p_i s}$. Then with probability at least $1 - \delta$, simultaneously for all $i \in [d]$, we have
$$1 - \varepsilon \leq \sigma_i^2\big(D\Omega^\top U\big) \leq 1 + \varepsilon.$$
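A direct (illustrative) implementation of this construction, writing $\Omega$ and $D$ for the sampling and rescaling matrices and using a $\beta$-overestimate obtained by mixing the exact leverage scores with the uniform distribution, lets one check that the squared singular values of $D\Omega^\top U$ concentrate around 1. All dimensions and the sample count below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, s = 3000, 5, 1500
A = rng.standard_normal((n, d))
U, _, _ = np.linalg.svd(A, full_matrices=False)

lev = np.sum(U ** 2, axis=1)  # exact leverage scores, summing to d
# A beta-overestimate with beta = 1/2: mix the scores with uniform.
p = 0.5 * lev / d + 0.5 / n

# Sampling matrix Omega (n x s) and rescaling matrix D (s x s).
Omega = np.zeros((n, s))
D = np.zeros((s, s))
for j in range(s):
    i = rng.choice(n, p=p)
    Omega[i, j] = 1.0
    D[j, j] = 1.0 / np.sqrt(p[i] * s)

# All squared singular values of D Omega^T U should be close to 1.
sv2 = np.linalg.svd(D @ Omega.T @ U, compute_uv=False) ** 2
print(sv2.min(), sv2.max())
```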
To prove the second structural condition, we use the following result about squared-distance sampling for approximate matrix multiplication in (Drineas et al., 2006, Lemma 8). In our analysis for leverage score sampling-based ridge regression, it is possible (and beneficial) that $\beta > 1$ in the statement below. Therefore, we modify the original theorem statement and provide a proof to show that the result is unaffected.
Theorem A.3 (Drineas et al. (2006)).
Let $A \in \mathbb{R}^{n \times d}$, $B \in \mathbb{R}^{n \times p}$, and let $s \in \mathbb{N}$ denote the number of samples. Let $q \in \mathbb{R}^n$ be a probability vector such that, for all $k \in [n]$, we have
$$q_k \geq \beta\,\frac{\|A_{(k)}\|_2^2}{\|A\|_F^2}$$
for some constant $\beta > 0$, where $A_{(k)}$ denotes the $k$-th row of $A$. Sample $s$ row indices $i_1, \dots, i_s$ from $q$, independently and with replacement, and form the approximate product
$$\frac{1}{s}\sum_{t=1}^{s}\frac{1}{q_{i_t}}\,A_{(i_t)}^\top B_{(i_t)}.$$