
Decomposing Overcomplete 3rd Order Tensors using Sum-of-Squares Algorithms
Tensor rank and low-rank tensor decompositions have many applications in...
04/21/2015 ∙ by Rong Ge, et al.

Randomized CP Tensor Decomposition
The CANDECOMP/PARAFAC (CP) tensor decomposition is a popular dimensional...
03/27/2017 ∙ by N. Benjamin Erichson, et al.

Fast online low-rank tensor subspace tracking by CP decomposition using recursive least squares from incomplete observations
We consider the problem of online subspace tracking of a partially obser...
09/29/2017 ∙ by Hiroyuki Kasai, et al.

Near-optimal sample complexity for convex tensor completion
We analyze low-rank tensor completion (TC) using noisy measurements of a...
11/14/2017 ∙ by Navid Ghadermarzy, et al.

Vectorial Dimension Reduction for Tensors Based on Bayesian Inference
Dimensionality reduction for high-order tensors is a challenging problem...
07/03/2017 ∙ by Fujiao Ju, et al.

Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors
We consider two problems that arise in machine learning applications: th...
12/08/2015 ∙ by Samuel B. Hopkins, et al.

Low-rank Random Tensor for Bilinear Pooling
Bilinear pooling is capable of extracting high-order information from da...
06/03/2019 ∙ by Yan Zhang, et al.

Near Optimal Sketching of Low-Rank Tensor Regression
We study the least squares regression problem min_{Θ ∈ S_{D,R}} ‖AΘ − b‖_2, where S_{D,R} is the set of Θ for which Θ = ∑_{r=1}^{R} θ_1^{(r)} ∘ ... ∘ θ_D^{(r)} for vectors θ_d^{(r)} ∈ R^{p_d} for all r ∈ [R] and d ∈ [D], and ∘ denotes the outer product of vectors. That is, Θ is a low-dimensional, low-rank tensor. This is motivated by the fact that the number of parameters in Θ is only R · ∑_{d=1}^{D} p_d, which is significantly smaller than the ∏_{d=1}^{D} p_d parameters of ordinary least squares regression. We consider the above CP decomposition model of tensors Θ, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections Φ ∈ R^{m × n}, with m ≪ n, to reduce the problem to a much smaller problem min_Θ ‖ΦAΘ − Φb‖_2, for which, if Θ′ is a near-optimum of the smaller problem, then it is also a near-optimum of the original problem. We obtain significantly smaller dimension and sparsity in Φ than is possible for ordinary least squares regression, and we also provide a number of numerical simulations supporting our theory.
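The sketch-and-solve idea in this abstract can be illustrated on a toy problem. The sketch below is a minimal, hypothetical NumPy example: it uses an unstructured least squares problem rather than the paper's tensor-structured parameter set, and a CountSketch-style matrix as one standard choice of sparse random projection Φ; all sizes and names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy overdetermined problem: min_x ||A x - b||_2 with n >> p.
n, p, m = 2000, 10, 300
A = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# CountSketch-style sparse projection Phi (m x n): each column has a single
# nonzero entry, a random sign placed in a random row. Phi is applied
# implicitly, so sketching costs O(nnz(A)) instead of an m x n matmul.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
PhiA = np.zeros((m, p))
Phib = np.zeros(m)
np.add.at(PhiA, rows, signs[:, None] * A)   # accumulate signed rows of A
np.add.at(Phib, rows, signs * b)            # accumulate signed entries of b

# Solve the small sketched problem, then evaluate both solutions
# on the ORIGINAL residual.
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_sketch = np.linalg.lstsq(PhiA, Phib, rcond=None)[0]
res_full = np.linalg.norm(A @ x_full - b)
res_sketch = np.linalg.norm(A @ x_sketch - b)

# res_sketch >= res_full by optimality of x_full; for a good sketch the
# ratio res_sketch / res_full stays close to 1.
print(res_sketch / res_full)
```

The point of the comparison at the end is the guarantee quoted in the abstract: a near-optimum of the sketched problem remains a near-optimum of the original one, even though the sketched problem has only m rows.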