Near Optimal Sketching of Low-Rank Tensor Regression

09/20/2017 · by Jarvis Haupt et al.

We study the least squares regression problem min_{Θ ∈ S_{D,R}} ‖AΘ − b‖_2, where S_{D,R} is the set of tensors Θ of the form Θ = ∑_{r=1}^R θ_1^(r) ∘ ⋯ ∘ θ_D^(r) for vectors θ_d^(r) ∈ R^{p_d}, for all r ∈ [R] and d ∈ [D], and ∘ denotes the outer product of vectors. That is, Θ is a low-dimensional, low-rank tensor. This model is motivated by the fact that the number of parameters in Θ is only R · ∑_{d=1}^D p_d, which is significantly smaller than the ∏_{d=1}^D p_d parameters of ordinary least squares regression. We consider the above CP decomposition model of tensors Θ, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections Φ ∈ R^{m × n}, with m ≪ n, to reduce the problem to the much smaller problem min_Θ ‖ΦAΘ − Φb‖_2, for which any near-optimum Θ' of the smaller problem is also a near-optimum of the original problem. We obtain significantly smaller dimension and sparsity in Φ than is possible for ordinary least squares regression, and we provide a number of numerical simulations supporting our theory.
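The sketch-and-solve idea in the abstract can be illustrated on plain (non-tensor) least squares. The sketch below is a minimal, hypothetical example, not the paper's tensor algorithm: it applies a CountSketch-style sparse random projection Φ (one ±1 entry per column, so ΦA can be applied in time proportional to the number of nonzeros of A), solves the small sketched problem, and checks that the sketched solution's residual on the original problem is close to optimal. All dimensions and names (n, p, m, etc.) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative sizes: n rows, p parameters, sketch dimension m << n.
n, p, m = 5000, 20, 200

A = rng.standard_normal((n, p))
b = A @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# CountSketch-style sparse projection Phi in R^{m x n}: column i of Phi has a
# single nonzero entry, a random sign in a random row. We never materialize
# Phi; applying it to A and b is a signed row-bucketing operation.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
SA = np.zeros((m, p))
Sb = np.zeros(m)
np.add.at(SA, rows, signs[:, None] * A)  # SA = Phi @ A
np.add.at(Sb, rows, signs * b)           # Sb = Phi @ b

# Solve the small m x p sketched problem and the full n x p problem.
theta_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
theta_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare residuals on the ORIGINAL problem: the sketched solution should be
# a near-optimum, i.e. the ratio should be close to 1.
r_sketch = np.linalg.norm(A @ theta_sketch - b)
r_full = np.linalg.norm(A @ theta_full - b)
print(r_sketch / r_full)
```

The point of the sparsity is speed: a dense Gaussian projection would cost O(mnp) to apply, while the bucketing above costs O(np). The paper's contribution is showing that for the low-rank tensor constraint sets, m and the sparsity of Φ can be taken much smaller than what unconstrained least squares requires.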

