
Tensor vs Matrix Methods: Robust Tensor Decomposition under Block Sparse Perturbations
Robust tensor CP decomposition involves decomposing a tensor into low ra...
10/15/2015 ∙ by Prateek Jain, et al.

Tensor Regression Networks with various Low-Rank Tensor Approximations
Tensor regression networks achieve high rate of compression of model par...
12/27/2017 ∙ by Xingwei Cao, et al.

Stochastically Rank-Regularized Tensor Regression Networks
Overparametrization of deep neural networks has recently been shown to ...
02/27/2019 ∙ by Arinbjörn Kolbeinsson, et al.

Gradient-based Optimization for Regression in the Functional Tensor-Train Format
We consider the task of low-multilinear-rank functional regression, i.e....
01/03/2018 ∙ by Alex A. Gorodetsky, et al.

Tensor Regression Meets Gaussian Processes
Low-rank tensor regression, a new model class that learns high-order cor...
10/31/2017 ∙ by Rose Yu, et al.

Parallel Nonnegative CP Decomposition of Dense Tensors
The CP tensor decomposition is a low-rank approximation of a tensor. We ...
06/19/2018 ∙ by Grey Ballard, et al.
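As a quick illustration of the CP model mentioned in this entry (a generic NumPy sketch, not code from the paper): a rank-R CP decomposition writes a third-order tensor as a sum of R outer products of factor-matrix columns.

```python
import numpy as np

# Hypothetical example: build a 4 x 5 x 6 tensor from rank-2 CP factors.
rng = np.random.default_rng(0)
R = 2
A = rng.standard_normal((4, R))  # mode-1 factor matrix
B = rng.standard_normal((5, R))  # mode-2 factor matrix
C = rng.standard_normal((6, R))  # mode-3 factor matrix

# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)
print(T.shape)  # -> (4, 5, 6)
```

Fitting the factors from a given tensor (e.g. by alternating least squares, as parallelized in the paper above) is the decomposition problem proper; the snippet only shows the model form.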

Multivariate Convolutional Sparse Coding with Low Rank Tensor
This paper introduces a new multivariate convolutional sparse coding bas...
08/09/2019 ∙ by Pierre Humbert, et al.
Boosted Sparse and Low-Rank Tensor Regression
We propose a sparse and low-rank tensor regression model to relate a univariate outcome to a feature tensor, in which each unit-rank tensor from the CP decomposition of the coefficient tensor is assumed to be sparse. This structure is both parsimonious and highly interpretable, as it implies that the outcome is related to the features through a few distinct pathways, each of which may only involve subsets of feature dimensions. We take a divide-and-conquer strategy to simplify the task into a set of sparse unit-rank tensor regression problems. To make the computation efficient and scalable, for the unit-rank tensor regression, we propose a stagewise estimation procedure to efficiently trace out its entire solution path. We show that as the step size goes to zero, the stagewise solution paths converge exactly to those of the corresponding regularized regression. The superior performance of our approach is demonstrated on various real-world and synthetic examples.
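To make the stagewise unit-rank idea concrete, here is a heavily simplified NumPy sketch (not the authors' algorithm): it fits a single unit-rank term B = b ⊗ c for a matrix-valued feature, by repeatedly nudging the one coordinate of b or c with the largest gradient magnitude by a fixed small step. All names and the squared-error loss are assumptions for illustration.

```python
import numpy as np

def stagewise_unit_rank(X, y, step=0.01, n_iters=500):
    """Forward-stagewise sketch of sparse unit-rank tensor regression.

    X : (n, p, q) feature tensor, y : (n,) outcome.
    Fits y_i ~ <X_i, b outer c> under squared-error loss by moving the
    single most-correlated coordinate of (b, c) by `step` each round.
    """
    n, p, q = X.shape
    # Warm-start c from the leading right singular vector of the
    # correlation matrix G = sum_i y_i * X_i; start b at zero (sparse).
    G = np.einsum('i,ipq->pq', y, X)
    _, _, vt = np.linalg.svd(G, full_matrices=False)
    b, c = np.zeros(p), vt[0]
    for _ in range(n_iters):
        r = y - np.einsum('ipq,p,q->i', X, b, c)  # current residual
        gb = np.einsum('i,ipq,q->p', r, X, c)     # gradient w.r.t. b
        gc = np.einsum('i,ipq,p->q', r, X, b)     # gradient w.r.t. c
        g = np.concatenate([gb, gc])
        j = np.argmax(np.abs(g))                  # best single coordinate
        if j < p:
            b[j] += step * np.sign(g[j])
        else:
            c[j - p] += step * np.sign(g[j])
    return b, c

# Synthetic check: outcome driven by one entry of each factor.
rng = np.random.default_rng(0)
n, p, q = 200, 8, 6
X = rng.standard_normal((n, p, q))
b_true = np.zeros(p); b_true[0] = 1.0
c_true = np.zeros(q); c_true[1] = 1.0
y = np.einsum('ipq,p,q->i', X, b_true, c_true)
b, c = stagewise_unit_rank(X, y)
```

In the paper's full method, several such sparse unit-rank problems are solved in sequence (the divide-and-conquer step, deflating the fit after each term), and the step size `step` controls how finely the solution path is traced.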