
Gradient-based Optimization for Regression in the Functional Tensor-Train Format
We consider the task of low-multilinear-rank functional regression, i.e., learning a low-rank parametric representation of functions from scattered real-valued data. Our first contribution is the development and analysis of an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). The functional tensor-train uses the tensor-train (TT) representation of low-rank arrays as an ansatz for a class of low-multilinear-rank functions. The FT is represented by a set of matrix-valued functions that contain a set of univariate functions, and the regression task is to learn the parameters of these univariate functions. Our second contribution demonstrates that using nonlinearly parameterized univariate functions, e.g., symmetric kernels with moving centers, within each core can outperform the standard approach of using a linear expansion of basis functions. Our final contributions are new rank adaptation and group-sparsity regularization procedures to minimize overfitting. We use several benchmark problems to demonstrate at least an order of magnitude lower error with gradient-based optimization methods than standard alternating least squares procedures in the low-sample-number regime. We also demonstrate an order of magnitude reduction in error on a test problem resulting from using nonlinear parameterizations over linear parameterizations. Finally, we compare regression performance with 22 other nonparametric and parametric regression methods on 10 real-world data sets. We achieve top-five accuracy for seven of the data sets and best accuracy for two of the data sets. These rankings are the best among parametric models and competitive with the best nonparametric methods.
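The FT structure the abstract describes — a product of matrix-valued functions, each entry of which is a univariate function — can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the coefficient layout (one r_k × r_{k+1} × n_basis array per core, over a linear basis expansion rather than the nonlinear parameterizations the paper advocates) and the names `ft_eval`, `cores`, and `bases` are assumptions made for the sketch.

```python
import numpy as np

def ft_eval(cores, bases, x):
    """Evaluate an FT f(x_1, ..., x_d) = F_1(x_1) F_2(x_2) ... F_d(x_d).

    cores[k] : array of shape (r_k, r_{k+1}, n_basis); entry (i, j, :)
               holds coefficients of the univariate function F_k[i, j].
    bases[k] : callable mapping the scalar x_k to a length-n_basis
               feature vector (a linear basis expansion).
    x        : sequence of d input coordinates.
    """
    # Boundary TT ranks are 1, so start from a 1x1 accumulator.
    result = np.ones((1, 1))
    for core, basis, xk in zip(cores, bases, x):
        phi = basis(xk)       # (n_basis,) basis evaluations at x_k
        Fk = core @ phi       # (r_k, r_{k+1}) matrix-valued function F_k(x_k)
        result = result @ Fk  # accumulate the left-to-right matrix product
    return result[0, 0]

# Monomial basis up to degree 2 (an assumed choice for the sketch).
poly = lambda t: np.array([1.0, t, t * t])

# Rank-1 FT representing f(x1, x2, x3) = x1 * x2 * x3:
# every core is the 1x1 function F(t) = t, i.e., coefficients (0, 1, 0).
prod_cores = [np.array([[[0.0, 1.0, 0.0]]])] * 3
print(ft_eval(prod_cores, [poly] * 3, [2.0, 3.0, 4.0]))  # -> 24.0

# Rank-2 FT representing f(x1, x2) = x1 + x2:
# F_1(t) = [t, 1] (1x2) and F_2(t) = [1, t]^T (2x1).
sum_cores = [
    np.array([[[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]]),   # shape (1, 2, 3)
    np.array([[[1.0, 0.0, 0.0]], [[0.0, 1.0, 0.0]]]), # shape (2, 1, 3)
]
print(ft_eval(sum_cores, [poly] * 2, [2.0, 5.0]))  # -> 7.0
```

The regression task is then to fit the coefficient arrays in `cores` to data, which is where the paper's efficient gradient computation (and, for nonlinearly parameterized univariate functions, gradients with respect to kernel centers as well) comes in.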