Gradient-based Optimization for Regression in the Functional Tensor-Train Format

01/03/2018
by Alex A. Gorodetsky, et al., University of Michigan

We consider the task of low-multilinear-rank functional regression, i.e., learning a low-rank parametric representation of functions from scattered real-valued data. Our first contribution is the development and analysis of an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). The functional tensor-train uses the tensor-train (TT) representation of low-rank arrays as an ansatz for a class of low-multilinear-rank functions. The FT is represented by a set of matrix-valued functions whose entries are univariate functions, and the regression task is to learn the parameters of these univariate functions. Our second contribution demonstrates that using nonlinearly parameterized univariate functions, e.g., symmetric kernels with moving centers, within each core can outperform the standard approach of using a linear expansion of basis functions. Our final contributions are new rank-adaptation and group-sparsity regularization procedures to minimize overfitting. On several benchmark problems, gradient-based optimization achieves at least an order of magnitude lower error than standard alternating least squares procedures in the low-sample-number regime. On a test problem, nonlinear parameterizations reduce error by an order of magnitude relative to linear parameterizations. Finally, we compare regression performance with 22 other nonparametric and parametric regression methods on 10 real-world data sets, achieving top-five accuracy on seven data sets and the best accuracy on two. These rankings are the best among parametric models and competitive with the best nonparametric methods.
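To make the ansatz concrete: a functional tensor-train with ranks (r_0, ..., r_d), r_0 = r_d = 1, writes a d-variate function as a product of matrix-valued cores, f(x_1, ..., x_d) = G_1(x_1) G_2(x_2) ... G_d(x_d), where G_k(x_k) is an r_{k-1} x r_k matrix whose entries are univariate functions of x_k. The sketch below is not the authors' code: it parameterizes each core entry by a linear expansion in a monomial basis and obtains the gradient of a least-squares loss via JAX autodiff rather than the paper's hand-derived computation; the dimension, ranks, basis size, and synthetic data are all illustrative assumptions.

```python
# Minimal functional tensor-train regression sketch (illustrative, not
# the authors' implementation). Assumed: d = 3, TT ranks (1, 2, 2, 1),
# a 4-term monomial basis per core entry, and synthetic training data.
import jax
import jax.numpy as jnp

d, ranks, n_basis = 3, [1, 2, 2, 1], 4

def basis(x):
    # Linear parameterization: monomial basis in one variable.
    return jnp.stack([x ** n for n in range(n_basis)])

def ft_eval(cores, x):
    # f(x) = G_1(x_1) G_2(x_2) ... G_d(x_d); cores[k] has shape
    # (r_{k-1}, r_k, n_basis), so contracting with basis(x[k])
    # yields the matrix-valued core G_k(x_k).
    out = jnp.ones((1, 1))
    for k in range(d):
        G = jnp.einsum('ijn,n->ij', cores[k], basis(x[k]))
        out = out @ G
    return out[0, 0]

def loss(cores, X, y):
    # Mean-squared regression error over scattered data (X, y).
    preds = jax.vmap(lambda x: ft_eval(cores, x))(X)
    return jnp.mean((preds - y) ** 2)

key = jax.random.PRNGKey(0)
cores = [0.1 * jax.random.normal(key, (ranks[k], ranks[k + 1], n_basis))
         for k in range(d)]
X = jax.random.uniform(key, (50, d))
y = jnp.sin(X.sum(axis=1))                            # synthetic target

grads = jax.grad(loss)(cores, X, y)                   # gradient w.r.t. all cores
cores = [c - 0.1 * g for c, g in zip(cores, grads)]   # one SGD step
```

The nonlinearly parameterized cores of the second contribution would replace `basis` with, for example, kernel functions whose centers are themselves trainable, so the gradient flows into the centers as well.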

Related research

- High-dimension Tensor Completion via Gradient-based Optimization Under Tensor-train Format (04/05/2018)
  In this paper, we propose a novel approach to recover the missing entrie...
- Approximation in the extended functional tensor train format (11/21/2022)
  This work proposes the extended functional tensor train (EFTT) format fo...
- A Riemannian Newton Optimization Framework for the Symmetric Tensor Rank Approximation Problem (03/03/2020)
  The symmetric tensor rank approximation problem (STA) consists in comput...
- Beyond Lazy Training for Over-parameterized Tensor Decomposition (10/22/2020)
  Over-parametrization is an important technique in training neural networ...
- Implicit Regularization in Tensor Factorization (02/19/2021)
  Implicit regularization in deep learning is perceived as a tendency of g...
- PROTES: Probabilistic Optimization with Tensor Sampling (01/28/2023)
  We develop a new method, PROTES, for optimization of the multidimensional ar...