Tensor-on-Tensor Regression: Riemannian Optimization, Over-parameterization, Statistical-computational Gap, and Their Interplay

06/17/2022
by Yuetian Luo, et al.

We study tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates through a low Tucker rank parameter tensor/matrix, without prior knowledge of its intrinsic rank. We propose Riemannian gradient descent (RGD) and Riemannian Gauss-Newton (RGN) methods and cope with the challenge of unknown rank by studying the effect of rank over-parameterization. We provide the first convergence guarantees for general tensor-on-tensor regression by showing that RGD and RGN converge linearly and quadratically, respectively, to a statistically optimal estimate in both rank correctly-parameterized and over-parameterized settings. Our theory reveals an intriguing phenomenon: Riemannian optimization methods naturally adapt to over-parameterization without modifications to their implementation. We also give the first rigorous evidence for the statistical-computational gap in scalar-on-tensor regression under the low-degree polynomials framework. Our theory demonstrates a “blessing of statistical-computational gap” phenomenon: in a wide range of scenarios in tensor-on-tensor regression for tensors of order three or higher, the computationally required sample size matches what is needed under moderate rank over-parameterization when restricting to computationally feasible estimators, while no such benefit exists in the matrix setting. This shows that moderate rank over-parameterization is essentially “cost-free” in terms of sample size in tensor-on-tensor regression of order three or higher. Finally, we conduct simulation studies to demonstrate the advantages of our proposed methods and to corroborate our theoretical findings.
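To make the RGD recipe concrete, below is a minimal NumPy sketch for the scalar-on-tensor special case: a gradient step on the least-squares loss followed by a retraction back to low Tucker rank via truncated HOSVD. The retraction choice, the spectral initialization, the step size, and all function names here are illustrative assumptions rather than the paper's exact algorithm; in particular, full Riemannian gradient descent also projects the gradient onto the tangent space of the fixed-rank manifold before retracting, a step this sketch omits for brevity.

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Truncated HOSVD: one common retraction onto low-Tucker-rank tensors."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-k unfolding: move `mode` to the front, flatten the rest.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Compress to the core, then expand back: T x_k (U_k U_k^T) for each mode.
    out = T
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U.T, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

def rgd_scalar_on_tensor(A, y, ranks, step=1.0, n_iter=200):
    """Gradient step + HOSVD retraction for y_i = <A_i, T> + noise.

    A: (n, d1, d2, d3) design tensors; y: (n,) responses.
    step=1.0 is a crude choice justified only for well-conditioned
    (e.g., Gaussian) designs with the loss normalized by n.
    """
    n = y.shape[0]
    # Spectral-style initialization: retract the correlation tensor.
    T = hosvd_truncate(np.tensordot(y, A, axes=1) / n, ranks)
    for _ in range(n_iter):
        resid = np.tensordot(A, T, axes=3) - y        # <A_i, T> - y_i
        grad = np.tensordot(resid, A, axes=1) / n     # Euclidean gradient
        T = hosvd_truncate(T - step * grad, ranks)    # step, then retract
    return T

# Usage on synthetic data: recover a rank-(2, 2, 2) Tucker tensor.
rng = np.random.default_rng(0)
d, r, n = 8, 2, 2000
G = rng.standard_normal((r, r, r))
Us = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(3)]
T_star = np.einsum('abc,ia,jb,kc->ijk', G, *Us)
A = rng.standard_normal((n, d, d, d))
y = np.tensordot(A, T_star, axes=3) + 0.01 * rng.standard_normal(n)
T_hat = rgd_scalar_on_tensor(A, y, ranks=(r, r, r))
print(np.linalg.norm(T_hat - T_star) / np.linalg.norm(T_star))
```

Passing `ranks` larger than the true rank in this sketch mimics the over-parameterized regime the abstract discusses: the iteration itself is unchanged, which is the sense in which Riemannian methods adapt to over-parameterization without implementation changes.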


Related research

04/24/2021
Low-rank Tensor Estimation via Riemannian Gauss-Newton: Statistical Optimality and Second-Order Convergence
In this paper, we consider the estimation of a low Tucker rank tensor fr...

12/29/2020
Inference for Low-rank Tensors – No Need to Debias
In this paper, we consider the statistical inference for several low-ran...

03/03/2020
A Riemannian Newton Optimization Framework for the Symmetric Tensor Rank Approximation Problem
The symmetric tensor rank approximation problem (STA) consists in comput...

04/15/2022
Statistical-Computational Trade-offs in Tensor PCA and Related Problems via Communication Complexity
Tensor PCA is a stylized statistical inference problem introduced by Mon...

06/06/2023
Online Tensor Learning: Computational and Statistical Trade-offs, Adaptivity and Optimal Regret
We investigate a generalized framework for estimating latent low-rank te...

08/27/2021
Provable Tensor-Train Format Tensor Completion by Riemannian Optimization
The tensor train (TT) format enjoys appealing advantages in handling str...

05/04/2020
A Solution for Large Scale Nonlinear Regression with High Rank and Degree at Constant Memory Complexity via Latent Tensor Reconstruction
This paper proposes a novel method for learning highly nonlinear, multiv...
