
Low-rank Tensor Estimation via Riemannian Gauss-Newton: Statistical Optimality and Second-Order Convergence

by Yuetian Luo et al.
University of Wisconsin-Madison

In this paper, we consider the estimation of a low Tucker rank tensor from a number of noisy linear measurements. This general problem covers many specific examples arising in applications, including tensor regression, tensor completion, and tensor PCA/SVD. We propose a Riemannian Gauss-Newton (RGN) method with fast implementations for low Tucker rank tensor estimation. Unlike the generic (super)linear convergence guarantees for RGN in the literature, we prove the first quadratic convergence guarantee of RGN for low-rank tensor estimation under some mild conditions. We also provide a deterministic estimation error lower bound that matches the upper bound, demonstrating the statistical optimality of RGN. The merit of RGN is illustrated through two machine learning applications: tensor regression and tensor SVD. Finally, we provide simulation results to corroborate our theoretical findings.
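To give a flavor of the Gauss-Newton idea behind RGN, the sketch below solves a much simpler toy analogue: rank-1 *matrix* estimation from noiseless Gaussian linear measurements, using plain Gauss-Newton on the factored parametrization. This is a hedged illustration only; it is not the authors' RGN algorithm (which works on the Tucker-rank tensor manifold with retractions), and all names and problem sizes here are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 8          # matrix dimension (toy size)
n = 1000       # number of linear measurements

# Ground-truth rank-1 matrix X* = u* v*^T and noiseless measurements y_i = <A_i, X*>
u_star, v_star = rng.standard_normal(p), rng.standard_normal(p)
A = rng.standard_normal((n, p, p))
y = np.einsum('nij,i,j->n', A, u_star, v_star)

# Spectral initialization: top singular pair of (1/n) * sum_i y_i A_i,
# whose expectation is X* for i.i.d. standard Gaussian A_i
M = np.einsum('n,nij->ij', y, A) / n
U, s, Vt = np.linalg.svd(M)
u, v = U[:, 0] * np.sqrt(s[0]), Vt[0] * np.sqrt(s[0])

for _ in range(10):
    r_vec = np.einsum('nij,i,j->n', A, u, v) - y   # residuals r_i = <A_i, uv^T> - y_i
    # Jacobian of the residual vector w.r.t. the stacked parameters (u, v)
    Ju = np.einsum('nij,j->ni', A, v)              # d r_i / d u = A_i v
    Jv = np.einsum('nij,i->nj', A, u)              # d r_i / d v = A_i^T u
    J = np.hstack([Ju, Jv])                        # n x 2p
    # Gauss-Newton step: minimum-norm solution of J * step = r (J has a
    # null direction from the scale ambiguity u -> c u, v -> v / c)
    step = np.linalg.lstsq(J, r_vec, rcond=None)[0]
    u -= step[:p]
    v -= step[p:]

X_hat = np.outer(u, v)
X_star = np.outer(u_star, v_star)
err = np.linalg.norm(X_hat - X_star) / np.linalg.norm(X_star)
```

In the noiseless, well-initialized regime, Gauss-Newton on this zero-residual least-squares problem converges very quickly, which is the same phenomenon the paper establishes (quadratically) for RGN on the tensor manifold.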


