Understanding Deflation Process in Over-parametrized Tensor Decomposition

by Rong Ge, et al.

In this paper we study the training dynamics of gradient flow on over-parametrized tensor decomposition problems. Empirically, such a training process often first fits larger components and then discovers smaller components, which is similar to the tensor deflation process commonly used in tensor decomposition algorithms. We prove that for orthogonally decomposable tensors, a slightly modified version of gradient flow follows a tensor deflation process and recovers all the tensor components. Our proof suggests that for orthogonal tensors, gradient flow dynamics work similarly to greedy low-rank learning in the matrix setting, a first step towards understanding the implicit regularization effect of over-parametrized models for low-rank tensors.
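The deflation process the abstract refers to can be illustrated with a greedy loop: repeatedly extract one rank-1 component and subtract it from the tensor. Below is a minimal NumPy sketch for a symmetric orthogonally decomposable 3-tensor. Note the hedges: `rank1_power_iteration` and `deflate` are illustrative helper names, and tensor power iteration is a classical deflation-style algorithm, not the gradient-flow dynamics the paper actually analyzes; the paper's point is that gradient flow implicitly mimics such a process.

```python
import numpy as np

def rank1_power_iteration(T, iters=200, seed=0):
    """Find one rank-1 component (lambda, u) of a symmetric 3-tensor T
    via tensor power iteration (illustrative helper, not the paper's method)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        # Contract T along two modes: u <- T(I, u, u), then renormalize.
        u = np.einsum('ijk,j,k->i', T, u, u)
        u /= np.linalg.norm(u)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)
    return lam, u

def deflate(T, rank):
    """Greedy deflation: extract a component, subtract it, repeat."""
    comps = []
    for _ in range(rank):
        lam, u = rank1_power_iteration(T)
        comps.append((lam, u))
        T = T - lam * np.einsum('i,j,k->ijk', u, u, u)
    return comps

# Build an orthogonally decomposable tensor with component weights 3 > 2 > 1.
d = 5
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((d, d)))
T = sum(w * np.einsum('i,j,k->ijk', Q[:, i], Q[:, i], Q[:, i])
        for i, w in enumerate([3.0, 2.0, 1.0]))
comps = deflate(T, 3)
print(sorted(round(abs(lam), 3) for lam, _ in comps))
```

For orthogonal tensors, subtracting an exactly recovered component leaves the remaining components untouched, which is why the greedy loop recovers all of them; the recovered weights should match {1, 2, 3} up to numerical precision.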



