Understanding Deflation Process in Over-parametrized Tensor Decomposition

06/11/2021
By Rong Ge et al.

In this paper we study the training dynamics of gradient flow on over-parametrized tensor decomposition problems. Empirically, such a training process often first fits the larger components and then discovers the smaller ones, which resembles the tensor deflation process commonly used in tensor decomposition algorithms. We prove that for orthogonally decomposable tensors, a slightly modified version of gradient flow follows a tensor deflation process and recovers all of the tensor components. Our proof suggests that for orthogonal tensors, gradient flow dynamics works similarly to greedy low-rank learning in the matrix setting, which is a first step toward understanding the implicit regularization effect of over-parametrized models for low-rank tensors.
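The matrix analogue mentioned in the abstract, greedy low-rank learning, is easy to observe numerically. The sketch below (an illustrative setup, not the paper's algorithm; the matrix `M`, widths, and step sizes are all assumptions) runs gradient descent on an over-parametrized factorization of a rank-2 matrix from a small random initialization and records when each ground-truth component is half fitted: the larger component is learned first, mirroring the deflation order described above.

```python
import numpy as np

# Illustrative sketch of greedy low-rank learning in the matrix setting
# (assumed toy setup): gradient descent on 0.25 * ||M - U U^T||_F^2 with
# small initialization fits the larger eigen-component before the smaller.
rng = np.random.default_rng(0)
d, k = 8, 6                              # dimension, over-parametrized width
M = np.zeros((d, d))
M[0, 0], M[1, 1] = 4.0, 1.0              # large and small ground-truth components

U = 1e-3 * rng.standard_normal((d, k))   # small random initialization
lr, steps = 0.05, 1000
t_half = {}                              # step at which each component is half fitted

for t in range(steps):
    R = M - U @ U.T                      # residual (symmetric here)
    U += lr * R @ U                      # gradient step on 0.25 * ||R||_F^2
    for i, target in ((0, 4.0), (1, 1.0)):
        if i not in t_half and (U @ U.T)[i, i] > target / 2:
            t_half[i] = t

print(t_half)                            # the larger component crosses first
print(np.linalg.norm(M - U @ U.T))       # residual shrinks toward zero
```

Because each direction grows at a rate proportional to its eigenvalue while starting from the same small scale, the crossing times separate cleanly; the odd-order tensor case studied in the paper behaves analogously but requires the modified gradient flow to escape the plateaus around small components.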

