Beyond Lazy Training for Over-parameterized Tensor Decomposition

10/22/2020
by Xiang Wang, et al.

Over-parametrization is an important technique in training neural networks: in both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an l-th order tensor in (R^d)^⊗l of rank r (where r ≪ d), can variants of gradient descent find a rank-m decomposition with m > r? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least m = Ω(d^(l-1)) components, while a variant of gradient descent can find an approximate decomposition with only m = O^*(r^(2.5l) log d) components. Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and exploit low-rank structure in the data.
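To make the setup concrete, here is a minimal sketch (in NumPy; not the authors' algorithm or code) of the kind of over-parameterized objective the abstract describes: fitting a rank-r symmetric 3rd-order tensor with m > r rank-one components by plain gradient descent on the squared Frobenius loss. All variable names and hyperparameters below are illustrative assumptions.

# A minimal sketch, assuming a symmetric 3rd-order tensor (l = 3) and
# a plain gradient-descent objective; not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 20, 3, 12   # ambient dim, true rank, over-parameterized rank

# Ground truth: T = sum_i a_i (x) a_i (x) a_i with r unit-norm components.
A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
T = np.einsum('ia,ib,ic->abc', A, A, A)

# Over-parameterized model with m > r components and small random init
# (a large-m, small-initialization regime, loosely in the spirit of the paper).
U = 0.01 * rng.standard_normal((m, d))

def residual(U):
    return np.einsum('ia,ib,ic->abc', U, U, U) - T

def loss(U):
    return 0.5 * np.sum(residual(U) ** 2)

def grad(U):
    # Gradient of the loss; since the residual tensor R is symmetric,
    # the three symmetric product-rule terms collapse into 3x one contraction.
    R = residual(U)
    return 3.0 * np.einsum('abc,jb,jc->ja', R, U, U)

lr = 0.05   # illustrative step size, untuned
for step in range(5001):
    U -= lr * grad(U)
    if step % 1000 == 0:
        print(step, loss(U))

With these illustrative settings the loss typically decreases by orders of magnitude, but no convergence guarantee is implied; the point of the sketch is only the shape of the problem, a rank-m model chasing a rank-r target with m > r.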




Related research

06/11/2021 · Understanding Deflation Process in Over-parametrized Tensor Decomposition
In this paper we study the training dynamics for gradient flow on over-p...

09/30/2019 · On the convergence of gradient descent for two layer neural networks
It has been shown that gradient descent can yield the zero training loss...

06/28/2015 · Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods
Training neural networks is a challenging non-convex optimization proble...

01/03/2018 · Gradient-based Optimization for Regression in the Functional Tensor-Train Format
We consider the task of low-multilinear-rank functional regression, i.e....

01/16/2013 · Big Neural Networks Waste Capacity
This article exposes the failure of some big neural networks to leverage...

05/29/2019 · On the Inductive Bias of Neural Tangent Kernels
State-of-the-art neural networks are heavily over-parameterized, making ...

02/24/2017 · Strongly-Typed Agents are Guaranteed to Interact Safely
As artificial agents proliferate, it is becoming increasingly important ...