
Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods

by Majid Janzamin et al.

Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for guaranteed training of two-layer neural networks. We provide risk bounds for our proposed method, with polynomial sample complexity in the relevant parameters, such as the input dimension and the number of neurons. While learning arbitrary target functions is NP-hard, we provide transparent conditions on the target function and the input distribution for learnability. Our training method is based on tensor decomposition, which provably converges to the global optimum under a set of mild non-degeneracy conditions. It consists of simple, embarrassingly parallel linear and multilinear operations, and is competitive with standard stochastic gradient descent (SGD) in terms of computational complexity. Thus, we propose a computationally efficient method with guaranteed risk bounds for training neural networks with one hidden layer.
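The core primitive behind such guarantees is tensor decomposition: the paper forms a moment tensor from (input, label) pairs and recovers the hidden-layer weights from its rank-one components. The following sketch illustrates only that primitive, not the paper's full training pipeline. It builds a symmetric, orthogonally decomposable third-order tensor from known components and recovers them with the tensor power method plus deflation; all names and the synthetic setup are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3  # ambient dimension, number of components ("neurons")

# Ground-truth orthonormal components (stand-ins for neuron weight vectors)
# and positive weights; orthogonality is the non-degeneracy assumption here.
A, _ = np.linalg.qr(rng.standard_normal((d, k)))
w = np.array([3.0, 2.0, 1.0])

# T = sum_j w_j * a_j (x) a_j (x) a_j  (symmetric rank-k tensor)
T = np.einsum('j,ij,kj,lj->ikl', w, A, A, A)

def power_iteration(T, n_iter=100):
    """Recover one component via v <- T(I, v, v) / ||T(I, v, v)||."""
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum('ikl,k,l->i', T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ikl,i,k,l->', T, v, v, v)  # recovered weight
    return lam, v

# Deflation: subtract each recovered rank-one term, then repeat.
est = []
for _ in range(k):
    lam, v = power_iteration(T)
    est.append((lam, v))
    T = T - lam * np.einsum('i,k,l->ikl', v, v, v)

# Each recovered v should align with some column of A (up to sign).
for lam, v in sorted(est, key=lambda t: -t[0]):
    j = int(np.argmax(np.abs(A.T @ v)))
    print(f"weight ~ {lam:.2f}, |cosine| to a_{j}: {abs(float(A[:, j] @ v)):.4f}")
```

In the paper's setting the decomposed tensor is estimated from data rather than constructed exactly, and the guarantees account for that estimation error; the power-iteration-with-deflation step, however, is the same "embarrassingly parallel" multilinear workhorse the abstract refers to.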
