Provable Methods for Training Neural Networks with Sparse Connectivity

12/08/2014
by Hanie Sedghi, et al.

We provide novel guaranteed approaches for training feedforward neural networks with sparse connectivity. We leverage techniques developed previously for learning linear networks and show that they can also be effectively adapted to learning non-linear networks. We operate on moments involving the label and the score function of the input, and show that their factorization provably yields the weight matrix of the first layer of a deep network under mild conditions. In practice, the output of our method can be employed as an effective initializer for gradient descent.
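
To make the moment-based step concrete, below is a minimal sketch, assuming standard Gaussian input (whose score function ∇ log p(x) is simply -x) and a single sigmoid unit. The paper's actual method handles multi-neuron sparse layers via higher-order score functions and sparse recovery, so this reduced setting and the names in it (a_true, d, n) are illustrative only.

```python
# Minimal sketch (not the authors' full algorithm): for x ~ N(0, I) the
# score function is s(x) = -x, and Stein's identity gives
#   E[y * x] = E[grad_x y].
# For a single sigmoid unit y = sigmoid(a^T x), this moment equals
# E[sigmoid'(a^T x)] * a, i.e. it is proportional to the weight vector a.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200_000            # input dimension, number of samples (illustrative)

# Ground-truth sparse first-layer weight vector (5 nonzero entries).
a_true = np.zeros(d)
a_true[rng.choice(d, size=5, replace=False)] = rng.normal(size=5)

x = rng.normal(size=(n, d))                  # Gaussian input, score s(x) = -x
y = 1.0 / (1.0 + np.exp(-x @ a_true))        # labels from one sigmoid unit

# Empirical moment of the label with (minus) the input's score function.
M = (y[:, None] * x).mean(axis=0)            # ~ E[sigmoid'(a^T x)] * a_true

cos = M @ a_true / (np.linalg.norm(M) * np.linalg.norm(a_true))
print(f"cosine similarity with true weights: {cos:.3f}")   # close to 1
```

The sketch isolates the key identity: for Gaussian input, the cross moment of the label with the score function aligns with the first-layer weight direction. The paper's guarantees extend this idea to recovering a full sparse weight matrix under mild conditions.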


Related research

06/28/2015
Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods
Training neural networks is a challenging non-convex optimization proble...

05/16/2020
An Effective and Efficient Training Algorithm for Multi-layer Feedforward Neural Networks
Network initialization is the first and critical step for training neura...

05/16/2020
An Effective and Efficient Initialization Scheme for Multi-layer Feedforward Neural Networks
Network initialization is the first and critical step for training neura...

05/20/2018
A Vest of the Pseudoinverse Learning Algorithm
In this letter, we briefly review the basic scheme of the pseudoinverse ...

09/23/2018
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks
In this note, we study the dynamics of gradient descent on objective fun...

02/18/2020
Learning Parities with Neural Networks
In recent years we see a rapidly growing line of research which shows le...

11/14/2017
Deep Rewiring: Training very sparse deep networks
Neuromorphic hardware tends to pose limits on the connectivity of deep n...
