Fractional moment-preserving initialization schemes for training fully-connected neural networks

05/25/2020
by   Mert Gurbuzbalaban, et al.
A common approach to initialization in deep neural networks is to sample the network weights from a Gaussian distribution so as to preserve the variance of the preactivations. On the other hand, recent research shows that for many deep neural networks the training process can lead to non-Gaussianity and heavy tails in the distribution of the network weights, in which case the weights do not have a finite variance but rather a finite (non-integer) fractional moment of order s with s < 2. Motivated by this observation, we develop initialization schemes for fully connected feed-forward networks that provably preserve any given moment of order s ∈ (0, 2) for ReLU, Leaky ReLU, Randomized Leaky ReLU, and linear activations. The proposed schemes incur no extra cost during training. We also show through numerical experiments that our initialization can improve training and test performance.
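For positively homogeneous activations such as ReLU, Leaky ReLU, and the linear map, scaling the weights by a constant c > 0 scales the s-th absolute moment of the layer output by c^s, so a weight scale that matches the output moment to the input moment can be computed directly. The sketch below illustrates this idea with a Monte Carlo calibration of a Gaussian weight scale on a sample batch; it is only an illustration of how a fractional-moment-preserving initializer could be realized, not the authors' exact (analytic) construction, and the function names and the data-dependent calibration are hypothetical.

```python
import numpy as np

def fractional_moment(x, s):
    """Empirical fractional moment E[|x|^s] of the entries of x."""
    return np.mean(np.abs(x) ** s)

def init_moment_preserving(fan_in, fan_out, s, x_sample,
                           activation=lambda z: np.maximum(z, 0.0), rng=None):
    """Return Gaussian weights scaled so that the s-th absolute moment of the
    post-activation outputs matches that of the inputs (hypothetical sketch).

    Relies on positive homogeneity of the activation (ReLU, Leaky ReLU,
    linear): multiplying the weights by c multiplies the output moment by c**s.
    """
    rng = np.random.default_rng() if rng is None else rng
    W0 = rng.standard_normal((fan_out, fan_in))        # unit-scale reference weights
    h0 = activation(x_sample @ W0.T)                   # layer output at scale 1
    target = fractional_moment(x_sample, s)            # moment to preserve
    sigma = (target / fractional_moment(h0, s)) ** (1.0 / s)
    return sigma * W0

# Usage: propagate a batch through a few ReLU layers and check the moment.
rng = np.random.default_rng(0)
s = 1.5
h = rng.standard_normal((4096, 256))
for _ in range(5):
    W = init_moment_preserving(h.shape[1], 256, s, h, rng=rng)
    h = np.maximum(h @ W.T, 0.0)
    print(round(fractional_moment(h, s), 3))
```

By construction, the printed s-th moments stay (approximately) constant across layers on the calibration batch, which is the preservation property the abstract refers to; the paper's schemes achieve this analytically rather than by data-dependent calibration.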
