Improving Deep Neural Network Random Initialization Through Neuronal Rewiring

07/17/2022
by Leonardo Scabini et al.

The deep learning literature is continuously updated with new architectures and training techniques. Weight initialization, however, has been largely overlooked by recent research, despite intriguing findings regarding random weights. Meanwhile, recent works have drawn on Network Science to understand the structure and dynamics of Artificial Neural Networks (ANNs) after training. In this work, we analyze the centrality of neurons in randomly initialized networks. We show that higher neuronal strength variance tends to decrease performance, while lower strength variance usually improves it. We then propose a method that rewires neuronal connections according to a preferential attachment (PA) rule based on their strength, which significantly reduces the strength variance of layers initialized by common methods. PA rewiring only reorganizes connections; it preserves both the magnitude and the distribution of the weights. Through an extensive statistical analysis of image classification, we show that performance improves in most cases, during both training and testing, for simple and complex architectures and learning schedules alike. Our results show that, beyond magnitude, the organization of the weights also matters for better initialization of deep ANNs.
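To make the idea concrete, here is a minimal sketch of strength-based rewiring. It is not the authors' exact PA algorithm: instead of a probabilistic preferential-attachment rule, it uses a simplified greedy variant that assigns each weight (largest magnitudes first) to the output neuron whose accumulated strength, the sum of absolute incoming weights, is currently smallest. Like the paper's method, it only reorganizes connections, preserving the multiset of weight values, and it reduces the variance of neuronal strengths.

```python
import numpy as np

def strength_balancing_rewire(W):
    """Illustrative greedy stand-in for strength-based PA rewiring.

    Reassigns the existing weight values of a (fan_in, fan_out) matrix so
    that per-neuron strength (sum of |w| over each column) becomes more
    uniform. The set of weight values, and hence their magnitude and
    distribution, is left unchanged; only their placement is reorganized.
    """
    fan_in, fan_out = W.shape
    flat = W.ravel()
    order = np.argsort(-np.abs(flat))          # largest magnitudes first
    new_W = np.empty_like(W)
    strength = np.zeros(fan_out)               # accumulated |w| per neuron
    fill = np.zeros(fan_out, dtype=int)        # next free input slot per neuron
    for w in flat[order]:
        # among neurons with free fan-in slots, pick the weakest one
        open_neurons = np.flatnonzero(fill < fan_in)
        j = open_neurons[np.argmin(strength[open_neurons])]
        new_W[fill[j], j] = w
        strength[j] += abs(w)
        fill[j] += 1
    return new_W
```

Applied to, say, a Kaiming-initialized layer, the rewired matrix has the same weight distribution but markedly lower variance of neuronal strengths, which is the property the paper links to better trainability.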

Related research

- ZerO Initialization: Initializing Residual Networks with only Zeros and Ones (10/25/2021)
- Fractional moment-preserving initialization schemes for training fully-connected neural networks (05/25/2020)
- Hcore-Init: Neural Network Initialization based on Graph Degeneracy (04/16/2020)
- Variance-Preserving Initialization Schemes Improve Deep Network Training: But Which Variance is Preserved? (02/13/2019)
- Deep Randomized Neural Networks (02/27/2020)
- Critical Learning Periods in Deep Neural Networks (11/24/2017)
- Cluster-based Input Weight Initialization for Echo State Networks (03/08/2021)
