If dropout limits trainable depth, does critical initialisation still matter? A large-scale statistical analysis on ReLU networks

10/13/2019
by Arnu Pretorius, et al.

Recent work in signal propagation theory has shown that dropout limits the depth to which information can propagate through a neural network. In this paper, we investigate the effect of initialisation on training speed and generalisation for ReLU networks within this depth limit. We ask the following research question: given that critical initialisation is crucial for training at large depth, if dropout limits the depth at which networks are trainable, does initialising critically still matter? We conduct a large-scale controlled experiment and perform a statistical analysis of over 12,000 trained networks. We find that (1) trainable networks show no statistically significant difference in performance over a wide range of non-critical initialisations; (2) for initialisations that do show a statistically significant difference, the net effect on performance is small; (3) only extreme initialisations (very small or very large) perform worse than criticality. These findings also apply to standard ReLU networks of moderate depth as a special case of zero dropout. Our results therefore suggest that, in the shallow-to-moderate depth setting, critical initialisation provides no performance gain over off-critical initialisations, and that searching for off-critical initialisations that might improve training speed or generalisation is likely to be a fruitless endeavour.
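To make the notion of critical initialisation concrete, the sketch below initialises a ReLU network with dropout at criticality. It is a minimal illustration, assuming the criticality condition sigma_w^2 = 2p / fan_in (with keep probability p) suggested by signal-propagation analyses of noisy rectifier networks, which reduces to standard He initialisation when p = 1. The function names and the NumPy-based forward pass are illustrative assumptions, not the authors' experimental code.

```python
import numpy as np

def critical_relu_dropout_init(fan_in, fan_out, keep_prob=1.0, rng=None):
    """Sample a weight matrix at (assumed) criticality for a ReLU layer
    trained with inverted dropout at keep probability `keep_prob`.

    Assumption: criticality at sigma_w^2 = 2 * keep_prob / fan_in,
    i.e. He initialisation (2 / fan_in) scaled by the keep probability.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_w = np.sqrt(2.0 * keep_prob / fan_in)
    return rng.normal(0.0, sigma_w, size=(fan_in, fan_out))

def forward(x, weights, keep_prob=1.0, train=True, rng=None):
    """Forward pass through a plain ReLU MLP with inverted dropout."""
    rng = np.random.default_rng() if rng is None else rng
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)              # ReLU activation
        if train and keep_prob < 1.0:
            mask = rng.binomial(1, keep_prob, size=h.shape)
            h = h * mask / keep_prob            # inverted dropout
    return h

# Example: a 10-layer, width-100 ReLU network with keep probability 0.9,
# initialised at the assumed critical point (hypothetical sizes).
rng = np.random.default_rng(0)
keep_prob = 0.9
widths = [784] + [100] * 10
weights = [critical_relu_dropout_init(n_in, n_out, keep_prob, rng)
           for n_in, n_out in zip(widths[:-1], widths[1:])]
out = forward(rng.normal(size=(32, 784)), weights, keep_prob, train=True, rng=rng)
```

An off-critical initialisation in this setting would simply use a different sigma_w; the paper's statistical analysis compares such choices against the critical one.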

