Dynamical Isometry for Residual Networks

10/05/2022
by Advait Gadhikar, et al.

The training success, training speed, and generalization ability of neural networks rely crucially on the choice of random parameter initialization. It has been shown for multiple architectures that initial dynamical isometry is particularly advantageous. Known initialization schemes for residual blocks, however, miss this property: they suffer from degrading separability of different inputs with increasing depth, from instability without Batch Normalization, or from a lack of feature diversity. We propose a random initialization scheme, RISOTTO, that achieves perfect dynamical isometry for residual networks with ReLU activation functions, even at finite depth and width. Unlike other schemes, which are initially biased towards the skip connections, it balances the contributions of the residual and skip branches. In experiments, we demonstrate that our approach outperforms, in most cases, initialization schemes proposed to make Batch Normalization obsolete, including Fixup and SkipInit, and facilitates stable training. In combination with Batch Normalization as well, we find that RISOTTO often achieves the overall best result.
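Dynamical isometry means that all singular values of a network's input-output Jacobian concentrate near 1 at initialization, so that signals are neither amplified nor attenuated in any direction. The abstract does not spell out the RISOTTO construction itself, but the property it targets is straightforward to probe empirically. Below is a minimal sketch in PyTorch, assuming a fully connected ReLU residual block (the block, width, and helper names are illustrative assumptions, not the paper's code): it measures the Jacobian singular values at initialization and shows how zeroing the residual branch, as SkipInit and Fixup effectively do, yields a trivially isometric but feature-poor identity map.

```python
# Minimal sketch: probing dynamical isometry of a residual block.
# Illustrative only; this is not the RISOTTO initialization itself.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ResidualBlock(nn.Module):
    """Fully connected residual block: y = x + W2 * relu(W1 * x)."""
    def __init__(self, width: int):
        super().__init__()
        self.fc1 = nn.Linear(width, width, bias=False)  # default Kaiming-uniform init
        self.fc2 = nn.Linear(width, width, bias=False)

    def forward(self, x):
        return x + self.fc2(torch.relu(self.fc1(x)))

def jacobian_singular_values(model, x):
    """Singular values of the input-output Jacobian evaluated at x."""
    jac = torch.autograd.functional.jacobian(model, x)
    return torch.linalg.svdvals(jac)

width = 64
x = torch.randn(width)
block = ResidualBlock(width)

sv = jacobian_singular_values(block, x)
print(f"default init:           sv in [{sv.min().item():.3f}, {sv.max().item():.3f}]")

# Zeroing the residual branch (the effect of SkipInit/Fixup-style scaling)
# makes the block an identity map: perfectly isometric, but feature-poor.
nn.init.zeros_(block.fc2.weight)
sv = jacobian_singular_values(block, x)
print(f"zeroed residual branch: sv in [{sv.min().item():.3f}, {sv.max().item():.3f}]")
```

RISOTTO, by contrast, is designed to reach singular values of exactly 1 while both branches still contribute, avoiding the initial bias towards the skip connection.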


Related research

02/24/2020 · Batch Normalization Biases Deep Residual Networks Towards Shallow Paths
Batch normalization has multiple benefits. It improves the conditioning ...

01/01/2020 · A Comprehensive and Modularized Statistical Framework for Gradient Norm Equality in Deep Neural Networks
In recent years, plenty of metrics have been proposed to identify networ...

12/23/2021 · A Robust Initialization of Residual Blocks for Effective ResNet Training without Batch Normalization
Batch Normalization is an essential component of all state-of-the-art ne...

05/21/2021 · Maximum and Leaky Maximum Propagation
In this work, we present an alternative to conventional residual connect...

08/18/2021 · Generalizing MLPs With Dropouts, Batch Normalization, and Skip Connections
A multilayer perceptron (MLP) is typically made of multiple fully connec...

01/27/2019 · Fixup Initialization: Residual Learning Without Normalization
Normalization layers are a staple in state-of-the-art deep neural networ...

05/28/2023 · On the impact of activation and normalization in obtaining isometric embeddings at initialization
In this paper, we explore the structure of the penultimate Gram matrix i...
