Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization

08/01/2022
by David Peer, et al.

Training deep neural networks is a demanding task, and it is especially challenging to adapt architectures so that deeper models actually perform better. Shallow networks sometimes generalize better than deep ones, and adding more layers can increase both training and test error. The deep residual learning framework addresses this degradation problem by adding skip connections between neural network layers. At first it seems counter-intuitive that such skip connections should be needed to train deep networks successfully, since the expressivity of a network grows exponentially with depth. In this paper, we first analyze the flow of information through neural networks. We introduce and evaluate the batch-entropy, which quantifies the flow of information through each layer of a neural network. We show empirically and theoretically that a positive batch-entropy is required for gradient-descent-based training to optimize a given loss function successfully. Based on these insights, we introduce batch-entropy regularization, which enables gradient-descent-based training algorithms to optimize the flow of information through each hidden layer individually. With batch-entropy regularization, gradient descent optimizers can transform untrainable networks into trainable networks. We show empirically that a "vanilla" fully connected network and a convolutional neural network (no skip connections, batch normalization, dropout, or any other architectural tweak) can therefore be trained with 500 layers simply by adding the batch-entropy regularization term to the loss function. We evaluate the effect of batch-entropy regularization not only on vanilla neural networks, but also on residual networks, autoencoders, and transformer models, over a wide range of computer vision and natural language processing tasks.
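The abstract does not spell out how the batch-entropy is computed, so the sketch below is only an illustration of the general idea: an auxiliary term that measures how much the activations of each hidden layer vary across a batch and penalizes layers whose entropy collapses. It assumes a Gaussian differential-entropy estimate per neuron over the batch and a simple squared penalty toward a target value; the names batch_entropy, DeepMLP, target_entropy, and beta are hypothetical, and the paper's exact formulation may differ.

```python
# Minimal sketch of a layerwise batch-entropy regularizer (PyTorch).
# Assumption: entropy is approximated per neuron from its variance over the
# batch (Gaussian differential entropy) and averaged; this is an illustration,
# not necessarily the paper's exact definition.
import math
import torch
import torch.nn as nn


def batch_entropy(activations: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Approximate the entropy of a layer's activations across the batch.

    activations: tensor of shape (batch, features). Each feature's entropy is
    estimated from its variance over the batch and the estimates are averaged.
    """
    var = activations.float().var(dim=0, unbiased=False) + eps
    per_feature = 0.5 * torch.log(2.0 * math.pi * math.e * var)
    return per_feature.mean()


class DeepMLP(nn.Module):
    """Plain fully connected network (no skips, no batch norm) that also
    returns the batch-entropy of every hidden layer."""

    def __init__(self, depth: int = 50, width: int = 128,
                 n_in: int = 784, n_out: int = 10):
        super().__init__()
        dims = [n_in] + [width] * depth
        self.hidden = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(depth)
        )
        self.head = nn.Linear(width, n_out)

    def forward(self, x):
        entropies = []
        for layer in self.hidden:
            x = torch.relu(layer(x))
            entropies.append(batch_entropy(x))
        return self.head(x), torch.stack(entropies)


# Usage: push every hidden layer's batch-entropy toward a positive target so
# that information keeps flowing through the full depth of the network.
model = DeepMLP(depth=50)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
logits, layer_entropies = model(x)
target_entropy, beta = 0.5, 0.1  # illustrative hyper-parameters
be_penalty = ((layer_entropies - target_entropy) ** 2).mean()
loss = criterion(logits, y) + beta * be_penalty
loss.backward()
```

The point mirrored from the abstract is that the penalty is applied to each hidden layer individually, so the optimizer can keep the flow of information alive throughout a very deep vanilla network instead of relying on skip connections or normalization layers.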
