On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units

08/03/2015
by Zhibin Liao, et al.

Deep feedforward neural networks with piecewise linear activations currently produce state-of-the-art results on several public datasets. The combination of deep learning models and piecewise linear activation functions allows for the estimation of exponentially complex functions through a large number of subnetworks, each specialized in classifying similar input examples. During training, these subnetworks avoid overfitting through an implicit regularization scheme based on the fact that they must share their parameters with other subnetworks. Within this framework, we make an empirical observation that can further improve the performance of such models. We notice that these models assume a balanced initial distribution of data points with respect to the domain of the piecewise linear activation function. If that assumption is violated, then the piecewise linear activation units can degenerate into purely linear activation units, significantly reducing their capacity to learn complex functions. Furthermore, as the number of model layers increases, this unbalanced initial distribution makes the model ill-conditioned. Therefore, we propose the introduction of batch normalisation units into deep feedforward neural networks with piecewise linear activations, which drives a more balanced use of these activation units, so that each region of the activation function is trained with a relatively large proportion of training samples. This batch normalisation also promotes the pre-conditioning of very deep models. We show that introducing maxout and batch normalisation units into the network in network model yields classification results that are better than or comparable to the current state of the art on the CIFAR-10, CIFAR-100, MNIST, and SVHN datasets.
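The key architectural idea is to normalise the pre-activation values before they reach the piecewise linear (maxout) unit, so that every linear region of the activation sees a reasonable share of the training samples rather than collapsing into a single linear regime. The sketch below illustrates this in PyTorch; the layer name, the 1x1 convolution (in the spirit of network in network), and the choice of two maxout pieces are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class BatchNormMaxout(nn.Module):
    """Minimal sketch of a batch-normalised maxout unit (hypothetical layer,
    not the paper's exact architecture).  A 1x1 convolution produces `pieces`
    candidate feature maps per output channel, batch normalisation centres and
    scales these pre-activations, and maxout keeps the element-wise maximum
    over the candidate maps (the piecewise linear activation)."""

    def __init__(self, in_channels: int, out_channels: int, pieces: int = 2):
        super().__init__()
        self.out_channels = out_channels
        self.pieces = pieces
        self.conv = nn.Conv2d(in_channels, out_channels * pieces, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_channels * pieces)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.bn(self.conv(x))                # normalised pre-activations
        n, _, h, w = z.shape
        z = z.view(n, self.out_channels, self.pieces, h, w)
        return z.max(dim=2).values               # piecewise linear maxout


# Usage: a batch of 8 RGB images, 32x32, mapped to 96 maxout feature maps.
x = torch.randn(8, 3, 32, 32)
layer = BatchNormMaxout(in_channels=3, out_channels=96, pieces=2)
y = layer(x)
print(y.shape)  # torch.Size([8, 96, 32, 32])
```

Normalising before the maxout means the pre-activations are roughly zero-centred at initialisation, so each of the `pieces` linear regions is selected for a non-negligible fraction of inputs instead of one region dominating from the start.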

