Hidden Unit Specialization in Layered Neural Networks: ReLU vs. Sigmoidal Activation

10/16/2019
by Elisa Oostwal, et al.

We study layered neural networks of rectified linear units (ReLU) in a modelling framework for stochastic training processes, with the comparison to sigmoidal activation functions at the center of interest. We compute typical learning curves for shallow networks with K hidden units in matching student-teacher scenarios. The systems exhibit sudden changes of generalization performance through the process of hidden-unit specialization at critical sizes of the training set. Surprisingly, our results show that the training behavior of ReLU networks differs qualitatively from that of networks with sigmoidal activations. In networks with K >= 3 sigmoidal hidden units, the transition is discontinuous: specialized network configurations co-exist and compete with states of poor performance even for very large training sets. In contrast, the use of ReLU activations results in continuous transitions for all K: for large enough training sets, two competing, differently specialized states display similar generalization abilities, which coincide exactly in the limit of large networks, K to infinity.
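
The setting can be made concrete with a small numerical sketch. The Python snippet below is not the authors' code; the input dimension N, the number of hidden units K, the learning rate, the number of epochs, and the training-set sizes P are all illustrative assumptions. It sets up a matching student-teacher scenario for a shallow soft-committee network, trains the student by plain gradient descent once with ReLU and once with a sigmoidal (erf) hidden-unit activation, and reports an estimate of the generalization error.

import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

N, K = 100, 3          # input dimension and number of hidden units (assumed)
eta = 0.05             # learning rate (assumed)
n_epochs = 200         # gradient steps over the full training set (assumed)

def g(x, kind):
    # Hidden-unit activation: ReLU or sigmoidal (erf).
    return np.maximum(x, 0.0) if kind == "relu" else erf(x / np.sqrt(2))

def forward(W, X, kind):
    # Soft-committee output: unweighted sum over the K hidden units.
    return g(X @ W.T, kind).sum(axis=1)

def generalization_error(W_student, W_teacher, kind, n_test=10000):
    # Estimate the generalization error on fresh isotropic Gaussian inputs.
    X = rng.standard_normal((n_test, N))
    diff = forward(W_student, X, kind) - forward(W_teacher, X, kind)
    return 0.5 * np.mean(diff ** 2)

def train(kind, P):
    # Matching student-teacher scenario: both networks have K hidden units.
    W_teacher = rng.standard_normal((K, N)) / np.sqrt(N)
    W_student = rng.standard_normal((K, N)) / np.sqrt(N)
    X = rng.standard_normal((P, N))
    y = forward(W_teacher, X, kind)            # teacher-generated labels
    for _ in range(n_epochs):
        pre = X @ W_student.T                  # (P, K) pre-activations
        delta = forward(W_student, X, kind) - y
        if kind == "relu":
            gprime = (pre > 0).astype(float)   # derivative of ReLU
        else:
            gprime = np.sqrt(2.0 / np.pi) * np.exp(-pre ** 2 / 2.0)  # d/dx erf(x/sqrt(2))
        grad = (delta[:, None] * gprime).T @ X / P
        W_student -= eta * grad                # gradient step on 0.5 * mean squared error
    return generalization_error(W_student, W_teacher, kind)

for kind in ("relu", "erf"):
    for P in (100, 500, 2000):                 # training-set sizes (assumed)
        print(f"{kind:5s}  P={P:5d}  eps_g={train(kind, P):.4f}")

This only illustrates the setup; observing the specialization transitions discussed in the paper would require sweeping P on a finer grid and averaging over many random teachers and initializations.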

Related research

01/20/2020
Memory capacity of neural networks with threshold and ReLU activations
Overwhelming theoretical and empirical evidence shows that mildly overpa...

01/16/2018
Empirical Explorations in Training Networks with Discrete Activations
We present extensive experiments training and testing hidden units in de...

08/10/2023
Optimizing Performance of Feedforward and Convolutional Neural Networks through Dynamic Activation Functions
Deep learning training algorithms are a huge success in recent ...

05/21/2020
Supervised Learning in the Presence of Concept Drift: A modelling framework
We present a modelling framework for the investigation of supervised lea...

06/12/2019
Semi-flat minima and saddle points by embedding neural networks to overparameterization
We theoretically study the landscape of the training error for neural ne...

10/20/2020
Smooth activations and reproducibility in deep networks
Deep networks are gradually penetrating almost every domain in our lives...

03/18/2019
On-line learning dynamics of ReLU neural networks using statistical physics techniques
We introduce exact macroscopic on-line learning dynamics of two-layer ne...
