Smooth activations and reproducibility in deep networks

10/20/2020
by Gil I. Shamir, et al.

Deep networks are gradually penetrating almost every domain of our lives due to their remarkable success. However, with substantial accuracy improvements comes the price of irreproducibility: two identical models, trained on the exact same training data, may exhibit large differences in predictions on individual examples even when their average accuracy is similar, especially when trained on highly parallel distributed systems. The popular Rectified Linear Unit (ReLU) activation has been key to the recent success of deep networks. We demonstrate, however, that ReLU is also a catalyst of irreproducibility in deep networks. We show that activations smoother than ReLU can provide not only better accuracy but also better accuracy-reproducibility tradeoffs. We propose a new family of activations, Smooth ReLU (SmeLU), designed to give such better tradeoffs while keeping the mathematical expression simple and the implementation cheap. SmeLU is monotonic and mimics ReLU while providing continuous gradients, yielding better reproducibility. We generalize SmeLU to give even more flexibility, and then demonstrate that SmeLU and its generalized form are special cases of a more general methodology of REctified Smooth Continuous Unit (RESCU) activations. Empirical results demonstrate the superior accuracy-reproducibility tradeoffs of smooth activations, SmeLU in particular.
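The abstract above does not spell out the functional form, so the following is only a rough NumPy sketch of the quadratic-blend piecewise form commonly associated with SmeLU (zero below -beta, a quadratic for |x| <= beta, identity above beta). The half-width parameter beta and the function names smelu and smelu_grad are illustrative assumptions, not taken from the text above.

```python
import numpy as np

def smelu(x, beta=1.0):
    """Sketch of a Smooth ReLU (SmeLU)-style activation (illustrative, not the
    paper's definition verbatim): 0 for x <= -beta, x for x >= beta, and a
    quadratic blend (x + beta)^2 / (4 * beta) in between, which removes ReLU's
    kink at 0 and makes the function continuously differentiable."""
    x = np.asarray(x, dtype=float)
    return np.where(
        x <= -beta,
        0.0,
        np.where(x >= beta, x, (x + beta) ** 2 / (4.0 * beta)),
    )

def smelu_grad(x, beta=1.0):
    """Gradient of the sketch above: 0 below -beta, 1 above beta, and a linear
    ramp (x + beta) / (2 * beta) in between -- continuous everywhere."""
    x = np.asarray(x, dtype=float)
    return np.where(
        x <= -beta,
        0.0,
        np.where(x >= beta, 1.0, (x + beta) / (2.0 * beta)),
    )

if __name__ == "__main__":
    xs = np.linspace(-2.0, 2.0, 9)
    print(np.round(smelu(xs), 4))       # smooth ramp instead of ReLU's hard corner
    print(np.round(smelu_grad(xs), 4))  # gradient rises continuously from 0 to 1
```

The point of the continuous gradient is that tiny differences in pre-activations (e.g., from nondeterministic distributed training) no longer flip a unit between "fully on" and "fully off", which is one intuition for the reproducibility gains described above.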

Related research

Real World Large Scale Recommendation Systems Reproducibility and Smooth Activations (02/14/2022)
Real world recommendation systems influence a constantly growing set of ...

Anti-Distillation: Improving reproducibility of deep networks (10/19/2020)
Deep networks have been revolutionary in improving performance of machin...

Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare (11/24/2020)
Classification of biological images is an important task with crucial ap...

Deep ReLU Networks Have Surprisingly Few Activation Patterns (06/03/2019)
The success of deep networks has been attributed in part to their expres...

Hidden Unit Specialization in Layered Neural Networks: ReLU vs. Sigmoidal Activation (10/16/2019)
We study layered neural networks of rectified linear units (ReLU) in a m...

Rotate the ReLU to implicitly sparsify deep networks (06/01/2022)
In the era of Deep Neural Network based solutions for a variety of real-...

Function approximation by deep networks (05/30/2019)
We show that deep networks are better than shallow networks at approxima...
