Noisy Activation Functions

03/01/2016
by Caglar Gulcehre, et al.

Common nonlinear activation functions used in neural networks can cause training difficulties due to the saturation behavior of the activation function, which may hide dependencies that are not visible to vanilla-SGD (using first order gradients only). Gating mechanisms that use softly saturating activation functions to emulate the discrete switching of digital logic circuits are good examples of this. We propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradient. Large noise will dominate the noise-free gradient and allow stochastic gradient descent to explore more. By adding noise only to the problematic parts of the activation function, we allow the optimization procedure to explore the boundary between the degenerate (saturating) and the well-behaved parts of the activation function. We also establish connections to simulated annealing, when the amount of noise is annealed down, making it easier to optimize hard objective functions. We find experimentally that replacing such saturating activation functions by noisy variants helps training in many contexts, yielding state-of-the-art or competitive results on different datasets and tasks, especially when training seems to be the most difficult, e.g., when curriculum learning is necessary to obtain good results.
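
To make the idea concrete, below is a minimal NumPy sketch of a hard-tanh whose saturated region receives noise that can be annealed toward zero. This is an illustrative approximation of the general technique described in the abstract, not the paper's exact parameterization; the function name noisy_hard_tanh and the parameters c and anneal are assumptions introduced here for clarity.

```python
import numpy as np

def hard_tanh(x):
    """Hard-saturating tanh: identity on [-1, 1], flat (zero gradient) outside."""
    return np.clip(x, -1.0, 1.0)

def noisy_hard_tanh(x, c=0.5, anneal=1.0, rng=None):
    """Hypothetical sketch of a noisy saturating activation.

    Inside [-1, 1] the unit behaves exactly like hard_tanh. In the
    saturated region (|x| > 1), half-normal noise scaled by how far x has
    entered saturation is added, pointing back toward the linear region,
    so the output still depends on x even where the noiseless activation
    would yield zero gradient. Decaying `anneal` toward 0 over training
    recovers the deterministic activation, in the spirit of simulated
    annealing.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = hard_tanh(x)
    delta = x - h                       # zero in the linear part, grows with saturation depth
    sigma = c * anneal * np.abs(delta)  # noise scale: active only where hard_tanh saturates
    xi = np.abs(rng.standard_normal(np.shape(x)))  # half-normal noise
    # Noise pushes saturated outputs back toward the interior of [-1, 1].
    return h - np.sign(delta) * sigma * xi

# Example: saturated inputs now yield stochastic outputs near the boundary,
# while inputs in the linear region are left untouched.
x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(noisy_hard_tanh(x, rng=np.random.default_rng(0)))
```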

