ProbAct: A Probabilistic Activation Function for Deep Neural Networks

05/26/2019
by Joonho Lee, et al.

Activation functions play an important role in training artificial neural networks, and the Rectified Linear Unit (ReLU) has been the mainstream choice in recent years. Most activation functions in current use are deterministic: their input-output relationship is fixed. In this work, we propose a probabilistic activation function called ProbAct. The output of ProbAct is sampled from a normal distribution whose mean equals the ReLU output and whose variance is either fixed or trainable for each element. In the trainable variant, the variance of the activation distribution is learned through back-propagation. We also show that the stochastic perturbation introduced by ProbAct is a viable regularization technique that helps prevent overfitting. Our experiments demonstrate that ProbAct improves image classification performance on the CIFAR-10, CIFAR-100, and STL-10 datasets.
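The idea is simple enough to sketch in a few lines. The following PyTorch snippet is a minimal illustration, not the authors' reference implementation: it assumes a single trainable scalar sigma (the abstract also mentions fixed and per-element variances) and falls back to the deterministic ReLU mean at evaluation time, both of which are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class ProbAct(nn.Module):
    """Minimal ProbAct-style activation sketch: y ~ N(ReLU(x), sigma^2)."""

    def __init__(self, init_sigma: float = 1.0, trainable: bool = True):
        super().__init__()
        sigma = torch.tensor(float(init_sigma))
        if trainable:
            # Trainable variant: sigma is learned through back-propagation.
            self.sigma = nn.Parameter(sigma)
        else:
            # Fixed-variance variant: sigma is a constant buffer.
            self.register_buffer("sigma", sigma)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = torch.relu(x)
        if self.training:
            # Reparameterization: mean + sigma * eps keeps the sample
            # differentiable with respect to sigma.
            eps = torch.randn_like(mean)
            return mean + self.sigma * eps
        # Assumed test-time behavior: use the deterministic mean (plain ReLU).
        return mean
```

Dropping such a layer into a network is a one-line change, e.g. nn.Sequential(nn.Linear(128, 64), ProbAct(), nn.Linear(64, 10)); the stochastic perturbation during training is what provides the regularization effect described above.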

Related research

04/28/2020 - Trainable Activation Function in Image Classification
In the current research of neural networks, the activation function is m...

04/28/2020 - Trainable Activation Function Supported CNN in Image Classification
In the current research of neural networks, the activation function is m...

01/15/2022 - Phish: A Novel Hyper-Optimizable Activation Function
Deep-learning models estimate values using backpropagation. The activati...

01/18/2021 - Learning DNN networks using un-rectifying ReLU with compressed sensing application
The un-rectifying technique expresses a non-linear point-wise activation...

09/24/2018 - Dynamical Isometry is Achieved in Residual Networks in a Universal Way for any Activation Function
We demonstrate that in residual neural networks (ResNets) dynamical isom...

07/31/2020 - An Investigation on Deep Learning with Beta Stabilizer
Artificial neural networks (ANN) have been used in many applications suc...

10/22/2021 - Logical Activation Functions: Logit-space equivalents of Boolean Operators
Neuronal representations within artificial neural networks are commonly ...
