Nish: A Novel Negative Stimulated Hybrid Activation Function

10/17/2022
by Yildiray Anagun, et al.

An activation function has a significant impact on the efficiency and robustness of neural networks. As an alternative to existing functions, we propose a novel non-monotonic activation function, the Negative Stimulated Hybrid Activation Function (Nish). It behaves like a Rectified Linear Unit (ReLU) in the positive region and like a sinus-sigmoidal function in the negative region; in other words, it combines a sigmoid and a sine function, gaining new dynamics over the classical ReLU. We analyzed the consistency of Nish across different combinations of widely used network architectures and common activation functions on several popular benchmarks. The experimental results show that the classification accuracy achieved by Nish is slightly better than that of Mish.
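The abstract does not state the closed-form definition of Nish, but its piecewise structure (identity/ReLU behaviour for positive inputs, a combined sine-sigmoid response for negative inputs) can be sketched as below. The negative-branch expression sigmoid(x) * sin(x) and the helper name nish are illustrative assumptions, not the exact formulation from the paper.

```python
import torch

def nish(x: torch.Tensor) -> torch.Tensor:
    """Sketch of a Nish-style piecewise activation.

    Positive region: identity, i.e. ReLU behaviour.
    Negative region: a sine-sigmoid combination; the expression used
    here (sigmoid(x) * sin(x)) is an assumed illustrative form, not
    the formula given in the paper.
    """
    negative_branch = torch.sigmoid(x) * torch.sin(x)  # assumed sinus-sigmoidal branch
    return torch.where(x >= 0, x, negative_branch)

# Usage example: both branches evaluate to 0 at x = 0, so the sketch is continuous there.
x = torch.linspace(-5.0, 5.0, steps=11)
print(nish(x))
```

Like Mish and Swish, such a negative branch is bounded and non-monotonic, which is the property the paper highlights over the hard zero cut-off of classical ReLU.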


Related research

12/15/2018  Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
Activation functions are essential for deep learning methods to learn an...

08/23/2019  Mish: A Self Regularized Non-Monotonic Neural Activation Function
The concept of non-linearity in a Neural Network is introduced by an act...

11/06/2020  Parametric Flatten-T Swish: An Adaptive Non-linear Activation Function For Deep Learning
Activation function is a key component in deep learning that performs no...

12/11/2020  ALReLU: A different approach on Leaky ReLU activation function to improve Neural Networks Performance
Despite the unresolved 'dying ReLU problem', the classical ReLU activati...

11/27/2021  Why KDAC? A general activation function for knowledge discovery
Named entity recognition based on deep learning (DNER) can effectively m...

05/24/2022  Constrained Monotonic Neural Networks
Deep neural networks are becoming increasingly popular in approximating ...

07/26/2018  Effectiveness of Scaled Exponentially-Regularized Linear Units (SERLUs)
Recently, self-normalizing neural networks (SNNs) have been proposed wit...
