The nlogistic-sigmoid function

08/06/2020
by Oluwasegun A. Somefun, et al.

The variants of the logistic-sigmoid function used in artificial neural networks are, by definition, prone to vanishing gradients. Defining the logistic-sigmoid function as an n-times repeated mapping over a finite input-output range can significantly reduce this limitation. Here we propose the nlogistic-sigmoid function as a generalization of the logistic-sigmoid family. Our results demonstrate that, by its definition, the nlogistic-sigmoid function mitigates vanishing gradients: it outperforms both the classic logistic-sigmoid function and the rectified linear unit in learning and generalization on two representative case studies. We anticipate that this function will become the sigmoid activation function of choice for deep learning.
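For context, the vanishing-gradient behaviour of the classic logistic sigmoid follows directly from its derivative, which is bounded by 0.25 and decays quickly away from the origin. The sketch below illustrates only this well-known baseline; it does not reproduce the paper's n-repeated generalization, whose exact form is given in the full text.

```python
import math

def sigmoid(x):
    # Classic logistic sigmoid: sigma(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative: sigma'(x) = sigma(x) * (1 - sigma(x)),
    # which peaks at 0.25 when x = 0 and vanishes for |x| large.
    s = sigmoid(x)
    return s * (1.0 - s)

# The gradient shrinks rapidly away from the origin -- the vanishing
# behaviour that the proposed nlogistic-sigmoid aims to reduce.
for x in (0.0, 2.0, 5.0, 10.0):
    print(f"x = {x:5.1f}   grad = {sigmoid_grad(x):.6f}")
```

At x = 10 the gradient is already below 1e-4, which is why deep stacks of such units learn slowly once their pre-activations saturate.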


Related research

- 09/03/2018  PLU: The Piecewise Linear Unit Activation Function
  Successive linear transforms followed by nonlinear "activation" function...
- 05/21/2018  On the Selection of Initialization and Activation Function for Deep Neural Networks
  The weight initialization and the activation function of deep neural net...
- 10/02/2019  The option pricing model based on time values: an application of the universal approximation theory on unbounded domains
  Hutchinson, Lo and Poggio raised the question that if learning works can...
- 03/19/2018  Deep learning improved by biological activation functions
  `Biologically inspired' activation functions, such as the logistic sigmo...
- 05/17/2021  Activation function design for deep networks: linearity and effective initialisation
  The activation function deployed in a deep neural network has great infl...
- 07/06/2016  A Modified Activation Function with Improved Run-Times For Neural Networks
  In this paper we present a modified version of the Hyperbolic Tangent Ac...
- 04/26/2022  Self-scalable Tanh (Stan): Faster Convergence and Better Generalization in Physics-informed Neural Networks
  Physics-informed Neural Networks (PINNs) are gaining attention in the en...
