hyper-sinh: An Accurate and Reliable Function from Shallow to Deep Learning in TensorFlow and Keras

11/15/2020
by Renfei Ma, et al.

This paper presents 'hyper-sinh', a variation of the m-arcsinh activation function suitable for Deep Learning (DL)-based algorithms for supervised learning, such as Convolutional Neural Networks (CNNs). Implemented in the open-source Python libraries TensorFlow and Keras, hyper-sinh is described and validated as an accurate and reliable activation function for both shallow and deep neural networks. Improvements in accuracy and reliability on image and text classification tasks across five (N = 5) benchmark data sets available from Keras are discussed. Evaluated against gold-standard activation functions, this novel function yields overall competitive classification performance for both shallow and deep neural networks on both image and text classification.
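As context for the kind of function described, below is a minimal NumPy sketch of a piecewise activation of this shape. The coefficients used here (sinh(x)/3 for positive inputs, x³/4 otherwise) are an assumption for illustration; the authoritative definition of hyper-sinh is given in the paper itself.

```python
import numpy as np

def hyper_sinh(x):
    """Sketch of a hyper-sinh-style piecewise activation.

    Assumed form (verify against the paper):
      sinh(x) / 3  for x > 0
      x**3 / 4     for x <= 0
    """
    x = np.asarray(x, dtype=float)
    # np.where evaluates both branches element-wise and selects per element.
    return np.where(x > 0, np.sinh(x) / 3.0, x ** 3 / 4.0)
```

In practice, tf.keras accepts arbitrary callables as layer activations, so a TensorFlow version of such a function could be passed directly, e.g. `Dense(64, activation=hyper_sinh)`, once rewritten in terms of TensorFlow ops.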


Related research:

- 09/16/2020, m-arcsinh: An Efficient and Reliable Function for SVM and MLP in scikit-learn
  This paper describes the 'm-arcsinh', a modified ('m-') version of the i...
- 12/11/2020, ALReLU: A different approach on Leaky ReLU activation function to improve Neural Networks Performance
  Despite the unresolved 'dying ReLU problem', the classical ReLU activati...
- 10/15/2020, QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics
  The ReLU activation function (AF) has been extensively applied in deep n...
- 10/21/2022, Stochastic Adaptive Activation Function
  The simulation of human neurons and neurotransmission mechanisms has bee...
- 11/08/2018, Activation Functions: Comparison of trends in Practice and Research for Deep Learning
  Deep neural networks have been successfully used in diverse emerging dom...
- 12/20/2021, Integral representations of shallow neural network with Rectified Power Unit activation function
  In this effort, we derive a formula for the integral representation of a...
- 04/13/2020, Topology of deep neural networks
  We study how the topology of a data set M = M_a ∪ M_b ⊆R^d, representing...
