Introducing the DOME Activation Functions

09/30/2021
by Mohamed E. Hussein, et al.

In this paper, we introduce a novel non-linear activation function that spontaneously induces class-compactness and regularization in the embedding space of neural networks. The function is dubbed DOME, for Difference Of Mirrored Exponential terms. In its basic form, the function can replace the sigmoid or the hyperbolic tangent as the output activation function for binary classification problems. It can also be extended to multi-class classification as an alternative to the standard softmax function, and further generalized to take more flexible shapes suitable for the intermediate layers of a network. In this version of the paper, we only introduce the concept; experimental evaluation will be added in a subsequent version.
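The abstract does not spell out the functional form, but the name suggests an odd function built as g(x) - g(-x) with an exponential g. The Python sketch below assumes one plausible Gaussian-style instantiation with a hypothetical offset parameter mu; it is a minimal illustration of the general idea, not the paper's exact definition.

    import numpy as np

    def dome(x, mu=1.0):
        # Illustrative DOME-style activation: two exponential "bumps"
        # mirrored about the origin and subtracted. The exact form in
        # the paper may differ; mu is a hypothetical offset parameter.
        return np.exp(-(x - mu) ** 2) - np.exp(-(x + mu) ** 2)

    # Like tanh, this function is odd and bounded, so it can serve as an
    # output activation for binary classification (e.g., thresholding at
    # zero). Unlike tanh, it peaks near x = +/- mu and decays toward zero
    # for large |x|, which would pull pre-activations toward two compact
    # clusters, consistent with the class-compactness claim above.
    x = np.linspace(-4.0, 4.0, 9)
    print(np.round(dome(x), 3))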


