Moderate Adaptive Linear Units (MoLU)

02/27/2023
by Hankyul Koh, et al.

We propose a new high-performance activation function, Moderate Adaptive Linear Units (MoLU), for deep neural networks. MoLU is a simple, beautiful and powerful activation function that can serve as a good main activation function among the hundreds of activation functions that have been proposed. Because MoLU is composed of elementary functions, it is not only an infinite diffeomorphism (i.e., smooth and infinitely differentiable over the whole domain), but it also decreases training time.
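As a rough illustration of how an elementary-function activation such as MoLU can be dropped into an existing network, the sketch below wraps a custom elementwise activation as a PyTorch module and checks that gradients flow through it. The exact MoLU formula is defined in the paper and is not reproduced here; the body of `forward` uses the Mish composition x * tanh(softplus(x)) purely as a stand-in, so the formula, the class name, and the sanity check are assumptions for illustration only.

```python
# Hedged sketch: wiring a smooth, elementary-function activation into PyTorch.
# NOTE: the expression in `forward` is a PLACEHOLDER (the Mish formula), NOT the
# MoLU definition from the paper; consult the paper for the actual form.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CustomSmoothActivation(nn.Module):
    """Elementwise activation built from elementary functions (placeholder)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Placeholder composition of elementary functions; smooth everywhere.
        return x * torch.tanh(F.softplus(x))  # stand-in, not the published MoLU


if __name__ == "__main__":
    # Sanity check: the activation is differentiable over the whole domain,
    # so autograd produces finite gradients without special handling.
    act = CustomSmoothActivation()
    x = torch.linspace(-3.0, 3.0, steps=7, requires_grad=True)
    act(x).sum().backward()
    print("activation:", act(x.detach()))
    print("gradient:  ", x.grad)
```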

Related research:

- Learning Activation Functions to Improve Deep Neural Networks (12/21/2014): Artificial neural networks typically have a fixed, non-linear activation...
- Stochastic Adaptive Activation Function (10/21/2022): The simulation of human neurons and neurotransmission mechanisms has bee...
- Activation function design for deep networks: linearity and effective initialisation (05/17/2021): The activation function deployed in a deep neural network has great infl...
- Breaking the Activation Function Bottleneck through Adaptive Parameterization (05/22/2018): Standard neural network architectures are non-linear only by virtue of a...
- Gradient Acceleration in Activation Functions (06/26/2018): Dropout has been one of standard approaches to train deep neural network...
- A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks (02/03/2016): We present the soft exponential activation function for artificial neura...
- Activation functions are not needed: the ratio net (05/14/2020): The function approximator that finds the function mapping the feature to...
