
Activation Functions in Artificial Neural Networks: A Systematic Overview

by Johannes Lederer, et al.

Activation functions shape the outputs of artificial neurons and are, therefore, integral parts of neural networks in general and deep learning in particular. Some activation functions, such as the logistic function and ReLU, have been used for many decades. But with deep learning becoming a mainstream research topic, new activation functions have mushroomed, leading to confusion in both theory and practice. This paper provides an analytic yet up-to-date overview of popular activation functions and their properties, which makes it a timely resource for anyone who studies or applies neural networks.
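As a minimal sketch of the two long-standing activation functions the abstract names, the logistic (sigmoid) function and ReLU can be written as plain scalar functions; the function names here are illustrative, not taken from the paper:

```python
import math

def logistic(x: float) -> float:
    # Logistic (sigmoid): squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x: float) -> float:
    # ReLU: passes positive inputs unchanged, zeroes out negatives
    return max(0.0, x)
```

For example, `logistic(0.0)` returns `0.5`, while `relu(-3.0)` returns `0.0` and `relu(2.0)` returns `2.0`.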


