A survey on modern trainable activation functions

05/02/2020 ∙ by Andrea Apicella, et al. ∙ 0

In the literature, there is strong interest in identifying and defining activation functions that can improve neural network performance. In recent years there has been renewed interest from the scientific community in investigating activation functions that can be trained during the learning process, usually referred to as trainable, learnable, or adaptable activation functions. They appear to lead to better network performance. Diverse and heterogeneous models of trainable activation functions have been proposed in the literature. In this paper, we present a survey of these models. Starting from a discussion of the use of the term "activation function" in the literature, we propose a taxonomy of trainable activation functions, highlight common and distinctive properties of recent and past models, and discuss the main advantages and limitations of this type of approach. We show that many of the proposed approaches are equivalent to adding neuron layers that use fixed (nontrainable) activation functions, with some simple local rule constraining the corresponding weight layers.
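To make the abstract's final claim concrete, here is a minimal sketch (not taken from the paper) using PReLU, a well-known trainable activation with a learnable slope `a`. The same function can be rewritten as a tiny subnetwork of fixed ReLU units whose output weights are constrained to `[1, -a]`, illustrating the stated equivalence between trainable activations and extra fixed-activation layers with weight constraints:

```python
import numpy as np

# PReLU: f(x) = max(0, x) + a * min(0, x), where the slope `a` is learned.
def prelu(x, a):
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

# The same function expressed as a small subnetwork of FIXED activations:
# a hidden layer of two ReLU units (input weights +1 and -1) followed by
# an output layer whose weights are constrained to [1, -a]. Only `a` is
# trainable; the activation functions themselves stay fixed.
def prelu_as_fixed_subnet(x, a):
    relu = lambda z: np.maximum(0.0, z)
    h = np.stack([relu(x), relu(-x)])   # hidden layer: fixed ReLUs
    w = np.array([1.0, -a])             # constrained output weights
    return w @ h

x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(prelu(x, 0.25), prelu_as_fixed_subnet(x, 0.25))
```

The two formulations agree for every input: for negative `x` only the second ReLU fires, and its weight `-a` recovers the slope `a`; for positive `x` only the first fires with weight 1.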
