Learning Neural Activations

An artificial neuron is modelled as a weighted summation followed by an activation function that determines its output. A wide variety of activation functions, such as the rectified linear unit (ReLU), leaky ReLU, Swish, and Mish, has been explored in the literature. In this short paper, we explore what happens when the activation function of each neuron in an artificial neural network is learned from data alone. We achieve this by modelling the activation function of each neuron as a small neural network whose weights are shared by all neurons in the original network. We list our primary findings in the conclusions section. The code for our analysis is available at: https://github.com/amina01/Learning-Neural-Activations.
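As a rough sketch of this idea, the shared learned activation can be written as a tiny MLP applied elementwise to every pre-activation value. This is only an illustration based on the abstract, not the authors' implementation: the class name LearnedActivation, the hidden width of 4, and the tanh inner nonlinearity are all assumptions.

```python
# Minimal PyTorch sketch: every neuron's activation is a small neural
# network whose weights are shared across the whole model. Names and
# hyperparameters here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class LearnedActivation(nn.Module):
    """A small MLP applied elementwise; because one instance is reused
    everywhere, all neurons share the same learned activation shape."""
    def __init__(self, hidden_units: int = 4):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(1, hidden_units),
            nn.Tanh(),                      # assumed smooth inner nonlinearity
            nn.Linear(hidden_units, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Treat each scalar pre-activation as a separate input, pass it
        # through the shared sub-network, and restore the original shape.
        shape = x.shape
        return self.f(x.reshape(-1, 1)).reshape(shape)

# Usage: one shared instance replaces a fixed activation such as ReLU.
act = LearnedActivation()
model = nn.Sequential(nn.Linear(784, 128), act, nn.Linear(128, 10))
out = model(torch.randn(32, 784))           # -> torch.Size([32, 10])
```

Because the activation sub-network is trained jointly with the main network's weights, gradient descent shapes the nonlinearity itself rather than selecting it from a fixed menu.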


Related research

01/22/2018 · E-swish: Adjusting Activations to Different Network Depths
Activation functions have a notorious impact on neural networks on both ...

02/02/2020 · Non-linear Neurons with Human-like Apical Dendrite Activations
In order to classify linearly non-separable data, neurons are typically ...

02/11/2023 · Synaptic Stripping: How Pruning Can Bring Dead Neurons Back To Life
Rectified Linear Units (ReLU) are the default choice for activation func...

04/01/2021 · The Compact Support Neural Network
Neural networks are popular and useful in many fields, but they have the...

07/03/2023 · First Steps towards a Runtime Analysis of Neuroevolution
We consider a simple setting in neuroevolution where an evolutionary alg...

05/11/2019 · Deep Learning: a new definition of artificial neuron with double weight
Deep learning is a subset of a broader family of machine learning method...

01/15/2021 · A New Artificial Neuron Proposal with Trainable Simultaneous Local and Global Activation Function
The activation function plays a fundamental role in the artificial neura...
