Formalising the Use of the Activation Function in Neural Inference

02/02/2021
by Dalton A R Sakthivadivel, et al.

We investigate how activation functions can be used to describe neural firing in an abstract way, and, in turn, why they work well in artificial neural networks. We discuss how a spike in a biological neurone belongs to a particular universality class of phase transitions in statistical physics. We then show that the artificial neurone is, mathematically, a mean field model of biological neural membrane dynamics, which arises from modelling spiking as a phase transition. This allows us to treat selective neural firing in an abstract way and to formalise the role of the activation function in perceptron learning. Along with deriving this model and specifying the analogous neural case, we analyse the phase transition to understand the physics of neural network learning. Together, we show that there is not only a biological meaning, but a physical justification, for the emergence and performance of canonical activation functions; implications for neural learning and inference are also discussed.
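As an illustrative aside, and not the paper's own derivation: the mean field picture the abstract invokes can be seen in the textbook mean-field Ising model, where the magnetisation m obeys the self-consistency relation m = tanh(β(Jzm + h)). Reading the external field h as the neurone's input and m as its response, the input-output curve below the critical coupling is a smooth tanh (a canonical activation function), while above it the response sharpens into a near-discontinuous jump, i.e. a phase transition. The sketch below (parameter values are purely illustrative) iterates that relation to a fixed point:

```python
import numpy as np

def mean_field_response(h, beta=1.0, Jz=0.5, iters=400):
    """Solve the mean-field Ising self-consistency m = tanh(beta*(Jz*m + h))
    by fixed-point iteration, for an external field (input) h."""
    m = 1e-6  # seed slightly off zero so the iteration can leave the unstable m = 0 point
    for _ in range(iters):
        m = np.tanh(beta * (Jz * m + h))
    return m

inputs = np.linspace(-2.0, 2.0, 41)
weak = [mean_field_response(h, Jz=0.5) for h in inputs]    # sub-critical: smooth, tanh-like curve
strong = [mean_field_response(h, Jz=1.5) for h in inputs]  # super-critical: near-step response

for h, w, s in zip(inputs[::10], weak[::10], strong[::10]):
    print(f"h = {h:+.1f}   weak-coupling m = {w:+.3f}   strong-coupling m = {s:+.3f}")
```

In the strong-coupling regime the seed determines which branch the iteration lands on at h = 0, mirroring the spontaneous symmetry breaking that accompanies the transition; the weak-coupling curve, by contrast, is exactly the graded, sigmoid-shaped response familiar from artificial neurones.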
