Recurrent Rational Networks
Latest insights from biology show that intelligence does not emerge solely from the connections between neurons, but that individual neurons shoulder more computational responsibility than previously assumed. Current Neural Network architecture design and search are biased toward fixed activation functions. Equipping Neural Networks with more advanced, learnable activation functions provides them with higher learning capacity; however, general guidance for building such networks is still missing. In this work, we first explain why rational functions offer an optimal choice of activation function. We then show that they are closed under residual connections and, inspired by the recurrence of residual networks, derive a self-regularized version of Rationals: Recurrent Rationals. We demonstrate that (Recurrent) Rational Networks yield substantial performance improvements on Image Classification and Deep Reinforcement Learning.
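To make the idea of a learnable rational activation concrete, the following is a minimal sketch (not the authors' implementation): an activation R(x) = P(x)/Q(x) whose polynomial coefficients are trained alongside the network weights. The degrees (3, 2), the initialization, and the absolute-value safeguard in the denominator are assumptions made here for illustration.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational activation R(x) = P(x) / Q(x) (illustrative sketch)."""

    def __init__(self, num_degree=3, den_degree=2):
        super().__init__()
        # Learnable coefficients of the numerator and denominator polynomials.
        self.p = nn.Parameter(torch.randn(num_degree + 1) * 0.1)
        self.q = nn.Parameter(torch.randn(den_degree) * 0.1)

    def forward(self, x):
        # Numerator: sum_j p_j * x^j
        numerator = sum(c * x**j for j, c in enumerate(self.p))
        # Denominator: 1 + |sum_j q_j * x^(j+1)| keeps Q(x) > 0,
        # a common safeguard against poles (an assumption here).
        denominator = 1.0 + torch.abs(
            sum(c * x**(j + 1) for j, c in enumerate(self.q))
        )
        return numerator / denominator

# Usage: drop in like any other activation.
act = RationalActivation()
y = act(torch.randn(4, 8))
```

Because the coefficients are ordinary parameters, such an activation can be shared per layer or per network and is optimized by the same gradient descent as the rest of the model.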