Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs

11/30/2021
by Sameera Ramasinghe, et al.

Coordinate-MLPs are emerging as an effective tool for modeling multidimensional continuous signals, overcoming many of the drawbacks associated with discrete grid-based approximations. However, coordinate-MLPs with ReLU activations, in their rudimentary form, perform poorly at representing signals with high fidelity, prompting the need for positional embedding layers. Recently, Sitzmann et al. proposed a sinusoidal activation function that allows coordinate-MLPs to omit positional embeddings while still preserving high signal fidelity. Despite its potential, ReLU still dominates the space of coordinate-MLPs; we speculate that this is due to the hypersensitivity of networks that employ such sinusoidal activations to the choice of initialization scheme. In this paper, we attempt to broaden the current understanding of the effect of activations in coordinate-MLPs, and show that there exists a broader class of activations suitable for encoding signals. We show that sinusoidal activations are only a single example in this class, and propose several non-periodic functions that empirically demonstrate more robust performance against random initializations than sinusoids. Finally, we advocate a shift towards coordinate-MLPs that employ these non-traditional activation functions, owing to their high performance and simplicity.
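For concreteness, below is a minimal sketch (not the authors' reference implementation) of a coordinate-MLP whose hidden-layer activation can be swapped between a periodic sinusoid and a Gaussian, one representative non-periodic alternative in the spirit of the abstract; the layer widths, frequency `omega_0`, Gaussian scale `sigma`, and initialization are illustrative assumptions only.

```python
import numpy as np

# Illustrative activations; omega_0 and sigma are assumed hyperparameters.
def sine(x, omega_0=30.0):
    # Periodic activation, as in sinusoidal coordinate-MLPs.
    return np.sin(omega_0 * x)

def gaussian(x, sigma=0.1):
    # Non-periodic bump activation: exp(-x^2 / (2 * sigma^2)).
    return np.exp(-0.5 * (x / sigma) ** 2)

def coordinate_mlp(coords, weights, biases, activation):
    """Forward pass of a small MLP mapping coordinates (N, d_in)
    to signal values (N, d_out); the output layer stays linear."""
    h = coords
    for W, b in zip(weights[:-1], biases[:-1]):
        h = activation(h @ W + b)
    return h @ weights[-1] + biases[-1]

# Example: map 2-D pixel coordinates to RGB values with random weights.
rng = np.random.default_rng(0)
dims = [2, 256, 256, 3]
weights = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
           for m, n in zip(dims[:-1], dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]
coords = rng.uniform(-1.0, 1.0, size=(1024, 2))

rgb_sine = coordinate_mlp(coords, weights, biases, sine)       # periodic activation
rgb_gauss = coordinate_mlp(coords, weights, biases, gaussian)  # non-periodic activation
```

In this sketch the two variants differ only in the activation passed to the forward pass; the abstract's claim is that non-periodic choices of this kind are less sensitive to how the weights are initialized than the sinusoidal variant.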


Related research

- 01/22/2018 · E-swish: Adjusting Activations to Different Network Depths
- 06/04/2020 · Overcoming Overfitting and Large Weight Update Problem in Linear Rectifiers: Thresholded Exponential Rectified Linear Units
- 02/22/2021 · Elementary superexpressive activations
- 05/24/2022 · Imposing Gaussian Pre-Activations in a Neural Network
- 07/28/2022 · PEA: Improving the Performance of ReLU Networks for Free by Using Progressive Ensemble Activations
- 05/18/2022 · Trading Positional Complexity vs. Deepness in Coordinate Networks
- 04/08/2021 · Modulated Periodic Activations for Generalizable Local Functional Representations
