Trading Positional Complexity vs. Deepness in Coordinate Networks

05/18/2022
by Jianqiao Zheng, et al.

It is well noted that coordinate-based MLPs benefit, in terms of preserving high-frequency information, from encoding coordinate positions as an array of Fourier features. To date, the rationale for the effectiveness of these positional encodings has been studied mainly through a Fourier lens. In this paper, we strive to broaden this understanding by showing that alternative non-Fourier embedding functions can indeed be used for positional encoding. Moreover, we show that their performance is entirely determined by a trade-off between the stable rank of the embedded matrix and the distance preservation between embedded coordinates. We further establish that the now-ubiquitous Fourier feature mapping of position is a special case that fulfills these conditions. Consequently, we present a more general theory to analyze positional encoding in terms of shifted basis functions. In addition, we argue that a more complex positional encoding, one that scales exponentially with the number of modes, requires only a linear (rather than deep) coordinate function to achieve comparable performance. Counter-intuitively, we demonstrate that trading positional embedding complexity for network deepness is orders of magnitude faster than the current state of the art, despite the additional embedding complexity. To this end, we develop the necessary theoretical formulae and empirically verify that our theoretical claims hold in practice.
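To make the two sides of this trade-off concrete, here is a minimal NumPy sketch of a standard Fourier-feature positional encoding that measures (i) the stable rank of the embedding matrix, using the usual definition ||A||_F^2 / ||A||_2^2, and (ii) a simple distance-preservation score, the correlation between pairwise coordinate distances and pairwise embedding distances. The sampling grid, octave-spaced frequencies, and function names are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def fourier_features(x, freqs):
    """Embed 1-D coordinates x (shape (N,)) with K frequencies -> (N, 2K)."""
    angles = 2.0 * np.pi * np.outer(x, freqs)        # (N, K) phase matrix
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

def stable_rank(A):
    """Stable rank ||A||_F^2 / ||A||_2^2: a soft, noise-robust proxy for rank."""
    return np.sum(A ** 2) / np.linalg.norm(A, ord=2) ** 2

x = np.linspace(0.0, 1.0, 256)                       # sampled input coordinates
freqs = 2.0 ** np.arange(8)                          # octave-spaced modes (assumed)
E = fourier_features(x, freqs)                       # (256, 16) embedding matrix

# Distance preservation: how well do embedding distances track input distances?
d_in = np.abs(x[:, None] - x[None, :]).ravel()
d_emb = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1).ravel()

print(f"stable rank: {stable_rank(E):.2f} (embedding width {E.shape[1]})")
print(f"distance correlation: {np.corrcoef(d_in, d_emb)[0, 1]:.3f}")
```

Adding modes raises the stable rank of E, while distance preservation between embedded coordinates generally moves in the opposite direction; per the abstract, it is the balance of these two quantities, not the Fourier form itself, that determines performance, and a sufficiently rich embedding lets a linear (rather than deep) coordinate function do the rest.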

