Implicit Neural Representations with Periodic Activation Functions

06/17/2020
by Vincent Sitzmann, et al.

Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.
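The core idea above — an MLP with sine activations, `sin(ω₀·Wx + b)`, paired with a uniform initialization whose bounds keep activation statistics stable across layers — can be illustrated with a minimal NumPy sketch. This is not the authors' released PyTorch implementation; layer sizes, the frequency factor `ω₀ = 30`, and the forward pass below follow the scheme described in the abstract (first layer initialized in `U(-1/n, 1/n)`, hidden layers in `U(-√(6/n)/ω₀, √(6/n)/ω₀)`), but biases and training are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_init(fan_in, fan_out, is_first, omega_0=30.0):
    # Principled initialization: first layer ~ U(-1/n, 1/n);
    # hidden layers ~ U(-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0).
    bound = 1.0 / fan_in if is_first else np.sqrt(6.0 / fan_in) / omega_0
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

def siren_forward(x, weights, omega_0=30.0):
    # Every layer but the last applies the periodic activation
    # sin(omega_0 * (x @ W)); the final layer is a plain linear map.
    for W in weights[:-1]:
        x = np.sin(omega_0 * (x @ W))
    return x @ weights[-1]

# A tiny Siren mapping 2-D coordinates to a scalar signal value,
# e.g. for fitting a grayscale image as a continuous function.
layer_specs = [(2, 64, True), (64, 64, False), (64, 1, False)]
weights = [siren_init(fi, fo, first) for fi, fo, first in layer_specs]

# Query the representation on an 8x8 grid of coordinates in [-1, 1]^2.
grid = np.linspace(-1.0, 1.0, 8)
coords = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)
out = siren_forward(coords, weights)
print(out.shape)  # (64, 1)
```

Because `sin` is smooth, every spatial derivative of this network is itself a Siren-like network, which is what lets the representation supervise or fit signal derivatives (gradients, Laplacians) — the property the abstract highlights for solving boundary value problems.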


