Polynomial Implicit Neural Representations For Large Diverse Datasets

03/20/2023
by Rajhans Singh, et al.

Implicit neural representations (INR) have gained significant popularity for signal and image representation in many downstream tasks, such as super-resolution, 3D modeling, and more. Most INR architectures rely on sinusoidal positional encoding to capture high-frequency information in the data. However, the finite encoding size restricts the model's representational power, and higher representational power is needed to go from representing a single given image to representing large and diverse datasets. Our approach addresses this gap by representing an image with a polynomial function, eliminating the need for positional encodings. To achieve a progressively higher degree of polynomial representation, we apply element-wise multiplications between the features and affine-transformed coordinate locations after every ReLU layer. The proposed method is evaluated qualitatively and quantitatively on large datasets such as ImageNet. The resulting Poly-INR model performs comparably to state-of-the-art generative models without any convolution, normalization, or self-attention layers, and with far fewer trainable parameters. This combination of a small parameter count and high representational power paves the way for broader adoption of INR models in generative modeling tasks on complex domains. The code is available at <https://github.com/Rajhans0/Poly_INR>
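To make the core mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract: at each level, the features are multiplied element-wise with an affine transformation of the pixel coordinates, so stacking L levels yields (up to) a degree-L polynomial in the coordinates. All class and parameter names here are illustrative, not the authors' code; in the paper's generative setting the affine parameters are additionally produced from a latent code by a mapping network, whereas this sketch uses plain learnable weights for simplicity. See the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class PolyINRLayer(nn.Module):
    """One Poly-INR level: features are multiplied element-wise with an
    affine map of the coordinates, then passed through Linear + ReLU.
    Each level raises the polynomial degree in the coordinates by one."""
    def __init__(self, coord_dim: int, feat_dim: int):
        super().__init__()
        self.affine = nn.Linear(coord_dim, feat_dim)  # affine-transformed coordinates
        self.linear = nn.Linear(feat_dim, feat_dim)
        self.act = nn.ReLU()

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        feats = feats * self.affine(coords)  # element-wise coordinate injection
        return self.act(self.linear(feats))

class PolyINR(nn.Module):
    """Stack of Poly-INR levels mapping (x, y) coordinates to RGB values,
    with no positional encoding, convolution, normalization, or attention."""
    def __init__(self, coord_dim: int = 2, feat_dim: int = 64,
                 out_dim: int = 3, num_levels: int = 4):
        super().__init__()
        self.feat_dim = feat_dim
        self.levels = nn.ModuleList(
            [PolyINRLayer(coord_dim, feat_dim) for _ in range(num_levels)]
        )
        self.to_rgb = nn.Linear(feat_dim, out_dim)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) pixel locations, e.g. normalized to [-1, 1]^2
        feats = torch.ones(coords.shape[0], self.feat_dim, device=coords.device)
        for level in self.levels:
            feats = level(feats, coords)
        return self.to_rgb(feats)

# Usage: query RGB values at arbitrary coordinate locations.
model = PolyINR()
xy = torch.rand(1024, 2) * 2 - 1   # random coordinates in [-1, 1]^2
rgb = model(xy)                    # (1024, 3)
```

Because the image is a continuous function of the coordinates, the same trained model can be sampled on any grid, which is what enables resolution-free synthesis.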


Related research

- E-NeRV: Expedite Neural Video Representation with Disentangled Spatial-Temporal Context (07/17/2022). Recently, the image-wise implicit neural representation of videos, NeRV,...
- Verifiable and Energy Efficient Medical Image Analysis with Quantised Self-attentive Deep Neural Networks (09/30/2022). Convolutional Neural Networks have played a significant role in various...
- PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces (05/09/2023). A signed distance function (SDF) parametrized by an MLP is a common ingr...
- Shap-E: Generating Conditional 3D Implicit Functions (05/03/2023). We present Shap-E, a conditional generative model for 3D assets. Unlike...
- NeuroGF: A Neural Representation for Fast Geodesic Distance and Path Queries (06/01/2023). Geodesics are essential in many geometry processing applications. Howeve...
- Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring (11/22/2022). We present an effective and efficient method that explores the propertie...
- Polynomial Neural Fields for Subband Decomposition and Manipulation (02/09/2023). Neural fields have emerged as a new paradigm for representing signals, t...
