ACORN: Adaptive Coordinate Networks for Neural Scene Representation

05/06/2021
by Julien N. P. Martel, et al.

Neural representations have emerged as a new paradigm for applications in rendering, imaging, geometric modeling, and simulation. Compared to traditional representations such as meshes, point clouds, or volumes, they can be flexibly incorporated into differentiable learning-based pipelines. While recent improvements to neural representations now make it possible to represent signals with fine details at moderate resolutions (e.g., for images and 3D shapes), adequately representing large-scale or complex scenes has proven a challenge. Current neural representations fail to accurately represent images at resolutions greater than a megapixel or 3D scenes with more than a few hundred thousand polygons. Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest. Our approach uses a multiscale block-coordinate decomposition, similar to a quadtree or octree, that is optimized during training. The network architecture operates in two stages: using the bulk of the network parameters, a coordinate encoder generates a feature grid in a single forward pass. Then, hundreds or thousands of samples within each block can be efficiently evaluated using a lightweight feature decoder. With this hybrid implicit-explicit network architecture, we demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio. Notably, this represents an increase in scale of over 1000x compared to the resolution of previously demonstrated image-fitting experiments. Moreover, our approach is able to represent 3D shapes significantly faster and better than previous techniques; it reduces training times from days to hours or minutes and memory requirements by over an order of magnitude.
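To make the two-stage design concrete, below is a minimal sketch of a block-coordinate network for the 2D (image) case. The class name, layer widths, grid resolution, and the use of bilinear grid_sample interpolation are illustrative assumptions, not the authors' released implementation; the adaptive quadtree subdivision that decides block sizes is omitted.

```python
# Minimal sketch of the hybrid implicit-explicit idea described above
# (assumed 2D/image case). Names, sizes, and the interpolation scheme are
# illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlockCoordinateNetwork(nn.Module):
    def __init__(self, feat_dim=16, grid_res=8, out_dim=3):
        super().__init__()
        # Coordinate encoder: holds the bulk of the parameters; maps a block's
        # global coordinate (x, y, scale) to a low-resolution feature grid in
        # a single forward pass per block.
        self.encoder = nn.Sequential(
            nn.Linear(3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, feat_dim * grid_res * grid_res),
        )
        # Lightweight feature decoder: evaluated once per sample in a block.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )
        self.feat_dim, self.grid_res = feat_dim, grid_res

    def forward(self, block_coords, local_coords):
        # block_coords: (B, 3) global position and scale of each quadtree block
        # local_coords: (B, N, 2) sample positions inside each block, in [-1, 1]
        B, N, _ = local_coords.shape
        grid = self.encoder(block_coords).view(
            B, self.feat_dim, self.grid_res, self.grid_res)
        # Bilinearly interpolate block features at the sample locations, then
        # decode each interpolated feature into an output value (e.g., RGB).
        feats = F.grid_sample(grid, local_coords.unsqueeze(2), align_corners=True)
        feats = feats.squeeze(-1).permute(0, 2, 1)  # (B, N, feat_dim)
        return self.decoder(feats)                  # (B, N, out_dim)


# Example: 4 blocks with 1,024 samples each -> output of shape (4, 1024, 3).
net = BlockCoordinateNetwork()
rgb = net(torch.rand(4, 3), torch.rand(4, 1024, 2) * 2 - 1)
```

In this sketch, evaluating 1,024 samples in a block costs one encoder pass plus 1,024 cheap decoder evaluations, which is why amortizing the expensive coordinate encoder over many samples per block pays off.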

Related research

02/07/2022 · MINER: Multiscale Implicit Neural Representations
We introduce a new neural signal representation designed for the efficie...

12/09/2021 · BACON: Band-limited Coordinate Networks for Multiscale Scene Representation
Coordinate-based networks have emerged as a powerful tool for 3D represe...

01/28/2022 · CoordX: Accelerating Implicit Neural Representation with a Split MLP Architecture
Implicit neural representations with multi-layer perceptrons (MLPs) have...

07/08/2022 · Neural Implicit Dictionary via Mixture-of-Expert Training
Representing visual signals by coordinate-based deep fully-connected net...

01/27/2023 · A Comparison of Tiny-nerf versus Spatial Representations for 3d Reconstruction
Neural rendering has emerged as a powerful paradigm for synthesizing ima...

02/10/2023 · Deep Learning on Implicit Neural Representations of Shapes
Implicit Neural Representations (INRs) have emerged in the last few year...

05/15/2023 · Curvature-Aware Training for Coordinate Networks
Coordinate networks are widely used in computer vision due to their abil...
