Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

01/16/2022
by Thomas Müller et al.

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
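
As a concrete illustration of the encoding the abstract describes, below is a minimal NumPy sketch of a multiresolution hash lookup: each level hashes the integer grid corners surrounding a query point into a small table of trainable feature vectors, interpolates them, and the per-level results are concatenated before being fed to a small MLP. The function names, the toy parameters, and the choice to hash at every level (the paper maps coarse levels one-to-one when the grid fits in the table) are illustrative assumptions; this is a sketch of the idea, not the authors' fully-fused CUDA implementation.

```python
import numpy as np


def hash_coords(coords, table_size):
    """Spatial hash: XOR of each integer coordinate times a large prime, mod table size."""
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for i in range(coords.shape[-1]):
        h ^= coords[..., i].astype(np.uint64) * primes[i]
    return (h % np.uint64(table_size)).astype(np.int64)


def multires_hash_encoding(x, tables, n_min=16, n_max=512):
    """Encode 2D points x in [0,1]^2 with L levels of hashed, bilinearly
    interpolated feature vectors (one trainable table per level)."""
    L = len(tables)
    b = np.exp((np.log(n_max) - np.log(n_min)) / (L - 1))   # per-level growth factor
    out = []
    for level, table in enumerate(tables):
        T, F = table.shape
        res = int(np.floor(n_min * b ** level))             # grid resolution at this level
        xs = x * res
        x0 = np.floor(xs).astype(np.int64)                  # lower-left grid corner
        w = xs - x0                                          # bilinear weights in [0, 1)

        # Accumulate bilinear interpolation over the four surrounding corners;
        # collisions in the hash table are left for the network to disambiguate.
        f = np.zeros((x.shape[0], F))
        for dx in (0, 1):
            for dy in (0, 1):
                corner = x0 + np.array([dx, dy])
                idx = hash_coords(corner, T)
                wgt = ((w[:, 0] if dx else 1.0 - w[:, 0]) *
                       (w[:, 1] if dy else 1.0 - w[:, 1]))
                f += wgt[:, None] * table[idx]
        out.append(f)
    return np.concatenate(out, axis=1)                       # shape (N, L * F), input to the MLP


# Toy usage with illustrative parameters: 16 levels, 2^14-entry tables,
# 2 features per entry.
rng = np.random.default_rng(0)
tables = [rng.normal(scale=1e-4, size=(2 ** 14, 2)) for _ in range(16)]
points = rng.random((8, 2))
print(multires_hash_encoding(points, tables).shape)          # (8, 32)
```

As the abstract notes, collisions at fine levels need no explicit handling: the coarser levels and the MLP itself disambiguate them, which is part of what keeps the structure simple to parallelize.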

Related research

No Multiplication? No Floating Point? No Problem! Training Networks for Efficient Inference (09/24/2018)
For successful deployment of deep neural networks on highly resource-co...

Reduced Precision Floating-Point Optimization for Deep Neural Network On-Device Learning on MicroControllers (05/30/2023)
Enabling On-Device Learning (ODL) for Ultra-Low-Power Micro-Controller U...

Implementing Noise with Hash functions for Graphics Processing Units (03/28/2019)
We propose a modification to Perlin noise which uses computable hash func...

Robust Camera Pose Refinement for Multi-Resolution Hash Encoding (02/03/2023)
Multi-resolution hash encoding has recently been proposed to reduce the ...

WarpCore: A Library for fast Hash Tables on GPUs (09/16/2020)
Hash tables are ubiquitous. Properties such as an amortized constant tim...

Hardware Acceleration of Neural Graphics (03/10/2023)
Rendering and inverse-rendering algorithms that drive conventional compu...

Modular Primitives for High-Performance Differentiable Rendering (11/06/2020)
We present a modular differentiable renderer design that yields performa...
