SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference

04/06/2022
by Krishna Wadhwani, et al.

Neural Radiance Fields (NeRF) have emerged as the state-of-the-art method for novel view generation of complex scenes, but inference is very slow. There have recently been multiple works on speeding up NeRF inference, but the state-of-the-art methods for real-time rendering rely on caching the neural network output, and the cache occupies several gigabytes of disk space, which limits their real-world applicability. Since caching the output of the original NeRF network is not feasible, Garbin et al. proposed "FastNeRF", which factorizes the problem into two sub-networks: one that depends only on the 3D coordinates of a sample point and one that depends only on the 2D camera viewing direction. Although this factorization enables them to reduce the cache size and perform inference at over 200 frames per second, the memory overhead is still substantial. In this work, we propose SqueezeNeRF, which is more than 60 times more memory-efficient than the sparse cache of FastNeRF and can still render at more than 190 frames per second on a high-spec GPU during inference.
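To make the factorization concrete, the sketch below shows the kind of two-branch split the abstract describes: a network conditioned only on the 3D sample position and a network conditioned only on the 2D view direction, combined at the very end, so each branch can be precomputed and cached on its own grid. The layer sizes, the number of components D, and the inner-product combination step are illustrative assumptions, not the exact architecture from either paper.

```python
# Minimal sketch of a FastNeRF-style factorized radiance field.
# Hidden widths, D, and the combination rule are assumptions for illustration.
import torch
import torch.nn as nn

D = 8  # number of factorized color components (assumed)

class PositionNet(nn.Module):
    """Maps a 3D sample position to density sigma and D RGB component vectors."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3 * D),  # sigma + one (u, v, w) triple per component
        )

    def forward(self, xyz):
        out = self.mlp(xyz)
        sigma = torch.relu(out[..., :1])                     # non-negative density
        uvw = out[..., 1:].view(*xyz.shape[:-1], D, 3)       # (..., D, 3)
        return sigma, uvw

class DirectionNet(nn.Module):
    """Maps a 2D view direction to D scalar mixing weights beta."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, D),
        )

    def forward(self, dirs):
        return self.mlp(dirs)

def render_sample(pos_net, dir_net, xyz, dirs):
    # The two branches never see each other's inputs, so each can be cached
    # independently: positions on a 3D grid, directions on a small 2D grid.
    sigma, uvw = pos_net(xyz)                                # (..., 1), (..., D, 3)
    beta = dir_net(dirs)                                     # (..., D)
    rgb = torch.sigmoid((beta.unsqueeze(-1) * uvw).sum(-2))  # (..., 3)
    return sigma, rgb
```

Because the position branch is the one whose cache grows with the cube of the grid resolution, the "further factorized" in the title presumably targets that branch; shrinking the position-dependent cache is what would drive the reported 60x memory saving, since the 2D direction cache is already small.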
