One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation

11/29/2022
by   Shuangkang Fang, et al.

Neural Radiance Fields (NeRF) methods have proved effective as compact, high-quality and versatile representations for 3D scenes, and enable downstream tasks such as editing, retrieval, navigation, etc. Various neural architectures are vying for the core structure of NeRF, including the plain Multi-Layer Perceptron (MLP), sparse tensors, low-rank tensors, hashtables and their compositions. Each of these representations has its particular set of trade-offs. For example, hashtable-based representations admit faster training and rendering, but their lack of clear geometric meaning hampers downstream tasks like spatial-relation-aware editing. In this paper, we propose Progressive Volume Distillation (PVD), a systematic distillation method that allows any-to-any conversions between different architectures, including MLPs, sparse or low-rank tensors, hashtables and their compositions. PVD consequently empowers downstream applications to optimally adapt the neural representations for the task at hand in a post hoc fashion. The conversions are fast, as distillation is performed progressively on different levels of volume representations, from shallower to deeper. We also employ a special treatment of density to deal with its specific numerical instability. Empirical evidence is presented to validate our method on the NeRF-Synthetic, LLFF and TanksAndTemples datasets. For example, with PVD, an MLP-based NeRF model can be distilled from a hashtable-based Instant-NGP model 10X-20X faster than training the original NeRF from scratch, while achieving a superior level of synthesis quality. Code is available at https://github.com/megvii-research/AAAI2023-PVD.
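To make the distillation idea concrete, below is a minimal PyTorch sketch of one teacher-to-student distillation step. The names (teacher, student, distill_step) are hypothetical, and comparing densities in log space is only one plausible reading of the abstract's "special treatment of density"; see the linked repository for the authors' actual implementation.

```python
# Hypothetical sketch of a PVD-style distillation step, not the authors' code.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, num_points=65536, eps=1e-6):
    """One distillation step: the student (e.g. an MLP NeRF) regresses the
    frozen teacher's (e.g. Instant-NGP's) field values at random 3D points,
    so no ground-truth images are needed once the teacher is trained."""
    # Sample query points inside the scene bounds and random view directions.
    pts = torch.rand(num_points, 3) * 2.0 - 1.0          # points in [-1, 1]^3
    dirs = F.normalize(torch.randn(num_points, 3), dim=-1)

    with torch.no_grad():
        sigma_t, color_t = teacher(pts, dirs)            # teacher stays fixed
    sigma_s, color_s = student(pts, dirs)

    # Compare densities in log space to damp their large dynamic range
    # (an assumed stand-in for the paper's density stabilization).
    loss = (F.mse_loss(torch.log(sigma_s + eps), torch.log(sigma_t + eps))
            + F.mse_loss(color_s, color_t))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that this sketch omits the "progressive" part of PVD, which, per the abstract, additionally performs distillation across levels of volume representations from shallower to deeper rather than supervising only the final field outputs.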


