PVD-AL: Progressive Volume Distillation with Active Learning for Efficient Conversion Between Different NeRF Architectures

04/08/2023
by Shuangkang Fang, et al.

Neural Radiance Fields (NeRF) have been widely adopted as practical and versatile representations of 3D scenes, facilitating various downstream tasks. However, different architectures, including plain Multi-Layer Perceptrons (MLP), Tensors, low-rank Tensors, Hashtables, and their compositions, each come with trade-offs. For instance, Hashtables-based representations allow for faster rendering but lack clear geometric meaning, making spatial-relation-aware editing challenging. To address this limitation and maximize the potential of each architecture, we propose Progressive Volume Distillation with Active Learning (PVD-AL), a systematic distillation method that enables any-to-any conversion between different architectures. PVD-AL decomposes each architecture into two parts and performs distillation progressively, from shallower to deeper volume representations, leveraging intermediate information retrieved during the rendering process. Additionally, a three-level active-learning technique provides continuous feedback throughout the distillation process, yielding high-performance results. We present empirical evidence validating our method on multiple benchmark datasets. For example, PVD-AL can distill an MLP-based model from a Hashtables-based model at 10~20X faster speed and with 0.8dB~2dB higher PSNR than training the NeRF model from scratch. Moreover, PVD-AL permits the fusion of diverse features among distinct structures, enabling models with multiple editing properties and providing more efficient models to meet real-time requirements. Project website: http://sk-fun.fun/PVD-AL.
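To make the distillation loop concrete, here is a minimal sketch, not the authors' implementation: `teacher` and `student` stand for any two NeRF architectures (e.g., a Hashtables-based teacher and an MLP-based student), and the interface returning per-sample densities, per-sample colors, and rendered pixels is an assumption, as are the names `distill_step` and `keep_ratio`. It illustrates the two ideas above: supervising the student first on the shallower volume quantities retrieved during rendering, then on the final rendered pixels, with an error-driven feedback step that focuses updates on the hardest rays as a stand-in for the paper's three-level active learning.

```python
# Hedged sketch of progressive distillation with active-learning feedback.
# `teacher` and `student` are hypothetical NeRF modules that, given ray
# origins/directions, return per-sample densities (R, S), per-sample colors
# (R, S, 3), and rendered pixels (R, 3).
import torch

def distill_step(teacher, student, rays_o, rays_d, optimizer,
                 stage="shallow", keep_ratio=0.5):
    """One distillation step; `stage` selects shallower (per-point sigma/rgb)
    or deeper (rendered-pixel) supervision. All names are illustrative."""
    with torch.no_grad():
        t_sigma, t_rgb, t_pix = teacher(rays_o, rays_d)   # frozen targets
    s_sigma, s_rgb, s_pix = student(rays_o, rays_d)

    if stage == "shallow":
        # Match the intermediate volume representation first.
        per_ray_err = ((s_sigma - t_sigma) ** 2).mean(-1) \
                    + ((s_rgb - t_rgb) ** 2).flatten(1).mean(-1)
    else:
        # Then match the final rendered pixels.
        per_ray_err = ((s_pix - t_pix) ** 2).mean(-1)

    # Active-learning-style feedback: update on only the hardest rays,
    # so the student concentrates where it still disagrees most.
    k = max(1, int(keep_ratio * per_ray_err.numel()))
    hard_err, _ = torch.topk(per_ray_err, k)
    loss = hard_err.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), per_ray_err.detach()
```

In a progressive schedule, one would run `distill_step` with stage="shallow" until the intermediate losses plateau, then switch to the deeper rendered-pixel supervision; the returned per-ray errors can also guide which rays are re-sampled in subsequent batches.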


Related Research

11/29/2022 · One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation
Neural Radiance Fields (NeRF) methods have proved effective as compact, ...

02/06/2022 · LiDAR dataset distillation within Bayesian active learning framework: Understanding the effect of data augmentation
Autonomous driving (AD) datasets have progressively grown in size in the...

09/04/2023 · On the Query Strategies for Efficient Online Active Distillation
Deep Learning (DL) requires lots of time and data, resulting in high com...

02/21/2023 · Evaluating the effect of data augmentation and BALD heuristics on distillation of Semantic-KITTI dataset
Active Learning (AL) has remained relatively unexplored for LiDAR percep...

02/16/2022 · FAMIE: A Fast Active Learning Framework for Multilingual Information Extraction
This paper presents FAMIE, a comprehensive and efficient active learning...

06/27/2018 · Adversarial Distillation of Bayesian Neural Network Posteriors
Bayesian neural networks (BNNs) allow us to reason about uncertainty in ...
