Learning Unified Decompositional and Compositional NeRF for Editable Novel View Synthesis

08/05/2023
by Yuxin Wang, et al.

Implicit neural representations have shown a powerful capacity for modeling real-world 3D scenes and deliver superior performance in novel view synthesis. In this paper, we target a more challenging scenario: joint novel view synthesis and editing based on implicit neural scene representations. State-of-the-art methods in this direction typically build separate networks for the two tasks (i.e., view synthesis and editing), so the interactions and correlations between them, which are critical for learning high-quality scene representations, are only weakly modeled. To tackle this problem, we propose a unified Neural Radiance Field (NeRF) framework that jointly performs scene decomposition and composition for modeling real-world scenes. Decomposition learns disentangled 3D representations of individual objects and the background, enabling scene editing, while composition models the entire scene representation for novel view synthesis. Specifically, in a two-stage NeRF framework, a coarse stage predicts a global radiance field that guides point sampling, and a fine-grained stage performs scene decomposition with a novel one-hot object radiance field regularization module and pseudo supervision via inpainting to handle ambiguous background regions occluded by objects. The decomposed object-level radiance fields are then composed using activations from the decomposition module. Extensive quantitative and qualitative results demonstrate the effectiveness of our method for scene decomposition and composition, outperforming state-of-the-art methods on both novel view synthesis and editing tasks.
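To make the composition and one-hot regularization ideas concrete, below is a minimal PyTorch sketch. It assumes per-object radiance fields that each predict a density and a color at shared sample points, composes them by density-weighted blending (a common choice in object-compositional NeRFs), and adds a hypothetical entropy-based penalty that encourages each 3D point to be explained by a single object field. The function names, tensor shapes, and the exact form of the regularizer are illustrative assumptions; the paper's actual composition uses activations from its decomposition module and its one-hot regularization may be formulated differently.

```python
import torch

def compose_object_fields(sigmas, colors, eps=1e-8):
    """Compose K per-object (and background) radiance fields into a scene field.

    sigmas: (K, N) volume densities of K fields at N shared sample points.
    colors: (K, N, 3) per-field RGB at the same points.
    Returns scene-level density (N,) and color (N, 3) via density-weighted
    blending; the composed values can then be volume-rendered as in vanilla NeRF.
    """
    sigma_scene = sigmas.sum(dim=0)                              # (N,)
    weights = sigmas / (sigma_scene.unsqueeze(0) + eps)          # (K, N)
    color_scene = (weights.unsqueeze(-1) * colors).sum(dim=0)    # (N, 3)
    return sigma_scene, color_scene

def one_hot_density_regularizer(sigmas, eps=1e-8):
    """Hypothetical one-hot style regularizer: penalize the entropy of the
    normalized per-object density distribution at each sample point, so that
    (ideally) only one field claims non-zero density there."""
    probs = sigmas / (sigmas.sum(dim=0, keepdim=True) + eps)     # (K, N)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=0)       # (N,)
    return entropy.mean()

# Example usage with random stand-in predictions for K=3 fields and N=1024 points.
sigmas = torch.rand(3, 1024)
colors = torch.rand(3, 1024, 3)
sigma_scene, color_scene = compose_object_fields(sigmas, colors)
reg_loss = one_hot_density_regularizer(sigmas)
```

Density-weighted blending keeps the composed field consistent with standard volume rendering, so editing (e.g., removing or moving an object) reduces to dropping or transforming that object's field before composition.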


