Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video

12/22/2020
by   Edgar Tretschk, et al.

We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes. Our approach takes RGB images of a dynamic scene as input, e.g., from a monocular video recording, and creates a high-quality space-time geometry and appearance representation. In particular, we show that even a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, for example a 'bullet-time' video effect. Our method disentangles the dynamic scene into a canonical volume and its deformation. Scene deformation is implemented as ray bending, where straight rays are deformed non-rigidly to represent scene motion. We also propose a novel rigidity regression network that enables us to better constrain rigid regions of the scene, which leads to more stable results. The ray-bending and rigidity networks are trained without any explicit supervision. In addition to novel view synthesis, our formulation enables dense correspondence estimation across views and time, as well as compelling video editing applications such as motion exaggeration. We demonstrate the effectiveness of our method using extensive evaluations, including ablation studies and comparisons to the state of the art. We urge the reader to watch the supplemental video for qualitative results. Our code will be open-sourced.
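
To make the ray-bending formulation more concrete, the following is a minimal PyTorch sketch written under simplifying assumptions (no positional encoding, no view dependence, a single ray): a per-frame latent code conditions a bending MLP that offsets points sampled on straight camera rays into a shared canonical volume, a learned rigidity score gates the offset so rigid regions barely move, and the canonical field is alpha-composited as in a standard NeRF. The names BendingNet, CanonicalNeRF, and render_ray are hypothetical and do not refer to the authors' released code.

# Minimal, hypothetical sketch of ray bending into a canonical radiance field.
# NOT the authors' implementation; all names here are illustrative assumptions.
import torch
import torch.nn as nn

class BendingNet(nn.Module):
    # Maps a point sampled on a straight camera ray into the canonical volume.
    # A per-frame latent code conditions the offset; a rigidity score in [0, 1]
    # gates it, so points predicted as rigid (score near 0) barely move.
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 offset components + 1 rigidity logit
        )

    def forward(self, x, frame_code):
        out = self.mlp(torch.cat([x, frame_code], dim=-1))
        offset, rigidity = out[..., :3], torch.sigmoid(out[..., 3:])
        return x + rigidity * offset, rigidity

class CanonicalNeRF(nn.Module):
    # Time-invariant field: colour and density queried at bent (canonical) points.
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, x_canonical):
        out = self.mlp(x_canonical)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

def render_ray(origin, direction, frame_code, bend, field,
               n_samples=64, near=0.1, far=4.0):
    # 1. Sample points along the *straight* ray.
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction                         # (n_samples, 3)
    # 2. Bend every sample non-rigidly into the canonical volume.
    pts_canon, _ = bend(pts, frame_code.expand(n_samples, -1))
    # 3. Query the canonical field and alpha-composite as in a standard NeRF.
    rgb, sigma = field(pts_canon)
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * (far - near) / n_samples)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10], dim=0)[:-1], dim=0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)                     # composited RGB

# Usage: one ray of one frame (in practice the latent code is optimised per frame).
bend, field = BendingNet(), CanonicalNeRF()
color = render_ray(torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]),
                   torch.zeros(32), bend, field)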

Related research

11/26/2020
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
We present a method to perform novel view and time synthesis of dynamic ...

08/16/2023
SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes
Existing methods for the 4D reconstruction of general, non-rigidly defor...

06/16/2022
Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model
Capturing general deforming scenes is crucial for many computer graphics...

03/29/2022
Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images
Learning 3D generative models from a dataset of monocular images enables...

03/10/2023
MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field
We present MovingParts, a NeRF-based method for dynamic scene reconstruc...

02/27/2023
BaLi-RF: Bandlimited Radiance Fields for Dynamic Scene Modeling
Reasoning the 3D structure of a non-rigid dynamic scene from a single mo...

10/22/2022
NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos
We present a method for learning 3D geometry and physics parameters of a...
