Editable Free-viewpoint Video Using a Layered Neural Representation

04/30/2021
by Jiakai Zhang, et al.

Generating free-viewpoint videos is critical for immersive VR/AR experiences, but recent neural advances still lack the editing ability to manipulate the visual perception of large dynamic scenes. To fill this gap, in this paper we propose the first approach for editable, photo-realistic free-viewpoint video generation for large-scale dynamic scenes using only 16 sparse cameras. The core of our approach is a new layered neural representation, where each dynamic entity, including the environment itself, is formulated as a space-time coherent layered neural radiance representation called ST-NeRF. Such a layered representation supports full perception and realistic manipulation of the dynamic scene while still allowing free viewing over a wide range. In our ST-NeRF, each dynamic entity/layer is represented as a continuous function, which disentangles the location, deformation, and appearance of the entity in a continuous and self-supervised manner. We propose scene parsing with 4D label map tracking to disentangle the spatial information explicitly, and a continuous deformation module to disentangle the temporal motion implicitly. An object-aware volume rendering scheme is further introduced to re-assemble all the neural layers. We adopt a novel layered loss and a motion-aware ray sampling strategy to enable efficient training for a large dynamic scene with multiple performers. Our framework further enables a variety of editing functions, i.e., manipulating the scale and location of, duplicating, or retiming individual neural layers to create numerous visual effects while preserving high realism. Extensive experiments demonstrate the effectiveness of our approach in achieving high-quality, photo-realistic, and editable free-viewpoint video generation for dynamic scenes.
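As a reading aid, the sketch below illustrates how a layered representation of this kind can be composed by object-aware volume rendering: each layer couples a deformation module (warping a space-time sample into a canonical space) with a radiance network, and all layers are queried and alpha-composited along each ray. This is a minimal PyTorch sketch, not the authors' implementation: the names STNeRFLayer and render_ray, the tiny MLP sizes, and the omission of positional encoding and of the explicit spatial disentanglement via tracked 4D label maps are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class STNeRFLayer(nn.Module):
    """One neural layer: a deformation MLP warps (x, t) into a canonical
    space, and a radiance MLP predicts color and density there.
    Module and attribute names are illustrative, not the authors' code."""
    def __init__(self, hidden=128):
        super().__init__()
        # Deformation module: (x, y, z, t) -> offset into canonical space.
        self.deform = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        # Radiance module: canonical (x, y, z) + view direction -> (rgb, sigma).
        self.radiance = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x, t, d):
        x_canon = x + self.deform(torch.cat([x, t], dim=-1))
        out = self.radiance(torch.cat([x_canon, d], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # color in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma


def render_ray(layers, x, t, d, deltas):
    """Object-aware volume rendering sketch: query every layer at shared
    depth-sorted samples along one ray, mix colors by density, and
    alpha-composite front to back.
    x: (N, 3) sample positions, deltas: (N, 1) sample spacings."""
    rgb_acc = torch.zeros(3)
    transmittance = torch.ones(1)
    for i in range(x.shape[0]):
        sigma_sum = torch.zeros(1)
        rgb_sum = torch.zeros(3)
        for layer in layers:
            rgb, sigma = layer(x[i:i+1], t, d)
            sigma_sum = sigma_sum + sigma[0]
            rgb_sum = rgb_sum + sigma[0] * rgb[0]   # density-weighted color
        alpha = 1.0 - torch.exp(-sigma_sum * deltas[i])
        rgb_mix = rgb_sum / sigma_sum.clamp(min=1e-8)
        rgb_acc = rgb_acc + transmittance * alpha * rgb_mix
        transmittance = transmittance * (1.0 - alpha)
    return rgb_acc


# Usage: two entity layers plus an environment layer, one ray with 64 samples.
layers = [STNeRFLayer() for _ in range(3)]
x = torch.linspace(0, 1, 64).unsqueeze(-1).repeat(1, 3)  # sample positions
t = torch.full((1, 1), 0.5)                              # normalized time
d = torch.tensor([[0.0, 0.0, 1.0]])                      # view direction
deltas = torch.full((64, 1), 1.0 / 64)                   # sample spacings
color = render_ray(layers, x, t, d, deltas)              # composited pixel color
```

Under this structure, the editing functions described above, such as rescaling, relocating, duplicating, or retiming an entity, amount to transforming the inputs of a single layer before all layers are re-assembled by the renderer.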



Related research

08/12/2021 · iButter: Neural Interactive Bullet Time Generator for Human Free-viewpoint Rendering
Generating “bullet-time” effects of human free-viewpoint videos is criti...

04/10/2023 · Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos
The success of the Neural Radiance Fields (NeRFs) for modeling and free-...

12/04/2018 · The Visual Centrifuge: Model-Free Layered Video Representations
True video understanding requires making sense of non-lambertian scenes ...

04/01/2021 · Neural Video Portrait Relighting in Real-time via Consistency Modeling
Video portraits relighting is critical in user-facing human photography,...

04/06/2021 · MirrorNeRF: One-shot Neural Portrait Radiance Field from Multi-mirror Catadioptric Imaging
Photo-realistic neural reconstruction and rendering of the human portrai...

09/16/2020 · Layered Neural Rendering for Retiming People in Video
We present a method for retiming people in an ordinary, natural video—ma...

06/20/2020 · Technical Note: Generating Realistic Fighting Scenes by Game Tree
Recently, there have been a lot of researches to synthesize / edit the m...
