Neural Trajectory Fields for Dynamic Novel View Synthesis

05/12/2021
by Chaoyang Wang, et al.

Recent approaches to rendering photorealistic views from a limited set of photographs have pushed the boundaries of our interactions with pictures of static scenes. The ability to recreate moments, that is, time-varying sequences, is perhaps an even more interesting scenario, but it remains largely unsolved. We introduce DCT-NeRF, a coordinate-based neural representation for dynamic scenes. DCT-NeRF learns smooth and stable trajectories over the input sequence for each point in space. This allows us to enforce consistency between any two frames in the sequence, which results in high-quality reconstruction, particularly in dynamic regions.
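
Although the abstract does not spell out the parameterization, the name DCT-NeRF suggests that each point's trajectory over the sequence is expressed with a small set of Discrete Cosine Transform (DCT) coefficients, which is what would make the trajectories smooth and stable. Below is a minimal NumPy sketch of that idea; the function name dct_trajectory, the number of coefficients and their scales are illustrative assumptions rather than the paper's implementation, and in the full method a coordinate-based network would predict such coefficients for every 3D point.

    import numpy as np

    def dct_trajectory(coeffs, num_frames):
        # Reconstruct a smooth per-point trajectory from DCT coefficients.
        # coeffs: (K, 3) array; row k holds the k-th frequency coefficient
        #         for the x, y and z displacement of one point.
        # Returns a (num_frames, 3) array: the point's displacement per frame.
        k = np.arange(coeffs.shape[0])                 # frequency indices 0..K-1
        t = np.arange(num_frames)                      # frame indices 0..T-1
        # Cosine basis (DCT-style), evaluated at every frame for every frequency.
        basis = np.cos(np.pi / num_frames * np.outer(t + 0.5, k))   # (T, K)
        return basis @ coeffs                                        # (T, 3)

    # Hypothetical coefficients: energy concentrated in the low frequencies,
    # so the resulting 30-frame trajectory varies smoothly over time.
    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(3, 3)) * np.array([1.0, 0.3, 0.1])[:, None]
    trajectory = dct_trajectory(coeffs, num_frames=30)
    print(trajectory.shape)   # (30, 3)

Because every frame's position is evaluated from the same global cosine basis and the same coefficients, positions queried at any two frames are tied together, which illustrates one way consistency across the whole sequence could be enforced.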

Related research:

- Detachable Novel Views Synthesis of Dynamic Scenes Using Distribution-Driven Neural Radiance Fields (01/01/2023): Representing and synthesizing novel views in real-world dynamic scenes f...
- Deep 3D Mask Volume for View Synthesis of Dynamic Scenes (08/30/2021): Image view synthesis has seen great success in reconstructing photoreali...
- 3D Time-lapse Reconstruction from Internet Photos (11/10/2015): Given an Internet photo collection of a landmark, we compute a 3D time-l...
- wildNeRF: Complete view synthesis of in-the-wild dynamic scenes captured using sparse monocular data (09/20/2022): We present a novel neural radiance model that is trainable in a self-sup...
- Neural 3D Video Synthesis (03/03/2021): We propose a novel approach for 3D video synthesis that is able to repre...
- Multi-view reconstruction of bullet time effect based on improved NSFF model (04/01/2023): Bullet time is a type of visual effect commonly used in film, television...
- Dynamic NeRFs for Soccer Scenes (09/13/2023): The long-standing problem of novel view synthesis has many applications,...
