Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis

08/18/2023
by Jonathon Luiten, et al.

We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements. We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians which are optimized to reconstruct input images via differentiable rendering. To model dynamic scenes, we allow Gaussians to move and rotate over time while enforcing that they have persistent color, opacity, and size. By regularizing Gaussians' motion and rotation with local-rigidity constraints, we show that our Dynamic 3D Gaussians correctly model the same area of physical space over time, including the rotation of that space. Dense 6-DOF tracking and dynamic reconstruction emerge naturally from persistent dynamic view synthesis, without requiring any correspondence or flow as input. We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.
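The local-rigidity regularization is what turns per-Gaussian motion into dense 6-DOF tracking: each Gaussian's nearest neighbors are encouraged to move as if rigidly attached to its local coordinate frame, so the center and orientation of a Gaussian follow the same patch of physical space over time. The sketch below shows one way such a loss could be written in PyTorch; it is an illustrative assumption rather than the authors' released code, and the names (`local_rigidity_loss`, `knn_idx`, `knn_w`) and the (w, x, y, z) quaternion convention are hypothetical.

```python
import torch
import torch.nn.functional as F

def quat_to_rotmat(q):
    """Convert unit quaternions (N, 4), ordered (w, x, y, z), to rotation matrices (N, 3, 3)."""
    q = F.normalize(q, dim=-1)
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
        2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
    ], dim=-1).reshape(-1, 3, 3)

def local_rigidity_loss(mu_prev, mu_curr, q_prev, q_curr, knn_idx, knn_w):
    """
    Penalize non-rigid motion of each Gaussian's local neighborhood (hypothetical sketch).

    mu_prev, mu_curr: (N, 3) Gaussian centers at timesteps t-1 and t.
    q_prev, q_curr:   (N, 4) Gaussian orientations (quaternions) at t-1 and t.
    knn_idx:          (N, k) indices of each Gaussian's nearest neighbors.
    knn_w:            (N, k) per-pair weights, e.g. fixed from first-frame distances.
    """
    R_prev = quat_to_rotmat(q_prev)                        # (N, 3, 3)
    R_curr = quat_to_rotmat(q_curr)                        # (N, 3, 3)
    # Rotation that carries each Gaussian's local frame from time t back to t-1.
    R_rel = R_prev @ R_curr.transpose(1, 2)                # (N, 3, 3)

    # Offsets from each Gaussian to its neighbors at both timesteps.
    off_prev = mu_prev[knn_idx] - mu_prev[:, None, :]      # (N, k, 3)
    off_curr = mu_curr[knn_idx] - mu_curr[:, None, :]      # (N, k, 3)

    # If the neighborhood moved rigidly, rotating the current offsets by
    # R_rel should reproduce the previous offsets.
    off_curr_in_prev = torch.einsum('nij,nkj->nki', R_rel, off_curr)
    residual = (off_prev - off_curr_in_prev).norm(dim=-1)  # (N, k)
    return (knn_w * residual).mean()
```

Note that color, opacity, and scale never enter this loss, consistent with the paper's constraint that only centers and orientations change over time; only the Gaussians' means and rotations are optimized per frame.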
