Decoupling Dynamic Monocular Videos for Dynamic View Synthesis

04/04/2023
by Meng You, et al.

The challenge of dynamic view synthesis from dynamic monocular videos, i.e., synthesizing novel views at free viewpoints given a monocular video of a dynamic scene captured by a moving camera, mainly lies in accurately modeling the dynamic objects of a scene from limited 2D frames, each with a different timestamp and viewpoint. Existing methods usually require 2D optical flow and depth maps pre-computed by additional methods to supervise the network, so they suffer from the inaccuracy of this pre-computed supervision and from the ambiguity of lifting 2D information to 3D. In this paper, we tackle this challenge in an unsupervised fashion. Specifically, we decouple the motion of dynamic objects into object motion and camera motion, regularized by the proposed unsupervised surface consistency constraint and patch-based multi-view constraint, respectively. The former enforces the 3D geometric surfaces of moving objects to be consistent over time, while the latter regularizes their appearance to be consistent across different viewpoints. Such a fine-grained motion formulation alleviates the learning difficulty for the network, enabling it to produce not only higher-quality novel views but also more accurate scene flow and depth than existing methods that require extra supervision. We will make the code publicly available.
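To make the decoupling idea concrete, the sketch below (PyTorch-style, with hypothetical function names, a camera-to-world pose convention, and a loss form chosen purely for illustration; it is not the authors' implementation) lifts pixels to 3D, splits a point's frame-to-frame motion into an object-motion term in world space and a camera-motion term that only re-observes the point from the new viewpoint, and applies a simple surface-consistency-style penalty between the advected points and the surface observed at the next timestamp.

```python
import torch

def backproject(depth, K_inv, pix):
    # Lift 2D pixels with depth to 3D camera coordinates: X = depth * K^{-1} [u, v, 1]^T.
    ones = torch.ones_like(pix[..., :1])
    homo = torch.cat([pix, ones], dim=-1)           # (N, 3) homogeneous pixel coords
    return depth.unsqueeze(-1) * (homo @ K_inv.T)   # (N, 3) points in camera frame

def decoupled_flow(x_cam_t, pose_t, pose_t1, object_flow_world):
    """Split a point's total motion between frames t and t+1 into an object-motion
    part (world-space displacement) and a camera-motion part (change of viewpoint).
    Poses are camera-to-world (R, t); this formulation is an illustrative assumption."""
    R_t, tr_t = pose_t
    R_t1, tr_t1 = pose_t1
    x_world_t = x_cam_t @ R_t.T + tr_t               # observed point in world coords at t
    x_world_t1 = x_world_t + object_flow_world       # object motion acts in world space
    x_cam_t1 = (x_world_t1 - tr_t1) @ R_t1           # camera motion only re-projects the point
    return x_world_t1, x_cam_t1

def surface_consistency_loss(x_world_t1_pred, x_world_t1_obs):
    """Penalize disagreement between the advected 3D surface points and the surface
    reconstructed independently at t+1 (a stand-in for the paper's constraint)."""
    return (x_world_t1_pred - x_world_t1_obs).norm(dim=-1).mean()

if __name__ == "__main__":
    # Toy usage: static points, identity pose at t, camera translates at t+1.
    N = 4
    K_inv = torch.eye(3)
    pix = torch.rand(N, 2) * 100.0
    depth = torch.rand(N) + 1.0
    x_cam_t = backproject(depth, K_inv, pix)
    pose_t = (torch.eye(3), torch.zeros(3))
    pose_t1 = (torch.eye(3), torch.tensor([0.1, 0.0, 0.0]))
    obj_flow = torch.zeros(N, 3)                      # no object motion in this toy example
    x_w_pred, _ = decoupled_flow(x_cam_t, pose_t, pose_t1, obj_flow)
    print(surface_consistency_loss(x_w_pred, x_cam_t))  # ~0: static points stay on the surface
```

In this toy setup the loss is near zero because the points are static and only the camera moves; with a learned scene flow, the same penalty would push the predicted object motion to land points back on the surface observed at the next frame.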

