Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera

04/02/2020
by   Jae Shin Yoon, et al.

This paper presents a new method to synthesize an image from arbitrary views and times given a collection of images of a dynamic scene. A key challenge for novel view synthesis arises from dynamic scene reconstruction, where epipolar geometry does not apply to the local motion of dynamic contents. To address this challenge, we propose to combine the depth from single view (DSV) and the depth from multi-view stereo (DMV): DSV is complete, i.e., a depth is assigned to every pixel, yet view-variant in its scale, while DMV is view-invariant yet incomplete. Our insight is that, although its scale and quality are inconsistent across views, the depth estimated from a single view can be used to reason about the globally coherent geometry of dynamic contents. We cast this problem as learning to correct the scale of DSV and to refine each depth with locally consistent motions between views to form a coherent depth estimation. We integrate these tasks into a depth fusion network trained in a self-supervised fashion. Given the fused depth maps, we synthesize a photorealistic virtual view at a specific location and time with our deep blending network, which completes the scene and renders the virtual view. We evaluate our method on depth estimation and view synthesis across diverse real-world dynamic scenes and show that it outperforms existing methods.
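The paper learns the scale correction of DSV inside a depth fusion network; as a minimal illustration of the underlying idea, the view-variant scale of a single-view depth map can be aligned to the view-invariant multi-view depth by a least-squares fit over the pixels where DMV is available. The sketch below is a hypothetical classical baseline, not the authors' network; the function name and the scale-and-shift model are assumptions.

```python
import numpy as np

def align_dsv_to_dmv(dsv, dmv, valid):
    """Fit scale s and shift t minimizing ||s * dsv + t - dmv||^2
    over the pixels where DMV is defined, then apply them to the
    full (complete) single-view depth map.

    dsv:   (H, W) single-view depth, view-variant scale
    dmv:   (H, W) multi-view stereo depth, valid only where `valid`
    valid: (H, W) boolean mask of pixels with a DMV estimate
    """
    # Build the linear system A @ [s, t] = dmv over valid pixels.
    a = np.stack([dsv[valid], np.ones(valid.sum())], axis=1)
    (s, t), *_ = np.linalg.lstsq(a, dmv[valid], rcond=None)
    # The correction extends to every pixel, including dynamic
    # regions where DMV is missing.
    return s * dsv + t
```

This recovers a globally consistent scale only when the scene depth is related to the single-view prediction by an affine map; the paper instead learns the correction, which also handles locally inconsistent motion between views.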


Related research

04/07/2022 · SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation
Depth estimation from images serves as the fundamental step of 3D percep...

08/04/2019 · Adversarial View-Consistent Learning for Monocular Depth Estimation
This paper addresses the problem of Monocular Depth Estimation (MDE). Ex...

01/27/2023 · Inter-View Depth Consistency Testing in Depth Difference Subspace
Multiview depth imagery will play a critical role in free-viewpoint tele...

11/22/2022 · Depth-Supervised NeRF for Multi-View RGB-D Operating Room Images
Neural Radiance Fields (NeRF) is a powerful novel technology for the rec...

05/24/2022 · Single-View View Synthesis in the Wild with Learned Adaptive Multiplane Images
This paper deals with the challenging task of synthesizing novel views f...

11/30/2021 · NeRFReN: Neural Radiance Fields with Reflections
Neural Radiance Fields (NeRF) has achieved unprecedented view synthesis ...

09/05/2019 · Depth Map Estimation for Free-Viewpoint Television
The paper presents a new method of depth estimation dedicated for free-v...
