MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras

06/08/2021
by   Xuelin Chen, et al.

Synthesizing novel views of dynamic humans from stationary monocular cameras is a widely studied problem. It is particularly attractive because it requires neither static scenes, controlled environments, nor specialized hardware. In contrast to techniques that exploit multi-view observations to constrain the modeling, with only a single fixed viewpoint the problem of modeling the dynamic scene is significantly more under-constrained and ill-posed. In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models the dynamic scene as a 4D continuous time-variant function. The representation is learned by an optimization that fits a dynamic scene model so as to minimize the error of rendering all observed images. At the heart of our work lies a novel optimization formulation constrained by a motion consensus regularization on the motion flow. We extensively evaluate MoCo-Flow on several datasets containing human motions of varying complexity, and compare, both qualitatively and quantitatively, against several baseline methods and variants of our method. The pretrained model, code, and data will be released for research purposes upon paper acceptance.
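The abstract describes an objective that combines a photometric rendering error with a motion consensus regularization on the motion flow. As a rough illustration only, the sketch below shows one plausible shape such an objective could take, using a forward/backward cycle-consistency penalty as a stand-in for the consensus term; the function names, the exact form of the regularizer, and the weighting are assumptions, not the paper's actual formulation.

```python
import numpy as np

def motion_consensus_loss(points, t0, t1, flow_fwd, flow_bwd):
    """Toy consensus term: warp 3D points from time t0 to t1 and back,
    and penalize the round-trip drift (a cycle-consistency stand-in)."""
    warped = points + flow_fwd(points, t0, t1)      # advect forward in time
    returned = warped + flow_bwd(warped, t1, t0)    # advect back to t0
    return np.mean(np.sum((returned - points) ** 2, axis=-1))

def total_loss(rendered, observed, points, t0, t1, flow_fwd, flow_bwd, lam=0.1):
    """Rendering error on observed images plus the consensus regularizer,
    balanced by a hypothetical weight `lam`."""
    photometric = np.mean((rendered - observed) ** 2)
    return photometric + lam * motion_consensus_loss(
        points, t0, t1, flow_fwd, flow_bwd)
```

In this sketch `flow_fwd` and `flow_bwd` stand in for learned motion-flow networks and `rendered` for the output of a volumetric renderer; in the actual method all of these would be differentiable and optimized jointly.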


Related research

04/04/2023 - Decoupling Dynamic Monocular Videos for Dynamic View Synthesis
The challenge of dynamic view synthesis from dynamic monocular videos, i...

09/11/2023 - FlowIBR: Leveraging Pre-Training for Efficient Neural Image-Based Rendering of Dynamic Scenes
We introduce a novel approach for monocular novel view synthesis of dyna...

08/17/2019 - Mono-SF: Multi-View Geometry Meets Single-View Depth for Monocular Scene Flow Estimation of Dynamic Traffic Scenes
Existing 3D scene flow estimation methods provide the 3D geometry and 3D...

02/23/2023 - Learning Neural Volumetric Representations of Dynamic Humans in Minutes
This paper addresses the challenge of quickly reconstructing free-viewpo...

05/13/2021 - Dynamic View Synthesis from Dynamic Monocular Video
We present an algorithm for generating novel views at arbitrary viewpoin...

11/27/2020 - D-NeRF: Neural Radiance Fields for Dynamic Scenes
Neural rendering techniques combining machine learning with geometric re...

09/21/2022 - PREF: Predictability Regularized Neural Motion Fields
Knowing the 3D motions in a dynamic scene is essential to many vision ap...
