iMapper: Interaction-guided Joint Scene and Human Motion Mapping from Monocular Videos

06/20/2018
by   Aron Monszpart, et al.

A long-standing challenge in scene analysis is the recovery of scene arrangements under moderate to heavy occlusion, directly from monocular video. While the problem remains a subject of active research, concurrent advances have been made in the context of human pose reconstruction from monocular video, including image-space feature point detection and 3D pose recovery. These methods, however, start to fail under moderate to heavy occlusion as the problem becomes severely under-constrained. We approach the problem differently. We observe that people interact similarly in similar scenes. Hence, we exploit the correlation between scene object arrangement and the motions performed in that scene in both directions: first, typical motions performed when interacting with objects inform us about possible object arrangements; and second, object arrangements, in turn, constrain the possible motions. We present iMapper, a data-driven method that focuses on identifying human-object interactions, and jointly reasons about objects and human movement over space-time to recover both a plausible scene arrangement and consistent human interactions. We first introduce the notion of characteristic interactions as regions in space-time where an informative human-object interaction happens. This is followed by a novel occlusion-aware matching procedure that searches for and aligns such characteristic snapshots from an interaction database to best explain the input monocular video. Through extensive evaluations, both quantitative and qualitative, we demonstrate that iMapper significantly improves performance over both dedicated state-of-the-art scene analysis and 3D human pose recovery approaches, especially under medium to heavy occlusion.

