Dual Networks Based 3D Multi-Person Pose Estimation from Monocular Video

by   Yu Cheng, et al.

Monocular 3D human pose estimation has made progress in recent years. Most methods focus on a single person, estimating the pose in person-centric coordinates, i.e., coordinates based on the center of the target person. Hence, these methods are inapplicable to multi-person 3D pose estimation, where absolute coordinates (e.g., camera coordinates) are required. Moreover, multi-person pose estimation is more challenging than single-person estimation, due to inter-person occlusion and close human interactions. Existing top-down multi-person methods rely on human detection, and thus suffer from detection errors and cannot produce reliable pose estimates in multi-person scenes. Meanwhile, existing bottom-up methods, which do not use human detection, are not affected by detection errors, but since they process all persons in a scene at once, they are prone to errors, particularly for persons at small scales. To address all these challenges, we propose integrating the top-down and bottom-up approaches to exploit their strengths. Our top-down network estimates human joints of all persons in an image patch instead of just one, making it robust to possible erroneous bounding boxes. Our bottom-up network incorporates human-detection-based normalized heatmaps, making it more robust to scale variations. Finally, the estimated 3D poses from the top-down and bottom-up networks are fed into our integration network to produce the final 3D poses. To address the common gap between training and testing data, we perform optimization at test time, refining the estimated 3D human poses using a high-order temporal constraint, a re-projection loss, and bone-length regularization. Our evaluations demonstrate the effectiveness of the proposed method. Code and models are available: https://github.com/3dpose/3D-Multi-Person-Pose.
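As a rough illustration of the test-time refinement described in the abstract, the sketch below combines a re-projection term, a high-order (second-difference, i.e., acceleration) temporal smoothness term, and a bone-length regularization term into a single objective. All function names, the loss weights, and the exact formulation are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def reprojection_loss(pose_3d, pose_2d, K):
    """Mean squared error between projected 3D joints and observed 2D joints.
    pose_3d: (J, 3) camera-coordinate joints; pose_2d: (J, 2); K: (3, 3) intrinsics."""
    proj = (K @ pose_3d.T).T              # (J, 3) homogeneous image coordinates
    proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
    return np.mean(np.sum((proj - pose_2d) ** 2, axis=1))

def temporal_loss(poses_3d):
    """High-order temporal constraint: penalize joint acceleration
    (second finite difference) over the sequence. poses_3d: (T, J, 3)."""
    accel = poses_3d[2:] - 2 * poses_3d[1:-1] + poses_3d[:-2]
    return np.mean(np.sum(accel ** 2, axis=-1))

def bone_length_loss(poses_3d, bones, ref_lengths):
    """Penalize deviation of per-frame bone lengths from reference lengths.
    bones: list of (parent, child) joint indices; ref_lengths: (B,)."""
    lengths = np.stack(
        [np.linalg.norm(poses_3d[:, c] - poses_3d[:, p], axis=-1)
         for p, c in bones], axis=1)       # (T, B)
    return np.mean((lengths - ref_lengths[None]) ** 2)

def refinement_objective(poses_3d, poses_2d, K, bones, ref_lengths,
                         w_proj=1.0, w_temp=0.1, w_bone=0.1):
    """Combined objective to minimize over poses_3d at test time.
    The weights here are placeholders, not the paper's values."""
    proj = np.mean([reprojection_loss(poses_3d[t], poses_2d[t], K)
                    for t in range(len(poses_3d))])
    return (w_proj * proj
            + w_temp * temporal_loss(poses_3d)
            + w_bone * bone_length_loss(poses_3d, bones, ref_lengths))
```

In practice such an objective would be minimized with a gradient-based optimizer over the 3D joint positions; a perfectly consistent pose sequence (2D observations matching the projection, constant motion, correct bone lengths) drives all three terms to zero.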



