PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time

08/20/2020
by   Soshi Shimada, et al.

Marker-less 3D human motion capture from a single colour camera has seen significant progress. However, it is a very challenging and severely ill-posed problem. As a consequence, even the most accurate state-of-the-art approaches have significant limitations. Purely kinematic formulations on the basis of individual joints or skeletons, and the frequent frame-wise reconstruction in state-of-the-art methods, greatly limit 3D accuracy and temporal stability compared to multi-view or marker-based motion capture. Further, captured 3D poses are often physically incorrect and biomechanically implausible, or exhibit implausible environment interactions (floor penetration, foot skating, unnatural body leaning and strong shifting in depth), which is problematic for any use case in computer graphics. We therefore present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps. Our algorithm first captures 3D human poses purely kinematically. To this end, a CNN infers 2D and 3D joint positions, and subsequently, an inverse kinematics step finds space-time coherent joint angles and the global 3D pose. Next, these kinematic reconstructions are used as constraints in a real-time physics-based pose optimiser that accounts for environment constraints (e.g., collision handling and floor placement), gravity, and biophysical plausibility of human postures. Our approach employs a combination of ground reaction force and residual force for plausible root control, and uses a trained neural network to detect foot contact events in images. Our method captures physically plausible and temporally stable global 3D human motion, without physically implausible postures, floor penetrations or foot skating, from video in real time and in general scenes. The video is available at http://gvv.mpi-inf.mpg.de/projects/PhysCap
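The abstract describes a two-part pipeline: a kinematic stage (CNN joint prediction plus inverse kinematics), a learned foot-contact detector, and a physics-based pose optimiser that enforces environment constraints such as collision handling with the floor. The sketch below illustrates that stage structure only; all function names, the flat-floor assumption, and the stand-in pose data are ours, not the paper's actual implementation, and the physics stage is reduced to a toy floor-projection step.

```python
# Hypothetical sketch of PhysCap's three-stage structure (our simplification).
import numpy as np

FLOOR_HEIGHT = 0.0  # assumed flat floor plane at y = 0


def kinematic_stage(frame):
    """Stage I stand-in: in the paper, a CNN infers 2D/3D joint positions
    and inverse kinematics yields joint angles; here we fabricate a pose
    whose left foot penetrates the floor to exercise the later stages."""
    return np.array([
        [0.0, 1.0, 0.0],    # root
        [0.1, -0.05, 0.0],  # left foot (below the floor)
        [-0.1, 0.02, 0.0],  # right foot
    ])


def detect_foot_contacts(pose, eps=0.05):
    """Stage II stand-in for the paper's trained contact-detection network:
    a foot counts as 'in contact' when it lies within eps of the floor."""
    feet = pose[1:]
    return feet[:, 1] < FLOOR_HEIGHT + eps


def physics_stage(pose, contacts):
    """Stage III toy version: project contacting feet onto the floor,
    mimicking collision handling; the real optimiser additionally applies
    ground reaction and residual forces for plausible root control."""
    corrected = pose.copy()
    for i, in_contact in enumerate(contacts, start=1):
        if in_contact:
            corrected[i, 1] = max(corrected[i, 1], FLOOR_HEIGHT)
    return corrected


pose = kinematic_stage(frame=None)
contacts = detect_foot_contacts(pose)
result = physics_stage(pose, contacts)
print(result[1, 1])  # left-foot height after correction: no floor penetration
```

In the actual system these stages run per frame in real time; the toy projection above only conveys why a separate physics-based stage can remove artefacts (floor penetration, foot skating) that a purely kinematic estimate leaves behind.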


research
03/26/2022

Neural MoCon: Neural Motion Control for Physically Plausible Human Motion Capture

Due to the visual ambiguity, purely kinematic formulations on monocular ...
research
05/03/2021

Neural Monocular 3D Human Motion Capture with Physical Awareness

We present a new trainable system for physically plausible markerless 3D...
research
07/22/2020

Contact and Human Dynamics from Monocular Video

Existing deep models predict 2D and 3D kinematic poses from video that a...
research
09/19/2022

D&D: Learning Human Dynamics from Dynamic Camera

3D human pose estimation from a monocular video has recently seen signif...
research
06/22/2020

MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency

We introduce MotioNet, a deep neural network that directly reconstructs ...
research
07/03/2023

Real-time Monocular Full-body Capture in World Space via Sequential Proxy-to-Motion Learning

Learning-based approaches to monocular motion capture have recently show...
research
05/24/2022

Differentiable Dynamics for Articulated 3d Human Motion Reconstruction

We introduce DiffPhy, a differentiable physics-based model for articulat...
