AMASS: Archive of Motion Capture as Surface Shapes

04/05/2019
by Naureen Mahmood, et al.

Large datasets are the cornerstone of recent advances in computer vision using deep learning. In contrast, existing human motion capture (mocap) datasets are small and the motions limited, hampering progress on learning models of human motion. While there are many different datasets available, they each use a different parameterization of the body, making it difficult to integrate them into a single meta dataset. To address this, we introduce AMASS, a large and varied database of human motion that unifies 15 different optical marker-based mocap datasets by representing them within a common framework and parameterization. We achieve this using a new method, MoSh++, that converts mocap data into realistic 3D human meshes represented by a rigged body model; here we use SMPL [doi:10.1145/2816795.2818013], which is widely used and provides a standard skeletal representation as well as a fully rigged surface mesh. The method works for arbitrary marker sets, while recovering soft-tissue dynamics and realistic hand motion. We evaluate MoSh++ and tune its hyperparameters using a new dataset of 4D body scans that are jointly recorded with marker-based mocap. The consistent representation of AMASS makes it readily useful for animation, visualization, and generating training data for deep learning. Our dataset is significantly richer than previous human motion collections, with more than 40 hours of motion data spanning over 300 subjects and more than 11,000 motions, and it will be made publicly available to the research community.
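For readers who want to use AMASS as training data, the minimal sketch below shows how one might inspect a single motion sequence in Python. It assumes the per-sequence .npz layout of the public release (keys such as 'poses', 'betas', 'trans', and 'mocap_framerate'); the file name is hypothetical and the key names should be verified against the downloaded data.

```python
# Minimal sketch: load and inspect one AMASS motion sequence.
# Assumes the released per-sequence .npz layout; the path below is hypothetical.
import numpy as np

seq = np.load("amass_sequence.npz")  # hypothetical path to one sequence file

poses = seq["poses"]   # (T, D) per-frame SMPL pose parameters in axis-angle form
betas = seq["betas"]   # (B,)   subject-specific body shape coefficients
trans = seq["trans"]   # (T, 3) global root translation per frame
fps = float(seq["mocap_framerate"])

print(f"{poses.shape[0]} frames at {fps:.0f} fps "
      f"({poses.shape[0] / fps:.1f} s of motion)")
print(f"pose dim: {poses.shape[1]}, shape dim: {betas.shape[0]}, "
      f"translation shape: {trans.shape}")
```

Because every sequence shares the SMPL parameterization, each row of 'poses' can be split into global orientation, body, and hand parameters and passed to a SMPL-family body model to recover the full rigged surface mesh.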

Related research

04/01/2020
SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans
We present SoftSMPL, a learning-based method to model realistic soft-tis...

04/25/2022
Adversarial Attention for Human Motion Synthesis
Analysing human motions is a core topic of interest for many disciplines...

03/21/2023
3D Human Mesh Estimation from Virtual Markers
Inspired by the success of volumetric 3D pose estimation, some recent hu...

11/13/2021
PhysXNet: A Customizable Approach for Learning Cloth Dynamics on Dressed People
We introduce PhysXNet, a learning-based approach to predict the dynamics...

10/06/2017
CAMREP - Concordia Action and Motion Repository
Action recognition, motion classification, gait analysis and synthesis a...

02/24/2017
Deep representation learning for human motion prediction and classification
Generative models of 3D human motion are often restricted to a small num...

10/09/2021
SOMA: Solving Optical Marker-Based MoCap Automatically
Marker-based optical motion capture (mocap) is the "gold standard" metho...