HSPACE: Synthetic Parametric Humans Animated in Complex Environments

12/23/2021
by Eduard Gabriel Bazavan et al.

Advances in the state of the art for 3D human sensing are currently limited by the lack of visual datasets with 3D ground truth that include multiple people in motion, operating in real-world environments with complex illumination or occlusion, and potentially observed by a moving camera. Sophisticated scene understanding requires estimating human pose and shape as well as gestures, towards representations that ultimately combine useful metric and behavioral signals with free-viewpoint photo-realistic visualisation capabilities. To sustain progress, we build a large-scale photo-realistic dataset, Human-SPACE (HSPACE), of animated humans placed in complex synthetic indoor and outdoor environments. We combine one hundred diverse individuals of varying age, gender, proportions, and ethnicity with hundreds of motions and scenes, as well as parametric variations in body shape (for a total of 1,600 different humans), to generate an initial dataset of over 1 million frames. Human animations are obtained by fitting an expressive human body model, GHUM, to single scans of people, followed by novel re-targeting and positioning procedures that support realistic animation of dressed humans, statistical variation of body proportions, and jointly consistent scene placement of multiple moving people. Assets are generated automatically, at scale, and are compatible with existing real-time rendering and game engines. The dataset, together with an evaluation server, will be made available for research. Our large-scale analysis of the impact of synthetic data, used in combination with real data and weak supervision, underlines the considerable potential of increased model capacity for continuing quality improvements and for narrowing the sim-to-real gap in this practical setting.
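The abstract's central technical step, fitting a parametric body model such as GHUM to a single scan, can be illustrated with a minimal sketch: gradient-based optimization of shape and pose parameters against a scan point cloud under a Chamfer objective. Everything below is a hypothetical illustration rather than the authors' pipeline: the linear `body_model` stand-in, the tensor names (`template`, `shape_dirs`, `pose_dirs`), and all hyper-parameters are assumptions, whereas the real GHUM decoder is a learned nonlinear model with a kinematic pose parameterization.

```python
import torch

def body_model(betas, pose, template, shape_dirs, pose_dirs):
    """Toy linear stand-in for a parametric body model (NOT GHUM):
    maps shape (betas) and pose offsets onto a (V, 3) template mesh."""
    return (template
            + torch.einsum("b,bvc->vc", betas, shape_dirs)
            + torch.einsum("p,pvc->vc", pose, pose_dirs))

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                                  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def fit_to_scan(scan_pts, template, shape_dirs, pose_dirs, steps=500, lr=1e-2):
    """Fit shape and pose parameters to a raw scan point cloud by
    minimizing Chamfer distance plus a small shape regularizer."""
    betas = torch.zeros(shape_dirs.shape[0], requires_grad=True)
    pose = torch.zeros(pose_dirs.shape[0], requires_grad=True)
    opt = torch.optim.Adam([betas, pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        verts = body_model(betas, pose, template, shape_dirs, pose_dirs)
        loss = chamfer(verts, scan_pts) + 1e-3 * (betas ** 2).sum()
        loss.backward()
        opt.step()
    return betas.detach(), pose.detach()

if __name__ == "__main__":
    # Synthetic smoke test: recover parameters on a noisy copy of the template.
    V, B, P = 1000, 10, 20
    template = torch.randn(V, 3)
    shape_dirs = torch.randn(B, V, 3) * 0.01
    pose_dirs = torch.randn(P, V, 3) * 0.01
    scan = template + 0.02 * torch.randn(V, 3)             # stand-in "scan"
    betas, pose = fit_to_scan(scan, template, shape_dirs, pose_dirs)
```

A production fitting pipeline would add landmark correspondences, pose and shape priors, and a proper kinematic skinning model; a Chamfer term alone is prone to local minima on articulated bodies.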


