MIME: Human-Aware 3D Scene Generation

12/08/2022
by Hongwei Yi, et al.

Generating realistic 3D worlds occupied by moving humans has many applications in games, architecture, and synthetic data creation. But generating such scenes is expensive and labor-intensive. Recent work generates human poses and motions given a 3D scene. Here, we take the opposite approach and generate 3D indoor scenes given 3D human motion. Such motions can come from archival motion capture or from IMU sensors worn on the body, effectively turning human movement into a "scanner" of the 3D world. Intuitively, human movement indicates the free space in a room, and human contact indicates surfaces or objects that support activities such as sitting, lying, or touching. We propose MIME (Mining Interaction and Movement to infer 3D Environments), a generative model of indoor scenes that produces furniture layouts consistent with the human movement. MIME uses an auto-regressive transformer architecture that takes the already generated objects in the scene as well as the human motion as input, and outputs the next plausible object. To train MIME, we build a dataset by populating the 3D FRONT scene dataset with 3D humans. Our experiments show that MIME produces more diverse and plausible 3D scenes than a recent generative scene method that does not know about human movement. Code and data will be available for research at https://mime.is.tue.mpg.de.
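The two motion-derived constraints described above (walked-through space must stay free of furniture; contact poses must be supported by a nearby object) can be illustrated with a small sketch. This is not the authors' code: the grid representation, the `PlacedObject` class, and the adjacency check are all hypothetical simplifications of the idea.

```python
# Hypothetical sketch of the two motion constraints (not the MIME implementation):
# walked-through floor cells imply free space, and contact poses
# (e.g. sitting) require a supporting object in an adjacent cell.
from dataclasses import dataclass
from typing import List, Set, Tuple

Cell = Tuple[int, int]  # (x, y) floor-grid cell

@dataclass
class PlacedObject:
    label: str
    cells: List[Cell]  # floor cells the object footprint occupies

def violates_free_space(obj: PlacedObject, walked: Set[Cell]) -> bool:
    """An object is implausible if it overlaps cells the human walked through."""
    return any(c in walked for c in obj.cells)

def supports_contact(obj: PlacedObject, contact: Cell) -> bool:
    """A contact pose must be supported by an object in an adjacent cell."""
    return any(abs(cx - contact[0]) <= 1 and abs(cy - contact[1]) <= 1
               for cx, cy in obj.cells)

def plausible_layout(objects: List[PlacedObject],
                     walked: Set[Cell],
                     contacts: List[Cell]) -> bool:
    """Check a candidate layout against both motion-derived constraints."""
    if any(violates_free_space(o, walked) for o in objects):
        return False
    return all(any(supports_contact(o, c) for o in objects) for c in contacts)

# Toy example: the human walks along a short path and sits at (3, 3).
walked = {(0, 0), (1, 0), (2, 0), (3, 0)}
contacts = [(3, 3)]
sofa = PlacedObject("sofa", [(3, 4), (4, 4)])    # adjacent to the sitting pose
table = PlacedObject("table", [(1, 0)])          # blocks the walked path

print(plausible_layout([sofa], walked, contacts))         # True
print(plausible_layout([sofa, table], walked, contacts))  # False
```

In the paper these constraints are learned implicitly by the transformer rather than hard-coded; the sketch only makes the intuition concrete.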


