InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions

04/12/2023
by   Han Liang, et al.
We have recently seen tremendous progress in diffusion models for generating realistic human motions. Yet, these methods largely disregard rich multi-human interactions. In this paper, we present InterGen, an effective diffusion-based approach that incorporates human-to-human interactions into the motion diffusion process, enabling lay users to customize high-quality two-person interaction motions with only text guidance. We first contribute a multimodal dataset, named InterHuman. It consists of about 107M frames of diverse two-person interactions, with accurate skeletal motions and 16,756 natural language descriptions. On the algorithm side, we carefully tailor the motion diffusion model to our two-person interaction setting. To handle the symmetry of human identities during interactions, we propose two cooperative transformer-based denoisers that explicitly share weights, with a mutual attention mechanism to further connect the two denoising processes. We then propose a novel representation for motion input in our interaction diffusion model, which explicitly formulates the global relations between the two performers in the world frame. We further introduce two novel regularization terms to encode spatial relations, equipped with a corresponding damping scheme during the training of our interaction diffusion model. Extensive experiments validate the effectiveness and generalizability of InterGen. Notably, it generates more diverse and compelling two-person motions than previous methods and enables various downstream applications for human interactions.
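To make the cooperative-denoiser idea concrete, below is a minimal PyTorch sketch of a pair of denoising streams that share one set of weights and exchange information through mutual (cross) attention, as described in the abstract. All module names, dimensions, and the layer layout are illustrative assumptions, not the authors' implementation; diffusion-timestep and text conditioning are omitted for brevity.

```python
# Sketch: weight-shared denoiser pair with mutual attention between two persons.
# Assumptions: token dimension 256, 4 blocks, 8 heads; conditioning omitted.
import torch
import torch.nn as nn


class MutualAttentionBlock(nn.Module):
    """Self-attention over a person's own motion tokens, then cross-attention
    to the partner's tokens (the 'mutual' connection), then a feed-forward."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        # x_a: tokens of the person being denoised; x_b: tokens of the partner.
        h = self.n1(x_a)
        x_a = x_a + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.n2(x_a)
        x_a = x_a + self.cross_attn(h, x_b, x_b, need_weights=False)[0]
        return x_a + self.ff(self.n3(x_a))


class SharedDenoiser(nn.Module):
    """One stack of blocks applied to both persons, so the two denoising
    processes share weights and remain symmetric under identity swap."""

    def __init__(self, dim: int = 256, depth: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(MutualAttentionBlock(dim) for _ in range(depth))

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor):
        for blk in self.blocks:
            # Apply the same (shared-weight) block to both streams, each
            # attending to the other's pre-update features, so swapping the
            # two inputs simply swaps the two outputs.
            x_a, x_b = blk(x_a, x_b), blk(x_b, x_a)
        return x_a, x_b


if __name__ == "__main__":
    denoiser = SharedDenoiser()
    a = torch.randn(2, 120, 256)  # (batch, frames, feature dim) for person A
    b = torch.randn(2, 120, 256)  # same shape for person B
    out_a, out_b = denoiser(a, b)
    print(out_a.shape, out_b.shape)
```

Sharing a single set of weights is what keeps the model agnostic to which performer is labeled "first"; the mutual attention is the only place the two streams interact, so the symmetry holds by construction.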

