Generating Holistic 3D Human Motion from Speech

12/08/2022
by Hongwei Yi, et al.

This work addresses the problem of generating 3D holistic body motions from human speech. Given a speech recording, we synthesize sequences of 3D body poses, hand gestures, and facial expressions that are realistic and diverse. To achieve this, we first build a high-quality dataset of 3D holistic body meshes with synchronous speech. We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately. The separated modeling stems from the fact that face articulation strongly correlates with human speech, while body poses and hand gestures are less correlated. Specifically, we employ an autoencoder for face motions, and a compositional vector-quantized variational autoencoder (VQ-VAE) for the body and hand motions. The compositional VQ-VAE is key to generating diverse results. Additionally, we propose a cross-conditional autoregressive model that generates body poses and hand gestures, leading to coherent and realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. Our novel dataset and code will be released for research purposes at https://talkshow.is.tue.mpg.de.
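
The separated design described in the abstract can be made concrete with a small sketch. The PyTorch-style code below is an illustrative assumption of how a compositional VQ-VAE (separate codebooks for body and hands) and a cross-conditional autoregressive predictor could be wired together; every module name, feature dimension, and codebook size is hypothetical, training losses (reconstruction, codebook, commitment) are omitted, and the face branch, described in the paper as a plain autoencoder, is left out for brevity. This is a sketch of the idea, not the released TalkSHOW implementation.

```python
# A minimal sketch, assuming PyTorch. Names, dimensions, and codebook sizes
# are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization with a straight-through gradient."""

    def __init__(self, num_codes=256, dim=128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                                   # z: (batch, time, dim)
        # Squared distance from each latent to every codebook entry.
        dist = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        idx = dist.argmin(dim=-1)                           # discrete code indices
        z_q = self.codebook(idx)
        # Straight-through estimator: copy gradients past the quantization step.
        return z + (z_q - z).detach(), idx


class CompositionalVQVAE(nn.Module):
    """Body and hands use separate encoders and codebooks, so their discrete
    codes can be recombined, which is what enables diverse motion samples."""

    def __init__(self, body_dim=63, hand_dim=90, dim=128):
        super().__init__()
        self.body_enc = nn.Sequential(nn.Linear(body_dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.hand_enc = nn.Sequential(nn.Linear(hand_dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.body_vq = VectorQuantizer(dim=dim)
        self.hand_vq = VectorQuantizer(dim=dim)
        self.dec = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, body_dim + hand_dim))

    def forward(self, body, hands):                         # (batch, time, pose dims)
        zb, body_idx = self.body_vq(self.body_enc(body))
        zh, hand_idx = self.hand_vq(self.hand_enc(hands))
        recon = self.dec(torch.cat([zb, zh], dim=-1))       # reconstructed body+hand poses
        return recon, body_idx, hand_idx


class CrossConditionalPredictor(nn.Module):
    """Autoregressive model over the discrete codes: at each step the body and
    hand logits are conditioned on the speech feature and on the previous codes
    of both streams, which keeps the two motions coherent."""

    def __init__(self, num_codes=256, dim=128, audio_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_codes, dim)
        self.body_head = nn.Linear(audio_dim + 2 * dim, num_codes)
        self.hand_head = nn.Linear(audio_dim + 2 * dim, num_codes)

    def forward(self, audio_t, prev_body_idx, prev_hand_idx):
        ctx = torch.cat([audio_t,
                         self.embed(prev_body_idx),
                         self.embed(prev_hand_idx)], dim=-1)
        return self.body_head(ctx), self.hand_head(ctx)     # logits over each codebook
```

At inference time one would sample body and hand code indices step by step from these logits and decode them with the VQ-VAE decoder, while a separate speech-to-expression autoencoder drives the face; again, this only sketches the pipeline the abstract describes.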

Related research

08/15/2021  Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders
03/04/2022  Freeform Body Motion Generation from Speech
01/17/2023  Audio2Gestures: Generating Diverse Gestures from Audio
12/21/2021  GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping
11/18/2022  3D Human Motion Generation from the Text via Gesture Action Classification and the Autoregressive Model
04/20/2023  SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation
05/31/2022  Text/Speech-Driven Full-Body Animation
