Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates

08/18/2021
by   Shenhan Qian, et al.

Co-speech gesture generation aims to synthesize a gesture sequence that not only looks realistic but also matches the input speech audio. Our method generates the movements of the complete upper body, including the arms, hands, and head. Although recent data-driven methods have achieved great success, challenges remain, such as limited variety, poor fidelity, and a lack of objective metrics. Motivated by the fact that speech cannot fully determine gesture, we design a method that learns a set of gesture template vectors to model the latent conditions, which relieves the ambiguity. In our method, the template vector determines the general appearance of a generated gesture sequence, while the speech audio drives the subtle movements of the body; both are indispensable for synthesizing a realistic gesture sequence. Because an objective metric for gesture-speech synchronization is intractable, we adopt the lip-sync error as a proxy metric to tune and evaluate the synchronization ability of our model. Extensive experiments show the superiority of our method in both objective and subjective evaluations of fidelity and synchronization.
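The abstract describes a decomposition in which a learned template vector sets the general appearance of a gesture sequence while per-frame audio features drive the subtle movements. A minimal numpy sketch of that idea is below; the additive combination, the dimensions, and the linear maps `W_template` and `W_audio` are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: template vector, per-frame audio feature,
# per-frame pose vector, and sequence length.
d_t, d_a, d_p, T = 16, 32, 42, 100

# Stand-ins for learned parameters (the paper learns such mappings end to end).
W_template = rng.standard_normal((d_p, d_t)) * 0.1
W_audio = rng.standard_normal((d_p, d_a)) * 0.1

def synthesize_poses(template, audio_feats):
    """Combine a gesture template vector with per-frame audio features.

    template:    (d_t,)   sets the general appearance of the whole sequence
    audio_feats: (T, d_a) drives the subtle, frame-level movements
    returns:     (T, d_p) pose parameters per frame
    """
    base = W_template @ template          # (d_p,) sequence-wide base pose
    residual = audio_feats @ W_audio.T    # (T, d_p) audio-driven offsets
    return base[None, :] + residual       # broadcast base over all frames

poses = synthesize_poses(rng.standard_normal(d_t),
                         rng.standard_normal((T, d_a)))
print(poses.shape)  # (100, 42)
```

In this sketch, swapping the template vector while keeping the audio fixed changes the overall look of the sequence, while the audio term supplies the frame-by-frame variation, mirroring the two roles the abstract assigns them.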

Related research

12/05/2022  Audio-Driven Co-Speech Gesture Video Generation
Co-speech gesture is crucial for human-machine interaction and digital e...

01/14/2021  Generating coherent spontaneous speech and gesture from text
Embodied human communication encompasses both verbal (speech) and non-ve...

03/15/2023  Evaluating gesture-generation in a large-scale open challenge: The GENEA Challenge 2022
This paper reports on the second GENEA Challenge to benchmark data-drive...

08/22/2022  The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation
This paper reports on the second GENEA Challenge to benchmark data-drive...

06/20/2023  EMoG: Synthesizing Emotive Co-speech 3D Gesture with Diffusion Model
Although previous co-speech gesture generation methods are able to synth...

08/25/2021  Integrated Speech and Gesture Synthesis
Text-to-speech and co-speech gesture synthesis have until now been treat...

09/25/2011  Squiggle - A Glyph Recognizer for Gesture Input
Squiggle is a template-based glyph recognizer in the lineage of the $1 Recog...
