Integrated Speech and Gesture Synthesis

08/25/2021
by Siyang Wang, et al.

Text-to-speech and co-speech gesture synthesis have until now been treated as separate areas by two different research communities, and applications merely stack the two technologies using a simple system-level pipeline. This can lead to modeling inefficiencies and may introduce inconsistencies that limit the achievable naturalness. We propose instead to synthesize the two modalities in a single model, a new problem we call integrated speech and gesture synthesis (ISG). We also propose a set of models modified from state-of-the-art neural speech-synthesis engines to achieve this goal. We evaluate the models in three carefully designed user studies, two of which evaluate the synthesized speech and gesture in isolation, plus a combined study that evaluates the models as they will be used in real-world applications: speech and gesture presented together. The results show that participants rate one of the proposed integrated synthesis models as being as good as the state-of-the-art pipeline system we compare against in all three tests. The model achieves this with faster synthesis time and a greatly reduced parameter count compared to the pipeline system, illustrating some of the potential benefits of treating speech and gesture synthesis as a single, unified problem. Videos and code are available on our project page at https://swatsw.github.io/isg_icmi21/
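The structural difference between a pipeline and integrated synthesis can be sketched abstractly. The toy model below is illustrative only: all names, layer sizes, and feature dimensions are hypothetical assumptions, not the paper's actual architecture. The idea it demonstrates is a single shared text encoder feeding two decoder heads, so speech and gesture are produced together from one representation rather than by chaining two separate systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(dim_in, dim_out):
    """Random linear map standing in for a trained network layer."""
    w = rng.standard_normal((dim_in, dim_out)) * 0.1
    return lambda x: x @ w

# Hypothetical dimensions: 32-dim phoneme features, 64-dim shared
# encoding, 80-bin mel-spectrogram frames, 45-dim pose frames.
encode_text = linear(32, 64)     # shared text/phoneme encoder
speech_head = linear(64, 80)     # acoustic (mel-spectrogram) head
gesture_head = linear(64, 45)    # motion (pose-sequence) head

def integrated_synthesis(phoneme_feats):
    """One forward pass yields both modalities from a shared encoding,
    unlike a pipeline that runs TTS first and gesture generation on
    its output. Both streams are time-aligned by construction."""
    h = encode_text(phoneme_feats)           # (T, 64) shared representation
    return speech_head(h), gesture_head(h)   # (T, 80), (T, 45)

mel, motion = integrated_synthesis(rng.standard_normal((100, 32)))
print(mel.shape, motion.shape)  # (100, 80) (100, 45)
```

Because both heads consume the same encoding, the shared parameters are counted once, which is one way an integrated model can end up smaller and faster than two stacked systems.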

