Analyzing Input and Output Representations for Speech-Driven Gesture Generation

03/08/2019
by Taras Kucherenko, et al.

This paper presents a novel framework for automatic speech-driven gesture generation, applicable to human-agent interaction with both virtual agents and robots. Specifically, we extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning. Our model takes speech as input and produces gestures, in the form of sequences of 3D coordinates, as output. Our approach consists of two steps. First, we learn a lower-dimensional representation of human motion using a denoising autoencoder neural network, consisting of a motion encoder MotionE and a motion decoder MotionD. The learned representation preserves the most important aspects of human pose variation while removing less relevant variation. Second, we train a novel encoder network, SpeechE, to map speech to a corresponding motion representation of reduced dimensionality. At test time, the speech encoder and the motion decoder are combined: SpeechE predicts motion representations from a given speech signal, and MotionD then decodes these representations into motion sequences. We evaluate different representation sizes to find the most effective dimensionality, and we evaluate the effects of using different speech features as model input. We find that MFCCs, alone or combined with prosodic features, perform best. The results of a subsequent user study confirm the benefits of representation learning.
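To make the two-step pipeline concrete, the following is a minimal PyTorch sketch of the MotionE/MotionD/SpeechE decomposition described above. It is not the authors' code: the layer sizes, noise level, and the feature dimensions (POSE_DIM, LATENT_DIM, SPEECH_DIM) are illustrative assumptions, and only the overall structure (denoising autoencoder first, then a speech encoder trained to hit the frozen motion representation) follows the paper's description.

```python
# Illustrative sketch of the two-step training scheme; all sizes are assumptions.
import torch
import torch.nn as nn

POSE_DIM = 192    # assumption: flattened 3D joint coordinates per frame
LATENT_DIM = 64   # representation size; the paper evaluates several sizes
SPEECH_DIM = 26   # assumption: MFCC (+ prosodic) features per frame


class MotionE(nn.Module):
    """Encodes a pose vector into the lower-dimensional representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, pose):
        return self.net(pose)


class MotionD(nn.Module):
    """Decodes a motion representation back into a pose vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, POSE_DIM),
        )

    def forward(self, z):
        return self.net(z)


class SpeechE(nn.Module):
    """Maps per-frame speech features to a motion representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, speech):
        return self.net(speech)


motion_e, motion_d, speech_e = MotionE(), MotionD(), SpeechE()

# Step 1: train MotionE/MotionD as a denoising autoencoder on motion data.
pose = torch.randn(32, POSE_DIM)                # stand-in motion batch
noisy = pose + 0.1 * torch.randn_like(pose)     # corrupt the encoder input
recon_loss = nn.functional.mse_loss(motion_d(motion_e(noisy)), pose)

# Step 2: train SpeechE to predict the (frozen) motion representation.
speech = torch.randn(32, SPEECH_DIM)            # stand-in speech features
with torch.no_grad():
    target_z = motion_e(pose)                   # representation targets
speech_loss = nn.functional.mse_loss(speech_e(speech), target_z)

# Test time: chain SpeechE and MotionD to map speech to 3D poses.
generated_pose = motion_d(speech_e(speech))
print(generated_pose.shape)                     # torch.Size([32, 192])
```

Note that at test time only speech_e and motion_d are used, mirroring the paper's combination of SpeechE and MotionD; the motion encoder is needed only during training to define the representation targets.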


