A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

01/13/2023
by Simbarashe Nyatsanga, et al.

Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
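To make the survey's framing concrete, the sketch below shows what a minimal data-driven, speech- and text-conditioned gesture generator might look like in PyTorch: frame-aligned audio features and word embeddings are encoded together and decoded into a sequence of skeletal poses. The layer choices, feature dimensions, and pose parameterization are illustrative assumptions and do not reproduce any specific system reviewed in the paper.

```python
# Minimal sketch of a co-speech gesture generator (illustrative assumptions only).
import torch
import torch.nn as nn


class CoSpeechGestureModel(nn.Module):
    def __init__(self, audio_dim=26, vocab_size=5000, text_dim=64,
                 hidden_dim=256, pose_dim=45):  # pose_dim: e.g. 15 joints x 3D rotation
        super().__init__()
        self.text_embedding = nn.Embedding(vocab_size, text_dim)
        # Encode concatenated audio + text features frame by frame.
        self.encoder = nn.GRU(audio_dim + text_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Decode the speech context into one pose vector per frame.
        self.decoder = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True)
        self.pose_head = nn.Linear(hidden_dim, pose_dim)

    def forward(self, audio_feats, word_ids):
        # audio_feats: (batch, frames, audio_dim), e.g. MFCC features
        # word_ids:    (batch, frames), word id aligned to each audio frame
        text_feats = self.text_embedding(word_ids)
        x = torch.cat([audio_feats, text_feats], dim=-1)
        context, _ = self.encoder(x)
        hidden, _ = self.decoder(context)
        return self.pose_head(hidden)  # (batch, frames, pose_dim)


if __name__ == "__main__":
    model = CoSpeechGestureModel()
    audio = torch.randn(2, 120, 26)           # 2 clips, 120 frames of audio features
    words = torch.randint(0, 5000, (2, 120))  # frame-aligned word ids
    poses = model(audio, words)
    print(poses.shape)                        # torch.Size([2, 120, 45])
```

A deterministic regressor like this tends to produce averaged, damped motion; much of the recent work surveyed instead uses probabilistic generative models (e.g., normalizing flows, VAEs, or diffusion models) over the pose sequence to capture the one-to-many mapping from speech to gesture.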


