
A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020

by   Taras Kucherenko, et al.

Co-speech gestures, the gestures that accompany speech, play an important role in human communication. Automatic co-speech gesture generation is thus a key enabling technology for embodied conversational agents (ECAs), since humans expect ECAs to be capable of multi-modal communication. Research into gesture generation is rapidly gravitating towards data-driven methods. Unfortunately, individual research efforts in the field are difficult to compare: there are no established benchmarks, and each study tends to use its own dataset, motion visualisation, and evaluation methodology. To address this situation, we launched the GENEA Challenge, a gesture-generation challenge wherein participating teams built automatic gesture-generation systems on a common dataset, and the resulting systems were evaluated in parallel in a large, crowdsourced user study using the same motion-rendering pipeline. Since differences in evaluation outcomes between systems are now solely attributable to differences between the motion-generation methods, this enables benchmarking recent approaches against one another to get a better impression of the state of the art in the field. This paper reports on the purpose, design, results, and implications of our challenge.
