To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures

08/12/2021
by   Pieter Wolfert, et al.

While automatic performance metrics are crucial for machine learning of artificial human-like behaviour, the gold standard for evaluation remains human judgement. The subjective evaluation of artificial human-like behaviour in embodied conversational agents is, however, expensive, and little is known about the quality of the data it returns. Two approaches to subjective evaluation can be broadly distinguished: one relying on ratings, the other on pairwise comparisons. In this study we use co-speech gestures to compare the two against each other and answer questions about their appropriateness for the evaluation of artificial behaviour. We consider their ability to rate quality, but also aspects pertaining to the effort of use and the time required to collect subjective data. We use crowdsourcing to rate the quality of co-speech gestures in avatars, assessing which method picks up more detail in subjective assessments. We compared gestures generated by three different machine learning models with varying levels of behavioural quality. We found that both approaches were able to rank the videos according to quality and that the resulting rankings correlated significantly, showing that in terms of quality neither method is preferable over the other. We also found that pairwise comparisons were slightly faster and came with improved inter-rater reliability, suggesting that for small-scale studies pairwise comparisons are to be favoured over ratings.
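To make the comparison of the two evaluation pipelines concrete, below is a minimal, hypothetical sketch (not the authors' analysis code) of how a condition ranking derived from mean ratings and one derived from pairwise win rates could be checked for agreement using Spearman's rank correlation. All model names, sample sizes, and scores are placeholder assumptions.

```python
# Hypothetical sketch: compare a rating-based ranking with a
# pairwise-comparison-based ranking of three gesture-generation conditions.
# All data below are synthetic placeholders, not results from the paper.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(seed=42)
conditions = ["model_A", "model_B", "model_C"]  # placeholder condition names

# (a) Rating-based evaluation: each condition receives slider ratings (1-100)
# from 50 simulated crowd workers; the condition score is the mean rating.
ratings = {
    cond: rng.normal(loc=mu, scale=15, size=50).clip(1, 100)
    for cond, mu in zip(conditions, [40, 55, 70])
}
rating_scores = [ratings[c].mean() for c in conditions]

# (b) Comparison-based evaluation: each condition's score is its win rate,
# i.e. the fraction of pairwise trials in which it was preferred.
pairwise_wins = {"model_A": 28, "model_B": 51, "model_C": 71}  # out of 100 trials
winrate_scores = [pairwise_wins[c] / 100 for c in conditions]

# Spearman's rho measures how well the two condition rankings agree
# (rho = 1.0 means both methods order the conditions identically).
rho, p_value = spearmanr(rating_scores, winrate_scores)
print(f"Spearman rho between rankings: {rho:.2f} (p = {p_value:.3f})")
```

In the same spirit, agreement among raters (inter-rater reliability) could be estimated with a measure such as Krippendorff's alpha; the statistics actually used are described in the full paper.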


