
To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures

While automatic performance metrics are crucial for machine learning of artificial human-like behaviour, the gold standard for evaluation remains human judgement. The subjective evaluation of artificial human-like behaviour in embodied conversational agents is, however, expensive, and little is known about the quality of the data it returns. Two approaches to subjective evaluation can broadly be distinguished: one relying on ratings, the other on pairwise comparisons. In this study we use co-speech gestures to compare the two against each other and answer questions about their appropriateness for the evaluation of artificial behaviour. We consider their ability to rate quality, but also aspects pertaining to the effort of use and the time required to collect subjective data. We used crowdsourcing to rate the quality of co-speech gestures in avatars, assessing which method picks up more detail in subjective assessments. We compared gestures generated by three different machine learning models with varying levels of behavioural quality. We found that both approaches were able to rank the videos according to quality and that the rankings were significantly correlated, showing that in terms of quality assessment neither method is to be preferred over the other. We also found that pairwise comparisons were slightly faster and came with improved inter-rater reliability, suggesting that for small-scale studies pairwise comparisons are to be favoured over ratings.



1. Introduction

When we interact with embodied conversational agents, we expect a similar manner of nonverbal communication as when interacting with humans. One way to achieve more human-like nonverbal behaviour in conversational agents is through data-driven methods, which learn model parameters from data and have gained popularity over the past few years (Kucherenko et al., 2020; Ahuja et al., 2020; Yoon et al., 2020; Kucherenko et al., 2021b). Data-driven methods have been used to generate lip synchronisation, eye gaze, and facial expressions; in this work, however, we take co-speech gestures as a test bed for comparing evaluation methods. Data-driven methods are able to generate a wider range of gestures and behaviours, as behaviour is no longer restricted to pre-coded animations or procedurally generated movement, but is instead generated by models trained on large amounts of human movement data. These behaviours are often used to drive both virtual and physical conversational agents, as they improve interaction (Salem et al., 2013; Ham et al., 2015; Chidambaram et al., 2012; Lucca and Wilbourn, 2018; Prieto Vives et al., 2017). Co-speech gestures are traditionally divided into four categories: iconic gestures, beat gestures, deictic gestures, and metaphoric gestures (McNeill, 1992). The approach used to produce each of these categories often differs, but with data-driven methods it becomes possible to generate multiple categories of gestures with a single model.

The quality of generated human-like behaviour can be assessed using objective or subjective measures. Objective measures rely on an algorithmic approach to return a quantitative measure of the quality of the behaviour and are entirely automated, while subjective measures rely on ratings by human observers. Most recent papers on co-speech gesture generation report objective measures to assess the quality of the generated behaviour, with measures such as velocity diagrams or average jerk being popular (Kucherenko et al., 2021a; Ahuja et al., 2020; Yoon et al., 2020). These measures are not only easy to automate, but also allow comparisons across models. For example, Yoon et al. (Yoon et al., 2020) trained models from other authors on the same dataset to compare objective metric results. For this reason, objective measures are often preferred over subjective evaluations, as the latter are harder to compare due to their potentially high variability. Yet subjective evaluations are crucial when evaluating the behaviour of agents interacting with humans: social communication is far more complex than current objective measures can capture, and subjective evaluations are still considered the gold standard. There may also be a large subjective component to how observers interpret generated behaviour, which we would like to capture. Thus, the “final stretch” in quality evaluations still relies heavily on subjective evaluations (Wagner et al., 2019).

While the value of subjective evaluations is widely accepted, there is little consensus on how to collect and analyse such evaluations for data-driven generated stimuli. Recently, several authors working on data-driven methods for nonverbal behaviour generation moved from rating scales (e.g., having observers rate how human-like a generated stimulus is on a scale from 1 to 5) to pairwise comparisons (e.g., having observers select which of two generated stimuli is more human-like) (Kucherenko et al., 2020; Pérez-Mayos et al., 2019; Yoon et al., 2020; Wolfert et al., 2019). For example, Yoon et al. (Yoon et al., 2020) argued that “co-speech gestures are so subtle, so participants would have struggled to rate them on a five- or seven-point scale” and promoted the use of pairwise comparisons over rating scales. However, relatively little empirical attention has been devoted to this methodological topic in regard to the evaluation of data-driven generated stimuli, and it is still unknown how much the methods actually differ in terms of usability and informativeness.

In the current study, we seek to explore the similarities and differences between the rating scale and pairwise comparison approaches. We take generated co-speech gestures as a test bed for our evaluations, but note that these findings may also apply to stimulus evaluation in other areas. Our hypotheses, design, and methodology were pre-registered before the data was gathered. We present short video clips to human participants, with each video clip showing an avatar displaying combined verbal and nonverbal behaviour. The movements are generated using three data-driven methods of varying quality, and we expect the subjective evaluations to clearly reflect this difference. In order to gain more insight into the effectiveness of the two subjective evaluation methods, we formulated the following five hypotheses.

  1. The rank-order of stimuli implied by the pairwise comparisons and rating scales will be different.

  2. Pairwise comparisons will have higher inter-rater agreement than rating scales.

  3. Pairwise comparisons and rating scales will differ in terms of time-efficiency (e.g., the time it takes for a single participant to finish a single evaluation).

  4. Pairwise comparisons and rating scales will differ in terms of participant usage preference and usability (both qualitative and quantitative).

  5. Pairwise comparisons and rating scales will both find a difference between stimuli that have a pronounced quality difference, but will not have enough resolution to find a difference between stimuli that differ slightly in quality.

Our aim is to quantify the pros and cons of these subjective evaluation methods and to provide empirical recommendations for the community working on gesture generation on when to use each method. Although the idea of comparing these evaluation strategies is not novel in itself (Fisher et al., 1968), it is novel in relation to the evaluation of gesture generation for ECAs. We hope that this work can both highlight similarities and differences between the evaluated methods and function as a bridge between the fields of psychometrics and gesture generation research.

2. Related Work

In this section, we cover work that has compared rating scale evaluations with pairwise comparisons, and look at their specific use in the field of gesture generation for embodied conversational agents. To our knowledge, there has been no comparison between rating scales and pairwise comparisons for the evaluation of co-speech gestures in Embodied Conversational Agents (ECAs) in particular. However, subjective evaluation methods have been studied and compared in several other fields. We want to highlight that we believe the type of stimuli being evaluated makes a difference: we consider data-driven behaviour in virtual agents. We are aware of the overlap with psychology and psychometrics, but want to zoom in specifically on the use of both rating and pairwise assessments of generated gestural behaviour for ECAs.

There is a rich history of work in psychology on related topics. DeCoster et al. (DeCoster et al., 2009) compared analysing continuous variables directly with analysing them after dichotomisation (e.g., re-coding them as two-class variables such as high-or-low). Although there were a few edge cases where dichotomisation was similar to direct analysis, they demonstrated that dichotomisation throws away important information and concluded that the use of the original continuous variables is to be preferred in most circumstances. Simms et al. (Simms et al., 2019) randomly assigned participants to complete the same personality rating scales with different numbers of response options ranging from two to eleven. They found that including four or fewer response options often attenuates psychometric precision, and including more than six response options generally provides no improvements in precision. Finally, Rhemtulla et al. (Rhemtulla et al., 2012) demonstrated that treating rating scale data as continuous can be problematic (i.e., can result in biased estimates) for scales with fewer than five response options, which tend to be quite non-normally distributed. Such data thus requires specialised ordinal methods to analyse properly. Overall, the psychological literature thus suggests that rating scales with between five and seven response options would be preferable to rating scales with fewer response options. If we consider the pairwise comparison approach to be similar to a rating scale with two response options (e.g., better or worse), this would raise concerns about the approach’s psychometric precision and normality.

Although not covered in this paper, another way of evaluating stimuli on a continuous scale is by using visually-aided rating (VAR) (Janhunen, 2012). In VAR, categories are still used as visual anchors, but, unlike in Likert scales, specific scores are not shown. This enables participants to express an ordering from which a quantifiable rating can still be derived. The VAS-RRP technique is similar to VAR, except that the rating scale is placed vertically in VAR and horizontally in VAS-RRP (Sung and Wu, 2018).

However, there have also been impassioned arguments in favour of ordinal and rank-based approaches (of which the pairwise comparison approach can be considered a simple variant) within the affective computing community in recent years (Martinez et al., 2014; Yannakakis and Martínez, 2015; Yannakakis et al., 2021). The argument is that many subjective evaluations are inherently ordinal, cannot adequately be treated as continuous numbers or nominal categories, and should instead be handled using rankings. If this argument is accurate, then the pairwise comparison approach would be preferable to the rating scale approach on theoretical grounds. There is also evidence that rank-based approaches might have practical benefits over rating scale approaches, such as being faster to administer and more reliable over time. For example, Clark et al. (Clark et al., 2018) evaluated the perception of physical strength from images of male bodies using both pairwise comparisons and rating scales and found that the scores were closely correlated but that the pairwise comparisons were completed 67% faster. Similarly, Elliott (Elliott, 1958) and Mueser et al. (Mueser et al., 1984) found high correlations between rankings resulting from evaluations of physical features in humans. Liang et al. (Liang et al., 2020) propose a model to ‘calibrate’ self-reported user ratings for dialogue systems due to issues with validity and bias. For biomedical image assessments, where evaluation considers the visual quality of the stimuli, Phelps et al. (Phelps et al., 2015) found that pairwise comparisons and ranked Likert scores yielded more accurate assessments than non-ranked Likert scores. Burton et al. (Burton et al., 2019) compared rating scales with best-worst scaling, another variant of the rank-based approach, asking participants to select the most attractive and least attractive faces in a series of images; the best-worst scaling approach showed better test-retest reliability than the rating scale approach.

One of the reasons the community would benefit from greater standardisation of subjective evaluation methods can be found in the recently organised GENEA gesture-generation challenge (Kucherenko et al., 2021b). Invited researchers were asked to submit models trained on the same dataset containing human speech and co-speech gestures. All submissions were then compared using crowd-sourced subjective evaluation, in which online participants were asked to rate each clip with a score between 1 and 100. The benefit of this method is that generation models from different authors can be tested at once, within the same framework and participant pool. Sticking to a single evaluation strategy makes it possible to compare work across models, such as in the GENEA challenge, and also across time.

Finally, a recently published preprint reviewed the literature on the evaluation of gesture-generation systems (Wolfert et al., 2021). The review found that stimuli were often evaluated using very different rating scales, covering constructs such as likeability, naturalness, and gesture timing. This variability makes comparisons across papers and over time difficult.

3. Methods

Figure 1. Interface for pairwise comparison evaluation

3.1. Experimental Design

In this study, we used 30 video stimuli showing a gesticulating avatar, provided by Kucherenko et al. (Kucherenko et al., 2020); the stimuli are already available and have been used by other researchers (Jonell et al., 2020). The videos had a frame rate of 30 frames per second. Three types of videos were used: Full, NoSpeech and NoText. The Full videos were generated by a model trained on the motion of a human actor, with the model having access to both the speech audio and the transcribed text; the NoSpeech videos were generated by a model trained on motion and transcribed text only; and the NoText videos were generated by a model trained on motion and speech audio only. Thirty videos were created per type and, in each triplet of videos (one per type), the avatar spoke the same sentence to facilitate comparison. We have two study conditions: Full versus NoSpeech (which we denote LowDiff) and Full versus NoText (HighDiff). We denote them this way because the former comparison showed a small difference in the original study (Kucherenko et al., 2020), while the latter showed a large difference. Both comparisons (Full vs. NoSpeech and Full vs. NoText) showed significant differences in quality in that study, and we expect our subjective evaluations to reflect this.

Each participant in the current study was assigned to either the LowDiff or HighDiff condition. Following that, the participant was assigned to one of two ordering conditions:

  1. Pairwise Comparison approach for 10 videos drawn from a set of 30 videos, followed by the Rating Scale approach for the same 10 videos.

  2. Rating Scale approach for 10 videos drawn from a set of 30, and then Pairwise Comparison approach for the same 10 videos.

3.2. Participants

For this study, 130 participants were recruited on Prolific. To ensure data quality, participants had to be native speakers of English, have at least a 90% approval rating on the platform, and have participated in at least 100 other studies on the platform. Participants were assigned to conditions using block randomisation in order to keep the conditions balanced.

3.3. Technical Setup

Figure 2. Interface for rating scale evaluation

From Prolific, participants were forwarded to a web application to evaluate the stimuli. This application was based on HEMVIP (Jonell et al., 2021), which in turn is based on webMUSHRA (Schoeffler et al., 2018), adapted to work with video files. Since two evaluation strategies were compared, there were two versions of the interface.

The pairwise comparison interface (Figure 1) displays two videos side by side, with three response options shown below the videos. For all conditions, the question was: ‘In which video are the character’s movements most human-like?’ The three response options were: left, right, and equal. Participants were able to play both videos at the same time, although this was not explicitly mentioned in the instructions. After watching both videos and selecting a response option, participants could continue to the next page.

The rating scale interface (Figure 2) displays a single video at a time, with a rating scale shown below it. For all conditions, the question was: ‘How human-like was the agent in this video?’ Response options ranged from 1 to 5 and were labelled not at all, slightly, somewhat, moderately, and extremely. Videos could only be watched one at a time, and participants could only advance to the next page once both videos had been played and rated.

3.4. Experimental Procedure

After participants were assigned to the task on Prolific, they were forwarded to the online evaluation system. Here, they were assigned an internal participant ID corresponding to a configuration file that specified which stimuli to show, in which order, and when to run attention checks. Each participant evaluated a total of 22 video pairs: 10 evaluated with the pairwise comparison approach, 10 with the rating scale approach, and 2 containing an attention check. The order of the two evaluation approaches (pairwise comparison vs. rating scale) was determined by the assigned ordering condition. The position of the attention checks within the series of evaluation pairs was randomised, and there were two types of attention checks: one in which the response option to select was indicated visually and one in which it was indicated acoustically.

After evaluating the 22 video pairs, participants were presented with a questionnaire collecting their age, gender, nationality, level of education and experience with computers. This was followed by open questions about the procedure they had just completed and whether they preferred the pairwise comparison or the rating scale evaluation. Once done with the study, successful participants were rewarded with 2.50 GBP (pay averaged 7.23 GBP per hour when taking into account the average duration of the task). The time each participant spent on each page of the experiment (and overall) was also recorded, allowing us to evaluate efficiency.

4. Analyses

4.1. Hypothesis 1

To test the hypothesis that the two comparison methods would result in different rank-orderings of stimuli, we used a correlational approach. We first calculated each stimulus’ average score across participants for each comparison method. Average scores using the rating scale method ranged from 1 to 5, and average scores for the pairwise approach ranged from –1 to 1 (on a scale where 1 = the stimulus was preferred over the alternative, 0 = the stimulus and alternative were judged equal, and –1 = the alternative was preferred over the stimulus). We then estimated the Kendall rank-order correlation (Kendall, 1938) between these two series.
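As an illustration, the following sketch shows how this correlation could be computed in Python; the data frame layout, column names (stimulus, rating, pairwise_score) and file name are assumptions for illustration, not the exact format used in our analysis.

```python
import pandas as pd
from scipy.stats import kendalltau

# Long-format responses: one row per judgement of a stimulus by a participant.
# 'rating' is the 1-5 rating scale response; 'pairwise_score' is 1, 0, or -1
# (stimulus preferred, judged equal, or alternative preferred, respectively).
responses = pd.read_csv("responses.csv")  # hypothetical file

# Average each stimulus' score across participants, separately per method.
averages = responses.groupby("stimulus")[["rating", "pairwise_score"]].mean()

# Kendall rank-order correlation between the two series of stimulus averages.
tau, p_value = kendalltau(averages["rating"], averages["pairwise_score"])
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.4f}")
```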

4.2. Hypothesis 2

To test the hypothesis that the pairwise comparison method would have higher inter-rater agreement than the rating scale method, we used two statistical approaches. First, we estimated intraclass correlation coefficients (ICCs) using Model 2A (McGraw and Wong, 1996) and calculated the absolute agreement of the average of 12 participants (i.e., the minimum number of participants assigned to any comparison). This approach estimates the reliability of the average of multiple participants’ responses (which is what is used to compare video-generating methods), but assumes that the data approximates a continuous distribution (which is not the case for the pairwise method). As such, we also estimated chance-adjusted categorical agreement using quadratic-weighted kappa coefficients (Gwet, 2014). This approach is overly pessimistic in this case because it estimates the reliability of a single randomly selected participant’s response, but it has the benefit of not assuming continuous data. In both cases, 2000 iterations of non-parametric bootstrapping (Efron and Tibshirani, 1993) (with percentile-based confidence intervals and p-values) were used to compare the two approaches’ inter-rater reliability.
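As a rough sketch of how such a bootstrap comparison could be implemented (assuming a fully crossed long-format data set with columns stimulus, participant and score, and using pingouin’s ICC implementation; file and column names are illustrative, and the quadratic-weighted kappa comparison would be handled analogously):

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

def icc_2k(df):
    """ICC for two-way random effects, absolute agreement, average measures (ICC2k)."""
    table = pg.intraclass_corr(data=df, targets="stimulus",
                               raters="participant", ratings="score")
    return table.set_index("Type").loc["ICC2k", "ICC"]

def resample_raters(df, drawn):
    """Rebuild the data for a bootstrap draw of raters, relabelling duplicates."""
    parts = []
    for new_id, rater in enumerate(drawn):
        block = df[df["participant"] == rater].copy()
        block["participant"] = new_id
        parts.append(block)
    return pd.concat(parts, ignore_index=True)

# Long-format scores, one data frame per evaluation method (hypothetical files).
rating = pd.read_csv("rating_scores.csv")
pairwise = pd.read_csv("pairwise_scores.csv")

raters = rating["participant"].unique()
diffs = []
for _ in range(2000):  # 2000 bootstrap iterations, as in the reported analysis
    drawn = rng.choice(raters, size=len(raters), replace=True)
    diffs.append(icc_2k(resample_raters(pairwise, drawn))
                 - icc_2k(resample_raters(rating, drawn)))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"Pairwise minus rating reliability, 95% percentile CI: [{lo:.2f}, {hi:.2f}]")
```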

4.3. Hypothesis 3

To test the hypothesis that the two comparison methods would differ in terms of time-efficiency (i.e., the time it takes a participant to complete a single comparison/page), we used a linear mixed effects modelling approach (Gałecki and Burzykowski, 2013). We estimated a model in which each page’s completion time (in seconds) was regressed on a binary variable representing the comparison method. To control for practice and fatigue effects, we also regressed the completion time variable on a binary variable representing whether the comparison was during the first or second half of the experiment, and the method-by-half interaction effect to allow the difference between comparison methods to differ between the first and second half of the experiment. Finally, to account for the clustering/nesting of comparisons within participants and videos, we included random intercepts for these variables and used Satterthwaite’s approximation (Kuznetsova et al., 2017) to correct model degrees of freedom for small clusters.
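For readers who prefer code, the model has roughly the following shape. This sketch uses Python’s statsmodels rather than the R lmerTest package used for the reported analysis, includes a random intercept for participants only (the full model also had a crossed random intercept for videos and Satterthwaite-corrected degrees of freedom, which statsmodels does not provide), and the column and file names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per completed page: completion time in seconds, comparison method
# (0 = rating scale, 1 = pairwise), and experiment half (0 = first, 1 = second).
pages = pd.read_csv("page_times.csv")  # hypothetical file

# Completion time regressed on method, half, and their interaction,
# with a random intercept per participant.
model = smf.mixedlm("completion_time ~ method * half",
                    data=pages, groups=pages["participant"])
result = model.fit()
print(result.summary())
```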

4.4. Hypothesis 4

To test the hypothesis that participants would be more likely to prefer the pairwise comparison approach than the rating scale approach, we estimated an intercept-only logistic regression model predicting a binary variable representing whether each participant preferred the pairwise comparison approach over the rating scale approach. We then back-transformed the intercept to probability units and tested whether it was significantly different from an equal preference of 50%.
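In code, this amounts to fitting an intercept-only logistic regression and converting the intercept back to a probability, for example as below (variable and file names are illustrative):

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.special import expit

# One row per participant; prefers_pairwise is 1 if the participant preferred
# the pairwise comparison approach over the rating scale approach, else 0.
prefs = pd.read_csv("preferences.csv")  # hypothetical file

# Intercept-only logistic regression; the Wald test of intercept == 0
# corresponds to testing whether the preference probability differs from 50%.
result = smf.logit("prefers_pairwise ~ 1", data=prefs).fit()

probability = expit(result.params["Intercept"])  # back-transform log-odds
print(f"Estimated preference for pairwise: {probability:.1%}, "
      f"p (vs. 50%) = {result.pvalues['Intercept']:.3f}")
```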

4.5. Hypothesis 5

To test the hypothesis that the two comparison methods (i.e., rating scale and pairwise) would both find a difference in the case of a large difference in the quality of generated behaviour (i.e., Full vs. NoText stimuli) but not in the case of a small difference in the quality of generated behaviour (i.e., Full vs. NoSpeech stimuli), we used a linear mixed effects modelling approach (Gałecki and Burzykowski, 2013). We estimated a model in which the score for the Full stimuli was regressed on the type of the other stimulus (NoText or NoSpeech), the ordering condition, and their interaction.

5. Results

In total, 130 participants were recruited, of whom 100 passed the attention checks. Of these 100, the mean age was 35.01 (SD = 12.64); 55 identified as female and 45 as male. Sixty-eight of the participants were UK nationals, 22 were from the USA, 4 were Canadian, 2 Irish, 1 Australian, 1 Bulgarian, 1 Indian and 1 from New Zealand.

5.1. Hypotheses

5.1.1. Hypothesis 1

In Figure 3, we can see the relationship between the average pairwise scores and the average rating scores. We quantified the magnitude of this relationship using Kendall’s rank-order correlation. When we excluded trials where the two stimuli being compared were rated as equally human-like, we found a rank correlation of 0.44, 95% CI: [0.32, 0.55]. When we included trials where the two stimuli being compared were rated as equally human-like and assigned them a pairwise score of 0, this correlation became 0.46, 95% CI: [0.35, 0.57]. Thus, although the two methods did not produce exactly the same rank-ordering of stimuli, their rank-orderings were positively correlated (i.e., similar) to a high degree.

Figure 3. Relationship between average rating and pairwise scores. The two are positively correlated.

5.1.2. Hypothesis 2

Using the intraclass correlation approach, the inter-rater reliability coefficient was 0.62 for the rating scale method and 0.77 for the pairwise method; this difference was statistically significant. Using the chance-adjusted categorical agreement approach, the quadratic-weighted kappa coefficient was 0.14 for the rating scale method and 0.23 for the pairwise method; this difference was also statistically significant.

5.1.3. Hypothesis 3

The main effect of comparison method was significantly greater than zero (see Figure 4). The unstandardised slope estimate of 6.07 means that pages were completed an average of around 6 seconds faster with the pairwise approach than with the rating approach. The main effect of ordering was not significantly different from zero, and the type-by-ordering interaction effect was also not significantly different from zero, which means that completion time did not significantly differ between the first and second half of the experiment and that the difference between comparison methods did not depend on which came first or second in the experiment.

If we want to know what the time difference would be for an entire experiment, we can multiply this page-level effect by the number of pages shown to participants. For 10 pages, as we did in this study, the experiment-level difference would be around 60 seconds.

Figure 4. Completion time across conditions (error bars are 95% CIs), showing that the pairwise method is approximately 6 seconds faster per page than the rating method.

5.1.4. Hypothesis 4

The intercept for preference for the pairwise method was estimated at 56.0% and was not significantly different from an equal preference of 50%. Thus, we cannot conclude that participants reliably preferred one method over the other.

5.1.5. Hypothesis 5

Figure 5. Comparison of generation methods by condition and evaluation method (error bars are 95% CIs)

For the rating scale method, the main effect of other was significantly greater than zero. This means that the extent to which the Full stimuli were rated higher than the other stimuli was greater for the HighDiff stimuli than for the LowDiff stimuli. In this model, neither the main effect of ordering nor the other-by-ordering interaction effect was significant. For the pairwise method, the main effect of other was also significantly greater than zero. This means that the probability of preferring the Full stimulus over the other stimulus was greater for the HighDiff stimuli than for the LowDiff stimuli. In this model, too, neither the main effect of ordering nor the other-by-ordering interaction effect was significant. Despite the different scaling, the two methods produced very similar results that matched our hypotheses and also matched the results from the original study we were reproducing (Kucherenko et al., 2020) (see Figure 5).

6. Discussion

In this study, we explored the differences in evaluating gesture motion stimuli with both pairwise comparisons and rating scales. Our aim was to gain a deeper understanding of when to use each approach. For this, we looked at the stimulus rankings both methods provided, their inter-rater reliability, the time it took participants to complete evaluations, participant preferences, and the conclusions both methods would yield regarding the comparison of gesture generation methods with high and low differences in quality.

The rank-ordering of stimuli produced by the pairwise comparisons and the rating scales showed a moderate positive correlation. We can conclude that, for ranking stimuli (in this case co-speech gestures), neither approach is clearly preferable over the other; both are able to subjectively distinguish bad from good stimuli, and this can be used to establish an order of quality.

When we take a look at the inter-rater reliability, we see a higher reliability for the pairwise method. This suggests that the pairwise method might be preferred over the rating scale method in terms of reliability.

When we look at which approach is faster, we can conclude that each comparison using the pairwise method was, on average, 6 seconds faster (25s instead of 31s) than each comparison using the rating scale method, which aligns with the findings of previous studies (Clark et al., 2018). Although this difference was statistically significant (i.e., reliable), a difference of 6 seconds per comparison is likely too small to make much of a practical difference unless the number of comparisons being made by each participant was large (e.g., 100 or more).

Whether participants reliably preferred one comparison method over the other depended on which method they were assigned to use first. Those participants who used the rating scale method and then the pairwise method significantly preferred the pairwise method. However, those who used the pairwise method and then the rating scale method did not show a reliable preference for either method. This provides tentative evidence that the pairwise method may be more user-friendly.

In line with a previous study (Kucherenko et al., 2020), we found that a high qualitative difference is indeed picked up by subjective evaluations. Not only does this hold for pairwise comparisons, but also for the rating scale approach. Both methods can provide similar results and are equivalent when comparing two or more conditions, for example two different models used to generate behaviour.

6.1. Limitations

For this study we gathered 2200 evaluations submitted by 100 participants. Due to the random drawing of stimuli, not all stimuli received the same number of responses; a pseudo-random assignment of stimuli to participants could have avoided this. The fact that participants could watch the videos simultaneously in the pairwise comparison interface but not in the rating scale interface may have contributed to the difference in average completion time between the two methods. We only asked participants to assess the quality of the generated gestures in terms of ‘human-likeness’, and we are aware of how limiting this single question is relative to the full spectrum of questions that could be asked about these stimuli. We opted for this strategy because the aim of this work was not to demonstrate which questions are most appropriate for the evaluation, but to compare the outcomes of two different evaluation strategies.

6.2. Recommendations

Based on our results, we have found no strong evidence to prefer one evaluation method over the other. The study does however allow us to make a number of recommendations for each method in relation to the domain of gesture generation, taking into account previous studies in other domains.

Pairwise comparisons may be better suited when a large number of stimuli are to be evaluated, as this not only results in a shorter study but also helps to avoid participant fatigue. If only a small number of conditions are under consideration, then pairwise comparison of conditions is practical; however, because the number of condition pairs grows as n(n−1)/2 (with n the number of conditions), pairwise comparisons tend to become unwieldy for 4 or more conditions if we want to compare all versus all: 4 conditions already require 6 pairwise combinations, and 6 conditions require 15.
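The growth of an all-versus-all design can be made concrete with a few lines of Python:

```python
from math import comb

# Number of condition pairs needed for an all-versus-all pairwise design.
for n_conditions in range(2, 9):
    print(f"{n_conditions} conditions -> {comb(n_conditions, 2)} condition pairs")
# Prints: 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15, 7 -> 21, 8 -> 28
```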

Rating scales may be more appropriate when fine-grained evaluations are needed, as ratings can not only be used to rate stimuli between conditions, but also to rank stimuli within conditions. Ratings are also recommended when more than 3 conditions are under consideration, as the number of required ratings grows linearly with the number of conditions and stimuli. We would, however, like to emphasise the importance of providing anchors/labels for each response option in the rating scales (Weijters et al., 2010). When using rating scales, it is also recommended to calibrate participants’ judgements by showing them poor and excellent stimuli during a brief training session. While a lack of calibration can to some extent be addressed by normalising participants’ ratings, resolution and reliability are lost when participants are not properly trained before starting their rating task.

Finally, it is important to consider the type of information provided by each evaluation method. Rating scales provide information about the quality of each stimulus on an absolute scale, whereas pairwise comparisons provide information on a relative scale. Thus, you could use the pairwise comparison method to establish whether one method of generating human-like behaviour was reliably preferred over another. However, being ‘better’ is not always the same as being ‘good’. For instance, one method could be considered ‘poor’ and the other ‘very poor’; this would likely result in a big difference in pairwise comparisons, but it would be a mistake to conclude that the former was therefore high quality in absolute terms. This is where carefully crafted rating scales (and qualitative methods, such as interviews and free response boxes) can provide additional information about quality in general.

7. Conclusion

Objective evaluation measures of generated human-like behaviour often provide insufficient information to fully assess the quality of the behaviour. As such, these measures are often supplemented with subjective evaluations. However, the field of gesture generation currently overlooks the amount of work that has been done in other fields that deal with subjective evaluations. This paper compared two popular methods, pairwise comparisons and rating scales, and found that both were equally effective at assessing the quality of generated behaviour and provided surprisingly similar results in terms of rank-ordering of stimuli, inter-rater reliability, participant usability preferences, and the conclusions they yielded regarding the comparison of different stimuli-generation methods. We found that pairwise comparisons were slightly faster and showed somewhat higher inter-rater reliability, whereas the rating scale approach provided information on both absolute and relative quality and scales better when comparing more than two stimuli at a time. These insights are increasingly relevant at a time when quantitative quality measures are used to drive research and development, especially as the use of data-driven methods tends to draw attention away from subjective measures in favour of objective loss functions. Subjective measures are likely to remain the gold standard in evaluation studies, and a better understanding of their capabilities benefits the study of multimodal behaviour generation.

8. Acknowledgment

This research received funding from the Flemish Government (AI Research Program) and was supported by the Flemish Research Foundation grant no. 1S95020N and the Swedish Foundation for Strategic Research contract no. RIT15-0107 (EACare).


  • C. Ahuja, D. W. Lee, R. Ishii, and L. Morency (2020) No gestures left behind: learning relationships between spoken language and freeform gestures. In Findings of the Association for Computational Linguistics: EMNLP 2020, External Links: Document Cited by: §1, §1.
  • N. Burton, M. Burton, D. Rigby, C. A. Sutherland, and G. Rhodes (2019) Best-worst scaling improves measurement of first impressions. Cognitive research: principles and implications 4 (1), pp. 1–10. Cited by: §2.
  • V. Chidambaram, Y. Chiang, and B. Mutlu (2012) Designing persuasive robots: how robots might persuade people using vocal and nonverbal cues. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, pp. 293–300. Cited by: §1.
  • A. P. Clark, K. L. Howard, A. T. Woods, I. S. Penton-Voak, and C. Neumann (2018) Why rate when you could compare? using the “elochoice” package to assess pairwise comparisons of perceived physical strength. PloS one 13 (1), pp. e0190393. Cited by: §2, §6.
  • J. DeCoster, A. R. Iselin, and M. Gallucci (2009) A conceptual and empirical examination of justifications for dichotomization. Psychological Methods 14 (4), pp. 349–366. External Links: Document Cited by: §2.
  • B. Efron and R. J. Tibshirani (1993) An introduction to the bootstrap. Chapman and Hall, New York, NY. Cited by: §4.2.
  • L. L. Elliott (1958) Reliability of judgments of figural complexity.. Journal of experimental psychology 56 (4), pp. 335. Cited by: §2.
  • S. T. Fisher, D. J. Weiss, and R. V. Dawis (1968) A comparison of likert and pair comparisons techniques in multivariate attitude scaling. Educational and Psychological Measurement 28 (1), pp. 81–94. Cited by: §1.
  • A. Gałecki and T. Burzykowski (2013) Linear mixed-effects model. In Linear Mixed-Effects Models Using R, pp. 245–273. Cited by: §4.3, §4.5.
  • K. L. Gwet (2014) Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters. Fourth edition, Advanced Analytics, Gaithersburg, MD. Cited by: §4.2.
  • J. Ham, R. H. Cuijpers, and J. Cabibihan (2015) Combining robotic persuasive strategies: the persuasive power of a storytelling robot that uses gazing and gestures. International Journal of Social Robotics 7 (4), pp. 479–487. Cited by: §1.
  • K. Janhunen (2012) A comparison of likert-type rating and visually-aided rating in a simple moral judgment experiment. Quality & Quantity 46 (5), pp. 1471–1477. Cited by: §2.
  • P. Jonell, T. Kucherenko, I. Torre, and J. Beskow (2020) Can we trust online crowdworkers? comparing online and offline participants in a preference test of virtual agents. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, pp. 1–8. Cited by: §3.1.
  • P. Jonell, Y. Yoon, P. Wolfert, T. Kucherenko, and G. E. Henter (2021) HEMVIP: human evaluation of multiple videos in parallel. In Proceedings of the International Conference on Multimodal Interaction, Cited by: §3.3.
  • M. G. Kendall (1938) A new measure of rank correlation. Biometrika 30 (1), pp. 81–93. External Links: Document Cited by: §4.1.
  • T. Kucherenko, D. Hasegawa, N. Kaneko, G. E. Henter, and H. Kjellström (2021a) Moving fast and slow: analysis of representations and post-processing in speech-driven automatic gesture generation. Int. J. Hum. Comput. Interact.. External Links: Document Cited by: §1.
  • T. Kucherenko, P. Jonell, S. van Waveren, G. E. Henter, S. Alexandersson, I. Leite, and H. Kjellström (2020) Gesticulator: a framework for semantically-aware speech-driven gesture generation. In Proceedings of the International Conference on Multimodal Interaction, pp. 242–250. Cited by: §1, §1, §3.1, §5.1.5, §6.
  • T. Kucherenko, P. Jonell, Y. Yoon, P. Wolfert, and G. E. Henter (2021b) A large, crowdsourced evaluation of gesture generation systems on common data: the genea challenge 2020. In 26th International Conference on Intelligent User Interfaces, pp. 11–21. Cited by: §1, §2.
  • A. Kuznetsova, P. B. Brockhoff, and R. H. B. Christensen (2017) lmerTest Package: tests in linear mixed effects models. Journal of Statistical Software 82 (13), pp. 1–26. External Links: Document Cited by: §4.3.
  • W. Liang, J. Zou, and Z. Yu (2020) Beyond user self-reported likert scale ratings: a comparison model for automatic dialog evaluation. arXiv preprint arXiv:2005.10716. Cited by: §2.
  • K. Lucca and M. P. Wilbourn (2018) Communicating to learn: infants’ pointing gestures result in optimal learning. Child development 89 (3), pp. 941–960. Cited by: §1.
  • H. Martinez, G. Yannakakis, and J. Hallam (2014) Don’t classify ratings of affect; rank them!. IEEE Transactions on Affective Computing 3045 (c), pp. 1–1. External Links: Document Cited by: §2.
  • K. O. McGraw and S. P. Wong (1996) Forming inferences about some intraclass correlation coefficients. Psychological Methods 1 (1), pp. 30–46. External Links: Document Cited by: §4.2.
  • D. McNeill (1992) Hand and mind: what gestures reveal about thought. University of Chicago press. Cited by: §1.
  • K. T. Mueser, B. W. Grau, S. Sussman, and A. J. Rosen (1984) You’re only as pretty as you feel: facial expression as a determinant of physical attractiveness.. Journal of Personality and Social Psychology 46 (2), pp. 469. Cited by: §2.
  • L. Pérez-Mayos, M. Farrús, and J. Adell (2019) Part-of-speech and prosody-based approaches for robot speech and gesture synchronization. Journal of Intelligent & Robotic Systems, pp. 1–11. Cited by: §1.
  • A. S. Phelps, D. M. Naeger, J. L. Courtier, J. W. Lambert, P. A. Marcovici, J. E. Villanueva-Meyer, and J. D. MacKenzie (2015) Pairwise comparison versus likert scale for biomedical image assessment. American Journal of Roentgenology 204 (1), pp. 8–14. Cited by: §2.
  • P. Prieto Vives, A. Igualada Pérez, and N. Esteve Gibert (2017) Beat gestures improve word recall in 3- to 5-year-old children. Journal of Experimental Child Psychology 156, pp. 99–112. Cited by: §1.
  • M. Rhemtulla, P. É. Brosseau-Liard, and V. Savalei (2012) When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods 17 (3), pp. 354–373. External Links: Document Cited by: §2.
  • M. Salem, F. Eyssel, K. Rohlfing, S. Kopp, and F. Joublin (2013) To err is human (-like): effects of robot gesture on perceived anthropomorphism and likability. International Journal of Social Robotics 5 (3), pp. 313–323. Cited by: §1.
  • M. Schoeffler, S. Bartoschek, F. Stöter, M. Roess, S. Westphal, B. Edler, and J. Herre (2018) WebMUSHRA—a comprehensive framework for web-based listening tests. Journal of Open Research Software 6 (1). Cited by: §3.3.
  • L. J. Simms, K. Zelazny, T. F. Williams, and L. Bernstein (2019) Does the number of response options matter? Psychometric perspectives using personality questionnaire data. Psychological Assessment 31 (4), pp. 557–566. External Links: Document Cited by: §2.
  • Y. Sung and J. Wu (2018) The visual analogue scale for rating, ranking and paired-comparison (vas-rrp): a new technique for psychological measurement. Behavior research methods 50 (4), pp. 1694–1715. Cited by: §2.
  • P. Wagner, J. Beskow, S. Betz, J. Edlund, J. Gustafson, G. E. Henter, S. Le Maguer, Z. Malisz, É. Székely, C. Tånnander, and J. Voße (2019) Speech synthesis evaluation – state-of-the-art assessment and suggestion for a novel research program. In Proc. SSW, Vol. 10, Vienna, Austria, pp. 105–110. External Links: Document Cited by: §1.
  • B. Weijters, E. Cabooter, and N. Schillewaert (2010) The effect of rating scale format on response styles: the number of response categories and response category labels. International Journal of Research in Marketing 27 (3), pp. 236–247. Cited by: §6.2.
  • P. Wolfert, T. Kucherenko, H. Kjellström, and T. Belpaeme (2019) Should beat gestures be learned or designed?: a benchmarking user study. In ICDL-EPIROB 2019 Workshop on Naturalistic Non-Verbal and Affective Human-Robot Interactions, Cited by: §1.
  • P. Wolfert, N. Robinson, and T. Belpaeme (2021) A review of evaluation practices of gesture generation in embodied conversational agents. arXiv preprint arXiv:2101.03769. Cited by: §2.
  • G. Yannakakis, R. Cowie, and C. Busso (2021) The ordinal nature of emotions: an emerging approach. IEEE Transactions on Affective Computing 12 (1), pp. 16–35. External Links: Document Cited by: §2.
  • G. Yannakakis and H. P. Martínez (2015) Grounding truth via ordinal annotation. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 574–580. External Links: ISSN 2156-8111, Document Cited by: §2.
  • Y. Yoon, B. Cha, J. Lee, M. Jang, J. Lee, J. Kim, and G. Lee (2020) Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG) 39 (6), pp. 1–16. Cited by: §1, §1, §1.