Knowing Your Annotator: Rapidly Testing the Reliability of Affect Annotation

08/30/2023
by Matthew Barthet, et al.

The laborious and costly nature of affect annotation is a key obstacle to obtaining large-scale corpora with valid and reliable affect labels. Motivated by the lack of tools that can effectively determine an annotator's reliability, this paper proposes general quality assurance (QA) tests for real-time continuous annotation tasks. Assuming that the annotation tasks rely on stimuli with audiovisual components, such as videos, we propose and evaluate two QA tests: a visual and an auditory QA test. We validate the QA tool across 20 annotators who are asked to go through the test followed by a lengthy task of annotating the engagement of gameplay videos. Our findings suggest that the proposed QA tool reveals, unsurprisingly, that trained annotators are more reliable than the best of the untrained crowdworkers we could employ. Importantly, the introduced QA tool can effectively predict the reliability of an affect annotator with 80% accuracy, thereby maximizing the reliability of labels solicited in affective corpora. The introduced QA tool is available and accessible through the PAGAN annotation platform.
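The core idea of such a QA test can be illustrated with a minimal sketch: an annotator continuously annotates a stimulus whose affect-relevant signal is known in advance (e.g., a bar rising and falling on screen in the visual test), and their trace is compared against that reference to decide whether they are reliable. The function names, the Pearson-correlation criterion, and the 0.5 threshold below are illustrative assumptions, not the paper's actual method or values.

```python
# Hedged sketch of a reliability check in the spirit of a continuous-annotation
# QA test. All names and thresholds here are illustrative assumptions.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length continuous traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def passes_qa(reference, trace, threshold=0.5):
    """Flag an annotator as reliable if their trace tracks the known reference."""
    return pearson(reference, trace) >= threshold

# Known stimulus signal shown during the QA test (illustrative ramp up and down).
reference = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 0.8, 0.6, 0.4, 0.2]
attentive = [0.1, 0.2, 0.5, 0.6, 0.7, 0.9, 0.9, 0.5, 0.3, 0.2]  # follows the stimulus
inattentive = [0.5, 0.4, 0.5, 0.5, 0.6, 0.5, 0.4, 0.5, 0.5, 0.5]  # barely responds

print(passes_qa(reference, attentive))    # True
print(passes_qa(reference, inattentive))  # False
```

In practice, a tool like this would run such a check once, before the lengthy annotation task, so unreliable crowdworkers can be screened out early rather than discovered after their labels contaminate the corpus.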

Related research

- PAGAN: Video Affect Annotation Made Easy (07/01/2019)
  How could we gather affect annotations in a rapid, unobtrusive, and acce...

- Crowdsourcing a High-Quality Gold Standard for QA-SRL (11/08/2019)
  Question-answer driven Semantic Role Labeling (QA-SRL) has been proposed...

- NeRF-QA: Neural Radiance Fields Quality Assessment Database (05/04/2023)
  This short paper proposes a new database - NeRF-QA - containing 48 video...

- Learning a Cost-Effective Annotation Policy for Question Answering (10/07/2020)
  State-of-the-art question answering (QA) relies upon large amounts of tr...

- Practical Annotation Strategies for Question Answering Datasets (03/06/2020)
  Annotating datasets for question answering (QA) tasks is very costly, as...

- Extending the Scope of Out-of-Domain: Examining QA models in multiple subdomains (04/09/2022)
  Past works that investigate out-of-domain performance of QA systems have...

- Assessing the reliability of ensemble forecasting systems under serial dependence (04/16/2018)
  The problem of testing the reliability of ensemble forecasting systems i...
