In Search of Ambiguity: A Three-Stage Workflow Design to Clarify Annotation Guidelines for Crowd Workers

12/04/2021
by Vivek Krishna Pradhan, et al.

We propose a novel three-stage FIND-RESOLVE-LABEL workflow for crowdsourced annotation to reduce ambiguity in task instructions and thus improve annotation quality. Stage 1 (FIND) asks the crowd to find examples whose correct label seems ambiguous given the task instructions. Workers are also asked to provide a short tag that describes the ambiguous concept embodied by the specific instance found. We compare collaborative vs. non-collaborative designs for this stage. In Stage 2 (RESOLVE), the requester selects one or more of these ambiguous examples to label (resolving ambiguity). The new label(s) are automatically injected back into the task instructions to improve clarity. Finally, in Stage 3 (LABEL), workers perform the actual annotation using the revised guidelines with clarifying examples. We compare three designs for using these examples: examples only, tags only, or both. We report image labeling experiments over six task designs using Amazon's Mechanical Turk. Results show improved annotation accuracy and yield further insights into effective design of crowdsourced annotation tasks.
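As a reading aid, the sketch below shows one way the three-stage data flow described in the abstract could be wired together in Python. All names here (AmbiguousExample, Guidelines, run_workflow, and the find_fn/resolve_fn/label_fn callbacks) are illustrative assumptions for this sketch, not an implementation from the paper.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AmbiguousExample:
    item: str                    # instance a worker flagged in Stage 1 (FIND)
    tag: str                     # short worker tag naming the ambiguous concept
    label: Optional[str] = None  # requester's label, filled in Stage 2 (RESOLVE)

@dataclass
class Guidelines:
    instructions: str
    examples: list = field(default_factory=list)  # clarifying examples injected in Stage 2

def run_workflow(items, guidelines, find_fn, resolve_fn, label_fn, design="both"):
    """Orchestrate the three-stage FIND-RESOLVE-LABEL pipeline.

    find_fn, resolve_fn, and label_fn stand in for the crowd and requester
    interactions; `design` picks what Stage 3 shows workers alongside the
    instructions: "examples", "tags", or "both".
    """
    # Stage 1 (FIND): the crowd flags ambiguous instances, each with a tag.
    flagged = find_fn(items, guidelines.instructions)
    # Stage 2 (RESOLVE): the requester labels selected examples; the labeled
    # examples are injected back into the guidelines to clarify them.
    for ex in resolve_fn(flagged):
        guidelines.examples.append(ex)
    # Stage 3 (LABEL): the crowd annotates under the revised guidelines.
    return {item: label_fn(item, guidelines, design) for item in items}
```

Passing the crowd and requester interactions in as callables keeps the sketch platform-agnostic; in the paper's experiments those roles are played by Mechanical Turk workers and the task requester.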


Related research

05/23/2021 · Wisdom for the Crowd: Discoursive Power in Annotation Instructions for Computer Vision
Developers of computer vision algorithms outsource some of the labor inv...

09/03/2023 · How Crowd Worker Factors Influence Subjective Annotations: A Study of Tagging Misogynistic Hate Speech in Tweets
Crowdsourced annotation is vital to both collecting labelled data to tra...

04/10/2022 · Re-Examining Human Annotations for Interpretable NLP
Explanation methods in Interpretable NLP often explain the model's decis...

09/30/2022 · Improve learning combining crowdsourced labels by weighting Areas Under the Margin
In supervised learning – for instance in image classification – modern m...

05/17/2020 · DEXA: Supporting Non-Expert Annotators with Dynamic Examples from Experts
The success of crowdsourcing based annotation of text corpora depends on...

12/13/2017 · Learning From Noisy Singly-labeled Data
Supervised learning depends on annotated examples, which are taken to be...

03/15/2012 · Hybrid Generative/Discriminative Learning for Automatic Image Annotation
Automatic image annotation (AIA) raises tremendous challenges to machine...
