A Survey of NLP-Related Crowdsourcing HITs: what works and what does not

11/09/2021
by Jessica Huynh, et al.

Crowdsourcing requesters on Amazon Mechanical Turk (AMT) have raised questions about the reliability of the workers. The AMT workforce is very diverse, and it is not possible to make blanket assumptions about workers as a group. Some requesters now reject work en masse when they do not get the results they expect. This gives every affected worker, good or bad, a lower Human Intelligence Task (HIT) approval score, which is unfair to the good workers. It also earns the requester a bad reputation on the workers' forums. Some of the mass rejections stem from requesters not taking the time to create a well-formed task with complete instructions, or not paying a fair wage. To explore this assumption, this paper describes a study that surveyed the crowdsourcing HITs available on AMT over a given span of time and recorded information about them. The study also recorded worker perspectives on those HITs and their requesters from a crowdsourcing forum. Results reveal problems with worker payment as well as presentation issues such as missing instructions and HITs that cannot be completed.
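
To make the approval-score effect concrete, here is a minimal Python sketch (not from the paper) of how a single mass rejection drags down an otherwise reliable worker's approval rate. It assumes AMT's standard definition of the rate as approved HITs divided by total submitted HITs; all counts here are hypothetical.

def approval_rate(approved: int, rejected: int) -> float:
    """Fraction of submitted HITs that were approved (lifetime rate)."""
    total = approved + rejected
    return approved / total if total else 1.0

# Hypothetical good worker with a long track record: 99.0% approval.
approved, rejected = 4950, 50
print(f"before mass rejection: {approval_rate(approved, rejected):.1%}")

# One requester rejects a 200-HIT batch en masse.
rejected += 200
print(f"after mass rejection:  {approval_rate(approved, rejected):.1%}")  # ~95.2%

# Requesters commonly gate HITs on thresholds such as a 98% approval rate,
# so this worker now loses access to much of the available work
# despite a strong history.
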

Related research

10/01/2021 · Quantifying the Invisible Labor in Crowd Work
Crowdsourcing markets provide workers with a centralized place to find p...

12/20/2022 · Needle in a Haystack: An Analysis of Finding Qualified Workers on MTurk for Summarization
The acquisition of high-quality human annotations through crowdsourcing ...

02/19/2015 · Approval Voting and Incentives in Crowdsourcing
The growing need for labeled training data has made crowdsourcing an imp...

02/03/2015 · Cheaper and Better: Selecting Good Workers for Crowdsourcing
Crowdsourcing provides a popular paradigm for data collection at scale. ...

04/26/2021 · Recurring Turking: Conducting Daily Task Studies on Mechanical Turk
In this paper, we present our system design for conducting recurring dai...

07/19/2023 · LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
LLMs have shown promise in replicating human-like behavior in crowdsourc...

10/23/2018 · Working in Pairs: Understanding the Effects of Worker Interactions in Crowdwork
Crowdsourcing has gained popularity as a tool to harness human brain pow...
