CrowdHub: Extending crowdsourcing platforms for the controlled evaluation of task designs

09/06/2019
by Jorge Ramirez, et al.

We present CrowdHub, a tool for running systematic evaluations of task designs on top of crowdsourcing platforms. The goal is to support the evaluation process while avoiding potential experimental biases that, according to our empirical studies, can amount to 38% of the dataset in uncontrolled settings. Using CrowdHub, researchers can map their experimental design and automate the complex process of managing task execution over time while controlling for returning workers and crowd demographics, thus reducing bias, increasing the utility of the collected data, and making more efficient use of a limited pool of subjects.
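To make the idea of controlling for returning workers concrete, the sketch below shows one way a tool could assign each worker to at most one task design across repeated runs. It is purely illustrative and not CrowdHub's actual API; the ConditionAssigner class and its methods are hypothetical.

    import random

    class ConditionAssigner:
        """Assigns each crowd worker to at most one experimental condition."""

        def __init__(self, conditions, seed=None):
            self.conditions = list(conditions)
            self.rng = random.Random(seed)
            self.assignments = {}  # worker_id -> assigned condition

        def assign(self, worker_id):
            # Returning workers are not re-assigned: letting the same worker
            # take part in several task designs is one of the biases a
            # controlled setting tries to avoid.
            if worker_id in self.assignments:
                return None
            condition = self.rng.choice(self.conditions)
            self.assignments[worker_id] = condition
            return condition

    if __name__ == "__main__":
        assigner = ConditionAssigner(["design_A", "design_B"], seed=42)
        for worker in ["w1", "w2", "w1", "w3"]:
            print(worker, "->", assigner.assign(worker))

In this sketch, the returning worker "w1" is rejected on the second request instead of being placed into a second condition.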

Related research

- Challenges and strategies for running controlled crowdsourcing experiments (11/05/2020): This paper reports on the challenges and lessons we learned while runnin...
- Crowdsourcing Creative Work (03/20/2022): This article-based doctoral thesis explores the stakeholder perspectives...
- Power-up! What Can Generative Models Do for Human Computation Workflows? (07/05/2023): We are amidst an explosion of artificial intelligence research, particul...
- Nobody of the Crowd: An Empirical Evaluation on Worker Clustering in Topcoder (07/05/2021): Context: Software crowdsourcing platforms typically employ extrinsic rew...
- WorkerRep: Immutable Reputation System For Crowdsourcing Platform Based on Blockchain (06/26/2020): Crowdsourcing is a process wherein an individual or an organisation util...
- Crowdsourcing: a new tool for policy-making? (02/09/2018): Crowdsourcing is rapidly evolving and applied in situations where ideas,...
- Demographic Biases of Crowd Workers in Key Opinion Leaders Finding (10/18/2021): Key Opinion Leaders (KOLs) are people that have a strong influence and t...
