Distinguishing Question Subjectivity from Difficulty for Improved Crowdsourcing

02/12/2018
by   Yuan Jin, et al.

The questions in a crowdsourcing task typically exhibit varying degrees of difficulty and subjectivity. Their joint effects give rise to variation in the responses that different crowd-workers give to the same question: the variation is low when the question is easy and objective, and high when it is difficult and subjective. Unfortunately, current quality control methods for crowdsourcing account for this variation through question difficulty alone. As a result, these methods cannot distinguish workers' personal preferences among the different correct answers of a partially subjective question from their ability/expertise to avoid the objectively wrong answers to that question. To address this issue, we present a probabilistic model which (i) explicitly encodes question difficulty as a model parameter and (ii) implicitly encodes question subjectivity via latent preference factors for crowd-workers. We show that question subjectivity induces grouping of crowd-workers, revealed through clustering of their latent preferences. Moreover, we develop a quantitative measure of the subjectivity of a question. Experiments show that our model (1) improves the performance of both quality control for crowdsourced answers and next-answer prediction for crowd-workers, and (2) can potentially provide coherent rankings of questions in terms of their difficulty and subjectivity, so that task providers can refine their crowdsourcing task designs, e.g., by removing highly subjective questions or inappropriately difficult ones.
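The grouping effect the abstract describes can be illustrated with a small, self-contained sketch. This is not the paper's model: the preference vectors, cluster centers, and the plain k-means routine below are all hypothetical stand-ins, chosen only to show how clustering workers' latent preferences could reveal the groups that question subjectivity induces.

```python
# Illustrative sketch (NOT the paper's probabilistic model): each worker
# holds a latent preference vector; subjectivity splits workers into
# groups, and clustering the latent preferences recovers those groups.
import numpy as np

rng = np.random.default_rng(0)

n_workers, dim = 60, 2
# Two hypothetical preference groups (the grouping the abstract says
# subjectivity induces), with small within-group noise.
centers = np.array([[2.0, 0.0], [-2.0, 0.0]])
labels_true = rng.integers(0, 2, size=n_workers)
prefs = centers[labels_true] + 0.3 * rng.standard_normal((n_workers, dim))

def kmeans(X, iters=50):
    """Plain 2-means with a deterministic farthest-point initialization."""
    c0 = X[0]
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]
    cents = np.stack([c0, c1])
    for _ in range(iters):
        # Assign each worker to the nearest centroid, then recompute means.
        dists = np.linalg.norm(X[:, None] - cents[None], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(2):
            if (assign == j).any():
                cents[j] = X[assign == j].mean(axis=0)
    return assign

assign = kmeans(prefs)
# Agreement with the true grouping, up to label permutation.
acc = max((assign == labels_true).mean(), (assign != labels_true).mean())
print(round(float(acc), 2))
```

With well-separated synthetic groups the recovered clusters agree almost perfectly with the true grouping; in the paper's setting the preferences are latent and must be inferred from worker responses before any such clustering is meaningful.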

