Achieving Budget-optimality with Adaptive Schemes in Crowdsourcing

02/10/2016
by Ashish Khetan, et al.

Crowdsourcing platforms provide marketplaces where task requesters can pay to get labels on their data. Such markets have recently emerged as popular venues for collecting the annotations that are crucial for training machine learning models in various applications. However, as the jobs are tedious and the payments are low, errors are common in crowdsourced labels. A common strategy to overcome such noise is to add redundancy: collect multiple answers for each task and aggregate them using a method such as majority voting. For such a system, a fundamental question of interest is how to maximize accuracy given a fixed budget on the total number of responses collected from the crowdsourcing system. We characterize this fundamental trade-off between the budget (how many answers the requester can collect in total) and the accuracy of the estimated labels. In particular, we ask whether adaptive task assignment schemes lead to a more efficient trade-off between accuracy and budget. Adaptive schemes, where tasks are assigned adaptively based on the data collected thus far, are widely used in practical crowdsourcing systems to make efficient use of a given fixed budget. However, existing theoretical analyses of crowdsourcing systems suggest that the gain from adaptive task assignment is minimal. To bridge this gap, we investigate the question under a strictly more general probabilistic model that was recently introduced to model practical crowdsourced annotations. Under this generalized Dawid-Skene model, we characterize the fundamental trade-off between budget and accuracy, and we introduce a novel adaptive scheme that matches this fundamental limit. We further quantify the gap between adaptive and non-adaptive schemes by comparing this trade-off with the corresponding trade-off for non-adaptive schemes. Our analyses confirm that the gap is significant.
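
The abstract bundles two concrete ingredients that are easy to see in a toy simulation: redundancy with majority-vote aggregation, and adaptive task assignment under a fixed total budget. The sketch below is a minimal illustration under stated assumptions, not the paper's algorithm nor its generalized Dawid-Skene model: each response is correct with a per-task probability (a crude stand-in for heterogeneous task difficulty), the non-adaptive baseline spreads the budget uniformly and majority-votes, and the toy adaptive rule spends a small first round on every task and then routes the remaining budget to the tasks with the smallest running vote margin. All names and numbers (answer, adaptive, first_round, the 0.55-0.95 reliability range) are illustrative assumptions.

```python
import random


def answer(p_correct, true_label, rng):
    """One noisy crowd response: correct with probability p_correct."""
    return true_label if rng.random() < p_correct else -true_label


def majority(votes, rng):
    """Majority vote over +1/-1 responses, breaking ties at random."""
    s = sum(votes)
    if s == 0:
        return rng.choice([-1, +1])
    return 1 if s > 0 else -1


def non_adaptive(truths, p_correct, budget_per_task, rng):
    """Non-adaptive baseline: uniform redundancy plus majority voting."""
    return [majority([answer(p, t, rng) for _ in range(budget_per_task)], rng)
            for t, p in zip(truths, p_correct)]


def adaptive(truths, p_correct, budget_per_task, rng, first_round=3):
    """Toy adaptive rule (NOT the paper's scheme): a small uniform first round,
    then the leftover budget goes, one response at a time, to the task whose
    running vote margin is smallest, i.e. the apparently hardest task."""
    votes = {i: [answer(p, t, rng) for _ in range(first_round)]
             for i, (t, p) in enumerate(zip(truths, p_correct))}
    for _ in range((budget_per_task - first_round) * len(truths)):
        i = min(votes, key=lambda k: abs(sum(votes[k])))
        votes[i].append(answer(p_correct[i], truths[i], rng))
    return [majority(votes[i], rng) for i in range(len(truths))]


def accuracy(estimates, truths):
    return sum(e == t for e, t in zip(estimates, truths)) / len(truths)


if __name__ == "__main__":
    rng = random.Random(0)
    n = 500
    truths = [rng.choice([-1, +1]) for _ in range(n)]
    # Heterogeneous per-task probability of a correct response: a crude
    # stand-in for the worker-quality / task-difficulty structure of the
    # generalized Dawid-Skene model (illustrative assumption).
    p_correct = [rng.uniform(0.55, 0.95) for _ in range(n)]
    # Both schemes spend the same total budget of n * budget_per_task responses.
    for budget in (5, 9, 15):
        a = accuracy(non_adaptive(truths, p_correct, budget, rng), truths)
        b = accuracy(adaptive(truths, p_correct, budget, rng), truths)
        print(f"budget/task {budget:2d}: non-adaptive {a:.3f}   adaptive {b:.3f}")
```

With heterogeneous per-task difficulty, even this naive adaptive reallocation typically beats uniform redundancy at the same total budget, which is the kind of gap the paper quantifies precisely; under a homogeneous classic Dawid-Skene setting the two curves are much closer, consistent with the earlier analyses the abstract cites as finding minimal gains from adaptivity.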

