1 Introduction
The rationale of crowdsourcing is to leverage the "wisdom of the crowd" in soliciting some kind of response or suggestion. Crowdsourcing allows for the annotation of large research corpora for training data-intensive models such as deep neural networks. Platforms such as Mechanical Turk allow for the systematic and programmatic implementation and assignment of crowdsourcing tasks. The crowdsourcing estimation problem is especially difficult because both the reliabilities of users and the true answers to the questions are unknown.
1.1 Previous Work
Dawid and Skene studied this estimation problem in 1979 in the context of responses to surveys (Dawid, 1979). In the Dawid–Skene stochastic model, each user has an associated reliability score dictating the probability with which the user answers a binary question correctly. The joint estimation of user reliabilities and correct answers in this model is very well studied in statistics, even for the case of $k$-ary questions. Popular approaches use Expectation Maximization (EM) to find the true labels maximizing the likelihood of the estimated answer labels and reliabilities. Recent work uses spectral methods (based on eigenvalues of the assignment graph) to initialize the iterative EM algorithm, proving optimal convergence rates for such a scheme (Zhang, 2014). Another approach achieving state-of-the-art performance finds the labeling via a minimax conditional entropy approach (Zhou, 2014).

Most analyses of these estimation algorithms assume random sampling. However, intermediate estimates of worker reliabilities can conceivably inform better worker–task assignments, which in turn lead to higher-quality final estimates. A 2011 paper by Karger, Oh, and Shah on budget-optimal crowdsourcing tracks multiple instances of distinct assignment graphs but assumes a highly specific "spammer–hammer" model in which every user answers either randomly or correctly. Another approach models the assignment problem as a Markov Decision Process and derives the Optimistic Knowledge Gradient, which computes the conditional expectation of choosing certain workers, assuming a Beta–Bernoulli prior on each worker's reliability (Chen, 2013). However, this assignment scheme requires extensive recomputation and updating between each individual question–worker assignment, an unrealistic frequency of model updating.

Classical Dawid–Skene Model

In the classical Dawid–Skene model, abbreviated "DS", there are $m$ users and $n$ questions with correct answers. (Without loss of generality we may map the answers to $\{-1, +1\}$.) Let $W \in \{0,1\}^{m \times n}$ denote a bipartite matrix indicating whether user $i$ answered question $j$; this is typically called the assignment matrix. Additionally, let $t \in \{-1,+1\}^n$ be a vector which captures the correct answer for each of the $n$ questions. Let $p \in [0,1]^m$ be a vector denoting each user's reliability. Let $A$ be a stochastic answer matrix such that, whenever $W_{ij} = 1$,
$$A_{ij} = \begin{cases} t_j & \text{with probability } p_i, \\ -t_j & \text{with probability } 1 - p_i. \end{cases}$$

2 Method for Quasi-Online Task Allocation
We derive an improved task allocation scheme, modeling the crowdsourcing problem as an optimization problem with an information-theoretic objective: we want to assign workers to tasks so as to gain the most information about the true label of each question.
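Throughout, the black-box estimator can be instantiated with Dawid–Skene EM. The following is a minimal sketch for the binary model; the variable names, the majority-vote initialization, and the additive smoothing are our illustrative choices, not necessarily the paper's exact implementation.

```python
import numpy as np

def dawid_skene_em(A, n_iters=50):
    """EM for the binary Dawid-Skene model (illustrative sketch).

    A : (m, n) array with entries in {-1, 0, +1}; 0 marks an
        unassigned (user, question) pair.
    Returns per-user reliability estimates and +-1 label estimates.
    """
    W = (A != 0).astype(float)
    # Initialize the label posterior P(t_j = +1) with a soft majority vote.
    mu = 1.0 / (1.0 + np.exp(-A.sum(axis=0)))
    for _ in range(n_iters):
        # M-step: reliability = smoothed expected agreement with the labels.
        agree = (A == 1) * mu[None, :] + (A == -1) * (1.0 - mu[None, :])
        p = (agree.sum(axis=1) + 1.0) / (W.sum(axis=1) + 2.0)
        # E-step: label posterior from log-likelihood-ratio weighted votes.
        logit = (A * np.log(p / (1.0 - p))[:, None]).sum(axis=0)
        mu = 1.0 / (1.0 + np.exp(-logit))
    return p, np.where(mu >= 0.5, 1, -1)
```

The smoothing in the M-step keeps reliabilities strictly inside $(0,1)$ so the log-likelihood ratios stay finite.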
Previous unpublished work by the same authors (Cabreros, 2015) proposes a two-step estimation method that splits a total query budget across two stages. Running a crowdsourcing estimation algorithm after half of the budget has been allocated yields intermediate estimates of the true answers, which may be combined with a mixture model on the topics of the questions to estimate the reliability of each user. We then spend the remaining portion of the budget sampling the users most likely to provide correct answers for specific questions, arriving at a final answer matrix A. This matrix is then used as input to the same black-box estimation algorithm to arrive at the final estimates, as depicted in Figure 1.
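The two-stage pipeline can be sketched as follows; the `query`/`estimate`/`score` interface is a hypothetical framing of the method, not the paper's code.

```python
import numpy as np

def two_stage_estimation(query, estimate, score, m, n, budget, rng):
    """Two-stage budgeted crowdsourcing (interface names are ours).

    query(i, j)      -> user i's label (+1/-1) for question j
    estimate(A)      -> black-box estimator, e.g. Dawid-Skene EM
    score(i, j, est) -> priority of pair (i, j) given the intermediate
                        estimates `est` (higher = queried first)
    """
    A = np.zeros((m, n))
    pairs = [(i, j) for i in range(m) for j in range(n)]
    rng.shuffle(pairs)
    # Stage 1: spend half the budget on uniformly random pairs.
    for i, j in pairs[: budget // 2]:
        A[i, j] = query(i, j)
    est = estimate(A)  # intermediate answer/reliability estimates
    # Stage 2: spend the rest on the highest-priority remaining pairs.
    rest = sorted(pairs[budget // 2:],
                  key=lambda ij: score(ij[0], ij[1], est), reverse=True)
    for i, j in rest[: budget - budget // 2]:
        A[i, j] = query(i, j)
    return estimate(A)  # final estimates from the same black box
```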
How do we decide which questions require more of the budget? We consider information-theoretic metrics. The mutual information $I(X;Y)$ sums over all possible outcomes of the random variables $X$ and $Y$, and measures the mutual dependence between the two variables by quantifying the "amount of information" obtained about one random variable by observing the other (Cover, 2006). A variant defined between realized outcomes of the random variables is the pointwise mutual information, which has found applications in statistical natural language processing. We define another variant, partial mutual information, between a random variable $X$ on the source channel and an observed outcome $y$ of $Y$, where we integrate only over the randomness of the unknown source channel. In our notation:
$$\mathrm{pMI}(X; y) = \sum_{x} p(x \mid y) \log \frac{p(x \mid y)}{p(x)}.$$
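A small numerical sketch of these quantities, reading partial mutual information as the divergence between the posterior given one observed outcome and the prior (which averages back to the mutual information):

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in nats, for a joint pmf pxy[x, y] = P(X=x, Y=y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0  # skip zero-probability cells
    return float((pxy[m] * np.log(pxy[m] / (px * py)[m])).sum())

def partial_mutual_info(pxy, y):
    """pMI(X; y): average only over the randomness of the source X,
    i.e. KL( P(X | Y=y) || P(X) ), for one observed outcome y."""
    px = pxy.sum(axis=1)
    post = pxy[:, y] / pxy[:, y].sum()  # posterior P(X | Y=y)
    m = post > 0
    return float((post[m] * np.log(post[m] / px[m])).sum())
```

As a sanity check, the expectation of `partial_mutual_info` over the outcomes of $Y$ equals `mutual_info` exactly, by the chain rule for relative entropy.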
However, solving the cardinality-constrained integer program that assigns users to questions is most likely NP-hard: Krause and Golovin (2012) reduce a similar entropy-minimization formulation to independent set. When the remaining budget in a 'one-shot' setting is smaller than the number of questions, the best option is to select the questions with the largest relative improvement in partial mutual information from one additional query. Estimating this improvement already entails a "nested optimization" that determines which user is best to assign to a given question.
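The one-shot selection can be sketched as a greedy rule; the `pmi_gain` interface (estimated pMI improvement from querying user $i$ on question $j$) is illustrative.

```python
def one_shot_allocation(pmi_gain, users, questions, budget):
    """One-shot batch: for each question find the single best user
    (the nested optimization), then keep the `budget` questions with
    the largest partial-mutual-information improvement.

    pmi_gain(i, j) -> estimated pMI improvement from querying
    user i on question j (hypothetical interface).
    """
    best = []
    for j in questions:
        # Inner optimization: best user for this question.
        i_star = max(users, key=lambda i: pmi_gain(i, j))
        best.append((pmi_gain(i_star, j), i_star, j))
    # Outer selection: questions with the largest improvement.
    best.sort(reverse=True)
    return [(i, j) for _, i, j in best[:budget]]
```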
3 Results
We evaluate the error rates of three methods: (1) running Dawid–Skene estimation on a randomly sampled budget; (2) running Dawid–Skene estimation on a randomly sampled budget and assigning the remaining budget via the one-shot allocation; and (3) method (2) but assigning the remaining budget via the dynamic task allocation. We evaluate the error rates from 20 random samples of questions and user reliabilities for 1000 users and 100 questions, assuming a mixture of 2 topics, over 10 different budgets ranging in the fraction of users assigned to each question, in Figure 2.
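A sketch of a synthetic instance generator matching this setup follows; the specific distributions for topics and reliabilities are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def simulate_instance(m=1000, n=100, n_topics=2, seed=0):
    """Synthetic crowdsourcing instance: n questions with true +-1
    labels, each belonging to one of `n_topics` topics, and per-user,
    per-topic reliabilities (distributional choices here are ours)."""
    rng = np.random.default_rng(seed)
    t = rng.choice([-1, 1], size=n)                # true answers
    z = rng.integers(n_topics, size=n)             # question topics
    P = rng.uniform(0.5, 1.0, size=(m, n_topics))  # user x topic reliability
    # Full answer matrix; a sampling policy would reveal entries of it.
    correct = rng.random((m, n)) < P[:, z]
    A = np.where(correct, t[None, :], -t[None, :])
    return t, z, P, A
```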
We also consider how the dimensions of the estimation problem impact the performance of these sampling policies in Figure 3: when users outnumber questions, the one-shot allocation does poorly, while the new dynamic user allocation remains robust.
However, examining the intermediate accuracies yielded by the dynamic task allocation method indicates that the strength of this method lies in its ability to achieve better estimation accuracy with fewer samples, and to terminate the estimation process early. Can we use the estimates of mutual information in an optimal stopping framework to develop more data-efficient crowdsourcing estimation algorithms? "Efficiency" will be measured with respect to random sampling, where we want to show our method attains comparable accuracy using only a fraction of the samples needed by random sampling. We will consider this question in the full paper, and conjecture that one point of connection between the channel capacity and estimation is the Fisher information. Analysis would proceed by considering the minimax convergence rates of the DS estimator, proved in (Gao, 2013), to analyze the disadvantage of using fewer samples.
4 Conclusions
We model task assignment in the crowdsourcing problem and develop a probabilistic model for a "partial mutual information" criterion, which yields a one-step lookahead policy for which questions to ask next. We implement a batch estimation policy which exhausts the budget by querying an additional label for each question, re-estimating the user reliabilities, and using these updated estimates to recompute the label estimates. We are able to show significant improvement in estimation accuracy for small budgets. Our dynamic task allocation scheme is also robust for estimation settings with higher ratios of questions to users, where the one-shot policy suffers higher error than random sampling. An advantage of the scheme is that the mutual information heuristic we develop may be extended to evaluate early termination of the estimation process.
References
Cabreros, Irineo; Singh, Karan; Zhou, Angela. Mixture model for crowdsourcing. Unpublished manuscript, 2015.

Chen, Xi; Lin, Qihang; Zhou, Dengyong. Optimistic knowledge gradient policy for optimal allocation in crowdsourcing. Journal of Machine Learning Research, 2013.

Cover, Thomas; Thomas, Joy. Elements of Information Theory. Wiley, 2006.

Dawid, A. P.; Skene, A. M. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, 1979.

Gao, Chao; Zhou, Dengyong. Minimax optimal convergence rates for estimating ground truth from crowdsourced labels. MSR Technical Report, 2013.

Krause, Andreas; Golovin, Daniel. Submodular function maximization. 2012.

Zhang, Yuchen; Chen, Xi; Zhou, Dengyong; Jordan, Michael. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. NIPS, 2014.

Zhou, D.; Liu, Q.; Platt, J. C.; Meek, C. Aggregating ordinal labels from crowds by minimax conditional entropy. Proceedings of ICML, 2014.