1 Introduction
Crowdsourcing is a popular method for collecting data, but the collected data are often noisy and of low quality. The quality degrades significantly if solving the tasks requires costly effort. The problem can be addressed by rewarding the workers with a performance-based bonus. One way to reward the workers is to use peer-based mechanisms [Dasgupta and Ghosh 2013; Radanovic, Faltings, and Jurca 2016]. In these mechanisms, the reward of a worker depends on her own answers and on those of other workers. The mechanisms admit honesty as an equilibrium strategy (i.e., if other workers do high quality work, the best response for any worker is to do the same). But the mechanisms also admit other, dishonest equilibria. The other way to reward the workers is to use gold standard tasks [Oleson et al. 2011]. A common technique is to randomly mix some gold tasks into the batch of tasks solved by every worker. Workers are then rewarded based on their performance on the gold tasks. This incentivizes the workers to perform high quality work as a dominant strategy (i.e., regardless of other workers' answers). However, since every worker solves gold tasks and the requester already knows the correct answers of the gold tasks, this wastes part of the requester's useful task budget. Moreover, to reduce the variance in the rewards, a sufficient number of gold tasks may have to be solved by every worker. Also, this technique only works if the gold tasks cannot be identified, but as shown by [Checco, Bates, and Demartini 2018], the gold tasks can be easily leaked because of their repeated use for all workers. Another technique is to assign gold tasks to a worker only with a small (constant) probability and give a fixed reward (independent of the quality of work) otherwise. While it solves many of the problems with the first technique, the scalability of the constant probability technique remains limited because the number of workers who need to be assigned gold tasks grows as the total number of workers grows. Otherwise, this technique assumes that the rewards of the workers solving gold tasks can be made arbitrarily large to compensate for the smaller probability
(Footnote 1: This can be understood using the example of penalty mechanisms in public transport systems. If tickets are checked very rarely, then the penalty for being found without a ticket has to be made very high to discourage rational people from traveling without tickets.) In many practical settings, this is undesired or not possible.

In this paper, we introduce a scalable incentive mechanism (the Deep Bayesian Trust Mechanism) which guarantees strong incentive compatibility while assigning gold tasks only to a small (constant) number of workers and rewarding the rest using the answers provided by peer workers. Since we ensure that gold tasks are assigned to a constant number of workers, and not just with a constant probability, our mechanism is also suitable for very large scale settings. Moreover, this mechanism doesn't suffer from the problem of arbitrarily large payments discussed earlier. When there is a nonzero cost of effort for solving the tasks, the mechanism still ensures the desired theoretical properties if the payments are scaled appropriately, only to cover the cost of effort. This scaling constant doesn't depend on the probability of assigning gold tasks. The mechanism is based on the observation that in large scale crowdsourcing settings, every worker reports answers to many similar tasks and hence the joint distribution of the answers of any two workers can be used to infer the accuracy of one worker given the accuracy of the other worker. It starts by rewarding a small set of workers based on gold tasks and then uses the answers provided by the workers on non-gold tasks as contributed gold tasks to reward more workers. It continues this deep chain of trust to an arbitrary depth, until all the tasks have been solved by the required number of workers.

As fairness of algorithms affecting humans is becoming a critical issue, it is important to justify the fairness of algorithms that determine the payments of the workers. We, for the first time, address the issue of fairness of crowdsourcing incentive mechanisms in a principled manner and show that our mechanism ensures fair rewards to the workers. The summary of our main contributions is as follows:

We propose a dominant uniform strategy incentive compatible (DUSIC) mechanism, called the Deep Bayesian Trust Mechanism, which rewards a constant number of workers with gold tasks and the rest using peer answers.

On one hand, our mechanism addresses the issues with existing gold-task-based mechanisms; on the other hand, it also shows how the limitations of purely peer-based incentive mechanisms can be overcome in some cases by assigning gold tasks to a few workers. Thus, it is also of interest for the peer-prediction community.

We define a notion of fairness of rewards in crowdsourcing and show that our mechanism ensures fairness.

Through numerical experiments, we show the robustness of our mechanism under various reporting strategies of the workers. In a preliminary study conducted on Amazon Mechanical Turk, we observe that the mechanism helps in eliciting effort and improving the quality of responses.
The supplementary material for this paper is available on the authors' website.
2 Related Work
The research on crowdsourcing incentive mechanisms is mainly divided into two categories. The first category of work assumes that some spot-checking option (for example, gold standard tasks) is available. The constant probability mechanism, discussed in Section 1, is analyzed formally by [Gao, Wright, and Leyton-Brown 2016]. This mechanism randomly selects a few workers and spot checks only those workers with an oracle. The rest of the workers are given a constant amount of reward (independent of the quality of their work). The scalability of this mechanism is limited because the number of workers who need to be spot checked grows as the total number of workers grows. Otherwise, in order to compensate for the smaller probability of spot checking, the mechanism allows the payments of the spot-checked workers to be arbitrarily large.
The second category of work assumes no gold tasks to be available and uses only peer answers. Such mechanisms are called peer-consistency (or peer-prediction) mechanisms. The early mechanisms in this category were either not detail-free (they required knowledge about the beliefs of the workers) [Miller, Resnick, and Zeckhauser 2005] or not minimal (they required workers to also submit some additional information other than their answers on the tasks) [Prelec 2004; Radanovic and Faltings 2013; Witkowski and Parkes 2012]. On the other hand, a simple output-agreement mechanism [Waggoner and Chen 2014] works only under strong assumptions on the correlation structure of workers' observations. A seminal work in the category of minimal, detail-free mechanisms for crowdsourcing is [Dasgupta and Ghosh 2013], which ensures that truth-telling is a focal equilibrium in binary answer spaces. The Correlated Agreement mechanism [Shnayder et al. 2016] generalizes the mechanism of [Dasgupta and Ghosh 2013] to non-binary answer spaces with moderate assumptions on the correlation structure of workers' observations. Both these mechanisms require that workers solve multiple tasks. The Logarithmic Peer Truth Serum [Radanovic and Faltings 2015], which is based on an information theoretic principle, requires no such assumptions and ensures strong truthfulness in non-binary answer spaces. The guarantees of the mechanism hold in the limit (when every task is solved by an infinite number of workers). The Peer Truth Serum (PTSC) of [Radanovic, Faltings, and Jurca 2016] doesn't require even this assumption for its theoretical guarantees and works with a bounded number of tasks overall. In theory, these peer-consistency mechanisms offer comparatively weaker incentive compatibility than the gold-task-based mechanisms. They make truth-telling an equilibrium strategy for the workers but also admit some non-truthful equilibria.
While the no-effort and heuristic equilibria exist in these mechanisms, these equilibria are not attractive since they pay zero reward to the workers. The mechanisms also admit permutation equilibria, which give the same payoff as the truthful equilibrium. [Liu and Chen 2018] avoid this issue in binary answer spaces by using ground truth answer statistics. As shown by [Gao, Wright, and Leyton-Brown 2016], it is possible to eliminate the undesired equilibria in peer-based mechanisms if the center can employ a limited amount of spot checking. When spot checking is not possible, it is enough that there exists a small fraction of honest workers. Either of these options works if the rewards are scaled appropriately to compensate for a low probability of spot checking or a low fraction of honest agents, respectively.
Finally, the mechanism of [de Alfaro et al. 2016] combines ideas from the two categories of work. It arranges the workers in a hierarchy. A constant number of workers in the top level of the hierarchy are evaluated by an oracle. The workers below that level are evaluated by the workers (peers) one level above them. The mechanism solves the scalability issue of the gold-task-based mechanisms. Though it offers comparatively weaker incentive compatibility (unique Nash equilibrium) than the gold-task-based mechanisms, it eliminates all the undesired equilibria that exist in peer-based mechanisms. However, it requires that workers are informed of their level in the hierarchy. The mechanism is also ex-ante unfair towards the workers in the sense that workers in the top level of the hierarchy are evaluated more correctly than the workers in the lower levels. Similar to [de Alfaro et al. 2016], our work is also at the intersection of the two categories of work. However, our mechanism doesn't suffer from the issues (the level information requirement and unfairness) that their mechanism has, and it also guarantees stronger incentive compatibility.
3 Model
We consider large scale crowdsourcing settings in which workers provide answers to many micro-tasks requiring human intelligence. The tasks have a discrete answer space of size n. We will use {1, 2, ..., n} to denote this space. For any task, our model has 3 random variables. The first is the unknown ground truth answer x for the task. The second is the worker w's observed answer o_w that she obtains on solving the task. o_w is the worker's private information. The third is the worker's reported answer y_w that she actually reveals as her answer for the task. We use x, o and y to denote realizations of these random variables and will drop the subscript w when the context is clear.

Definition 1 (Effort Strategy).
The effort strategy of a worker w is a binary variable e_w ∈ {0, 1}. If the worker invests effort in solving a task, e_w is 1 and is 0 otherwise.

The effort strategy captures the standard binary effort model of the incentive mechanisms literature [Dasgupta and Ghosh 2013]. Whenever e_w = 1, the worker incurs a strictly positive finite cost.
Definition 2 (Reporting Strategy).
When e_w = 1, the reporting strategy of a worker is an n × n right stochastic matrix R_w, where R_w[o][y] is the probability of her reported answer on a task being y given that her observed answer is o. When e_w = 0, the reporting strategy of a worker is an n-dimensional probability vector r_w, where r_w[y] is the probability of her reported answer on a task being y.

The effort and the reporting strategy together model the possible strategies that a worker may play in obtaining and reporting her answer; this is a standard model in the literature [Shnayder et al. 2016]. Two common strategies are truthful and heuristic.
Definition 3 (Truthful Strategy).
A worker w's strategy is called truthful if e_w = 1 and R_w is an identity matrix.
In a truthful strategy, a worker solves a task and reports her answer as obtained.
Definition 4 (Heuristic Strategy).
A worker w's strategy is called heuristic either if e_w = 0, or if e_w = 1 and all rows of R_w are identical.
In a heuristic strategy, a worker either doesn't solve the tasks (e_w = 0) or solves the tasks (e_w = 1) but reports independently of the obtained answer. Note that a common colluding heuristic strategy, in which workers collude using a "default" answer, is included in the model. For example, in the binary case, when e_w = 0, the probability vector (1, 0) means that the worker always answers 1. Similarly, when e_w = 1, a matrix with both rows equal to (1, 0) means the same. It is also easy to see that the model includes mixed strategies, since mixed strategies can be written as convex combinations of the pure strategies.
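The binary collusion example above can be checked with a short sketch (illustrative only; the sampling helper and the 1-indexed answer encoding are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# e = 0: a report is drawn from the fixed vector r, ignoring the task.
r = np.array([1.0, 0.0])            # always answer 1 (answers are 1 and 2)

# e = 1 with both rows of R identical: the obtained answer is discarded.
R = np.array([[1.0, 0.0],
              [1.0, 0.0]])

def sample_report(obtained, R, rng):
    """Sample a reported answer given the obtained answer (1-indexed)."""
    return rng.choice([1, 2], p=R[obtained - 1])

# Whatever the obtained answer, the report is the same: always 1.
assert sample_report(1, R, rng) == 1 and sample_report(2, R, rng) == 1
assert rng.choice([1, 2], p=r) == 1   # the e = 0 vector induces identical reports
```

Both strategies produce the same (uninformative) report distribution, which is exactly why the model treats them as one heuristic class.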
Definition 5 (Proficiency Matrix).
The proficiency matrix P_w of a worker is an n × n right stochastic matrix, where P_w[x][o] is the probability of her obtained answer on a task being o given that the ground truth is x.
This definition is due to [Dawid and Skene 1979] and is a widely accepted model in the crowdsourcing literature. The proficiency matrix models the ability of a worker to obtain correct answers when she invests effort. Every worker can have a different proficiency matrix.
Definition 6 (Trustworthiness Matrix).
The trustworthiness matrix T_w of worker w is an n × n right stochastic matrix, where T_w[x][y] is the probability of her reported answer on a task being y given that the ground truth is x.
Note the difference between the proficiency and trustworthiness matrices. Proficiency models a worker's ability, while trustworthiness is a function of her ability and honesty.
Proposition 1.
If e_w = 1, the trustworthiness matrix of a worker is given by T_w = P_w R_w. If e_w = 0, T_w is a matrix with all rows equal to the reporting strategy vector r_w.
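With the notation of the definitions above (the symbols P_w, R_w, T_w follow our reconstruction of those definitions), the first part of Proposition 1 follows by conditioning on the observed answer:

```latex
T_w[x][y] = \Pr(y_w = y \mid x)
          = \sum_{o=1}^{n} \Pr(o_w = o \mid x)\,\Pr(y_w = y \mid o_w = o)
          = \sum_{o=1}^{n} P_w[x][o]\, R_w[o][y]
          = (P_w R_w)[x][y].
```

The second part is immediate: when e_w = 0, the report is drawn from r_w regardless of the ground truth, so every row of T_w equals r_w.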
Our model is summarized in Figure 1.
Until now, we defined the strategy space for settings in which workers solve one task each. In our work, we consider settings in which workers solve multiple tasks, and we next define the strategy space for multi-task settings.
Definition 7 (Uniform Strategies for Multi-task Settings).
A worker's strategy (effort and reporting) in multi-task settings is called uniform if the strategy is the same on all tasks in a given batch solved by the worker.
Uniform strategies are sometimes also called consistent strategies in the literature. It is important to note that the space of uniform strategies does include mixed strategies. The motivation for considering only uniform strategies in the multi-task literature is that tasks of a similar nature can be grouped in batches so that workers don't strategically distinguish between the tasks assigned to them.
Definition 8 (Dominant Uniform Strategy Incentive Compatibility).
Given that workers can play any strategy from the space of uniform strategies, an incentive mechanism is called dominant uniform strategy incentive compatible (DUSIC) if the expected reward of any worker is strictly maximized by playing a truthful strategy, no matter what uniform strategies other workers use.
In this notion of incentive compatibility, the reward of a worker is strictly maximized in a truthful strategy even if others are not truthful. Thus, the truthful strategy dominates any heuristic strategy of not solving the tasks and any non-truthful strategy of solving the tasks but reporting non-truthfully. It also dominates any mixed uniform strategy.
Definition 9 (Oracle).
An agent is called an oracle if her trustworthiness matrix is known and doesn’t have identical rows.
For example, if the oracle is the source of gold standard answers, then by definition of gold tasks, oracle’s trustworthiness matrix is an identity matrix.
We use P(x) to denote the prior probability of the ground truth answer of any randomly selected task being x. It is assumed to be known and fully mixed (P(x) > 0 for all x). It can also be estimated from the gold standard answers.
4 Finding Trustworthiness Transitively
In this section, we first explain the main building block of our mechanism: the process of finding the trustworthiness of a worker, given the trustworthiness of another worker, by using the joint distribution of their answers on shared tasks.
Definition 10 (Peer).
For a worker w, the mechanism assigns another worker p as her peer. Workers w and p are assigned sets of tasks S_w and S_p respectively such that S_w ∩ S_p ≠ ∅, S_w \ S_p ≠ ∅ and S_p \ S_w ≠ ∅.
The definition requires that some tasks are solved by both the worker and her peer. Both workers also solve some other tasks that are not shared. It may be noted that eliciting answers of multiple workers on the same tasks is the central idea in crowdsourcing [Surowiecki 2005] and is not a new requirement introduced in our paper.
Let T_p be the known trustworthiness matrix of a worker p, and let p be the peer of another worker w, whose trustworthiness matrix T_w is not known. We want to find the unknown T_w using the answers given by the two workers and the known T_p. Since the worker and her peer solve some shared tasks by definition, their reported answers on these shared tasks provide the mechanism with an empirical joint distribution of their answers. We use G to denote the empirical distribution of the worker's answers conditioned on the peer's answers, and g_p to denote the empirical distribution of the answers of peer p only.
Lemma 1.
As the number of shared tasks t → ∞, the following holds w.h.p.:

G[i][j] = Σ_k ( P(x = k) · T_p[k][i] / g_p[i] ) · T_w[k][j]    (1)

and g_p[i] = Σ_k P(x = k) · T_p[k][i].
The proof of Lemma 1 is provided in the supplementary material. The LHS in Equation 1 is the conditional probability Pr(y_w = j | y_p = i) in the limit. When we apply Bayes' rule to write this conditional probability in terms of the other model probabilities, we get the RHS of Equation 1. This assumes that the answers of worker w and her peer p are conditionally independent given the ground truth.
In the linear system of Equations 1, the entries of T_w are the unknowns. Since the matrix T_w is also right stochastic, we have as many equations as unknowns. This system can be solved for T_w, provided the system is well-defined. This requires that g_p[i] > 0 for every i, and for a unique solution, the coefficient matrix of this linear system must have linearly independent rows. This system of linear equations can be solved analytically. In practice, many libraries are also available for computing the solution efficiently. We now use this transitive method of finding the unknown T_w to develop our mechanism in the next section.
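As one way of computing this in practice (a sketch with our variable names, using NumPy as the solver; it assumes the coefficient of T_w[k][j] for the report pair (i, j) is the posterior P(x = k | y_p = i), as in the Bayes'-rule argument above):

```python
import numpy as np

def estimate_trustworthiness(prior, T_p, G):
    """Solve the linear system of Lemma 1 for the unknown T_w.

    prior : (n,) prior over ground truth answers, fully mixed.
    T_p   : (n, n) known trustworthiness of the peer (right stochastic).
    G     : (n, n) empirical distribution,
            G[i, j] = freq(worker reports j | peer reports i).
    """
    # g_p[i] = sum_k prior[k] * T_p[k, i]  (marginal of the peer's reports)
    g_p = prior @ T_p
    # Coefficient matrix: C[i, k] = prior[k] * T_p[k, i] / g_p[i],
    # i.e. the posterior over the ground truth given the peer's report i.
    C = (T_p * prior[:, None]).T / g_p[:, None]
    # Solve C @ T_w = G for T_w; requires C to be full rank ("informative").
    return np.linalg.solve(C, G)
```

A quick sanity check: if we construct G exactly from known T_p and T_w, the solver recovers T_w.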
5 The Deep Bayesian Trust Mechanism
The Deep Bayesian Trust mechanism is summarized in Mechanism 1 on the next page. It maintains a pool of workers' answers which are "informative" for evaluating other workers. The meaning of the term informative will be explained later. The pool is initialized with some tasks and their answers given by the oracle. In crowdsourcing terminology, these are the gold task-answer pairs. The trustworthiness matrix of the oracle is initialized to be T_O. Since, by definition, gold tasks are the tasks whose correct answers are known, T_O is an identity matrix. The mechanism then publishes several batches of tasks on the platform such that each batch has some tasks in common with the tasks solved by the oracle and some unique new tasks. Workers self-select themselves to solve one batch each and report their answers for their respective batches. Thus, the oracle becomes the peer of each of these workers.
Let's assume that the oracle solves m tasks. The mechanism publishes batches of tasks such that each batch has s tasks in common with the oracle and m − s unique new tasks. Thus, it publishes tasks that are new (not solved by the oracle already) and also instances of the same tasks that are already solved by the oracle. s becomes a hyperparameter of the mechanism and m becomes the size of the batches solved by every worker. As the workers start submitting their respective batches, the mechanism also starts rewarding the workers for their answers, asynchronously (without waiting for other workers). To calculate the reward, the mechanism uses Lemma 1 for finding the trustworthiness matrix T_w of the answers given by worker w. Note that the lemma is applicable because the trustworthiness of the peer (oracle) is known. The reward for worker w is given by c · f(T_w), where f(T_w) = Σ_j T_w[j][j] − 1 and c is a scaling constant. f takes the summation of the diagonal entries of the trustworthiness matrix T_w. These are the accuracy parameters of the worker (the probabilities of the worker's answers being the same as the ground truth). f further subtracts 1 from this summation for technical reasons that will be exploited later to ensure a desired incentive property. The worker gets her reward and is out of the mechanism. At this stage, the mechanism decides whether to reuse the answers given by the worker for evaluating more workers. If the worker's answers satisfy a certain "informativeness" criterion, they are added to the pool. If the worker's answers are added to the pool, the mechanism can immediately publish more batches such that there are some tasks in common with the new (non-gold) tasks just solved by the previous worker and some more new tasks in each of the batches. This step is the same as described earlier. The only difference is that now the batches being published have tasks in common with the tasks solved by a worker, not the oracle (i.e. the peers are now other workers, not the oracle).
These steps are repeated in parallel and asynchronously until the mechanism has obtained the desired number of answers for all its unsolved tasks.
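The reward rule just described (sum the diagonal of the estimated trustworthiness matrix, subtract one, scale) is a one-liner. The subtraction of exactly 1 is our reading of the text, chosen so that a heuristic strategy, whose trustworthiness matrix has identical rows and hence a diagonal summing to 1, earns zero in expectation:

```python
import numpy as np

def reward(T_w, c):
    """Reward c * f(T_w), where f sums the diagonal of T_w and subtracts 1."""
    return c * (np.trace(T_w) - 1.0)
```

For a perfectly accurate worker (T_w = I) in an n-answer space this gives c(n − 1); for any heuristic strategy it gives 0, matching the zero-reward result stated later for heuristic strategies.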
To summarize (Figure 2), the mechanism starts with an answer pool seeded with the oracle’s answers, uses the answers in the pool to assess trust in other workers’ answers, expands this pool based on the informativeness of the workers’ answers and repeats the process. We emphasize that the mechanism doesn’t assign any permanent reputation to workers. A worker’s answers being added to the pool is not the same thing as a worker being “prescreened” and certified trusted. We just evaluate the answers provided by a worker in any given batch and add them to the answer pool together with an estimate of the trustworthiness of that batch of answers.
Informativeness Criterion
We can now discuss the informativeness criterion for workers' answers, which was omitted earlier. The purpose of the informativeness criterion is to check whether the answers provided by a worker can be used to estimate the trustworthiness of another worker or not. As discussed in Section 4, this depends on whether the coefficient matrix of the linear system of equations has linearly independent rows or not. For example, assume that the answers of worker p are added to the pool and let w be another worker who gets p as her peer in the future. In that case, the mechanism will solve the system of Equations 1 (with T_p now being the estimated trustworthiness of p's answers) to estimate the trustworthiness of w.
The coefficients of this linear system don't depend on the answers given by worker w, and the mechanism can determine in advance whether the system will be solvable by just looking at these coefficients. If g_p[i] > 0 for every i and the coefficient matrix is full rank, the informativeness criterion is said to be satisfied and the answers of worker p are added to the pool. It may be noted that the informativeness criterion doesn't require answers of only truthful or high proficiency workers to be added to the pool. Answers from non-truthful or low proficiency workers can also be added to the pool.
To understand this technical criterion in more depth, note that the coefficients of the linear system are equal to the posterior distribution P(x = k | y_p = i) by Bayes' rule. Thus, the answers of a worker satisfy the informativeness criterion if the posterior distributions over x for any two different reports y_p are not identical. One interesting example, where the informativeness criterion is not satisfied, is when the peer plays a heuristic strategy. In such a case, the reported answers are not correlated with the ground truth and the posterior distributions are the same as the prior distribution; in other words, the reported answers are not "informative" of the ground truth. It may be noted that the informativeness criterion is fairly weak for binary answer spaces. It only requires that the reports of the peer are not independent of the ground truth.
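A direct way to test the criterion (a sketch; the function name is ours, and the full-rank test via numerical rank is one of several possible implementations):

```python
import numpy as np

def is_informative(prior, T, tol=1e-9):
    """Check whether answers with trustworthiness T can evaluate a future peer.

    The answers are informative iff the posterior over the ground truth,
    post[i, k] = prior[k] * T[k, i] / g[i], differs across reports i,
    i.e. the coefficient matrix of the linear system is full rank.
    """
    g = prior @ T
    if np.any(g <= tol):          # some report never occurs
        return False
    post = (T * prior[:, None]).T / g[:, None]
    return np.linalg.matrix_rank(post, tol=1e-8) == T.shape[0]
```

A truthful oracle (identity T) passes the test; a heuristic strategy (identical rows, posterior equal to the prior for every report) fails it, exactly as the text describes.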
6 Analysis
We now prove strong game-theoretic properties for our mechanism. In this discussion, we will assume that a worker and her peer solve many shared tasks (t → ∞). This is not the same as requiring every task to be solved by a large number of workers, which would have been inefficient. In later sections, we will also discuss the empirical performance of our mechanism without this assumption. We use C to denote the cost of effort required to solve a batch of tasks. Proofs are provided in the supplementary material.
Theorem 1.
If the scaling constant c is large enough to cover the cost of effort C, and the proficiency matrix of worker w satisfies the condition discussed below, then the Deep Bayesian Trust mechanism
(a) is dominant uniform strategy incentive compatible (DUSIC) for every worker w;
(b) ensures a strictly positive expected reward in the truthful strategy.
Theorem 1 requires a condition on the scaling constant c to cover the cost of effort, which reduces to c > 0 when the cost of effort is 0. The condition required on the proficiency matrix (Footnote 2: For a binary answer space, the theorem can also be shown to hold under a weaker condition.) can be more easily understood in the case of binary answers. In binary settings, the condition is satisfied if both diagonal entries of P_w exceed 1/2. This is not a condition on the honesty of the workers but only on their ability. The condition merely ensures that the worker can obtain answers that are positively correlated with the ground truth. Such conditions are common in the literature [Dasgupta and Ghosh 2013]. Unlike in the literature, the condition here only affects the best strategy of a given worker, not of all the workers. For example, if the condition is not satisfied for a worker, she may find it better to deviate to a non-truthful strategy, but it doesn't affect the dominant strategy of other workers. We note that such an informed deviation by a low proficiency worker to increase the accuracy of her reported answers is not bad for the requester.
Corollary 1.
The scaling constant c of the Deep Bayesian Trust mechanism is independent of the probability of a worker getting the oracle or another truthful worker as her peer.
Corollary 1 implies that, to ensure incentive compatibility, our mechanism doesn't need to scale up the rewards of workers if the probability of a worker getting the oracle or another truthful worker as her peer decreases.
Theorem 2.
In the Deep Bayesian Trust mechanism, a heuristic strategy gives zero expected reward.
It may be noted that the DUSIC result in Theorem 1 already implies that the heuristic strategies are not in equilibrium, but Theorem 2 answers the question of what happens if someone still plays those strategies.
Limitation
If, despite all these guarantees, every single worker chooses to irrationally play a heuristic strategy, then our mechanism will not be able to expand its pool and will be forced to behave like other mechanisms which assign gold tasks to every worker. But (i) such workers don’t gain anything from the mechanism; (ii) the dominant incentive compatibility of the mechanism remains unaffected for any rational workers even in such a degenerate case; and (iii) the heuristic strategy doesn’t become an equilibrium strategy.
Fairness of Rewards
Recently, concerns have been raised about fairness and other ethical considerations in algorithms that affect humans [ACM 2017; Podesta et al. 2014]. The discussion on fair rewards in crowdsourcing has included issues such as minimum wages and adequate compensation for time and effort [Schmidt 2013], but there has not been any principled approach to address the issue of fairness in rewards from a non-discrimination perspective. For example, if a worker with higher ability gets a lower reward than a worker with lower ability because of a difference in the way they were evaluated, then this is a potential case of unfairness. The unfairness is an unintentional and undesired property of the existing mechanisms. Peer-based mechanisms in the literature randomly select peers and reward the workers based on their answers and the answers of their respective peers. The reward of the workers is generally a function of their own ability as well as their peers' ability, making the rewards unfair. This unfairness issue in peer-based mechanisms was first pointed out by [Kamar and Horvitz 2012]. The issue becomes more serious when workers know ex ante that they are being evaluated using peers with different proficiencies. This is the case, for example, with the mechanism of [de Alfaro et al. 2016]. Our mechanism doesn't need to inform the workers about their peers at all, but as we show, the mechanism satisfies an even stronger definition of fairness.
Definition 11 (Fair Incentive Mechanism).
An incentive mechanism is called fair if the expected reward of any worker is directly proportional to the accuracy of the answers reported by her and independent of the strategy and proficiency of her random peer.
This is a reasonable definition of fairness and is in agreement with the broader theory of individual fairness of algorithms. For example, the pioneering work of [Dwork et al. 2012] requires that fair algorithms take similar decisions for individuals with similar relevant attributes. The relevant attribute in our case is the worker's accuracy. The definition is also non-trivial to satisfy. In existing peer-based mechanisms, the rewards also depend on the unknown ability of the peer (even if the peer can be believed to be truthful). For example, in the mechanism of [Dasgupta and Ghosh 2013], the reward of a worker in the truthful equilibrium is an increasing function of her proficiency as well as her peer's proficiency. On the contrary, our mechanism satisfies this definition of fairness. The mechanism carefully uses the peer answers only to find the trustworthiness of a worker, which is entirely her own accuracy parameter and doesn't depend on her peer's proficiency or strategy.
Theorem 3.
The Deep Bayesian Trust Mechanism is fair.
This is perhaps a surprising result because, in the existing framework of peer-based mechanisms, one might reason that it is impossible for the rewards not to depend on the accuracy of the peer.
7 Numerical Simulations
In this section, we evaluate the performance of our mechanism empirically. We simulate settings in which workers with different proficiencies report answers to different tasks. The proficiency matrices of different workers were generated independently, with the diagonal entries drawn from a fixed distribution. The diagonal entries for a given worker are not necessarily the same, as they are also independently generated. The rest of the entries are generated randomly such that every row of the proficiency matrix sums to 1.
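A sketch of such a generator (the Beta(5, 1) choice for the diagonal is purely illustrative, since the exact distribution is not reproduced here; the function name is ours):

```python
import numpy as np

def random_proficiency(n, rng, a=5.0, b=1.0):
    """Generate an n x n right stochastic proficiency matrix.

    Diagonal entries are drawn independently (here from Beta(a, b), an
    illustrative choice); the remaining mass in each row is split
    randomly among the off-diagonal entries so the row sums to 1.
    """
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = rng.beta(a, b)
        off = rng.random(n - 1)
        off = off / off.sum() * (1.0 - P[i, i])
        P[i, np.arange(n) != i] = off
    return P
```

Each call yields a valid right stochastic matrix with independently drawn, generally unequal diagonal entries, matching the setup described above.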
We consider following strategies that workers may play:


Truthful: Workers obtain an answer for any given task based on their respective proficiency matrices and report the answers truthfully.

Heuristic: Workers' reported answers are generated independently of their proficiency, using a common distribution over the answer space.

Permutation: Workers obtain an answer for any given task based on their respective proficiency matrices, but they apply a common permutation to the answers before reporting them to the mechanism. This is the non-truthful permutation (deterministic) strategy of [Shnayder et al. 2016]. For example, in a ternary answer space ({1, 2, 3}), a permutation can be as follows: whenever the obtained answer is 1, workers report 2; for 2, they report 3; and for 3, they report 1. In a binary answer space, this corresponds to reporting the opposite of the obtained answer.
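The three reporting behaviours can be sketched as follows (illustrative only; the uniform heuristic distribution and the cyclic permutation are our example choices):

```python
import numpy as np

def report(obtained, strategy, rng, n=3):
    """Map an obtained answer (1-indexed) to a reported answer."""
    if strategy == "truthful":
        return obtained                     # report exactly what was obtained
    if strategy == "heuristic":
        return int(rng.integers(1, n + 1))  # ignore the task; common distribution
    if strategy == "permutation":
        return obtained % n + 1             # cyclic map: 1 -> 2, 2 -> 3, 3 -> 1
    raise ValueError(f"unknown strategy: {strategy}")
```

In the simulations each worker is assigned one of these behaviours and applies it uniformly to her whole batch, consistent with the uniform-strategy space of Definition 7.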
In general, the simulations performed in the literature for peer-based mechanisms compare the average reward in different equilibria. For example, the average reward of workers when all of them play a truthful strategy may be compared with the average reward when all play a heuristic strategy. This is because such mechanisms only guarantee that different strategies are in equilibrium and that one equilibrium is more profitable than the other. But our stronger theoretical result (dominant incentive compatibility) demands stronger simulations. We go beyond comparing just equilibrium rewards and instead compare the rewards of workers playing different strategies against one another at the same time. More precisely, in our simulations, we don't require every worker to play a common strategy. Any worker can play a heuristic, permutation or truthful strategy with equal probability. Such settings can't be handled by mechanisms that guarantee only equilibrium results. We will show that in our mechanism, there is a clear distinction between the rewards of workers playing different strategies, with truthful workers being nicely rewarded and others being penalized. Workers were simulated to be hired in several successive rounds, and the same hyperparameter values were used in all the experiments discussed in the paper.
Figure 3 compares the distribution of rewards of workers playing the three strategies. The rewards of workers playing the heuristic strategy are centered around zero, as expected from Theorem 2. The rewards of workers playing the truthful strategy are centered around a strictly positive value, as predicted by Theorem 1. On the contrary, the rewards of workers playing the permutation strategy are symmetrically centered around a strictly negative value. It may be noted that in existing peer based mechanisms, permutation strategies (in equilibria) are as profitable as the truthful strategy, which is clearly not the case with our mechanism: neither the heuristic nor the permutation strategy is an equilibrium in our case, and even if workers use either of these strategies, they receive a lower reward than with the truthful strategy.
We now show the robustness of our mechanism with respect to the number of tasks shared between workers. The theoretical analysis earlier discussed only the asymptotic properties of the mechanism, so this simulation study is important to show its performance with a finite number of shared tasks. Figure 4 compares the average rewards of workers (with randomly drawn proficiencies) playing different strategies under different settings of the number of shared tasks. Error bars show the standard deviation over repeated runs. The trend observed in the previous experiment is very robust to the number of shared tasks. Thus, the Deep Bayesian Trust mechanism is attractive even when the number of shared tasks is not large. This simulation also implies that with gold tasks given only to a small initial set of workers, the mechanism can reward all workers. We also simulated settings in which the diagonal entries of the proficiency matrices were uniformly distributed over a fixed interval and repeated the above experiments. Results (with similar observations) are available in the supplementary material. We also conducted a preliminary study on Amazon Mechanical Turk to observe the effect of our mechanism in encouraging workers to invest more effort. Workers were given hard tasks related to natural language understanding and had to give a binary ('Yes' or 'No') answer. The details of the study are in the supplementary material. We observed that in the presence of our mechanism, workers invested more time in solving the tasks and the average accuracy of their responses also improved.
8 Conclusions
We proposed the Deep Bayesian Trust mechanism to incentivize crowdworkers in large scale settings. The mechanism rewards workers for the correctness of their reports without checking every worker with gold tasks. Instead, it uses the correlation between the answers of the workers and their peers to estimate their accuracy. The mechanism is guaranteed to be game-theoretically robust to any strategic manipulation. Thus, it is suitable even for settings in which workers of very heterogeneous proficiencies and motivations solve the tasks at the same time. The mechanism also ensures fair rewards to workers, thus contributing towards the bigger movement of making algorithmic decisions fair. Among other issues, our mechanism notably addresses the scalability issues in purely gold task based mechanisms, the incentive compatibility issues in purely peer based mechanisms, and the information requirement and fairness issues in mixed mechanisms.
References
 [ACM2017] ACM. 2017. Statement on algorithmic transparency and accountability.
 [Checco, Bates, and Demartini2018] Checco, A.; Bates, J.; and Demartini, G. 2018. All that glitters is gold – an attack scheme on gold questions in crowdsourcing. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing.
 [Dasgupta and Ghosh2013] Dasgupta, A., and Ghosh, A. 2013. Crowdsourced judgement elicitation with endogenous proficiency. In Proceedings of the 22nd international conference on World Wide Web, 319–330. ACM.
 [Dawid and Skene1979] Dawid, A. P., and Skene, A. M. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics 28(1):20–28.
 [de Alfaro et al.2016] de Alfaro, L.; Faella, M.; Polychronopoulos, V.; and Shavlovsky, M. 2016. Incentives for truthful evaluations. arXiv preprint arXiv:1608.07886.
 [Dwork et al.2012] Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–226. ACM.

 [Gao, Wright, and Leyton-Brown2016] Gao, A.; Wright, J. R.; and Leyton-Brown, K. 2016. Incentivizing evaluation via limited access to ground truth: Peer-prediction makes things worse. In 2nd Workshop on Algorithmic Game Theory and Data Science at EC 2016.
 [Kamar and Horvitz2012] Kamar, E., and Horvitz, E. 2012. Incentives for truthful reporting in crowdsourcing. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems – Volume 3, 1329–1330. International Foundation for Autonomous Agents and Multiagent Systems.
 [Liu and Chen2018] Liu, Y., and Chen, Y. 2018. Surrogate scoring rules and a dominant truth serum for information elicitation. arXiv preprint arXiv:1802.09158.
 [Miller, Resnick, and Zeckhauser2005] Miller, N.; Resnick, P.; and Zeckhauser, R. 2005. Eliciting informative feedback: The peer-prediction method. Management Science 51(9):1359–1373.
 [Oleson et al.2011] Oleson, D.; Sorokin, A.; Laughlin, G. P.; Hester, V.; Le, J.; and Biewald, L. 2011. Programmatic gold: Targeted and scalable quality assurance in crowdsourcing. Human computation 11(11).
 [Podesta et al.2014] Podesta, J.; Pritzker, P.; Moniz, E. J.; Holdren, J.; and Zients, J. 2014. Seizing opportunities and preserving values. Executive Office of the President.
 [Prelec2004] Prelec, D. 2004. A Bayesian truth serum for subjective data. Science 306(5695):462–466.
 [Radanovic and Faltings2013] Radanovic, G., and Faltings, B. 2013. A robust Bayesian truth serum for non-binary signals. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI'13), 833–839.
 [Radanovic and Faltings2015] Radanovic, G., and Faltings, B. 2015. Incentive schemes for participatory sensing. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, 1081–1089. International Foundation for Autonomous Agents and Multiagent Systems.
 [Radanovic, Faltings, and Jurca2016] Radanovic, G.; Faltings, B.; and Jurca, R. 2016. Incentives for effort in crowdsourcing using the peer truth serum. ACM Transactions on Intelligent Systems and Technology (TIST) 7(4):48.
 [Schmidt2013] Schmidt, F. A. 2013. The good, the bad and the ugly: Why crowdsourcing needs ethics. In Cloud and Green Computing (CGC), 2013 Third International Conference on, 531–535. IEEE.
 [Shnayder et al.2016] Shnayder, V.; Agarwal, A.; Frongillo, R.; and Parkes, D. C. 2016. Informed truthfulness in multi-task peer prediction. In Proceedings of the 2016 ACM Conference on Economics and Computation, EC '16, 179–196. ACM.
 [Surowiecki2005] Surowiecki, J. 2005. The wisdom of crowds. Anchor.
 [Waggoner and Chen2014] Waggoner, B., and Chen, Y. 2014. Output agreement mechanisms and common knowledge. In Second AAAI Conference on Human Computation and Crowdsourcing.
 [Witkowski and Parkes2012] Witkowski, J., and Parkes, D. C. 2012. A robust bayesian truth serum for small populations. In Proceedings of the AAAI Conference on Artificial Intelligence.