I Introduction
In recent years, crowdsourcing has become one of the most popular distributed problem-solving models, in which a crowd of undefined size is engaged to solve a complex problem through an open call [1]. It enables numerous applications, such as reviewing and voting on items at Amazon [2] and Yelp [3], sharing knowledge at Yahoo! Answers [4] and Zhihu [5], creating maps at OpenStreetMap [6], and labeling images with the ESP game [7]. The prevalence of these crowdsourcing applications is largely credited to various intrinsic incentives, such as social, service, entertainment, and ethical motivations. On the other hand, monetary (extrinsic) incentives are leveraged by general crowdsourcing platforms like Amazon Mechanical Turk (MTurk) [8] and Taskcn [9] to recruit online workers for accomplishing various tasks, such as data annotation, text translation, and identifying objects in a photo or video.
The proliferation of mobile sensing devices (e.g., smartphones, wearable devices, in-vehicle sensing devices) offers a new sensing paradigm as an important branch of “crowdsourcing”, which extends Web-based crowdsourcing to a larger mobile crowd, allowing sensing tasks to be performed pervasively, at larger scale and more easily than with traditional static sensor networks. This paradigm is often called “mobile crowd sensing” (a.k.a. “participatory sensing” or “opportunistic sensing”, with similar concepts), and it has been adopted for various applications, such as Sensorly [10], NoiseTube [11], and Common Sense [12] for building large-scale urban sensing maps (network coverage, noise, and air quality), Nericell [13] and VTrack [14] for traffic sensing, and FindingNemo [15] for finding a lost child. Several general mobile crowdsourcing platforms, such as Gigwalk [16], Jana [17] and Weichaishi [18], have recruited millions of users to participate in various mobile tasks, such as conducting consumer research, launching product promotions, and creating consumer loyalty campaigns. It is always indispensable to provide proper incentives to compensate users’ participation costs, including their time, various non-negligible resources (e.g., computation, storage, and battery energy), and the potential threat of location privacy leakage.
Extensive research has been conducted on designing incentive mechanisms for crowdsourcing [19, 20, 21, 22, 23, 24, 25, 26, 27]. Most existing mechanisms assume that participants are already in the system and aware of the existence of crowdsourcing tasks. However, they neglect two key facts. First, participants do not exist in the system from the beginning. Second, even if there are many registered users on a crowdsourcing platform such as MTurk or Gigwalk, it is hard for most of these potential participants to learn of tasks in a timely manner, as many users tend to prohibit automatic task push, or ignore tasks to save precious time, or the platform cannot push tasks to users in the right locations if users prohibit reporting their real-time GPS coordinates. On the other hand, it is reasonable to assume that a small number of users are the first to participate in the task. For example, a user may be actively browsing task lists and decide to participate. Alternatively, a user may just report his location so that the platform pushes the task to him in time. Thus, a more sensible and effective method is to leverage the “word-of-mouth” effect, namely to encourage these early participants to refer other users from their social networks, such as Facebook [28], Twitter [29] and WeChat [30], or from their neighboring community via opportunistic networking [31].
Incentive tree (a.k.a. referral tree, multi-level marketing, affiliate marketing, and direct marketing) mechanisms provide an effective way to address the aforementioned requirements. An incentive tree is a tree-structured incentive mechanism in which i) each user is rewarded for direct contributions, and ii) a user who has already participated can make referrals, soliciting new users to also participate and contribute. The mechanism incentivizes solicitations by making a solicitor’s reward depend on the contributions of its solicitees (and, recursively, on their further solicitations) [32]. One infamous incentive tree mechanism is the Pyramid Scheme [33], which offers promising rewards for solicitation but is illegal in many countries. Another well-known application example is the DARPA Red Balloon Challenge, in which an MIT team won by using a simple incentive tree mechanism [34]. However, this mechanism has a serious drawback: it is not robust against sybil attacks. Many incentive tree mechanisms have since been designed to resist sybil attacks [35, 36, 37, 38, 32, 39, 40], but most of them lack a budget constraint, so participants have an “unbounded reward opportunity” (a property defined in [35, 32]). In fact, the crowdsourcer (i.e., the crowdsourcing task organizer) often has a certain budget constraint in realistic scenarios, which also represents the mainstream incentive type in existing crowdsourcing platforms.
In this paper, we aim to design a class of budget-consistent incentive tree mechanisms satisfying six desirable properties: budget consistency (BC), continuing contribution incentive (CCI), continuing solicitation incentive (CSI), value proportional to contribution (VPC), unprofitable solicitor bypassing (USB), and unprofitable sybil attack (USA). The latter five properties are commonly considered in existing work [35, 36, 37, 38, 32, 39, 40], so that the mechanism encourages contribution, solicitation, and fair play. In addition, we emphasize the importance of the property BC, which requires that the total payout to all participants be consistent with the budget announced by the crowdsourcer at the time of task distribution; namely, the total payout is exactly equal to the budget, rather than less than it. Otherwise, if the total payout can be cut arbitrarily, participants will not trust the crowdsourcer, resulting in a decline of participation enthusiasm.
To the best of our knowledge, only the work in [40] designs a class of incentive tree mechanisms with a budget constraint: lottery tree (lottree) mechanisms, which select one participant as the unique recipient of the payout with a probability computed by a lottery function. The Pachira lottree is proposed to satisfy CCI, CSI, VPC, USB and USA. However, it violates BC. Moreover, it allows only one winner (for the sake of distinction, we call it the 1-Pachira lottree in the rest of the paper), which is not always effective for all scenarios. In fact, it is nontrivial to adjust the 1-Pachira lottree to satisfy BC without violating the other properties, or to extend it to generalized mechanisms with multiple winners, as we will elaborate later. By contrast, we propose an effective strategy to rescale the 1-Pachira lottree, and prove that it satisfies all six desirable properties. Furthermore, we design generalized Pachira lottree mechanisms, including the K-Pachira lottree that allows multiple winners, and the Sharing-Pachira lottree that allows each participant to be a winner. In the Sharing-Pachira lottree, all participants proportionally share the budget based on their respective winning probabilities.
Now another key and interesting question is: which mechanism is best among the 1-Pachira, K-Pachira and Sharing-Pachira lottrees? Some recent studies [41, 42, 43, 44, 45, 46] have compared lottery-based (a.k.a. randomized reward) and fixed-payment (a.k.a. micropayment, linear reward) mechanisms through real-world experiments. However, they lack a general and solid theoretical basis to account for their experimental results, and thus fail to provide persuasive guidance on mechanism selection for different scenarios. Moreover, none of them considers incentive tree mechanisms. By contrast, we leverage the Cumulative Prospect Theory (CPT) [47] to compare different generalized lottree mechanisms by numerical analysis.
This provides interesting and important theoretical guidance for mechanism selection under various application requirements: if a crowdsourcer has a large budget constraint, or it only requires a small number of participants, then the Sharing-Pachira lottree mechanism should be recommended; otherwise, the 1-Pachira lottree mechanism should be recommended.
Finally, in order to verify our theoretical analysis, we first build a social-network-based simulator and implement the three generalized lottree mechanisms. Extensive simulations are conducted to confirm our theoretical analysis. Second, we investigate a typical application case, looking for lost objects, and design an interesting experimental mobile game, Treasure Hunt, to conduct extensive performance evaluations. In total, 82 users registered in our app within 11 days, based on which 12 tasks were designed with different budget constraints and limits on the number of participants. The experimental results are also consistent with our theoretical analysis.
The main contributions of this paper are listed as follows:
To the best of our knowledge, we are the first to investigate budget-consistent incentive tree mechanisms while guaranteeing several desirable properties: CCI, CSI, VPC, USB and USA (Sections II-B, II-C and III-A).
We design generalized lottree mechanisms in support of multiple winners, and provide theoretical guidance on mechanism selection for different requirements by leveraging the CPT (Sections III-B to III-E).
We evaluate various mechanisms by both extensive simulations and realistic experiments to confirm our theoretical analysis, and present a typical application case (Sections IV and V).
II Problem Formulation and Preliminaries
In this section, we first present the crowdsourcing model, the formal definition of a generalized lottree, and the desirable properties. We next provide some preliminaries, including the 1-Pachira lottree and the CPT.
II-A Crowdsourcing Model
Suppose that a crowdsourcer needs to recruit users to participate in a crowdsourcing campaign with a budget constraint B. The crowdsourcer may have a specific limit on the number of participants, N, especially when it only requires a small number of participants. For instance, the participatory sensing data collection applications [41], GarbageWatch, What’s Bloomin, and AssetLog, only require a few motivated users to document various resource-use issues at a university by taking geo-tagged photos of various resources, like outdoor waste bins, water usage of plants, bicycle racks, recycle bins, and charge stations. Certainly, many crowdsourcing applications expect as many participants as possible, and thus often have no explicit limit on N. For instance, a large number of participants are required to build large-scale urban sensing maps [10, 11, 12] or find a lost child quickly [15]. In fact, both the number of required participants N and the budget constraint B have non-negligible impacts on the effectiveness of different incentive mechanisms, as we will elaborate later.
On the other hand, users can participate in a crowdsourcing campaign and contribute to it (e.g., solving tasks, uploading sensing data, finding balloons or a lost child). Both the homogeneous and heterogeneous user models are considered [26], where the former is a special case of the latter. The former can account for atomic tasks, where each user can complete only a single task and thus makes the same contribution. For instance, Gigwalk [16] recruits users who are in a shopping mall for conducting consumer research, where each user can complete only one questionnaire. The latter can account for divisible tasks or crowdsourcing campaigns in which users can participate continuously, where different users may complete different numbers of tasks or participate in a campaign for different durations, resulting in different contributions. For instance, Microsoft has recruited users to add panoramic images to its Bing Maps results through Gigwalk [16], where different users are willing to take different numbers of photos. Similarly, in FindingNemo [15] different users may spend different amounts of time looking for a lost child. Generally, the contribution of a user v is denoted by c_v ≥ 0.
Furthermore, users can also solicit new users. Such solicitations induce a tree T. Each user is represented as a tree node v, and there is a directed edge (u, v) between two users u and v if v has participated in the campaign in response to a solicitation by u. In other words, if v participates in the campaign via a solicitation by u, it becomes a child node of u in T. The crowdsourcer is the root node r. The users who have participated in the campaign directly in response to the solicitation from the crowdsourcer are the child nodes of r. T_v denotes the subtree of T rooted at node v. E denotes the set of directed edges in T. T is a weighted tree in which the weight of a node v is its contribution to the campaign, c_v. Since the crowdsourcer makes no direct contribution, we have c_r = 0. The total contribution of all nodes in T is denoted by C = Σ_{v ∈ T} c_v.
II-B Generalized Lottree
A generalized lottree is an incentive tree mechanism that leverages lotteries to probabilistically select one or multiple participants as the winner(s), and pays out a reward to each winner. (The original definition of “lottree” allows multiple winners, but such generalized cases are in fact not considered in [40]; the word “generalized” is purposely used here to emphasize that the mechanism allows one or multiple winners.) One key component of a generalized lottree is a lottery function ℓ(·) that determines the lottery value (i.e., winning probability) ℓ(v) of each node v, and satisfies Σ_{v ∈ T} ℓ(v) = 1. The lottery value of a node should depend on both the tree structure and nodes’ contributions, so that both contributions and solicitations from participants are encouraged. Another key component is a reward function R(·) that determines the reward R(v) of each node v, and satisfies Σ_{v ∈ T} R(v) ≤ B. The reward of a node depends on the crowdsourcer’s reward strategy, namely how many winners are allowed among participants. Specifically, three reward strategies are considered: the 1-lottree with only one winner, the K-lottree with K (1 < K < n, where n is the number of participants) winners, and the Sharing-lottree that allows each participant to be a winner.
II-C Desirable Properties
While the main objective of a generalized lottree mechanism is to incentivize both contributions and solicitations under a certain budget constraint, it should also guarantee fairness and be robust against various strategic behaviors of participants. In the following, we define the set of desirable properties that a generalized lottree should ideally satisfy.
Budget Consistency (BC): A generalized lottree satisfies BC if the total reward to all nodes in the tree except the root node is consistent with the budget, i.e., Σ_{v ∈ T∖{r}} R(v) = B. This property imposes a stricter constraint than the so-called Zero Value to Root (ZVR) property defined in [40]. ZVR only requires that the reward to the root node of the tree be zero, R(r) = 0, while allowing the total payout to be less than the budget. We argue that this is not sufficient, as it leads to an uncommitted crowdsourcer and results in a decline of users’ participation enthusiasm.
Continuing Contribution Incentive (CCI): A generalized lottree satisfies CCI if it provides nodes with increasing expected reward in response to increased contribution. Formally, given a tree T, if a node v increases its contribution from c_v to c′_v > c_v while all other nodes maintain the same contributions, then the expected reward of v increases: E[R′(v)] > E[R(v)].
Continuing Solicitation Incentive (CSI): A generalized lottree satisfies CSI if each node always has an incentive to solicit new nodes. We follow the notion of “weak solicitation incentive (WSI)” defined in [40]. Formally, if the subtree of a node v includes some node u (u ∈ T_v) but does not include some other node w (w ∉ T_v), and there is a new node x (x ∉ T), which in case 1 joins the tree as a child of u, and in case 2 joins the tree as a child of w, then the reward of v in case 1, denoted by R_1(v), is greater in expectation than that in case 2, denoted by R_2(v): E[R_1(v)] > E[R_2(v)].
Value Proportional to Contribution (VPC): This property demands that the mechanism maintain a notion of fairness among nodes, as participants intuitively expect their rewards to be proportional to their contributions. We say that a generalized lottree satisfies VPC for some σ ∈ (0, 1] if it ensures that the expected reward of each node v is at least σ times the relative contribution made by that node: E[R(v)] ≥ σ · (c_v / C) · B.
Unprofitable Solicitor Bypassing (USB): A generalized lottree satisfies USB if a new node can never gain expected reward by joining the tree as a child of some node other than its solicitor. Violation of this property has undesirable consequences: participants lose interest in soliciting new nodes, as new nodes tend to join the tree not as children of the nodes that solicited them. Formally, if nodes u and w are in the tree (u, w ∈ T), and there is a new node x (x ∉ T), which in case 1 joins the tree as a child of u, and in case 2 joins the tree as a child of w, then the reward of x in case 1, denoted by R_1(x), is not smaller in expectation than that in case 2, denoted by R_2(x): E[R_1(x)] ≥ E[R_2(x)], which, by symmetry, implies E[R_1(x)] = E[R_2(x)].
Unprofitable Sybil Attack (USA): This property demands that no participant can gain lottery value by pretending to have multiple identities and joining the tree as a set of Sybil nodes instead of joining singly. Formally, the Sybil attack is defined as follows: given any node v whose parent is u and whose children are w_1, …, w_k, v launches the Sybil attack by splitting itself into multiple replicas (i.e., Sybil nodes) v_1, …, v_m (m ≥ 2); each Sybil node can only be a child of u or a child of one of the other Sybil nodes, and each node w_i is a child of some Sybil node. The total expected reward for v from this Sybil attack is Σ_{i=1}^{m} E[R(v_i)]. We say that a generalized lottree satisfies USA if no node v can gain expected reward by any Sybil attack without making extra contributions: Σ_{i=1}^{m} E[R(v_i)] ≤ E[R(v)].
II-D 1-Pachira Lottree
The 1-Pachira lottree has been proven to satisfy CCI, CSI, VPC, USB and USA [40]. In principle, the 1-Pachira lottree can be defined using any function f: [0, 1] → [0, 1] that satisfies the following properties:

i) f(0) = 0, f(1) = 1;
ii) f′(x) ≥ β for all x ∈ [0, 1] (minimum slope β of f);
iii) f′′(x) > 0 for all x ∈ (0, 1) (strictly convex).
In this paper, we follow a particularly convenient and intuitive function with the above properties:
f(x) = β · x + (1 − β) · x^δ, (1)
where β ∈ (0, 1) and δ > 1 are two input parameters that trade off solicitation incentive against fairness. Then for each node v, a weight is computed as the function f applied to its proportional contribution: f(c_v / C). Besides, the weight of a subtree T_v is defined as
W(T_v) = f( Σ_{u ∈ T_v} c_u / C ). (2)
Specially, for any leaf node v, it holds that W(T_v) = f(c_v / C). Finally, the 1-Pachira lottree determines the lottery value of each node v as the weight of the subtree rooted at v minus the weights of all of v’s child subtrees:
ℓ(v) = W(T_v) − Σ_{u ∈ child(v)} W(T_u). (3)
Only one node v obtains all the reward B, with probability ℓ(v).
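To make the computation concrete, the following Python sketch evaluates the subtree weights and lottery values on a small solicitation tree. The function names, the dictionary-based tree encoding, and the default parameter values are illustrative assumptions; the concrete f(x) = βx + (1 − β)x^δ is one convenient choice satisfying the three stated properties.

```python
# Sketch of 1-Pachira lottery values on a solicitation tree. The concrete
# function f(x) = beta*x + (1 - beta)*x**delta is an illustrative choice
# satisfying f(0) = 0, f(1) = 1, minimum slope beta, and strict convexity.
def f(x, beta=0.5, delta=2.0):
    return beta * x + (1 - beta) * x ** delta

def lottery_values(children, contrib):
    """children: node -> list of solicited children; contrib: node -> contribution.
    Node 0 is the root (the crowdsourcer, with zero contribution)."""
    total = sum(contrib.values())

    def subtree_contrib(v):  # total contribution of the subtree rooted at v
        return contrib[v] + sum(subtree_contrib(u) for u in children.get(v, []))

    def weight(v):           # W(T_v) = f(subtree contribution / total)
        return f(subtree_contrib(v) / total)

    # lottery value: subtree weight minus the weights of all child subtrees
    return {v: weight(v) - sum(weight(u) for u in children.get(v, []))
            for v in contrib}
```

Because the weights telescope along the tree, the lottery values of all nodes (including the root, before any rescaling) sum to f(1) = 1.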
II-E Cumulative Prospect Theory (CPT)
An effective lottree mechanism should consider how people perceive the payout under different reward strategies, based on the cognitive psychology of lottery gambling [48]. For this purpose, the Prospect Theory, a generally accepted economic model proposed by Kahneman and Tversky [49], describes how individuals evaluate losses and gains in lotteries more adequately than the expected utility theory. Furthermore, the Cumulative Prospect Theory (CPT) extends it to uncertain as well as risky prospects with any number of outcomes, and confirms a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability [47]. Note that only the gain case is relevant for lottrees. Specifically, for a single outcome of gain x with probability p, the value function and weighting function are respectively defined based on a nonlinear transformation as follows:
v(x) = x^α, 0 < α ≤ 1, (4)
w(p) = p^γ / ( p^γ + (1 − p)^γ )^{1/γ}. (5)
The cumulative prospect value (CPV) is then computed as individuals’ perceived gain:
CPV = w(p) · v(x). (6)
Furthermore, if there are a series of n possible outcomes with gain–probability pairs (x_i, p_i), i = 1, …, n, sorted in increasing order of gains (0 ≤ x_1 ≤ … ≤ x_n), then the cumulative decision weights are defined by:
π_n = w(p_n), (7)
π_i = w(p_i + ⋯ + p_n) − w(p_{i+1} + ⋯ + p_n), 1 ≤ i ≤ n − 1. (8)
The CPV is then computed as:
CPV = Σ_{i=1}^{n} π_i · v(x_i). (9)
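A minimal Python sketch of the CPT computation for gains follows; the default parameters α = 0.88 and γ = 0.61 are the median estimates reported by Tversky and Kahneman, and the function names are our own conventions.

```python
def v_gain(x, alpha=0.88):
    # CPT value function for gains, v(x) = x**alpha
    return x ** alpha

def w(p, gamma=0.61):
    # CPT probability weighting function, w(p) = p**g / (p**g + (1-p)**g)**(1/g)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpv(outcomes, alpha=0.88, gamma=0.61):
    """Cumulative prospect value of gain outcomes [(x_i, p_i), ...].
    Outcomes are ranked by gain; the decision weight of x_i is
    w(p_i + ... + p_n) - w(p_{i+1} + ... + p_n)."""
    total, tail = 0.0, 0.0                        # tail = p_{i+1} + ... + p_n
    for x, p in sorted(outcomes, reverse=True):   # from the largest gain down
        total += (w(tail + p, gamma) - w(tail, gamma)) * v_gain(x, alpha)
        tail += p
    return total
```

For a single outcome this reduces to the single-gain case above: cpv([(x, p)]) equals w(p) · v(x).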
III Generalized Pachira Lottree
In this section, we first present how to rescale the 1-Pachira lottree to guarantee all the desirable properties. Note that, since each node’s reward depends only on its own lottery value in the 1-Pachira lottree, it suffices to consider the lottery value when analyzing the various properties. We then extend it to the K-Pachira lottree and the Sharing-Pachira lottree, respectively. Finally, we analyze how to select mechanisms based on the CPT, and give an important theoretical guideline.
III-A Rescaling the 1-Pachira Lottree
The 1-Pachira lottree does not satisfy ZVR, because the root node r can obtain the reward with probability ℓ(r) > 0. It also violates BC, as ZVR is a necessary but not sufficient condition for BC. It is a straightforward strategy to rescale the lottree for satisfying ZVR by distributing the root’s lottery value to the other nodes. Interestingly, however, it is nontrivial for a rescaling strategy to ensure the desirable properties, especially BC, USB and USA. For instance, a rescaling strategy proposed in [40] distributes the root’s lottery value among the other nodes in proportion to their lottery values (Fig. 1(b)), but it violates USB. Although another rescaling strategy is further used to satisfy USB, it still violates BC. Intuitively, a crowdsourcer may tend to use two other rescaling strategies: one is to distribute the root’s lottery value to nodes at a lower level of the tree, e.g., the second-level nodes who join the tree directly in response to the solicitation from the crowdsourcer (Fig. 1(c)); the other is to distribute the root’s lottery value to nodes who join the tree earlier, e.g., the first two nodes who join the tree, regardless of the tree structure (Fig. 1(d)). Both strategies encourage users to participate in the campaign as soon as possible, which is also a good property for an incentive mechanism. In the following, we define two more general rescaling strategies.
Structure-dependent Rescaling: Given a lottree with lottery value ℓ(v) for each non-root node v and ℓ(r) for the root node r, a rescaling strategy is structure-dependent if it distributes r’s lottery value to a subset of nodes v_1, …, v_m at specified levels or locations in the tree structure, with proportions q_1, …, q_m (Σ_i q_i = 1), so that v_i’s lottery value is rescaled as ℓ(v_i) + q_i · ℓ(r), i = 1, …, m, r’s lottery value is rescaled as 0, and other nodes’ lottery values remain the same.
Time-dependent Rescaling: Given a lottree with lottery value ℓ(v) for each non-root node v and ℓ(r) for the root node r, a rescaling strategy is time-dependent if it distributes r’s lottery value to a subset of nodes v_1, …, v_m who join the tree in specified time orders, with proportions q_1, …, q_m (Σ_i q_i = 1), so that v_i’s lottery value is rescaled as ℓ(v_i) + q_i · ℓ(r), i = 1, …, m, r’s lottery value is rescaled as 0, and other nodes’ lottery values remain the same.
Besides, we define the first-is-root rescaling strategy as the special case of both structure-dependent and time-dependent rescaling strategies, as illustrated in Fig. 1(e) and Fig. 1(f).
First-is-root Rescaling: Given a lottree with lottery value ℓ(v) for each non-root node v and ℓ(r) for the root node r, a first-is-root rescaling strategy rescales the lottery value of the node v_1 who is the first to join the tree as ℓ(v_1) + ℓ(r), rescales r’s lottery value as 0, and leaves other nodes’ lottery values unchanged.
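In code, the first-is-root rescaling is a one-line transfer of the root's lottery value; this sketch assumes lottery values are held in a dictionary keyed by node, an illustrative convention.

```python
def first_is_root_rescale(lottery, root, first):
    """Transfer the root's lottery value to the first node that joined the
    tree; all other lottery values are left unchanged."""
    rescaled = dict(lottery)
    rescaled[first] += rescaled[root]
    rescaled[root] = 0.0
    return rescaled
```

The total lottery value is preserved while the root's share drops to zero, so all probability mass (and hence the full budget) goes to participants.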
We next analyze the above rescaling strategies one by one.
Theorem 1.
Any structure-dependent rescaling strategy violates USB except the first-is-root rescaling.
Proof.
Let us first consider any structure-dependent rescaling strategy except the first-is-root rescaling, where some nodes will tend to bypass their solicitors strategically to become nodes at the specified levels or locations for gaining a higher lottery value. This can be shown by the following example: for the lottree illustrated in Fig. 1(a), suppose a user bypasses its solicitor to become the child of another node. Its lottery value remains the same in the case without structure-dependent rescaling, as illustrated in Fig. 2(a), but its lottery value can be increased after the structure-dependent rescaling, as illustrated in Fig. 2(b). This means that USB is violated after a structure-dependent rescaling, as the user can gain a higher lottery value by bypassing its solicitor.
Then we consider the first-is-root rescaling strategy. In this special case, no node can bypass its solicitor to become the first node. It implies that no node can gain a higher lottery value by bypassing its solicitor. Thus, the first-is-root rescaling strategy satisfies USB. ∎
Theorem 2.
Any time-dependent rescaling strategy violates USA except the first-is-root rescaling.
Proof.
Let us first consider any time-dependent rescaling strategy except the first-is-root rescaling, where some nodes will tend to launch a Sybil attack by having multiple identities join the tree in the specified time orders for gaining a higher lottery value. This can be shown by the following example: for the lottree illustrated in Fig. 1(a), if a user splits itself into two Sybil nodes that become the first two nodes joining the tree, resulting in a new lottree illustrated in Fig. 3(a), then it cannot gain lottery value in the case without time-dependent rescaling:
(10) 
which also implies that the user’s total lottery value remains the same. However, a time-dependent rescaling will result in a new lottree illustrated in Fig. 3(b), where the user’s lottery value increases as follows:
(11) 
This means that USA is violated after a time-dependent rescaling, as the user can gain a higher lottery value by launching a Sybil attack.
Then we consider the first-is-root rescaling strategy. For any node in the lottree except the first node, it is impossible to gain a higher lottery value by launching a Sybil attack, as none of its Sybil nodes can become the first node and occupy the original lottery value of the root node. Now we consider the first node v_1. According to the definition of the first-is-root rescaling, if v_1 does not launch any Sybil attack, its lottery value will be:
(12) 
If v_1 launches a Sybil attack by splitting itself into replicas v′_1, …, v′_m (m ≥ 2), then the original lottery value of the root node is only distributed to the first Sybil node, regardless of how the Sybil nodes are organized, and its total lottery value will be:
(13) 
This implies that it is also impossible for the first node to gain a higher lottery value by launching a Sybil attack. Thus, the first-is-root rescaling strategy satisfies USA. ∎
According to its definition, the first-is-root rescaling strategy straightforwardly satisfies BC. It is also easy to infer that it satisfies CCI and VPC, as the lottery value becomes higher for the first node and remains unchanged for every other node. One may think that under the first-is-root rescaling strategy the first node has no incentive to solicit new nodes, as it becomes the root node and all other nodes are its descendants regardless of whether it makes referrals; considering this, one may suspect that the first-is-root rescaling strategy violates CSI. In fact, however, it can be proved that the first node still has an incentive to solicit new nodes due to the competitive effect.
Lemma 1.
The first-is-root rescaling strategy satisfies CSI.
Proof.
Since the first-is-root rescaling strategy does not change the lottery value of any node except the root node and the first node, it suffices to prove that the first node has an incentive to solicit new nodes. Let us consider the example illustrated in Fig. 4: v_1 and v_2 are the first two nodes. According to the first-is-root rescaling strategy, v_1 is rescaled as the root node, and v_2 is the child of v_1, as illustrated in Fig. 4. Now assume that there is a new node x, which in case 1 joins the tree in response to v_1’s solicitation (Fig. 4), and in case 2 joins the tree in response to v_2’s solicitation (Fig. 4). The lottery values of v_1 in the two cases are respectively as follows:
(14) 
(15) 
Due to the strict convexity of the function f, the following inequality holds:
(16) 
which implies that v_1’s lottery value in case 1 is greater than that in case 2. Thus, CSI is satisfied. ∎
The aforementioned analyses together prove the following theorem.
Theorem 3.
The first-is-root rescaling strategy satisfies all desirable properties, including BC, CCI, CSI, VPC, USB and USA.
Besides, the first-is-root rescaling strategy has an additional advantage: users will compete to be the first participant to win the extra lottery value, which is beneficial for recruiting the first batch of users as soon as possible.
III-B K-Pachira Lottree
One basic problem in extending the 1-Pachira lottree to the K-Pachira lottree is how to select K winners based on users’ lottery values. Generally, there are four potential strategies:
Strategy A: K different winners are selected in K rounds. After each round, the selected winner is excluded from the candidate set, and the next winner is selected from the remaining nodes based on their respective lottery values.
Strategy B: All nodes are sorted in decreasing order of their lottery values, and then the top K nodes are selected as winners.
Strategy C: K winners are selected in K rounds. In each round, a winner is selected from all nodes based on their respective lottery values, and it is never excluded from the candidate set. This means a node may be selected as a winner multiple times.
Strategy D: All nodes are allocated virtual lottery tickets in proportion to their respective lottery values, and then K tickets are drawn randomly in one round, with their owners becoming the winners.
It is interesting to see that both Strategies A and B violate USB, while both Strategies C and D maintain all desirable properties of the 1-Pachira lottree with first-is-root rescaling. We first analyze Strategy A by the example illustrated in Fig. 4: assume that both nodes v_1 and v_2 in Fig. 4 solicit a new node x, which in case 1 joins the tree in response to v_1’s solicitation (Fig. 4) and in case 2 in response to v_2’s solicitation (Fig. 4). Meanwhile, assume that the three nodes have the same contribution, 10, and the K-Pachira lottree is adopted with K = 2. We can then get different lottery values for cases 1 and 2, as illustrated in Fig. 4, and compute the probability that x becomes one of the two winners:
(17) 
which equals 0.649 and 0.648 for case 1 and case 2, respectively. Obviously, this violates USB. Specifically, a new node tends to become the child of a solicitor with a higher lottery value, so that it has a better chance to win in the next round after the node with the higher lottery value is selected and excluded from the candidate set.
Strategy B is a competitive strategy in essence. It is not difficult to infer that a new node tends to become the child of a solicitor with a lottery value higher than its own, so as to maintain its competitive advantage over other nodes with lower lottery values. Thus, Strategy B also violates USB.
More generally, in order to satisfy USB, each node’s final winning probability should be independent of other nodes’ lottery values. Both Strategies C and D follow this principle, and thus satisfy USB. In essence, Strategies C and D are equivalent to sampling with replacement and sampling without replacement in probability theory, respectively. Specifically, each node has the same winning probability in each round under Strategy C, and the same node has a slightly higher probability of winning at least once under Strategy D. It is also not difficult to see that both Strategies C and D maintain the other desirable properties.
After determining the K winners, another basic problem in extending the 1-Pachira lottree to the K-Pachira lottree is how to allocate rewards to these winners. It should follow a similar principle, namely that each node’s final reward should be independent of other nodes’ lottery values. It is a good choice to allocate the total reward equally among the K winners, whereby each node has the same expected reward as under the 1-Pachira lottree mechanism. In the rest of the paper, when referring to the K-Pachira lottree, we use Strategy C together with this reward equipartition strategy for convenience.
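The two USB-preserving strategies and the reward equipartition can be sketched in Python: `random.choices` draws with replacement (Strategy C), while `random.sample` over a discretized ticket pool draws without replacement (Strategy D). The function names and the ticket granularity are assumptions of this sketch.

```python
import random

def k_pachira_winners_c(lottery, k, rng):
    """Strategy C: k independent draws with replacement over the lottery
    values; the same node may win several times."""
    nodes, probs = zip(*lottery.items())
    return rng.choices(nodes, weights=probs, k=k)

def k_pachira_winners_d(lottery, k, rng, tickets_per_unit=10_000):
    """Strategy D: allocate virtual tickets in proportion to lottery values,
    then draw k tickets in one round without replacement (tickets_per_unit
    is a discretization granularity assumed for this sketch)."""
    pool = [v for v, l in lottery.items()
            for _ in range(round(l * tickets_per_unit))]
    return rng.sample(pool, k)

def equal_split_rewards(winners, budget):
    """Reward equipartition: each draw is worth budget / k, so a node's
    reward is proportional to how many times it was drawn."""
    share = budget / len(winners)
    rewards = {}
    for w in winners:
        rewards[w] = rewards.get(w, 0.0) + share
    return rewards
```

Under Strategy C with equipartition, a node with lottery value ℓ(v) is drawn ℓ(v)·K times in expectation, so its expected reward is ℓ(v)·K·(B/K) = ℓ(v)·B, matching the 1-Pachira case.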
III-C SharingPachira Lottree
In essence, the SharingPachira lottree is equivalent to an extreme case of the Pachira lottree with infinitely many lottery drawings. In this case, all nodes proportionally share the budget based on their respective lottery values. In other words, each node will receive a reward:
$r = B \cdot \frac{p}{\sum_{j} p_j}, \qquad (18)$

where $B$ is the total reward budget, $p$ is the node's lottery value, and the sum runs over all nodes.
It is easy to see that the SharingPachira lottree maintains all the desirable properties, as each node's reward is independent of other nodes' lottery values.
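A minimal sketch of this proportional split (node names and lottery values are hypothetical):

```python
def sharing_pachira_rewards(budget, lottery):
    """Split the whole budget among all nodes in proportion to their
    lottery values; no lottery drawing is needed."""
    total = sum(lottery.values())
    return {node: budget * value / total for node, value in lottery.items()}

rewards = sharing_pachira_rewards(100.0, {"a": 0.5, "b": 0.3, "c": 0.2})
assert abs(sum(rewards.values()) - 100.0) < 1e-9   # the budget is exactly spent
```

When lottery values are normalized to sum to one (as they are when used as probabilities), the share reduces to budget times lottery value, so each node's reward is independent of the other nodes' values.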
III-D CPT-based Mechanism Selection
Since each user has an uncertain reward before the end of a crowdsourcing campaign under both the 1Pachira and Pachira lottrees, it is important to understand how a user perceives the payout, which determines whether the user is willing to make contributions and solicitations. As introduced in Sec. II-E, we can leverage CPT to analyze how users perceive the payout under different reward mechanisms. For the 1Pachira lottree, the perceived reward for each user can be computed according to Eqs. (4)–(6), using the total reward as the gain and the lottery value as the probability. For the Pachira lottree, each user has multiple possible outcomes with gain–probability pairs:
$\left(\frac{k}{K} B,\ \binom{K}{k} p^k (1-p)^{K-k}\right), \quad k = 0, 1, \ldots, K, \qquad (19)$

where $B$ is the total reward, $p$ is the user's lottery value, and $K$ is the number of lottery drawings,
and then the perceived reward for each user can be computed according to Eqs. (7)–(9). For the SharingPachira lottree, each user has a certain reward as shown in Eq. (18). We compare a user's perceived reward with various lottery values and two different budgets for the four mechanisms (1Pachira, 5Pachira, 10Pachira, and SharingPachira), as shown in Fig. 5. Three interesting phenomena can be observed:
i) When the budget stays the same, a user with a lower lottery value perceives the largest payout under the 1Pachira lottree among all mechanisms, while a user with a higher lottery value perceives the largest payout under the SharingPachira lottree, whereas the Pachira lottree always stays at the middle level regardless of the lottery value.
ii) When the budget stays the same, a distinct critical lottery value exists, below which one may prefer the 1Pachira lottree to the SharingPachira lottree, and above which one may prefer the SharingPachira lottree to the 1Pachira lottree.
iii) As the budget increases, the critical lottery value will become larger.
In essence, the above observations are consistent with CPT: one tends toward risk seeking for large gains of low probability, and toward risk aversion for smaller gains of high probability. This provides interesting and important theoretical guidance on mechanism selection for satisfying different application requirements, as follows.
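The crossover behind observations i) and ii) can be reproduced with a small sketch. It uses the standard Tversky–Kahneman value and probability-weighting functions with the commonly cited gain parameters α = 0.88 and γ = 0.61 from [47]; the paper's exact Eqs. (4)–(9) are not reproduced here, so this is an assumption-laden illustration rather than the paper's computation:

```python
def value(x, alpha=0.88):
    # CPT value function for gains (Tversky & Kahneman, 1992)
    return x ** alpha

def weight(p, gamma=0.61):
    # inverse-S probability weighting for gains
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def perceived_1pachira(budget, p):
    # all-or-nothing lottery: win the whole budget with probability p
    return weight(p) * value(budget)

def perceived_sharing(budget, p):
    # certain reward: a proportional share of the budget
    return value(budget * p)

budget = 1000.0
# low lottery value: the all-or-nothing lottery is perceived as worth more
assert perceived_1pachira(budget, 0.01) > perceived_sharing(budget, 0.01)
# high lottery value: the certain proportional share is perceived as worth more
assert perceived_sharing(budget, 0.5) > perceived_1pachira(budget, 0.5)
```

The overweighting of small probabilities makes the single big prize attractive to low-value users, while diminishing sensitivity to large gains makes the certain share attractive to high-value users, which is exactly the critical-value behavior described above.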
Guidance to Mechanism Selection: If a crowdsourcer has a large budget constraint, or it only requires a small number of participants, then the SharingPachira lottree mechanism should be recommended, otherwise the 1Pachira mechanism should be recommended.
IV Performance Evaluation by Simulations
To evaluate the performance of different lottree mechanisms under various scenarios, we build a simulator and conduct extensive simulations. Moreover, the impacts of the budget constraint and the number of required participants are investigated. In this section, we present the simulation framework, parameter settings, and simulation results.
IV-A Simulation Framework and Parameter Settings
We build a simulator based on the following four steps:
Step 1): The crowdsourcer pushes the crowdsourcing campaign information to an initial set of users (i.e., sends them solicitations).
Step 2): Each solicited user decides whether to participate in the campaign: he first decides whether to consider a possible participation according to a participating interest factor. If he does, he supposes a specific contribution following a contribution model, and then evaluates the perceived reward according to a payout valuation model. Finally, he decides to participate if his perceived reward outweighs the cost of participation given by a cost model.
Step 3): Each participant decides whether to solicit other users: he first predicts how many of his acquaintances would accept his solicitations based on a solicitation prediction model, and then computes the perceived gain from soliciting according to a payout valuation model. Finally, he decides to send solicitations if his perceived gain outweighs the cost of sending solicitations given by a cost model. Each user's acquaintances are determined by a social network model.
Step 4): Repeat Steps 2) and 3) until the number of participants reaches the crowdsourcer’s requirement or the campaign deadline arrives.
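The four steps can be sketched as a single loop; every model callable below (interest, contribution, perceived reward, costs, acceptance probability) is a hypothetical placeholder for the corresponding model described next, not the authors' code:

```python
import random

def run_campaign(seed_users, neighbors, required, deadline,
                 interest, contribution, perceived_reward,
                 part_cost, solicit_cost, accept_prob):
    """Skeleton of the four-step simulation loop."""
    rng = random.Random(0)
    participants = []
    frontier = list(seed_users)              # Step 1: initial solicitations
    for _round in range(deadline):
        next_frontier = []
        for u in frontier:
            if u in participants or rng.random() > interest:
                continue                     # Step 2: no interest
            c = contribution(u)
            if perceived_reward(u, c, participants) < part_cost(u):
                continue                     # Step 2: reward does not cover cost
            participants.append(u)
            # Step 3: solicit acquaintances if the perceived gain covers the cost
            expected = accept_prob * len(neighbors[u])
            if expected * perceived_reward(u, c, participants) >= solicit_cost(u):
                next_frontier.extend(neighbors[u])
        frontier = next_frontier
        if len(participants) >= required or not frontier:
            break                            # Step 4: done or no new solicitations
    return participants

# toy run: a line graph where everyone is interested and costs are negligible
nbrs = {i: [i + 1] for i in range(5)}
nbrs[5] = []
got = run_campaign([0], nbrs, required=3, deadline=10,
                   interest=1.0,
                   contribution=lambda u: 10.0,
                   perceived_reward=lambda u, c, ps: 5.0,
                   part_cost=lambda u: 1.0,
                   solicit_cost=lambda u: 1.0,
                   accept_prob=1.0)
assert got == [0, 1, 2]
```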
The aforementioned simulation framework is similar to [40], and involves a set of theories and models that have been widely accepted in the literature. We briefly describe these models and some parameter settings as follows.
Social Network Model: An evolving network model [50] is used to model the acquaintanceship of users, which exhibits several recognized properties of a social network, such as short average path length, broad degree distribution, high clustering, and community structure. The three basic parameters of the model are set as specified by Toivonen et al. [50].
Participating Interest Factor: Each solicited user has two behavioral intentions: showing absolutely no interest, or having enough interest to consider whether to participate. We assume each user has a participating interest factor to express the likelihood of these two behavioral intentions.
Contribution Model: As described in Section I, a more general model, the heterogeneous user model, is considered. Specifically, each user's contribution is assumed to follow a uniform distribution.
Payout Valuation Model: As described before, we leverage CPT to compute the perceived reward under different mechanisms. For Step 2), we first compute the lottery value based on the current tree structure, and then derive the perceived reward as described in Section III-D. For Step 3), we compute the perceived reward from soliciting new participants, and take the difference between it and the original reward without sending solicitations as the perceived gain. The key parameters for computing lottery values are set as in [40], and the CPT parameters are set as in [47].
Solicitation Prediction Model: Each user assumes that none of his neighbors has joined the campaign yet, and that each neighbor will join the campaign with a fixed acceptance probability if he sends solicitations. Thus, a user predicts the number of accepted solicitations as his number of neighbors multiplied by this acceptance probability.
Cost Model: Each user has a cost of participation and a cost of sending solicitations, which are assumed to follow two different uniform distributions.
The above models involve many parameters, as listed in Table I. Moreover, in order to evaluate the impacts of the budget constraint and the number of required participants, we vary the number of required participants from 5 to 50 in increments of 1, and consider two different budget values, 1000 and 5000. For each setting, simulations are repeated 100 times and the average results are reported, so as to reduce variance.
Model  Parameter and Value
Social Network Model  three basic parameters set as specified in [50]
Participating Interest Factor  per-user interest factor
Contribution Model  contributions drawn from a uniform distribution
Payout Valuation Model  lottree parameters set as in [40]; CPT parameters set as in [47]
Cost Model  participation and solicitation costs drawn from two uniform distributions
IV-B Simulation Results
Fig. 6 shows the relationship between the number of required solicitations and the number of required participants for three lottree mechanisms, 1Pachira, 10Pachira, and SharingPachira, under different budget constraints. Given the same budget constraint and the same number of required participants, the fewer solicitations a mechanism requires, the easier it is to meet the requirement of the crowdsourcing campaign. We therefore use the number of required solicitations as a key metric for mechanism selection. First, when the budget stays the same, we can observe the following common and interesting phenomena from both Fig. 6(a) and Fig. 6(b):
i) The number of required solicitations increases with the number of required participants. Meanwhile, the number of required solicitations for SharingPachira grows at an ever-increasing rate, whereas 1Pachira and 10Pachira grow at relatively low rates. This means that, as the number of required participants increases, it becomes harder and harder for SharingPachira to meet the requirement of the crowdsourcing campaign, and 1Pachira and 10Pachira become the better choices.
ii) When a small number of participants is required, the numbers of required solicitations for the three lottree mechanisms satisfy SharingPachira < 10Pachira < 1Pachira, meaning that SharingPachira is the best choice; when a large number of participants is required, the opposite relationship 1Pachira < 10Pachira < SharingPachira holds, meaning that 1Pachira is the best choice; 10Pachira is almost never the best choice. Generally, there is a distinct critical value of the required number of participants, below which one may prefer SharingPachira to 1Pachira, and above which one may prefer 1Pachira to SharingPachira. Specifically, this critical value is 10 when the budget is 1000, and 16 when it is 5000.
iii) As the budget increases, the critical value of the required number of participants will become larger.
Summary: In essence, the above results are consistent with CPT and the analysis in Sec. III-D, which also validates our theoretical guidance on mechanism selection in Sec. III-D.
V Looking For Lost Objects: An Application Case and Its Performance Evaluation
In this section, we first investigate an application case: looking for lost objects. Then we design an experimental mobile game, Treasure Hunt, and present a series of experiments and several metrics for evaluating the performance of lottree mechanisms. Finally, we provide the experimental results.
V-A Looking for Lost Objects: An Application Case
As elaborated in Section I, an incentive tree mechanism could be used in many crowdsourcing and mobile crowd sensing applications. Next we mainly consider a very useful application case: looking for lost objects, such as a lost child, pet, smartphone, key, and wallet. Imagine a child attaches a Bluetooth Low Energy (BLE) peripheral, e.g., Chipolo [51], in his clothes or shoes. The low power consumption and miniaturization of BLE peripherals make it perfect for tracking the child continuously. If the child is lost, many smartphone users can be recruited to cooperatively look for him by continuous Bluetooth scanning and even locate him [15]. There is no doubt that incentive is a key to the success of this application.
V-B Treasure Hunt: An Experimental Mobile Game
In order to evaluate the performance of lottree mechanisms, we design an experimental mobile game, Treasure Hunt, which could be used directly for looking for lost objects owing to the same intrinsic mechanism. The game involves three roles:
a crowdsourcer residing in the cloud, who is responsible for publicizing treasure hunt tasks, monitoring users’ participation process, and allocating rewards,
a set of users, who register in our mobile APP to play the game using a Bluetooth-enabled smartphone, and
a treasure, which is in fact a volunteer moving freely with a Bluetooth-enabled smartphone.
In essence, Treasure Hunt is about finding the so-called treasure by discovering its Bluetooth signal when a user gets close to it. A treasure hunt task is characterized by the reward budget, the number of required participants, the treasure ID (i.e., its Bluetooth ID), the task deadline, and the incentive type. Next we introduce the operation procedure of the game, the contribution function design, and the incentive mechanism design, respectively.
Operation Procedure of Treasure Hunt: It consists of six main points as follows.
i) The crowdsourcer publicizes a treasure hunt task, and pushes the related information to all registered users. Note that only the users who run our mobile APP in the background and maintain an Internet connection receive the task information in a timely manner.
ii) Each user who receives the task information decides whether to participate. If so, he turns on his Bluetooth, optionally enables GPS, and periodically reports his participation information (time duration of Bluetooth scanning, GPS points) to the crowdsourcer.
iii) Each participating user decides whether to send solicitations to other users through a social network. The solicitation structure is recorded by the crowdsourcer.
iv) If a user discovers the treasure, he reports the result to the crowdsourcer.
v) Once the number of participants reaches the requirement, no more users can join.
vi) After the deadline arrives, the crowdsourcer builds an incentive tree according to the participants' contributions and solicitation relationships, and then allocates rewards according to the announced incentive mechanism.
Contribution Function Design of Treasure Hunt: Intuitively, we hope that each user scans over Bluetooth for a long duration and travels a long distance so as to find the treasure more easily. Thus, we design a contribution function that comprehensively considers three factors: the time duration of Bluetooth scanning $t$, the traveling distance $d$, and whether the user finds the treasure, indicated by a boolean $f$, namely,

$c = 0.5\,t + 0.5 \times 0.1\,d + 120\,f. \qquad (20)$

Here, $t$ is measured in minutes, $d$ is measured in meters, the number 0.5 is a weight factor, 0.1 is used because traveling 1 m takes about 0.1 min on average in our experiments, and 120 means that an extra contribution worth 120 mins (the duration of a task) is given to the user who finds the treasure.
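Using the constants just described, the contribution function can be written directly; the exact placement of the 0.5 weight is our reading of the text, so treat this as a sketch:

```python
def contribution(scan_minutes, distance_m, found_treasure):
    """Treasure Hunt contribution: scanning time plus travelling distance
    (1 m ~ 0.1 min), both weighted by 0.5, plus a 120-min bonus for the
    finder."""
    bonus = 120.0 if found_treasure else 0.0
    return 0.5 * scan_minutes + 0.5 * 0.1 * distance_m + bonus

# an hour of scanning while walking 600 m, without finding the treasure
assert abs(contribution(60, 600, False) - 60.0) < 1e-9
# the finder gets the extra 120-min-equivalent contribution on top
assert abs(contribution(60, 600, True) - 180.0) < 1e-9
```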
Incentive Mechanism Design of Treasure Hunt: One of the most important objectives of Treasure Hunt is to compare the three lottree mechanisms: 1Pachira, Pachira, and SharingPachira, by realistic experiments. However, it seems hard for users to understand the details of these mechanisms if we describe them straightforwardly. In fact, it is completely unnecessary for users to know about such complicated design. Instead, we only need to tell users a simple rule:
“Each user will earn a value, and will be rewarded based on his value.”
To make it more intuitive for users to understand how to increase their values, we present the following description to them:
“How to get a higher value: the longer you keep your Bluetooth on, the longer the distance you travel (based on your GPS trajectory), and the more friends you recommend the game to, the higher the value you get. Besides, the first participant and the participant who finds the treasure will be given an extra value.”
Moreover, we show users intuitive descriptions of the 1Pachira, Pachira, and SharingPachira lottree mechanisms, respectively:
“Mechanism A: Only one participant can get all of the reward. Of course, the higher your value, the more likely you are to win.”
“Mechanism B: We will hold several lottery drawings, and each winner will get an equal share of the total reward. The higher your value, the more likely you are to win.”
“Mechanism C: Every participant gets a reward. The higher your value, the higher your reward. But the total budget is fixed.”
V-C Experimental Settings and Evaluation Metrics
We conduct Treasure Hunt experiments on a university campus. For the convenience of comparing the three kinds of incentive mechanisms, we need a set of registered users as a basis. Thus, two days before the experiments officially start, we post an advertisement on our university BBS and in some groups on social networks (QQ and WeChat) seeking people to register in our mobile APP. In the advertisement, we explain the game rules, and announce a budget of 500 RMB to recruit users, who share this reward equally with no limit on the number of participants. In the end, 62 users registered in our APP before the experiments officially started. After that, we publicize 12 Treasure Hunt tasks over 9 days. Each task begins at a random time and lasts for 2 hours. In order to investigate the impact of the number of required participants, we set two values of the participant limit, 10 (tasks 1–3) and 50 (tasks 4, 6, 8), while fixing the budget constraint at 100 RMB. Meanwhile, in order to investigate the impact of the budget constraint, we set three budget values, 50 RMB (tasks 5, 7, 9), 100 RMB (tasks 4, 6, 8), and 500 RMB (tasks 10–12), while fixing the participant limit at 50. The detailed settings are shown in Table II.
Task No.  Release Date  Budget  Limit on # of Participants  Incentive Mechanism
1  Jan. 5, 2018  100 RMB  10  1Pachira 
2  Jan. 6, 2018  100 RMB  10  SharingPachira 
3  Jan. 7, 2018  100 RMB  10  5Pachira 
4  Jan. 8, 2018  100 RMB  50  1Pachira 
5  Jan. 8, 2018  50 RMB  50  1Pachira 
6  Jan. 9, 2018  100 RMB  50  SharingPachira 
7  Jan. 9, 2018  50 RMB  50  SharingPachira 
8  Jan. 10, 2018  100 RMB  50  5Pachira 
9  Jan. 10, 2018  50 RMB  50  5Pachira 
10  Jan. 11, 2018  500 RMB  50  1Pachira 
11  Jan. 12, 2018  500 RMB  50  SharingPachira 
12  Jan. 13, 2018  500 RMB  50  5Pachira 
Generally, three performance metrics are of concern: the total number of participants, the total contribution of participants, and the average contribution of participants. However, two practical factors need to be considered. First, it is a common phenomenon that users' participation enthusiasm declines over time, which has been described in the literature [21, 52] and verified through a long-term experiment [53]. This means that it is not fair to directly compare the total number of participants, as our experiments span a long time. In order to reduce the effect of this factor, we consider another quantity, the total number of active users, meaning the number of users who have opened the APP at least once on a given day. Note that an active user may have opened the APP out of interest in the APP itself rather than in the specific task, whereas a participant must be interested in the specific task. Thus, we use a metric called the Relative Participation Ratio (RPR) to represent the actual attractiveness of a task, defined as follows:

$\text{RPR} = \frac{\text{number of participants in the task}}{\text{number of active users on that day}}, \qquad (21)$

where the denominator indicates the actual activeness of users, which is independent of any specific task.
Second, there is strong randomness in whether a user can find the treasure. In order to reduce the effect of this factor on evaluating different incentive mechanisms, we revise the contribution function in Eq. (20) by dropping the treasure-finding term:

$\tilde{c} = 0.5\,t + 0.5 \times 0.1\,d, \qquad (22)$

which is used for computing the total contribution of participants (TCP) and the average contribution of participants (ACP).
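The three metrics can be computed as follows (per-user logs are hypothetical; the revised contribution simply drops the treasure-finding bonus):

```python
def revised_contribution(scan_minutes, distance_m):
    """Revised contribution: the random treasure-finding bonus is removed
    so TCP and ACP reflect only sustained effort."""
    return 0.5 * scan_minutes + 0.5 * 0.1 * distance_m

def rpr(num_participants, num_active_users):
    """Relative Participation Ratio: participation normalized by the day's
    active users to discount declining enthusiasm."""
    return num_participants / num_active_users

logs = [(60, 600), (120, 0)]          # hypothetical (minutes, metres) per user
contribs = [revised_contribution(t, d) for t, d in logs]
tcp = sum(contribs)                    # total contribution of participants
acp = tcp / len(contribs)              # average contribution of participants

assert abs(tcp - 120.0) < 1e-9 and abs(acp - 60.0) < 1e-9
assert abs(rpr(10, 40) - 0.25) < 1e-9
```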
V-D Experimental Results
In our experiments, 20 new users registered in our APP, resulting in 82 registered users in total together with the 62 initial users. However, some users are inactive on any given day. First, we verify the phenomenon that users' participation enthusiasm declines over time. Fig. 7 shows the changes in the number of active users over the 9 days. Generally, there is a significant decline in the number of active users over time. Although the number of active users shows a transient increase on Jan. 8 and Jan. 11, one big reason is the increase of the budget or of the limit on the number of participants. Moreover, the number of active users shows a significant decreasing trend over time under the same settings of the budget and the participant limit (comparing Jan. 5–7, Jan. 8–10, and Jan. 11–13, respectively). This justifies the use of the RPR metric as explained earlier.
Next, we analyze the experimental results on the three metrics introduced earlier: RPR, TCP, and ACP. Moreover, the impacts of the budget constraint and the number of required participants are investigated. Note that when the participant limit is set to 10, the number of participants reaches the limit under all three incentive mechanisms. Thus, it is unnecessary to consider the RPR and ACP for the experiments on Jan. 5–7.
Relative Participation Ratio (RPR): Fig. 8 shows the RPR under different budget constraints when we fix the participant limit at 50. When the budget is 50 RMB, the 1Pachira lottree has the highest RPR; when it is 100 RMB, the three mechanisms' RPRs are very close; when it is 500 RMB, the SharingPachira lottree has the highest RPR.
Total Contribution of Participants (TCP): Fig. 9 plots the TCP under different participant limits when we fix the budget at 100 RMB, from which we observe that the SharingPachira lottree has the highest TCP when a small number of participants is required, while the 1Pachira lottree has the highest TCP when a large number of participants is required. Fig. 10 plots the TCP under different budget constraints when we fix the participant limit at 50, from which we observe that the 1Pachira lottree has the highest TCP under small budget constraints (50 RMB and 100 RMB), while the SharingPachira lottree has the highest TCP under a large budget constraint (500 RMB).
Average Contribution of Participants (ACP): Fig. 11 plots the ACP under different budget constraints when we fix the participant limit at 50. When the budget is 50 RMB, the 1Pachira lottree has a slightly higher ACP than the other two mechanisms; when it is 100 RMB, the 1Pachira lottree has an ACP similar to that of the 5Pachira lottree, both clearly higher than the SharingPachira lottree; when it is 500 RMB, the SharingPachira lottree has the highest ACP, clearly higher than the other two mechanisms.
Summary: In essence, the above results are almost all consistent with CPT and the analysis in Sec. III-D, which also validates our theoretical guidance on mechanism selection in Sec. III-D. Note that some results do not match the theoretical analysis or our intuition well. For example, the budget of 100 RMB results in lower RPR, TCP, and ACP than the budget of 50 RMB when we fix the participant limit at 50; this might be due to the impact of task release times or ordering. For another, the 5Pachira lottree is sometimes the best but sometimes the worst. It exhibits slight instability, the reason for which is difficult, if not impossible, to pin down, as human psychology and behavior are themselves very complex. Nevertheless, this does not affect the clear regularity in our experimental results, which matches our theoretical guidance in Sec. III-D well.
VI Related Work
Many incentive mechanisms have been proposed for crowdsourcing [19, 20, 21, 22, 23, 24, 25, 26, 27]. Generally, these mechanisms fall into two categories: crowdsourcer-centric mechanisms, where the crowdsourcer provides a fixed reward to participants, and user-centric mechanisms, where users have their own reserve prices for crowdsourcing services. For the former, a Stackelberg game is often used, assuming that the costs of participants or their probability distribution are known [19, 20]. For the latter, various types of auctions are often used [21, 22, 23, 24, 25, 26, 27]. Lee and Hoh [21] designed a dynamic auction mechanism for purchasing users' sensing data. Jaimes et al. [22] further considered a budget constraint and users' locations. Yang et al. [19] proposed the MSensing auction mechanism, and proved that it satisfies several properties including computational efficiency, individual rationality, profitability, and truthfulness. Zhang et al. [23] proposed an auction mechanism for incentivizing crowd workers to label a set of binary tasks under a strict budget constraint. Zhang et al. [24] considered three auction models, which involve cooperation and competition among users. Zhao et al. proposed two kinds of online auction mechanisms: budget-feasible mechanisms [25] and frugal mechanisms [26]. Guo et al. [27] proposed a dynamic and quality-enhanced auction mechanism.

On the other hand, incentive tree mechanisms have been investigated in various fields. Emek et al. [35] presented multi-level marketing mechanisms that motivate participants to promote a certain product among their friends through social networks. Drucker and Fleischer [36] proposed a family of multi-level marketing mechanisms that preserve natural properties and are simple to implement. Chen et al. [37] designed efficient sybil-proof incentive mechanisms, called direct referral mechanisms, for retrieving information from networked agents. Zhang et al. [38] proposed a sybil-proof incentive tree mechanism for crowdsourcing scenarios where the contribution model is submodular and time-sensitive. Lv and Moscibroda [32] presented two families of incentive tree mechanisms for crowdsourcing, each achieving a set of desirable properties. Zhang et al. [39] designed an auction-based incentive tree mechanism for mobile crowd sensing which combines the advantages of auctions and incentive trees.
However, all of these studies failed to account for a budget constraint. To the best of our knowledge, only the early work [40] designed a class of incentive tree mechanisms with a budget constraint, but they violate BC and allow only one winner.
Besides, some studies have examined incentive mechanisms through real-world experiments. Reddy et al. [41] examined various micropayment schemes in a pilot study within a university campus sustainability initiative. Musthag et al. [42] used a combination of statistical analysis and models from labor economics to evaluate three micropayment schemes in the context of high-burden user studies. Celis et al. [43] investigated the benefits and potential pitfalls of employing a lottery-based payment mechanism for crowdsourcing via experiments on MTurk. Rula et al. [44] compared micropayment and lottery-based schemes using data from a large, 2-day experiment with 96 participants at a corporate conference. Rokicki et al. [45] compared three classes of reward schemes, linear, competition-based, and lottery-based, through large-scale experimental evaluations. They further investigated how team mechanisms can be leveraged to improve the cost efficiency of crowdsourcing [46]. However, all of these studies lacked a general and solid theoretical basis to account for their experimental results, and none of them considered incentive tree mechanisms.
VII Conclusion
In this paper, we investigated budget-consistent incentive tree mechanisms for crowdsourcing. We proposed three types of generalized lottree mechanisms, 1Pachira, Pachira, and SharingPachira, which allow one winner, multiple winners, and every participant to be a winner, respectively. We proved that our mechanisms satisfy BC, CCI, CSI, VPC, USB, and USA. Theoretical guidance for mechanism selection was provided to satisfy different application requirements. Both extensive simulations and realistic experiments were conducted to confirm our theoretical analysis.
References
 [1] G. Chatzimilioudis, A. Konstantinidis, C. Laoudias, and D. ZeinalipourYazti, “Crowdsourcing with smartphones,” IEEE Internet Computing, vol. 16, no. 5, pp. 36–44, 2012.
 [2] “Amazon,” http://www.amazon.com.
 [3] “Yelp,” https://www.yelp.com.
 [4] “Yahoo! Answers,” https://answers.yahoo.com.
 [5] “Zhihu,” https://www.zhihu.com.
 [6] “Openstreetmap,” https://www.openstreetmap.org.
 [7] L. Von Ahn and L. Dabbish, “Labeling images with a computer game,” in Proceedings of the SIGCHI conference on Human factors in computing systems, 2004, pp. 319–326.
 [8] “Mturk,” https://www.mturk.com.
 [9] “Taskcn,” http://www.taskcn.com.
 [10] “Sensorly,” http://www.sensorly.com.
 [11] M. Stevens and E. D’Hondt, “Crowdsourcing of pollution data using smartphones,” in Proc. ACM UbiComp, 2010, pp. 1–4.
 [12] P. Dutta, P. M. Aoki, N. Kumar, A. Mainwaring, C. Myers, W. Willett, and A. Woodruff, “Common sense: participatory urban sensing using a network of handheld air quality monitors,” in Proc. ACM SenSys, 2009, pp. 349–350.
 [13] P. Mohan, V. N. Padmanabhan, and R. Ramjee, “Nericell: rich monitoring of road and traffic conditions using mobile smartphones,” in Proc. ACM SenSys, 2008, pp. 323–336.

 [14] A. Thiagarajan, L. Ravindranath, K. LaCurts, S. Madden, H. Balakrishnan, S. Toledo, and J. Eriksson, “VTrack: accurate, energy-aware road traffic delay estimation using mobile phones,” in Proc. ACM SenSys, 2009, pp. 85–98.
 [15] K. Liu and X. Li, “Finding Nemo: Finding your lost child in crowds via mobile crowd sensing,” in Proc. IEEE MASS, 2014, pp. 1–9.
 [16] “Gigwalk,” https://www.gigwalk.com.
 [17] “Jana,” https://www.jana.com.
 [18] “Weichaishi,” https://www.weichaishi.com.
 [19] D. Yang, G. Xue, X. Fang, and J. Tang, “Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing,” in Proc. ACM MobiCom, 2012, pp. 173–184.
 [20] L. Duan, T. Kubo, K. Sugiyama, J. Huang, T. Hasegawa, and J. Walrand, “Incentive mechanisms for smartphone collaboration in data acquisition and distributed computing,” in Proc. IEEE INFOCOM, 2012, pp. 1701–1709.
 [21] J. Lee and B. Hoh, “Sell your experiences: a market mechanism based incentive for participatory sensing,” in Proc. IEEE PerCom, 2010, pp. 60–68.
 [22] L. Jaimes, I. VergaraLaurens, and M. Labrador, “A locationbased incentive mechanism for participatory sensing systems with budget constraints,” in Proc. IEEE PerCom, 2012, pp. 103–108.
 [23] Q. Zhang, Y. Wen, X. Tian, X. Gan, and X. Wang, “Incentivize crowd labeling under budget constraint,” in Proc. IEEE INFOCOM, 2015, pp. 2812–2820.
 [24] X. Zhang, G. Xue, R. Yu, D. Yang, and J. Tang, “Truthful incentive mechanisms for crowdsourcing,” in Proc. IEEE INFOCOM, 2015, pp. 2830–2838.
 [25] D. Zhao, X.Y. Li, and H. Ma, “Budgetfeasible online incentive mechanisms for crowdsourcing tasks truthfully,” IEEE/ACM Transactions on Networking, vol. 24, no. 2, pp. 647–661, 2016.
 [26] D. Zhao, H. Ma, and L. Liu, “Frugal online incentive mechanisms for mobile crowd sensing,” IEEE Transactions on Vehicular Technology, vol. 66, no. 4, pp. 3319–3330, 2017.
 [27] B. Guo, H. Chen, Z. Yu, W. Nan, X. Xie, D. Zhang, and X. Zhou, “Taskme: Toward a dynamic and qualityenhanced incentive mechanism for mobile crowd sensing,” International Journal of HumanComputer Studies, vol. 102, pp. 14–26, 2017.
 [28] “Facebook,” http://www.facebook.com.
 [29] “Twitter,” http://www.twitter.com.
 [30] “Wechat,” https://www.wechat.com.
 [31] H. Ma, D. Zhao, and P. Yuan, “Opportunities in mobile crowd sensing,” IEEE Communications Magazine, vol. 52, no. 8, pp. 29–35, 2014.
 [32] Y. Lv and T. Moscibroda, “Fair and resilient incentive tree mechanisms,” Distributed Computing, vol. 29, no. 1, pp. 1–16, 2016.
 [33] “Pyramid scheme,” http://www.fbi.gov/scamssafety/fraud.
 [34] G. Pickard, W. Pan, I. Rahwan, M. Cebrian, R. Crane, A. Madan, and A. Pentland, “Timecritical social mobilization,” Science, vol. 334, no. 6055, pp. 509–512, 2011.
 [35] Y. Emek, R. Karidi, M. Tennenholtz, and A. Zohar, “Mechanisms for multilevel marketing,” in Proc. ACM EC, 2011, pp. 209–218.
 [36] F. A. Drucker and L. K. Fleischer, “Simpler sybilproof mechanisms for multilevel marketing,” in Proc. ACM EC, 2012, pp. 441–458.
 [37] W. Chen, Y. Wang, D. Yu, and L. Zhang, “Sybilproof mechanisms in query incentive networks,” in Proc. ACM EC, 2013, pp. 197–214.
 [38] X. Zhang, G. Xue, D. Yang, and R. Yu, “A sybilproof and timesensitive incentive tree mechanism for crowdsourcing,” in Proc. IEEE GLOBECOM, 2015, pp. 1–6.
 [39] X. Zhang, G. Xue, R. Yu, D. Yang, and J. Tang, “Robust incentive tree design for mobile crowdsensing,” in Proc. IEEE ICDCS, 2017, pp. 458–468.
 [40] J. R. Douceur and T. Moscibroda, “Lottery trees: motivational deployment of networked systems,” in Proc. ACM SIGCOMM, 2007, pp. 121–132.
 [41] S. Reddy, D. Estrin, M. Hansen, and M. Srivastava, “Examining micropayments for participatory sensing data collections,” in Proc. ACM UbiComp, 2010, pp. 33–36.
 [42] M. Musthag, A. Raij, D. Ganesan, S. Kumar, and S. Shiffman, “Exploring microincentive strategies for participant compensation in highburden studies,” in Proc. ACM UbiComp, 2011, pp. 435–444.
 [43] L. E. Celis, S. Roy, and V. Mishra, “Lotterybased payment mechanism for microtasks,” in First AAAI Conference on Human Computation and Crowdsourcing, 2013, pp. 12–13.
 [44] J. P. Rula, V. Navda, F. E. Bustamante, R. Bhagwan, and S. Guha, “No onesize fits all: Towards a principled approach for incentives in mobile crowdsourcing,” in Proc. ACM HotMobile, 2014, pp. 1–5.
 [45] M. Rokicki, S. Chelaru, S. Zerr, and S. Siersdorfer, “Competitive game designs for improving the cost effectiveness of crowdsourcing,” in Proc. CIKM, 2014, pp. 1469–1478.
 [46] M. Rokicki, S. Zerr, and S. Siersdorfer, “Groupsourcing: Team competition designs for crowdsourcing,” in Proc. WWW, 2015, pp. 906–915.
 [47] A. Tversky and D. Kahneman, “Advances in prospect theory: Cumulative representation of uncertainty,” Journal of Risk and Uncertainty, vol. 5, no. 4, pp. 297–323, 1992.
 [48] P. Rogers, “The cognitive psychology of lottery gambling: A theoretical review,” Journal of Gambling Studies, vol. 14, no. 2, pp. 111–134, 1998.
 [49] D. Kahneman and A. Tversky, “Prospect theory: An analysis of decision under risk,” Econometrica, vol. 47, pp. 263–291, 1979.
 [50] R. Toivonen, J.P. Onnela, J. Saramäki, J. Hyvönen, and K. Kaski, “A model for social networks,” Physica A: Statistical Mechanics and its Applications, vol. 371, no. 2, pp. 851–860, 2006.
 [51] “Chipolo,” https://www.chipolo.net/.
 [52] L. Gao, F. Hou, and J. Huang, “Providing longterm participation incentive in participatory sensing,” in Proc. IEEE INFOCOM, 2015, pp. 2803–2811.
 [53] X. Ji, D. Zhao, H. Yang, and L. Liu, “Exploring diversified incentive strategies for longterm participatory sensing data collections,” in Proc. IEEE BIGCOM, 2017, pp. 15–22.