The Internet of Things (IoT) is one of the fastest-growing research and industry areas, involving a vast number and variety of computing devices that generate and exchange massive amounts of data. IoT devices grow annually in both quantity and quality (capability). Quantitatively, the number of IoT devices installed or deployed worldwide is expected to reach 75.44 billion by 2025, according to an IHS Markit survey conducted in 2016. Qualitatively, smartphones, one of the most representative types of IoT devices, are now equipped with powerful CPUs, GPUs, RAM, and storage, e.g., an octa-core processor (4x2.8 GHz Kryo 385 Gold & 4x1.7 GHz Kryo 385 Silver) and 4 GB of RAM in the Samsung Galaxy S9, comparable to those of many laptops and desktops from several years ago, as well as various sensors such as accelerometers, gyroscopes, iris scanners, and fingerprint scanners. Utilizing such powerful and widely distributed IoT devices, we can provide a service in which people request IoT devices to collect massive amounts of sensor data or to solve computationally complex problems inexpensively using their idle computing power, in the manner of crowdsourcing, where tasks traditionally completed by appointed agents are outsourced to a large, undefined crowd. Based on this concept, many applications have been proposed: LiveCompare for grocery price comparison, GreenGPS to allow drivers to find the most fuel-efficient route, LiFS for indoor localization, and so on [5, 6, 7].
However, such IoT-based crowdsourcing services are viable only when IoT device users actively participate in the crowdsourcing system, not only to use the services but also to provide their resources. Thus, we require an incentive mechanism that properly rewards resource providers for their resources, which will induce them to participate in the crowdsourcing system. In the literature, various incentive mechanisms for crowdsourcing have been proposed to encourage user collaboration [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. However, despite their well-defined system models, there exist gaps between the entities and markets modeled in the existing works and those in reality. In a crowdsourcing system, there are three kinds of entities: requesters, workers, and platforms. A requester asks a platform to assign his/her task to a worker, while a worker aims to complete the task to receive a reward. In an IoT-based crowdsourcing system, IoT device users can be both requesters and workers. A platform acts as an auctioneer to mediate between requesters and workers.
In terms of requesters, the existing works adopt a dichotomous task valuation model: a task's valuation immediately collapses to zero, like a step function, once its deadline passes. However, such a dichotomous model is not general enough to cover the task depreciation case, where the value of a task is fully preserved until its deadline and then depreciates in proportion to the time elapsed past the deadline. Depending on how fast a task depreciates over time, it may be beneficial to accept some late task results rather than requesting the tasks again. Accordingly, such task depreciation can also affect the payment policy of incentive mechanisms. In the existing works, which consider only a fixed, binary task valuation model, workers are rewarded based on a fixed payment policy. However, to encompass general cases where tasks depreciate after their deadline and workers can still submit their task results, the reward for workers should be decided based on the task valuation achieved at the moment a worker submits its assigned task result.
Stemming from the dichotomous task valuation model, workers in the literature are modeled to show only binary behavior in terms of punctuality: they either submit their assigned task results in time or do not submit at all. However, considering the aforementioned general cases where tasks depreciate over time and late task results are accepted, workers' punctuality should be considered in more diverse aspects. In practice, workers can have heterogeneous levels of punctuality. Some workers may be always punctual, while others may be usually punctual but sometimes late. Even among late workers, some may submit their task results slightly after the deadline, while others submit far after it. Thus, such heterogeneous punctuality should be considered to better capture the general behavior of users in the crowdsourcing market.
Moreover, another limitation of the crowdsourcing market model in the existing works is that a single platform explicitly or implicitly monopolizes the crowdsourcing market. In the literature, mechanisms were evaluated separately under the same conditions, such as the number of participants. In addition, the crowdsourcing markets in the literature are static; they do not consider movements of participants over time. In reality, however, several platforms will compete with each other in a crowdsourcing market to attract more participants. Accordingly, each participant will decide which crowdsourcing platform to join depending on each platform's payment policy, which results in dynamic movements of participants in the market over time.
To bridge those gaps in the literature, this paper makes the following main contributions:
We design a heterogeneous time-varying task valuation model that encompasses task depreciation.
We design a behavior model of workers that captures their stochastic punctuality in completing their assigned tasks.
We design an incentive mechanism that aims to maximize the expected social welfare in the long term by attracting and retaining more participants.
We model the dynamic competition over time between crowdsourcing service platforms which involves dynamic movements of participants between the platforms.
The rest of this paper is organized as follows. In Section 2, we provide a literature review of the existing mechanisms for crowdsourcing. In Section 3, we present our system models on requesters, workers, and a platform. Based on the models, we formulate the expected social welfare maximizing problem in Section 4. In Section 5, we propose our Expected Social Welfare Maximizing (ESWM) mechanism that selects appropriate requester-worker pairs considering heterogeneity in task depreciation speed and workers’ punctuality. In Section 6, we evaluate the performance of our ESWM mechanism. In Section 7, we conclude our paper.
2 Related Work
In this section, we review the state-of-the-art research works on incentive mechanisms for crowdsourcing. Yang et al.  presented two general models of incentive mechanisms to motivate mobile users’ participation: platform-centric model and user-centric model. D. Peng et al.  proposed a quality-based incentive mechanism for crowdsensing by rewarding participants proportionally to their contribution. Y. Wen et al.  presented a quality-driven auction-based incentive mechanism for a Wi-Fi fingerprint-based indoor localization system. In this direction, C. Liu et al.  also proposed a Quality of Information (QoI)-aware incentive mechanism to maximize the quality of information.
In the long-term view, Lee and Hoh  proposed a mechanism, called RADP-VPC, that provides long-term incentives to participants to maintain participants and promote dropped ones to participate again. Similarly, L. Gao et al.  proposed a mechanism to provide long-term incentives to participants to achieve the maximum total sensing value and the minimum total sensing cost.
Focusing on dynamic crowdsensing where participants arrive in an online manner, Zhao et al.  presented two online incentive mechanisms using a multiple-stage sampling-accepting process. Similarly, Y. Wei et al.  proposed an online incentive mechanism to maximize the number of matched pairs of participants and crowdsourcing service users when participants and service users are dynamically changing.
J. Sun et al. proposed a behavior-based incentive mechanism for crowdsensing with a budget constraint, aiming to achieve both extensive user participation and high-quality sensing data submission based on users' behavioral abilities. S. Ji et al. presented an incentive mechanism for mobile phones with uncertain sensing time; to address the sensing time uncertainty, they modeled the problem as a perturbed Stackelberg game in which mobile phone users' sensing times may differ from their original plans. T. Luo et al. proposed an incentive mechanism for heterogeneous crowdsourcing using all-pay contests, where workers have not only different type information (abilities and costs) but also different beliefs (probabilistic knowledge) about their respective type information; the belief is modeled as a probability distribution.
X. Zhang et al.  proposed truthful incentive mechanisms for the case where participants’ cooperation may be required to finish a job. Using Tullock contest, T. Luo et al.  designed an incentive mechanism to maximize the crowdsourcing service user’s profit. L. Duan et al.  proposed an incentive mechanism for smartphone collaboration in distributed computing using contract theory under two different scenarios having complete and incomplete information of participants.
However, the heterogeneity of task depreciation over time has not been addressed in the literature. The task depreciation models in the existing works are dichotomous: the value of a task immediately drops to zero after the given deadline, so a task valuation can only be either full or null. Such a dichotomous model overlooks the cases, observed both in practice and in the literature, where a task's valuation remains valid even after the deadline, though depreciating in proportion to the time elapsed past the deadline.
Moreover, the heterogeneity of workers' punctuality levels has not been addressed. In real crowdsourcing systems, each worker's priority on its assigned task can vary, e.g., some prioritize the assigned tasks while others prioritize their own tasks. In the existing works, workers are assumed to show dichotomous behavior, and many works implicitly or explicitly assume that all selected workers meet their deadlines. However, coupled with the depreciation of task value after the deadline, workers' heterogeneous punctuality levels can make a difference in the realized task valuation at the task result submission time. For instance, a worker with a relatively higher punctuality level is expected to achieve a higher task valuation than one with a lower punctuality level. Moreover, the partial task valuation achieved after the deadline can also vary with the punctuality level.
3 System Model
3.1 System Overview
In this section, we propose a system model considering task depreciation over time and workers' stochastic punctuality. A crowdsourcing system consists of a platform (an auctioneer), a set of requesters (buyers), and a set of workers (sellers). The crowdsourcing system is then extended to two platforms that compete with each other. As mentioned earlier, IoT devices or users can be both requesters and workers in an IoT-based crowdsourcing system. For simplicity, we assume that a requester can submit only one task to the platform and a worker can take only one task. As in real-world systems, the platform is assumed to have a limited capacity for handling task requests. The crowdsourcing process in our system takes the form of a double auction in which requesters (workers) compete with each other to be selected as winning requesters (workers), as detailed below:
A set of requesters submit to the platform their own type information, which contains the maximum task valuation, the deadline, etc. Similarly, a set of workers submit to the platform their own type information, which contains their cost value. The platform analyzes each worker's behavior and adds the corresponding level of punctuality to the worker's type information.
The platform selects a subset of requesters as the winning requesters and a subset of workers as the winning workers, and calculates a temporary fee for each winning requester and a temporary payment for each winning worker, assuming that all winning workers will be perfectly punctual. Note that the temporary fee and payment will be updated based on the task valuation at the time of actual task result submission.
The platform then matches each winning requester to a winning worker. The matching algorithm is detailed in Section 5.2.
When each winning worker completes and submits the requested task, the platform decides the definitive fee and payment based on the task valuation achieved at the submission time. These are, respectively, the fee the requester will be charged and the payment the worker will receive.
In crowdsourcing, requesters are the buyers in a double auction who compete to be selected as winners so that they can outsource their tasks to workers. The outsourced tasks of the selected requesters will be completed by the winning workers. In the initial stage of the auction, each requester submits its own type information to the platform, including the task to be outsourced, the deadline at which the valuation of the outsourced task starts to depreciate, the expiry time at which the valuation of the task becomes null, and the maximum valuation of the task. As each requester is rationally selfish, he/she will use the crowdsourcing service only if the value of the task is higher than the corresponding fee. To this extent, requesters are modeled in the same way as in the existing works, where tasks are heterogeneous in various aspects such as task size, bid, etc. Such heterogeneity has been well formulated into mathematical problems, and various solutions have been proposed.
In addition, we consider the heterogeneity of task depreciation over time, which, as mentioned in Section 2, has not been addressed in the literature. To reflect the diversity of task depreciation in crowdsourcing, we introduce a parameter denoting the speed of task depreciation after the deadline. In practice, the speed or level of task depreciation can vary: some tasks depreciate to null right after their deadline, while others do so gradually. Therefore, to reflect such heterogeneity, we define a new function of elapsed time that reflects the speed of task depreciation as
We use a quadratic function of elapsed time, a form of accelerated depreciation in which tasks are more profitable or have higher utility during their early time period. Until the deadline, the task valuation remains constant at its maximum. After the deadline, however, the valuation starts to depreciate depending on the task's depreciation speed. Tasks with a small depreciation speed depreciate slowly, so partial task valuation may still be achieved even when they are completed after the deadline. In contrast, tasks with a large depreciation speed depreciate too fast to yield any valuation even if completed shortly after the deadline. Fig. 1 shows how the valuation changes with different depreciation speeds. Based on the valuation and the fee, we define the utility of a requester as follows
where the completion time denotes the moment at which the task result is submitted.
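The time-varying valuation described above can be sketched as follows. This is a minimal illustration, not the paper's exact formula: the quadratic depreciation form `v_max * (1 - speed * elapsed**2)` and the parameter names are assumptions consistent with the description (constant until the deadline, accelerated depreciation afterwards, zero at the expiry time).

```python
def task_valuation(t, v_max, deadline, expiry, speed):
    """Hypothetical task valuation over time: constant at v_max until the
    deadline, then quadratic (accelerated) depreciation at rate `speed`,
    and zero at or beyond the expiry time."""
    if t <= deadline:
        return v_max
    if t >= expiry:
        return 0.0
    elapsed = t - deadline
    return max(0.0, v_max * (1.0 - speed * elapsed ** 2))
```

A small `speed` preserves partial valuation well past the deadline, while a large `speed` drives the valuation to zero almost immediately, matching the two extremes discussed above.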
In crowdsourcing, workers are the sellers in a double auction who compete to be selected as winners. Each winning worker is required to complete the requested task and incurs a task execution cost in the process. To compensate for this cost, each worker submits to the platform its own type information, which represents the minimum ask value the worker wants to receive as the reward for completing a requested task. Because each worker is rationally selfish, it will decide to work on the requested task only if the payment covers its ask. To this extent, workers are modeled in the same way as in the existing works, where the costs of workers are heterogeneous.
On top of the heterogeneity in costs, we consider workers' heterogeneous behavior in meeting the deadline, which has not been addressed in the existing works. To reflect such heterogeneous punctuality levels, we model each worker's stochastic submission behavior for its assigned task as a truncated normal distribution, expressed in the usual form involving the probability density function and the cumulative distribution function of the standard normal distribution. The mean of the truncated normal distribution can be derived from users' log data; the existing works [23, 24, 20] demonstrate that user behaviors can be analyzed through log analysis. In this work, we assume for simplicity a linear relationship between the mean submission time and the deadline, which more clearly differentiates workers' behaviors. For a newly participating worker, the platform initializes its punctuality level to 1.
Based on this behavior model, workers stochastically submit their requested task results to the platform; the probability of submission is highest around the mean submission time. To numerically represent such stochastic behavior, we use a punctuality coefficient, which the platform appends to each worker's type information. A user's behaviors with different punctuality coefficients for the same task are shown in Fig. 2. Based on the payment and the incurred cost, we define the utility of a worker as follows
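The truncated-normal punctuality model can be sketched with the standard library alone. This is an illustrative sketch under stated assumptions: the truncation bounds (`lower`, `upper`) and the choice of `mu` and `sigma` are placeholders for the paper's log-derived parameters, and rejection sampling stands in for whatever sampler the authors used.

```python
import math
import random

def _phi(x):
    """CDF of the standard normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def on_time_probability(mu, sigma, deadline, lower=0.0, upper=float("inf")):
    """P(submission time <= deadline) for a normal(mu, sigma) distribution
    truncated to [lower, upper]. mu would be derived from a worker's log
    data; sigma and the bounds are assumptions for illustration."""
    num = _phi((min(deadline, upper) - mu) / sigma) - _phi((lower - mu) / sigma)
    denom = _phi((upper - mu) / sigma) - _phi((lower - mu) / sigma)
    return max(0.0, num) / denom

def sample_submission_time(mu, sigma, lower=0.0, upper=float("inf")):
    """Rejection-sample a submission time from the truncated normal."""
    while True:
        t = random.gauss(mu, sigma)
        if lower <= t <= upper:
            return t
```

A worker whose mean submission time sits at the deadline is on time roughly half the time; shifting the mean earlier (a higher punctuality level) raises that probability, which is the behavior the punctuality coefficient captures.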
In crowdsourcing systems, a platform acts as an auctioneer that selects the winners among both requesters (buyers) and workers (sellers) and matches each winning requester to a winning worker. Reflecting the fact that real servers have finite capacity, we assume the platform can handle only a limited number of outsourcing requests.
For a platform, the sum of the fees collected from the winning requesters and the sum of the payments made to the winning workers are its revenue and expenditure, respectively. We define the utility of a platform as follows
3.5 Expected Social Welfare
In the existing works, to evaluate the performance of crowdsourcing services, the system-wise social welfare is calculated as follows
However, when tasks depreciate after their deadline at various speeds and workers' behaviors are stochastic, the expected social welfare can better evaluate the performance of crowdsourcing services than the simple calculation in (7). Thus, to reflect workers' stochastic behaviors and potential task depreciation in the performance evaluation, we define the system-wise expected social welfare (ESW) as
where the expected valuation of each task depends on the worker it is assigned to. Note that in this work, we assume the cost is constant regardless of when the worker completes its assigned task. We define the expected valuation as
where the valuation remains constant until the deadline. The expected valuation consists of two parts: 1) a pre-deadline part and 2) a post-deadline part, which represent the expected task valuation up to the deadline and after the deadline, respectively. Note that (7) is a special case of (8) in which the integral of the submission-time density from 0 to the deadline equals 1, because in this case the expected valuation is calculated as
Note that when the assumption of perfect punctuality does not hold, the expected valuation in (9) can be less than the maximum valuation, due to task results submitted after the deadline.
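The pre-/post-deadline decomposition above amounts to integrating the time-varying valuation against the worker's submission-time density. The sketch below approximates that integral with a midpoint rule; `valuation` and `density` are caller-supplied stand-ins for the paper's depreciation function and truncated-normal density, so the specific functions in the test are assumptions for illustration.

```python
def expected_valuation(valuation, density, t_max, n=10_000):
    """E[v(T)] ~= integral of valuation(t) * density(t) over [0, t_max],
    approximated by the midpoint rule with n sub-intervals."""
    dt = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt  # midpoint of the i-th sub-interval
        total += valuation(t) * density(t)
    return total * dt
```

With a perfectly punctual worker (all density mass before the deadline) this reduces to the full valuation, recovering (7) as the special case noted above; any mass after the deadline pulls the expectation below the maximum.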
3.6 Desirable Economic Properties
In this work, we aim to design an incentive mechanism for crowdsourcing that satisfies the following four desirable economic properties: 1) individual rationality, 2) budget-balance, 3) computational efficiency, and 4) truthfulness. Each is described below.
3.6.1 Individual Rationality
An incentive mechanism is individually rational if both requesters (buyers) and workers (sellers) have non-negative utility when their true valuation and cost are reported.
3.6.2 Budget-Balance
An incentive mechanism is budget-balanced if the platform has non-negative utility at the end of the auction.
3.6.3 Computational Efficiency
An incentive mechanism is computationally efficient if it runs in polynomial time.
3.6.4 Truthfulness
An incentive mechanism is truthful if neither a requester nor a worker can increase its utility by submitting false task valuation or cost information. In other words, submitting the true valuation or cost information is a dominant strategy for all participants.
4 Expected Social Welfare Maximizing Problem
In our system, the objective of the platform is to find the optimal requester-worker matches that maximize the expected social welfare. As each requester is matched with only one worker, all possible combinations of pairs can be represented by the matrix defined below
where each element is 1 only if the corresponding requester and worker are matched, and 0 otherwise. Based on this matrix and (8), we can formulate the expected social welfare maximization problem to find the optimal requester-worker matches as follows
The objective function (12) defines a combinatorial optimization problem that selects the optimal requester-worker pairs maximizing the expected social welfare defined in (8), subject to constraints (12.b), (12.c), and (12.d). As mentioned in the definition of the matrix, each element in (12.a) is a binary variable indicating whether a requester and a worker are paired. Constraint (12.b) states that the platform has a limited capacity to handle task requests; in an ideal case where the platform can manage all incoming task requests, the capacity in (12.b) can be set to the total number of requesters. Constraints (12.c) and (12.d) state that each requester can be matched to only one worker and vice versa.
In an ideal system where all task requests can be handled, the optimal requester-worker matches can be obtained via the Hungarian algorithm. The Hungarian algorithm, also known as the Munkres assignment algorithm, is guaranteed to solve the assignment problem in polynomial time. However, despite this guarantee, its running time increases significantly as the problem size grows. To test the feasibility of the Hungarian algorithm in practice, we implemented it on Ubuntu 18.04.1 LTS equipped with an Intel® Xeon® CPU E5-2630 @ 2.30 GHz (24 cores) and 36 GB of RAM. In the test, we varied the number of requesters from 100 to 500 in increments of 100, setting the platform capacity to 100 and the number of workers to twice the number of requesters. Note that we first find the optimal pairs including all requesters, and then select the top pairs. As shown in Fig. (a), there is a significant difference in the time required to find the optimal pairs. For instance, the Hungarian algorithm completes in approximately 190 seconds, while a greedy algorithm finishes in 0.25 seconds. Moreover, this gap widens as the number of requesters increases: the Hungarian algorithm requires almost half a day (38,711 seconds, about 11 hours) to find the optimal pairs for 500 requesters, while the greedy-approach-based algorithm completes in approximately one second (1.05 seconds). Consequently, despite its optimal social welfare as shown in Fig. (b), it is not feasible to deploy the Hungarian algorithm to find the optimal requester-worker pairs in growing IoT networks where instantaneous interactions between numerous IoT devices are demanded.
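The optimal-versus-greedy trade-off can be reproduced in miniature. This sketch is not the paper's implementation: a brute-force enumeration stands in for the Hungarian algorithm (tractable only for tiny instances, which itself illustrates why exact assignment does not scale), and `score[i][j]` is a hypothetical expected-welfare value for pairing requester `i` with worker `j`.

```python
import itertools
import random

def optimal_pairs(score):
    """Exhaustively find the assignment maximizing total score
    (a stand-in for the Hungarian algorithm; O(n!) so tiny n only)."""
    n = len(score)
    best, best_perm = float("-inf"), None
    for perm in itertools.permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

def greedy_pairs(score):
    """Greedy heuristic: repeatedly take the highest-score pair whose
    requester and worker are both still unmatched."""
    n = len(score)
    cells = sorted(((score[i][j], i, j) for i in range(n) for j in range(n)),
                   reverse=True)
    used_r, used_w, total = set(), set(), 0.0
    for s, i, j in cells:
        if i not in used_r and j not in used_w:
            used_r.add(i)
            used_w.add(j)
            total += s
    return total
```

The greedy welfare can never exceed the exhaustive optimum, but it runs in O(n² log n) instead of exponential (or cubic, for the Hungarian algorithm) time, which mirrors the feasibility argument above.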
In addition, the native Hungarian algorithm does not guarantee the aforementioned desirable economic properties (individual rationality, budget-balance, and truthfulness), which are essential to sustain the crowdsourcing service, despite achieving higher expected social welfare. Thus, to address this limitation, we propose an Expected Social Welfare Maximizing (ESWM) mechanism based on a greedy algorithm that heuristically obtains a locally optimal solution. Considering heterogeneity in task depreciation speed and workers' punctuality, the ESWM mechanism selects appropriate requester-worker pairs in polynomial time (within a couple of seconds). Note that when the platform handles a large number of tasks in real-world IoT systems, real-time response is essential for practical deployment. Unlike the existing works, our ESWM mechanism aims to achieve higher social welfare and platform utility in the long term by attracting and retaining more participants, rather than simply maximizing the platform utility in a given round of auction. In addition, the ESWM mechanism achieves individual rationality, budget-balance, computational efficiency, and truthfulness.
5 ESWM Mechanism
In this section, we propose the expected social welfare maximizing (ESWM) mechanism, which largely consists of three main steps: 1) a winner selection step, 2) a matching step, and 3) a pricing step. Fig. 4 shows the overall workflow of the ESWM mechanism.
5.1 Winner Selection Step
In the initial stage of each double auction, a set of requesters and a set of workers submit their type information to the platform. Ultimately, the platform aims to select the same number of winners from both sets. As the platform is assumed to have a limited capacity to handle task requests, the maximum number of winning requesters and winning workers is set to the platform capacity.
5.1.1 Winning Requester Selection Algorithm (WRSA)
The winner selection criterion in the WRSA is straightforward: select requesters whose task valuation is high and depreciates slowly after the deadline. As detailed in Algorithm 1, the platform iteratively selects the requester with the maximum ratio among the remaining requesters as a winning requester (lines 3-4). Note that the ratio includes a tunable parameter that adjusts the weight between the unit task valuation and the depreciation speed; by controlling it, we can decide the priority order between the two in winning requester selection. The effect of this parameter on the platform's performance is discussed in a later section. The selection process repeats until the number of winning requesters reaches the platform capacity or every requester is selected. Among the winning requesters, the platform chooses the requester with the minimum ratio as the threshold requester and excludes it from the winning set (lines 9-10).
5.1.2 Winning Worker Selection Algorithm (WWSA)
The criterion and selection process of the WWSA are similar to those of the WRSA. The WWSA selects workers who have low ask values and a high probability of meeting the deadline. In each iteration, the platform selects the worker with the minimum ratio among the remaining workers as a winning worker (lines 3-4). As in the WRSA, a tunable parameter adjusts where the platform puts more weight between the ask value and the punctuality level. By using the ratio, the platform tries to select workers with high punctuality as well as low cost. The platform repeats this selection process until the size of the winning worker set reaches the platform capacity or every worker is selected. Then, the platform chooses the worker with the maximum ratio among the winning workers as the threshold worker, who is excluded from the winning set (lines 9-10).
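The two greedy selections can be sketched together. The exact ratio formulas are not reproduced in this text, so the scoring expressions below (`value / (1 + alpha*lam)` for requesters, `ask / (beta*punctuality)` for workers) and the dictionary field names are assumptions chosen to match the stated criteria: high valuation and slow depreciation for requesters, low asks and high punctuality for workers, with the worst selected participant on each side held out as the threshold participant.

```python
def select_winners(requesters, workers, capacity, alpha=1.0, beta=1.0):
    """Sketch of the WRSA/WWSA greedy winner selection. `requesters` are
    dicts with 'value' (max valuation) and 'lam' (depreciation speed);
    `workers` have 'ask' and 'punctuality'. alpha/beta play the role of
    the paper's tunable weighting parameters."""
    # WRSA: rank by valuation discounted by depreciation speed (descending).
    r_ranked = sorted(requesters,
                      key=lambda r: r["value"] / (1.0 + alpha * r["lam"]),
                      reverse=True)
    # WWSA: rank by ask relative to punctuality (ascending).
    w_ranked = sorted(workers, key=lambda w: w["ask"] / (beta * w["punctuality"]))
    k = min(capacity, len(r_ranked), len(w_ranked))
    sel_r, sel_w = r_ranked[:k], w_ranked[:k]
    # The worst selected participant on each side becomes the threshold
    # participant and is excluded from the winners (used later for pricing).
    threshold_r, threshold_w = sel_r[-1], sel_w[-1]
    return sel_r[:-1], sel_w[:-1], threshold_r, threshold_w
```

Holding the threshold participants out of the winner sets is what later makes the temporary fee and payment critical values, supporting the truthfulness argument in the trimming algorithm.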
5.2 Matching Step
In the matching step, the platform iteratively matches winning requesters to winning workers such that the unmatched requester with the maximum ratio is assigned to the unmatched worker with the minimum ratio, as detailed in Algorithm 4. With this matching criterion, the platform pairs a requester whose task retains high valuation even after its deadline with a worker who is cost-effective and punctual.
Before the matching, the platform first inspects whether the winning requesters and workers can be matched one-to-one and replaces, if necessary, a requester or worker by running the trimming algorithm (TA) (line 4), as detailed in Algorithm 3.
In the TA, the platform trims the winning sets returned by the WRSA and the WWSA so that both have the same size, concurrently replacing a requester or worker where necessary (line 4 or 11). After the trimming process, the platform calculates a temporary fee for each winning requester and a temporary payment for each winning worker. For this calculation, the platform uses the ratio values of the threshold requester and the threshold worker, which make the fee and payment critical values that guarantee the truthfulness of requesters and workers, respectively (lines 14-19). We call these the temporary fee and payment because both are initially calculated under the assumption that every winning worker will meet the deadline and every task will accordingly achieve its full valuation. Later, when a worker completes its assigned task, both values are updated in the pricing algorithm.
After the TA, the platform checks whether budget-balance holds (line 5). If it does, the platform starts the matching process (lines 10-12); otherwise, the platform revokes the auction (lines 6-8).
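The matching criterion above can be sketched as a pair of sorts followed by a zip. This is an illustrative sketch, not Algorithm 4 itself: since the exact ratios are not reproduced here, depreciation speed `lam` stands in for the requester-side ratio and `ask / punctuality` for the worker-side ratio, with field names assumed as before.

```python
def match_pairs(winners_r, winners_w):
    """Matching-step sketch: pair the requester whose task retains the most
    value after the deadline (smallest depreciation speed 'lam') with the
    most cost-effective, punctual worker (smallest ask/punctuality), and
    so on down both rankings."""
    rs = sorted(winners_r, key=lambda r: r["lam"])  # slowest depreciation first
    ws = sorted(winners_w, key=lambda w: w["ask"] / w["punctuality"])
    return list(zip(rs, ws))
```

Sorting both sides and pairing rank-for-rank realizes the "best requester to best worker" rule: a task that tolerates lateness least ends up with the worker most likely to be punctual and cheap.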
5.3 Pricing Step
Unlike the existing works, where the fee for a requester and the payment to a worker are determined before the task submission and never change, our mechanism determines the final fee and payment, called the effective fee and payment, depending on the task valuation at each worker's task submission time. As detailed in Algorithm 5, when a worker matched to a requester submits its task result, the platform decides the effective fee for the requester and the effective payment to the worker in proportion to the ratio of the achieved valuation to the original (full) valuation (lines 4-10). The platform can run the pricing process for each match in parallel. Note that the temporary fee and payment were calculated in the MA.
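The proportional update is simple enough to state directly. The sketch below assumes only what the text says: the temporary fee and payment are scaled by the fraction of the task valuation actually achieved at submission time, with the ratio clamped to [0, 1].

```python
def effective_prices(temp_fee, temp_payment, achieved_value, full_value):
    """Pricing-step sketch: scale the temporary fee and payment by the
    fraction of the full task valuation achieved at submission time."""
    if full_value <= 0:
        return 0.0, 0.0
    ratio = max(0.0, min(1.0, achieved_value / full_value))
    return temp_fee * ratio, temp_payment * ratio
```

A worker who submits before the deadline achieves the full valuation (ratio 1) and receives the full temporary payment; a late submission against a fast-depreciating task shrinks both the fee charged and the payment received in lockstep, which is what keeps the platform's budget balanced after the update.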
The entire process of the ESWM mechanism is shown in Algorithm 6. The platform first runs the WRSA and the WWSA to decide the winners of the double auction, then runs the MA to match winning requesters to winning workers, and lastly decides both the effective fee and payment in the PA.
6 Performance Evaluation
In this section, we evaluate the performance of the ESWM mechanism in both short-term and long-term scenarios. As crowdsourcing service platforms may continuously compete with each other in real IoT-based crowdsourcing systems, a platform's current performance can also affect its performance in future competitions. Thus, we first compare the performance of the proposed ESWM mechanism with one of the existing works in a short-term scenario with one round of auction. Then, we extend the evaluation to a long-term scenario where the previous performance metrics (average requester utility or worker utility) affect the current recruitment of requesters and workers. We let the benchmark and the ESWM mechanism compete with each other and observe how effectively each attracts participants.
6.1 Performance Metrics
As for performance metrics, we consider the social welfare, the platform utility, the average requester utility, and the average worker utility. We compare the performance metrics of our mechanism to those of the benchmark, whose winner selection process is also based on a greedy algorithm but considers only the ratio of the bid and ask values. Lastly, we prove that our mechanism achieves the four desirable economic properties. Before presenting the simulation results, we define each performance metric below.
6.1.1 Social Welfare
The social welfare is divided into two categories: 1) the Naïve social welfare (NSW) as defined in (7), and 2) the expected social welfare (ESW) as defined in (8). While the expected social welfare takes potential task depreciation into account, the Naïve social welfare merely assumes the ideal case where all the tasks are completed in time.
6.1.2 Platform Utility
The platform utility is as defined in (6).
6.1.3 Average Requester & Worker Utility
The average requester utility is defined as follows
Similarly, the average worker utility is defined as follows
6.2 Simulation Setting
In our simulations, we uniformly distribute the requester parameters over (0, 100], [1, 10], (0, 100], and [·, 1.5], respectively, and we uniformly distribute the depreciation speed over (0, 100] to include the case where a task becomes valueless right after its deadline. For workers, we uniformly distribute the ask values and punctuality parameters over (0, 10] and (0, 1.5], respectively. All simulation results for the performance metrics are averaged over 200 runs.
6.3 Benchmark vs ESWM in a Single Auction
In this simulation, we compare the performance of the ESWM mechanism to the benchmark when both are given the same 1,000 requesters and 2,000 workers.
The first set of results shows the social welfare of the benchmark and the ESWM mechanism as the platform capacity increases from 100 to 1000 in increments of 100. In terms of the naïve social welfare, both mechanisms achieve almost the same value. However, the ESWM mechanism produces higher expected social welfare than the benchmark. This can be attributed to the fact that the ESWM mechanism focuses on the valuation estimated at the submission time rather than the original valuation, which may depreciate. In contrast, the benchmark mechanism shows a wider gap between its naïve and expected social welfare, failing to capture potential task depreciation.
Fig. (a) shows the platform utility achieved by the benchmark and the ESWM mechanism before and after workers' task submissions. In both cases, the benchmark mechanism produces higher platform utility than the ESWM mechanism. This difference stems from the pricing step: in the WRSA and the WWSA, the fee for each winning requester and the payment to each winning worker were calculated as
For , tends to be smaller than , since the ESWM mechanism iteratively selects winners with the maximum . Consequently, is smaller than that of the benchmark, calculated as . Similarly, for is likely to be larger than , as the ESWM mechanism iteratively selects as a winner the worker with the minimum . Consequently, is larger than that of the benchmark, calculated as . After the task submission, however, the difference shrinks significantly because tasks are more frequently unpunctual under the benchmark, which inflicts greater utility loss.
Fig. (b) and Fig. (c) show the average requester utility and the average worker utility, respectively. As mentioned in the analysis of Fig. (a), the ESWM mechanism charges requesters less and rewards workers more. Consequently, the ESWM mechanism lets both requesters and workers achieve higher average utility by sacrificing its own utility. Under a myopic strategy, it is not rational for a platform to sacrifice its own utility for participants. However, as demonstrated in the next subsection, such sacrifice benefits the platform itself under a long-term strategy.
6.4 Benchmark vs ESWM with Re-selection
In the previous simulation, we showed that the ESWM mechanism achieves higher social welfare, average requester utility, and average worker utility than the benchmark, at the cost of its own utility. As the initial stage of the competition, both mechanisms were given the same number of requesters and workers; that is, we assumed that both mechanisms could attract the same number of participants.
However, in reality, the two mechanisms may differ in how attractive they are to participants, since each provides different utility. Because all participants are rationally selfish, they select the mechanism that provides them higher utility. Consequently, when the benchmark and our mechanism compete in a crowdsourcing system, the number of participants each mechanism attracts varies with the average utility it provides. To reflect this difference in attractiveness, we define participation probability functions for requesters and workers as
where () and () denote the average utilities of the requesters and workers who joined mechanism A (mechanism B) in the previous recruitment, respectively. Given the average utility of participants in each mechanism, the participation probability function returns a tuple of probabilities with which a participant decides to join each mechanism. In this work, we set the participation probability proportional to the square root of the average utility obtained from the previous simulation results, based on . According to the reference, the concavity of the square root function captures the diminishing impact of utility on the participation probability.
Based on these probabilities, each participant decides which mechanism to join in the current recruitment. Consequently, when two mechanisms compete in a crowdsourcing system, the mechanism that has provided higher utility to participants can expect to attract more of them. We call this decision-making process of participants re-selection. In the re-selection simulation, the benchmark mechanism and the ESWM mechanism compete with each other in a system with 2,000 requesters, 4,000 workers, and .
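A minimal sketch of the re-selection step, assuming the square-root-proportional participation probabilities described above (`u_a` and `u_b` are the previous-round average utilities of mechanisms A and B):

```python
import math
import random

def participation_probabilities(u_a, u_b):
    """Probabilities of joining mechanism A or B, proportional to the
    square root of each mechanism's previous-round average utility
    (the concavity models the diminishing impact of utility)."""
    s_a, s_b = math.sqrt(max(u_a, 0.0)), math.sqrt(max(u_b, 0.0))
    if s_a + s_b == 0.0:  # neither mechanism provided utility: split evenly
        return 0.5, 0.5
    return s_a / (s_a + s_b), s_b / (s_a + s_b)

def re_select(n_participants, u_a, u_b, seed=None):
    """Each participant independently picks a mechanism for the current
    recruitment according to the probabilities above."""
    rng = random.Random(seed)
    p_a, _ = participation_probabilities(u_a, u_b)
    choices = ["A" if rng.random() < p_a else "B" for _ in range(n_participants)]
    return choices.count("A"), choices.count("B")
```

With, say, previous average utilities of 4 and 1, mechanism A is joined with probability 2/3, so it recruits roughly two thirds of the 4,000 workers in the next round.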
Fig. (a) shows the social welfare under the re-selection scenario. Compared to the previous simulation results, the ESWM mechanism achieves much higher naïve and expected social welfare than the benchmark. Because the ESWM mechanism provided both requesters and workers with higher average utility, it is more appealing to them than the benchmark and thus attracts more of them. This quantitative growth raises the chance of recruiting beneficial participants, which ultimately increases the social welfare of the ESWM mechanism. Note that, from the platform's point of view, a beneficial participant means a requester with high and low , or a worker with low and high . In addition, even with more participants, especially workers, the ESWM mechanism still handles task requests better than the benchmark (up to 800 task requests, while the benchmark handles up to 600). The reason for this difference can be inferred from the platform utility: as shown in Fig. (b), the platform utility of the benchmark is 0 after . The platform, being rational and selfish, revokes the double auction because its budget-balance condition is not satisfied.
Moreover, unlike the previous simulation result, Fig. (b) shows that the ESWM mechanism achieves higher platform utility than the benchmark. As with the social welfare, this increased platform utility arises because the ESWM mechanism attracts more participants, which enables the platform to recruit better-behaving ones. As a result, even though the ESWM mechanism sacrificed its own utility in the initial stage, the loss is compensated by attracting more participants in the long-term competition. In addition, the platform capacity at which the maximum platform utility is achieved increases from 400 to 500, which results in a higher maximum platform utility.
Fig. (c) and Fig. (d) show the average requester utility and the average worker utility, respectively. In both results, the ESWM mechanism and the benchmark achieve almost the same average utility as long as both can handle task requests. The higher utility offered by the ESWM mechanism increases the number of requesters and workers participating in it, which ironically reduces the average utility of participants, especially workers: as the number of participants who join the ESWM mechanism increases, more of them fail to be selected as winners, resulting in more zero utilities. Consequently, the average participant utility of the ESWM mechanism decreases. Based on our observations from Fig. (c) and Fig. (d), we anticipate that there will not be a significant number of further re-selections, since the benchmark and the ESWM mechanism offer similar average utilities to both requesters and workers. That is, the two mechanisms have approached the balance point at which both are equally attractive to participants. Specifically, we can define the balance point as the case where and are discrete uniform distributions.
6.5 Effect of
In the previous simulations, we fixed , which determines the weights on for and for in the winner selection and pricing steps. To analyze the effect of , we evaluate the performance metrics while varying over (0, 2] in increments of 0.1. In this simulation, we set and to 1,000 and 2,000, respectively, with .
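The parameter sweep can be set up as below; the weight parameter's symbol was lost in extraction, so we call it `alpha` here, and `evaluate` stands in for any of the four performance metrics computed by a full simulation run:

```python
def sweep(evaluate, step=0.1, upper=2.0):
    """Evaluate a metric for alpha over (0, upper] in increments of `step`.

    Rounding guards against float drift so the grid is exactly
    0.1, 0.2, ..., 2.0 as in the paper's setup.
    """
    results = {}
    k = 1
    while round(step * k, 10) <= upper:
        alpha = round(step * k, 10)
        results[alpha] = evaluate(alpha)
        k += 1
    return results
```

Each of the four curves in Fig. 7 corresponds to one such sweep with a different metric plugged in as `evaluate`.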
Fig. 7 shows how affects the four performance metrics. In Fig. (a), the ESWM mechanism achieves higher platform utility when it sets lower values on both and . Considering the effect of on and in the pricing process, this phenomenon is easy to explain. In the pricing process, as increases, the fee charged to whose is smaller than decreases. Consequently, the platform utility decreases, since the fees for constitute the platform's revenue. In contrast, the reward for whose is larger than increases; since the rewards given to are an expenditure of the platform, its utility again decreases. Interestingly, in Fig. 7, when for both and is set beyond 1.0, the platform utility becomes close to zero, indicating that the platform cannot sustain its crowdsourcing service. Thus, even though appropriate values enable the platform to attract more participants by offering them higher average utility, too large a value can obstruct the sustainability of the platform.
For the same reasons as in the platform utility, the average requester and worker utilities increase as increases, as shown in Fig. (b) and Fig. (c), respectively. When for is held constant, lower fees for lead to higher utility; with for constant, higher rewards for result in higher utility. As in the case of the platform utility, we can observe zero utility for both requesters and workers.
Unlike the aforementioned performance metrics, the expected social welfare shows a different trend. As shown in Fig. (d), while the social welfare generally increases as of increases, it decreases as of increases. This inconsistency with the other performance metrics stems from the different importance of in the winning requester selection and the winning worker selection. Compared to the role of of in the winning requester selection, of plays a less critical role in maximizing the expected social welfare in the winning worker selection. In other words, the speed of task depreciation affects the expected social welfare more than the workers' punctuality does. In addition, setting of large in the winning worker selection prioritizes workers' punctuality over their cost, which may decrease the expected social welfare. However, considering the continuous competition with other platforms, setting of too small will ultimately reduce both the expected social welfare and the platform utility, since of also affects the reward for workers, which plays a critical role in attracting and retaining workers. Thus, dynamically setting appropriate values for of and is critical to performance in the continuous competition.
6.6 Proof of Desirable Economic Properties
We prove that the ESWM mechanism satisfies the aforementioned four desirable economic properties.
The ESWM mechanism is always individually rational for all participants, except for unpunctual workers, regardless of the task submission time .
To prove the individual rationality of the ESWM mechanism, we show that each participant’s utility is non-negative by the end of an auction when reporting its true maximum valuation (requester) or cost (worker).
Requesters: To prove the individual rationality of requesters, we show that . For a , his/her utility is as defined in (2). When submits its true valuation , the platform calculates the temporary fee for in the winner selection process. Substituting into (2), the utility of each winning requester is before its matched worker submits the task result.
In the pricing step, the platform finally determines the fee for each winning requester as , based on the valuation achieved by the matched worker . Given and , the utility of each winning requester becomes
Since as defined in (1) and , is always non-negative. Therefore, the non-negative utility of each winning requester is guaranteed regardless of the task submission time. For unselected requesters, the utility is 0 as defined in (2). This non-negative utility proves that our incentive mechanism achieves individual rationality for all requesters, regardless of the task submission time.
Workers: When a worker submits its true cost, the platform calculates the temporary payment to each winning worker, . Substituting into (5), the utility of each winning worker before its task-result submission is .
In the pricing step, the platform determines the final payment to each winning worker matched to , based on the achieved valuation . Given and , the utility of becomes