Due to the rapid development of smart devices and wireless technology, mobile crowdsensing has emerged as a new sensing paradigm. It employs a large number of smart devices to collect and share local information using their embedded sensors. A typical mobile crowdsensing system consists of three major components: the crowdsensing platform, service requesters, and mobile device users (data contributors). The platform handles information requests from the service requesters and publishes sensing tasks to the users through their smartphone applications.
A critical problem in crowdsensing is to find the best match between users and tasks. Most existing works adopt a platform-centric model [2, 3, 4, 5, 6, 7], which allows the platform to make centralized decisions on which users are selected to perform which sensing tasks. These works usually focus on the incentive problem, where a typical procedure goes as follows: each user submits a bid reflecting her willingness or cost to participate in a task, and the platform then determines the set of selected users and their payments, so as to optimize a certain utility metric (e.g., coverage, revenue, service quality) while satisfying certain game-theoretic properties. The underlying assumption behind this type of model is that the users are fully rational and capable of determining their optimal strategies. However, as pointed out in , this assumption, as well as the setting that each user’s preference can be abstracted into a single bidding parameter, could be an oversimplification of complicated user behaviors.
Another type of task matching system, referred to as the user-centric model, gives the users more freedom to choose the tasks they are interested in. It has been widely adopted in many commercial crowdsensing systems, such as Waze , Field Agent , and Gigwalk . In these systems, the available tasks are shown to the users via their smartphone applications. The users can manually browse through the task corpus (often with simple built-in filters, such as proximity and payment filters) and choose the tasks they want to participate in. However, since the number of tasks is often very large, it is inefficient for the users to browse page by page in search of suitable tasks. Without an efficient personalized task matching solution, the users may end up selecting tasks that they are not familiar with or not interested in, which may degrade the quality of their collected sensing data.
Considering the limitations of existing task matching works, we propose to design a personalized task recommender system for mobile crowdsensing, so as to facilitate matching the users with suitable tasks. Note that in traditional recommender systems, such as movie recommendation, items are recommended based only on customers’ preferences . In mobile crowdsensing, by contrast, besides the users’ preferences, we also need to take the users’ reliability/data quality into consideration. This is because the users may have heterogeneous sensing behaviors towards different tasks, which could influence the quality of their collected data . Achieving preference- and quality-aware task recommendation can have a positive impact on both attracting the users’ further participation and improving the crowdsensing system’s effectiveness. However, such a personalized task recommender system is missing in the current crowdsensing literature. Jin et al.  and Wang et al.  studied quality-aware incentive mechanism design without addressing the need for personalized task recommendation. Karaliopoulos et al.  proposed to assign tasks to the users based on a profile of each user’s probability of accepting a task, but did not consider the users’ reliability information.
Central to the personalized task recommender system is a careful characterization of each user’s preference and reliability towards different tasks. This, however, is not trivial, due to the unique nature of crowdsensing scenarios. One challenge is finding a good way to model the users’ preference over different tasks. In some traditional recommendation scenarios, customers’ preferences can be readily obtained from their previous ratings . However, the users in mobile crowdsensing do not typically provide explicit ratings of their preferences, so we have to infer the users’ preferences from their implicit feedback, including their task browsing history and task selection records.
The most challenging part is estimating the users’ reliability levels. In particular, we have to learn the users’ reliability information for different tasks based on their submitted sensing data, if any, so as to build for each user a profile characterizing the trustworthiness of her data in performing the tasks. Although truth discovery algorithms  can be adopted to jointly estimate the users’ data quality and the underlying truths, they cannot fully address the need of user reliability profiling in the context of task recommendation. Note that truth discovery algorithms usually generate a single reliability parameter for each user, representing the user’s overall trustworthiness level. However, to conduct personalized task recommendation, the heterogeneity of a user’s reliability across different tasks has to be exploited, and thus a more fine-grained reliability profiling of the users should be considered. A possible alternative is to independently generate a reliability parameter for each user and each task by applying truth discovery algorithms to the data of each sensing task. Unfortunately, this approach may suffer from scalability issues; worse, a user’s reliability for a task cannot be estimated by truth discovery algorithms if the user did not contribute data to that task. This could often be a problem in real crowdsensing scenarios, especially when the users’ data are sparse, i.e., each user contributes data to only a small number of tasks. Besides, without prior knowledge of truth and reliability measures, typical truth discovery algorithms are likely to fail when the majority of data are inaccurate .
In this work, we jointly consider the problems of user profiling and personalized task matching in mobile crowdsensing, and propose a personalized task recommender system framework, which recommends tasks to the users based on both the users’ preference and reliability. We propose approaches to measure the users’ preference and reliability, respectively. First, in profiling the users’ preferences, we present a hybrid preference metric that integrates the feedback from both the users’ historical performance and the preferences of their peers. Then, to tackle the more challenging part of profiling the users’ reliability, we model the problem as a semi-supervised learning problem, and propose an efficient block coordinate descent algorithm to jointly estimate the users’ reliability and the unknown ground truths. We go beyond existing truth discovery methods by (1) grouping the tasks into several categories, (2) taking the information of failed tasks into consideration, and (3) using a small amount of available truth data to improve the estimation accuracy. Note that a user’s reliability for a certain task category cannot be estimated if the user did not provide data for tasks in that category. To address this problem, we further propose a matrix factorization method to estimate the missing entries. We conduct a real-world experiment and a large-scale crowdsensing simulation to evaluate the performance of our methods. The evaluation results show that our proposed methods achieve superior performance over existing works and our benchmarks.
The main contributions of this work are listed as follows.
First, we design a personalized task recommender system framework that matches tasks to the users based on both the users’ preference for and reliability in the tasks. We propose a method to profile each user’s preference over the tasks by exploiting the user’s implicit feedback.
Second, we model the user reliability profiling problem as a semi-supervised learning problem, and propose an efficient algorithm to estimate the users’ reliability and the unknown ground truths simultaneously. We also propose a matrix factorization method to estimate each user’s reliability for the tasks she did not contribute data to.
Third, we conduct a real-world crowdsensing experiment and a large-scale simulation to evaluate the performance of our methods. Both the experiment and simulation results show that our proposed methods achieve significant performance improvements over our benchmarks.
The rest of the paper is organized as follows. We first present the system overview in Section 2, and then introduce the problem formulation in Section 3. In Section 4, we propose our reliability profiling algorithms. We evaluate our proposed methods and present the evaluation results in Section 5. We review the related works in Section 6, and finally conclude this paper in Section 7.
2 System Overview
In this section, we present an overview of our proposed personalized task recommender system.
2.1 Personalized Task Recommendation
Suppose there are users and sensing tasks in the system. The sets of users and tasks are denoted by and , respectively. We consider a user-centric model, where the users can browse the tasks in their smartphone applications and choose to participate in the tasks they are interested in. If a user wants to participate in a task , she can click a button to inform the platform of her participation. After that, the user will use her smartphone to collect sensing data and submit them to the platform. Let denote the data submitted by the user to the task . The ground truth of the task is denoted by , which is usually unavailable to the platform.
We aim to build a personalized task recommender system, where the tasks are recommended to the users based on a joint consideration of the users’ preference and reliability. Specifically, for each task , suppose each user ’s preference and reliability regarding the task are denoted by and , respectively. We propose a recommendation score that takes both the user ’s preference and reliability for the task into account, i.e., , where the function outputs the recommendation metric based on the two input parameters.
Instances of the function can be specified by the platform according to its needs. Simple instances include a linear combination (i.e., ) or a product (i.e., ) of the two parameters. One may also treat the personalized recommendation as a constrained optimization problem, where one parameter appears in the optimization objective while the other serves as a constraint. For example, for each user , we may recommend the task that maximizes the user’s preference while satisfying the constraint that the user’s reliability for the task is above a certain threshold. More formally:
where is the minimum required reliability for a user to perform a task.
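To make the combiners concrete, here is a minimal Python sketch of the three instantiations mentioned above (linear combination, product, and preference maximization under a reliability threshold). The function names, the weight `alpha`, and the threshold `theta` are illustrative choices, not from the paper.

```python
def score_linear(pref, rel, alpha=0.5):
    """Linear combination of preference and reliability (alpha is assumed)."""
    return alpha * pref + (1 - alpha) * rel

def score_product(pref, rel):
    """Product of preference and reliability."""
    return pref * rel

def recommend_thresholded(prefs, rels, theta=0.6):
    """Constrained variant: rank tasks by preference, keeping only tasks
    for which the user's reliability is at least theta."""
    eligible = [j for j in range(len(prefs)) if rels[j] >= theta]
    return sorted(eligible, key=lambda j: prefs[j], reverse=True)
```

For instance, with preferences [0.9, 0.5, 0.7] and reliabilities [0.4, 0.8, 0.9], the thresholded variant drops the first task despite its high preference, because the user's reliability for it is too low.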
Central to the system model are the users’ preference and reliability measures. To obtain them, we need to carefully examine the historical data of the crowdsensing system, in order to acquire profiles of the users’ preference and reliability.
2.2 User Preference Profiling
To characterize the users’ preference for the tasks, the users’ feedback information is needed. However, in mobile crowdsensing, the users’ explicit feedback (e.g., ratings, likes or dislikes) is usually unavailable. Thus, we have to exploit the users’ implicit feedback. Fortunately, the crowdsensing platform has access to each user’s browsing history in the application, including which tasks the user has browsed, selected, and successfully completed. This information can be used to infer the users’ preference for the tasks from two different perspectives: the user’s own historical behavior (content-based methods) or the preferences of other similar users (collaborative methods) .
2.2.1 Content-Based Method
Each task has many attributes, including time, location, travel distance, payment, category, and so on. Along with the users’ task selection choices (selected or not), this information can be regarded as training data. By applying classification methods, such as logistic regression or a Bayesian classifier, we can build a classifier for each user to infer her probability of selecting each task. We let denote the probability of the user selecting the task .
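As an illustration of the per-user classifier, the following is a minimal logistic-regression sketch in plain Python. The feature encoding (numeric attribute vectors) and the hyperparameters (learning rate, epochs) are assumptions made for illustration.

```python
import math

def train_logistic(features, labels, lr=0.1, epochs=500):
    """Fit a tiny per-user logistic-regression task selector by SGD.
    features: numeric attribute vectors (e.g., [distance, payment]);
    labels: 1 if the user selected the task, else 0. Returns (weights, bias)."""
    d = len(features[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y  # gradient of the logistic loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def select_prob(w, b, x):
    """Predicted probability that the user selects a task with attributes x."""
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```

In practice one would use a library implementation with regularization; this sketch only shows how each user's browsing record turns into a personal selection-probability model.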
2.2.2 Collaborative Method
Let denote the users’ task preference matrix, where the entry indicates the user ’s preference over the task . We assign the value of each by mapping the user’s task browsing history to a task preference value, i.e.,
Then, we can apply state-of-the-art collaborative filtering methods to predict these missing entries .
Both of the above methods have limitations. On the one hand, the content-based method may suffer from an overspecialization problem, i.e., it will only recommend to a user tasks that are similar to those she has already selected. On the other hand, the collaborative method may not perform well when the preference matrix is sparse. To alleviate the limitations of these two methods, we propose a hybrid recommendation approach. Specifically, we define each user ’s preference for each task as a linear combination of the content-based characteristic and the collaborative characteristic, i.e.,
is a hyperparameter.
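A minimal sketch of the hybrid preference computation. The specific mapping from browsing history to preference values is an assumption (the text leaves the exact values to its equation), as is the default blending weight.

```python
def preference_from_history(browsed, selected, completed):
    """Illustrative mapping (assumed values) from a user's implicit feedback
    on a task to a preference value in [0, 1]."""
    if completed:
        return 1.0
    if selected:
        return 0.7
    if browsed:
        return 0.3
    return None  # missing entry, to be predicted by collaborative filtering

def hybrid_preference(content_score, collab_score, lam=0.5):
    """Linear blend of the content-based probability and the collaborative
    prediction; lam is the hyperparameter from the text."""
    return lam * content_score + (1 - lam) * collab_score
```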
Many previous works on recommender systems have investigated the problem of exploiting customers’ implicit feedback in different application contexts. Their insights can be incorporated to further improve our modelling of the users’ preferences. Possible extensions include further considering the users’ preference over each category [18, 19], incorporating implicit negative feedback , the multidimensionality of recommendations , and Bayesian personalized ranking . In the rest of the paper, we focus our efforts on user reliability profiling, which is the most challenging part of the system.
2.3 User Reliability Profiling
A user’s reliability in performing a sensing task is measured by the quality of her contributed data. Intuitively, if a user’s contributed data are accurate, i.e., close to the ground truths, the user would have a higher reliability level, and vice versa. Note that in mobile crowdsensing scenarios, the ground truths are usually unavailable, thus we cannot directly measure the users’ reliability level by comparing their data with the ground truths.
To address this problem, one possible approach is to adopt truth discovery algorithms, which are designed to resolve conflicts in data provided by heterogeneous data sources. Existing truth discovery algorithms, e.g., [22, 23, 24, 25, 26, 27], usually follow a similar unsupervised procedure: first initialize the ground truth estimation using a simple majority voting or averaging scheme, and then iteratively update the reliability and ground truth estimates, each based on the current estimate of the other. Although truth discovery algorithms have performed well on many web mining tasks, they cannot be directly applied here, due to the following unique requirements of our reliability profiling context.
Multi-dimensional Reliability: Existing truth discovery algorithms usually output a single reliability parameter for each user, characterizing the user’s overall trustworthiness. However, for personalized task recommendation, differentiation among the tasks is needed, i.e., we need to estimate the users’ heterogeneous reliability levels for different tasks. Besides, we note that mobile crowdsensing involves a large number of tasks, and different tasks may require different data collection behaviors, so a user’s reliability may vary across tasks. For example, a user who is used to putting her smartphone in her bag may fail to provide accurate data when measuring surrounding noise, but can still have a high reliability level in monitoring traffic congestion . Thus, a more fine-grained reliability profiling method is needed to characterize the users’ multi-dimensional reliability.
Scalability: One natural idea to address the multi-dimensional reliability problem is to apply existing truth discovery algorithms to each task independently, generating a reliability measure for each user on each task. However, due to the large number of tasks, calculating one reliability parameter per user per task is not scalable. Besides, estimating a user’s reliability based only on her data for a single task may be susceptible to noise, and thus cannot accurately reflect the user’s reliability level. Therefore, our reliability profiling method should not only provide a fine-grained reliability estimation, but also be scalable to a large number of tasks.
Robustness: Most truth discovery algorithms start with a uniform initialization of truth values or reliability values. As a result, their performance relies on the assumption that most users are reliable. When this assumption fails, however, the iterative computation of truth and reliability estimates may move in incorrect directions, leading to poor estimation accuracy. This problem, referred to as the “initialization problem”, often occurs in mobile crowdsensing scenarios, due to the uncertainty of each individual human contributor. Thus, it is crucial to design a reliability estimation algorithm that is robust to such scenarios.
Complete Reliability Characterization: Note that the users’ reliability values are estimated based on the relative accuracy of their data. In consequence, a user’s reliability for a certain dimension cannot be estimated if the user did not provide data for that dimension. This may not be a problem in many truth discovery scenarios, where the main goal is to infer the unknown ground truths. However, in the context of personalized task recommendation, we have to obtain a complete characterization of the users’ reliability levels. Thus, we need a method to predict each user’s reliability for the dimensions to which the user did not provide data.
Different Data Types: Different sensing tasks may have different data types. For example, a traffic congestion task may require categorical data (e.g., no congestion, medium congestion, or high congestion), while a noise monitoring task may require continuous numerical data (i.e., the noise levels of the users’ surrounding environment). Thus, the reliability profiling algorithm needs to be carefully designed to handle both categorical and continuous data types.
Our proposed user reliability profiling methods are carefully designed to address the above requirements. Specifically, for the multi-dimensional reliability and scalability issues, we classify the tasks into a number of categories and estimate the users’ reliability for each category independently. For the robustness issue, we propose a semi-supervised learning framework that exploits a small amount of available truth knowledge to improve the estimation accuracy. We also propose a matrix factorization method to predict the missing entries in the reliability estimation. The issue of different data types is handled by considering different loss functions. In the subsequent sections, we present the problem formulation and the algorithm design of our user reliability profiling approach.
3 Problem Formulation
In this section, we formalize the user reliability profiling problem. We first present the problem model, and then propose a preliminary formulation and two enhancements. One enhancement incorporates the information of failed tasks, and the other integrates a small portion of truth data to improve the estimation accuracy.
3.1 Problem Model
To model the users’ multi-dimensional reliability, we take the similarities among the tasks into consideration by classifying the tasks into different categories, where the tasks within each category focus on a similar sensing target. For example, one category may focus only on noise monitoring tasks, while another focuses on traffic congestion monitoring. Such classification of tasks is common in current crowdsensing applications, e.g., Waze . It can be done by the platform’s direct designation in the task publication phase, or by applying text classification techniques  to automatically analyze the task descriptions. Specifically, we categorize the tasks into categories (). For each category , the set of tasks belonging to the category is denoted by (). For simplicity, we assume that each task belongs to only one category, so the sets are mutually disjoint. More general situations will be discussed in Section 6. For each task category , let denote each user ’s reliability for the task category. The user reliability profiling problem is to infer the users’ reliability for each category. More formally:
Definition 1 (User Reliability Profiling Problem).
Given a set of users , a set of interested tasks , and the users’ contributed data , the user reliability profiling problem aims to estimate the unknown ground truths , and the users’ reliability matrix , where is the dimension of each user ’s reliability.
3.2 Preliminary Problem Formulation
We assume that the tasks in different categories are independent, so that we can estimate the users’ reliability for each category separately. Let denote the set of users who contributed data to the tasks in category . To estimate the users’ reliability, for each category , we aim to solve the following optimization problem.
where indicates whether the user has contributed data to the task , is our estimation of the task ’s ground truth, and is a regularization function. Following the convention of the truth discovery literature , we adopt the exponential regularization function, i.e., . The loss function measures the distance between a user’s data and the estimated truth. For continuous data, can be defined as the squared distance, i.e., , while for categorical data, can be defined as the distance, i.e., if , and otherwise. An intuitive interpretation of this formulation is that the ground truth should be close to the data contributed by reliable users, and the users whose data are close to the ground truth should have high reliability levels.
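Since the equations themselves are elided in this copy, the following is one plausible LaTeX rendering of the per-category objective, consistent with the CRH-style truth discovery formulation the text describes; all symbol names ($w_i^k$, $x_i^j$, $t_j^*$, $s_i^j$) are assumed rather than taken from the paper.

```latex
% Assumed notation: w_i^k = reliability of user i in category k,
% x_i^j = user i's datum for task j, t_j^* = estimated truth of task j,
% s_i^j = 1 if user i contributed data to task j, else 0.
\min_{\{w_i^k\},\,\{t_j^*\}} \;
  \sum_{i \in U^k} w_i^k \sum_{j \in T^k} s_i^j \, d\!\left(x_i^j, t_j^*\right)
\quad \text{s.t.} \quad \sum_{i \in U^k} e^{-w_i^k} = 1,
```

with $d(x,t) = (x - t)^2$ for continuous data and $d(x,t) = \mathbb{1}[x \neq t]$ for categorical data, so that the exponential term plays the role of the regularization function mentioned above.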
3.3 Incorporating Information of Failed Tasks
We observe that in practice, the users may select certain tasks but fail to successfully complete them (e.g., deciding to terminate the sensing procedure halfway). This phenomenon, referred to as failed tasks, is likely to reflect the users’ unreliability in performing certain tasks. In this part, we improve the above problem formulation by taking this issue into account.
We first introduce some notations. Among the set of tasks in each category , we let denote the set of tasks the user selected, and the set of tasks the user has successfully completed, where . For each category , we calculate each user ’s task completion ratio , defined as the number of tasks the user has finished divided by the number of tasks the user has selected, i.e., . We revise the original formulation by multiplying in a penalty term. The revised problem is presented as follows.
where is a function mapping each user’s completion ratio to a penalty. We can see that the users who have failed tasks will receive a completion ratio less than 1, and thus their reliability outputs will be less than the ones estimated by the previous method shown in Equation 4. An extreme case is that a user may select multiple tasks but complete none (i.e., and ). In this case, the system cannot generate a reliability estimation for the user. We handle this problem in Section 4.2.
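A small sketch of the completion-ratio bookkeeping. The penalty function here is the identity (an assumption; the text leaves the exact form of the penalty mapping to the system designer).

```python
def completion_ratio(selected, finished):
    """Fraction of the tasks a user selected in a category that she completed.
    Returns None when the user selected nothing (no ratio is defined)."""
    return len(finished) / len(selected) if selected else None

def penalized_reliability(reliability, ratio):
    """Scale an estimated reliability by a penalty on the completion ratio.
    Identity penalty f(c) = c is assumed for illustration."""
    return reliability * ratio
```

A user who selected two tasks and finished one thus has ratio 0.5, halving her estimated reliability under this illustrative penalty.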
3.4 Incorporating Available Ground Truths
The above formulation extends the basic truth discovery problem, which is built upon the underlying assumption that the majority of data are reliable. Unfortunately, it may suffer from the initialization problem, i.e., when most of the data are unreliable, the above estimation procedure may perform poorly . To tackle this issue, we propose a semi-supervised learning framework, which incorporates a small number of ground truths to improve the estimation accuracy. To this end, the platform may intentionally add a few tasks with known ground truths into the task corpus to collect additional information on the users’ reliability, while the users cannot tell which tasks were inserted by the platform. The platform may also sample a few tasks and employ some trusted workers to obtain their ground truths. Several heuristic methods can be applied to choose the sampled set of tasks. For example, we may choose the sampled tasks randomly, choose the tasks whose data have the largest variations, or choose the tasks that have the most data contributors.
We let denote the set of tasks with unknown ground truths, and denote the set of tasks that are intentionally inserted by the platform with known truth information. For each category of tasks, we let and denote the set of the tasks without and with prior ground truths respectively.
Having the ground truths of some tasks in hand, we propose to leverage those information to further enhance our estimation accuracy. To distinguish the notations, we let denote the estimation of the ground truth (), and denote the known truth (). Then, for each category , the modified learning optimization problem is given by
where is a hyperparameter controlling the relative weight of the second loss term. We can see that the second loss term is constant for each user in each task category . We let denote this term, and the problem can then be simplified as follows.
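Using the same assumed notation as before (with $T_u^k$/$T_g^k$ the tasks in category $k$ without/with known truths, and $\lambda$ the weight), one plausible rendering of the semi-supervised objective described above is:

```latex
\min_{\{w_i^k\},\,\{t_j^*\}} \;
  \sum_{i \in U^k} w_i^k
  \Big( \sum_{j \in T_u^k} s_i^j \, d\!\left(x_i^j, t_j^*\right)
      + \lambda \sum_{j \in T_g^k} s_i^j \, d\!\left(x_i^j, t_j\right) \Big)
\quad \text{s.t.} \quad \sum_{i \in U^k} e^{-w_i^k} = 1,
```

where the second inner sum is a per-user constant $c_i^k = \lambda \sum_{j \in T_g^k} s_i^j \, d(x_i^j, t_j)$, which yields the simplified form referred to in the text.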
We summarize the frequently used notations in Table 1.
|User, number of users, and the set of users|
|Task, number of tasks, and the set of tasks|
|Recommendation score for user and task|
|Task category, and number of categories|
|User ’s preference for task|
|User ’s reliability in task category|
|The set of tasks in category|
|The set of users contributed data to|
|User ’s data for task|
|Ground truth of task|
|Estimation of the task ’s ground truth|
|If user contributed data to task|
|The set of tasks user selected in|
|The set of tasks user finished in|
|User ’s task completion ratio in|
|The set of tasks with known ground truths|
|The set of tasks belonging to and in|
|The set of users contributed data to|
4 User Reliability Profiling Algorithm
In this section, we first propose an algorithm to solve the user reliability profiling problem formulated above. Then, we further propose a matrix factorization method to estimate each user’s reliability for the task categories that lack the user’s historical performance.
4.1 Estimating Users’ Reliability
In the problem formulated in Equation 7, two sets of variables need to be estimated: the users’ reliability levels and the unknown ground truths. We propose an efficient block coordinate descent algorithm to solve it. The idea is to fix one set of variables while solving for the other, and repeat this process until convergence. Since the estimation for each category can be done independently, parallel computing can be adopted to speed up the entire calculation. For each task category , we perform the following three steps: parameter initialization, truth update, and reliability estimation.
4.1.0 Parameter Initialization
In the parameter initialization phase, we assign initial values to one set of the variables to give the learning algorithm a starting point. Existing truth discovery algorithms either initialize the unknown ground truths using a simple majority voting or averaging scheme, or uniformly initialize the reliability parameters. As pointed out in [16, 27], random or uniform initialization may result in poor estimation performance, which is especially true when most data are unreliable.
To mitigate this problem, we propose to enhance the initialization of the users’ reliability parameters by incorporating the prior knowledge of available ground truths. The idea is to leverage the known truth knowledge to give related users good initial estimations of their reliability. Specifically, for each category , let denote the set of users who contributed data to tasks in . For the users in , we initialize their reliability by solving the following problem.
The above problem is convex, so we can apply the method of Lagrange multipliers to solve it.
As for the remaining users in , since they did not contribute data to tasks whose ground truths are known, no prior knowledge can be applied. Thus, their reliability parameters are uniformly initialized such that
4.1.1 Truth Update
After obtaining an initial estimation of the users’ reliability, we can update the estimation of truths by treating the estimated reliability parameters as fixed values. Then, the truth of each task can be updated using the following rule.
Given the users’ reliability parameters, the optimization problem in Equation 11 can be optimally solved. For continuous data type, the optimal solution is given by
As for the categorical data type, the solution is
where if , and 0 otherwise.
For each task , we first consider the case of continuous data, where . Then, the objective function can be formalized as follows
We take the partial derivative of the function with respect to and set it to zero, i.e.,
Solving the above equation, we get Equation 12.
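The two truth-update rules can be sketched as follows: a reliability-weighted average for continuous data, and a reliability-weighted vote for categorical data. Variable names are assumed.

```python
from collections import defaultdict

def update_truth_continuous(data, weights):
    """Weighted average of the contributors' values for one task.
    data: {user: value}; weights: {user: reliability}."""
    num = sum(weights[u] * v for u, v in data.items())
    den = sum(weights[u] for u in data)
    return num / den

def update_truth_categorical(data, weights, labels):
    """Weighted vote: pick the label that minimizes the weighted 0-1 loss,
    i.e., the label with the largest total contributor reliability."""
    score = defaultdict(float)
    for u, v in data.items():
        score[v] += weights[u]
    return max(labels, key=lambda l: score[l])
```

Note that a single highly reliable contributor can outvote several unreliable ones in the categorical case, which is exactly the intended behavior.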
4.1.2 Reliability Estimation
After updating the estimation of the ground truths, we now fix the values of , and calculate the users’ data quality by solving the following optimization problem. Intuitively, the users whose data are close to the ground truth will receive high reliability estimations, and vice versa.
Given fixed truth estimation , the problem in Equation 15 can be optimally solved. The optimal value of each is given by
The problem is convex, since the objective is linear and the constraint set is convex. Therefore, we can apply the method of Lagrange multipliers. The Lagrangian of Equation 15 is given as:
where is a Lagrange multiplier. Taking the partial derivative of Equation 17 with respect to , we have
Setting Equation 18 to zero, we get
Summing both sides over , we get
Since , we have
The pseudo-code of the algorithm is presented in Algorithm 1. We first initialize the users’ reliability parameters, and then keep iterating the truth update and reliability estimation steps until the change in the users’ reliability falls below a certain threshold. Due to the convexity of our problem and the ability to obtain the optimal solution in each step (Theorem 1 and Theorem 2), our algorithm is guaranteed to converge to a local optimum, according to the convergence properties of block coordinate descent . Further improvements can be made to find a 2-approximation of the global optimum in nearly linear time .
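The overall loop can be sketched as below for continuous data in one category. The closed-form reliability update, w_i = log(total loss / user i's loss), follows from the Lagrange-multiplier derivation above; the initialization, constants, and stopping rule here are simplified assumptions, so this is a sketch rather than the paper's exact Algorithm 1.

```python
import math

def profile_reliability(data, truths_init, max_iter=50, tol=1e-6):
    """Block coordinate descent for one task category (continuous data).
    data: {user: {task: value}}; truths_init: {task: initial estimate}.
    Alternates truth update and reliability estimation until convergence.
    Returns (weights, truths)."""
    truths = dict(truths_init)
    weights = {u: 1.0 for u in data}  # simple uniform start (assumed)
    for _ in range(max_iter):
        # Truth update: reliability-weighted average per task.
        for j in truths:
            contrib = [(u, d[j]) for u, d in data.items() if j in d]
            den = sum(weights[u] for u, _ in contrib)
            truths[j] = sum(weights[u] * v for u, v in contrib) / den
        # Reliability estimation: w_i = log(total loss / user i's loss).
        losses = {u: sum((v - truths[j]) ** 2 for j, v in d.items()) or 1e-12
                  for u, d in data.items()}
        total = sum(losses.values())
        new_w = {u: math.log(total / losses[u]) for u in data}
        converged = max(abs(new_w[u] - weights[u]) for u in data) < tol
        weights = new_w
        if converged:
            break
    return weights, truths
```

On a toy category where two users report values near 1.0-1.1 and a third reports 5.0, the loop drives the truth estimate toward the agreeing pair and assigns the outlier a much lower reliability.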
4.1.3 Reliability Normalization
So far, we have obtained estimations of the users’ reliability levels and the unknown ground truths. However, there is a problem in our model: each user ’s reliability estimations for different categories are on different scales. From the regularization term , we can see that the average value for is , which is proportional to the number of data contributors for category . This means that a user is likely to receive a higher reliability score simply because she is among a large number of data contributors, which is not reasonable. To guarantee that each user’s reliability estimations for different tasks are on the same scale, we normalize each user ’s reliability estimation into .
4.2 Estimating Missing Entries: A Latent Factor Model
In the above subsection, we obtained each user’s reliability information over the task categories that she has contributed data to. However, if a user did not contribute data to some category (i.e., ), then Algorithm 1 cannot estimate the user ’s reliability over . In this part, we propose a matrix factorization method to address this problem.
We use to denote the users’ reliability matrix, where each entry is the user ’s reliability for task category . We map both users and task categories to a joint latent factor space of dimensionality . Specifically, we assume that each user is associated with a vector, and each category is associated with . The vector can be interpreted as the user ’s capabilities in different dimensions, and the vector can be seen as the weight of each capability needed by the category . Then, each user ’s reliability for each category can be calculated as .
To estimate the missing entries in matrix , we need to calculate each user ’s latent vector and each category’s latent vector . Let and denote the sets of users’ and categories’ latent vectors, respectively. Then, the objective function can be formalized as follows.
where indicates if user has contributed data to category (1 means yes, and 0 otherwise). To prevent over-fitting, we add regularization terms in Equation 22.
where and . and are parameters controlling the weights of regularization terms.
We propose to use a simple gradient descent method to solve the above problem. The pseudo-code is presented in Algorithm 2. We first initialize and to small random values. After that, we apply the gradient descent algorithm, i.e., for every and , we update and using the following rules
where is the learning rate. Finally, we can predict a user ’s reliability for a task category even if the user did not provide any data to , i.e., for , .
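A compact full-batch variant of this procedure can be sketched as follows. The hyperparameter values and the vectorized (rather than entry-wise) update are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def factorize_reliability(R, observed, k=3, lr=0.02, lam=0.01, n_epochs=2000, seed=0):
    """Gradient-descent matrix factorization sketch for the reliability matrix.

    R:        (n_users x n_cats) reliability matrix (valid where observed)
    observed: boolean mask, True where the user contributed to the category
    Returns the completed matrix U @ V.T, including predictions for missing entries.
    """
    rng = np.random.default_rng(seed)
    n_users, n_cats = R.shape
    U = rng.normal(0, 0.1, (n_users, k))   # users' latent capability vectors
    V = rng.normal(0, 0.1, (n_cats, k))    # categories' latent weight vectors
    for _ in range(n_epochs):
        E = observed * (R - U @ V.T)       # residuals on observed entries only
        U += lr * (E @ V - lam * U)        # gradient step with L2 regularization
        V += lr * (E.T @ U - lam * V)
    return U @ V.T
```

On a synthetic low-rank reliability matrix with a fraction of entries hidden, the recovered matrix predicts the hidden entries far better than a trivial all-zeros baseline.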
In this section, we implement our proposed methods and evaluate their performance. We first conduct a real-world crowdsensing experiment, and then simulate a large-scale crowdsensing scenario to further examine the performance of our methods.
5.1 Experiment Setup
We recruit 10 users (8 males and 2 females) to participate in our experiment. In the experiment, we manually create 123 sensing tasks for 9 different categories. An overview of the tasks is presented in Table II. The tasks within the same category focus on the same sensing target (such as noise, traffic, or weather), but with different attributes, including time, location, and payment. Each task category has a data type requirement. For instance, noise monitoring requires the continuous data type, while weather monitoring requires the categorical data type. The entire task corpus is shown to the users through the browsers on their smartphones. Each user can browse through these tasks and choose the tasks they are interested in to work on. The ground truth of each task is monitored by the authors themselves, and is unavailable to the users. We collect the users’ sensing data, as well as their operation records, including each user’s task browsing history, task selection history, and task completion history.
According to our collected data, each user contributes data to about 60% of the tasks on average. The parameter used in our semi-supervised learning model is set to 1. For each task category, we use the ground truths of 10% of the tasks. The parameters , and used in our matrix factorization method are set to 3, 5, and 5, respectively.
Table II. Overview of the sensing tasks (columns: Category, Monitored target, # of tasks, Data type).
5.2 Experiment Results on User Reliability Profiling
In the experiment, we evaluate the performance of our proposed user profiling algorithm. To distinguish the notations, we use “URP-BA” to denote the basic version shown in Section 3.2, and “URP-E1” and “URP-E2” to denote the first and second enhancements, respectively. We compare our algorithms with two benchmarks. One is a heuristic method that treats each user’s data equally, i.e., simple average (“Avg.”) for continuous data and majority voting (“Voting”) for categorical data. The other benchmark is a general truth discovery framework, called “CRH” , which uses a single parameter to model each user’s reliability level. We adopt the following two metrics to measure the performance of the algorithms.
RMSE: For continuous data, we use Root Mean Square Error (RMSE) to measure the distance between the estimation result and the ground truth. Mathematically, the RMSE is defined as .
Error Rate: For categorical data, we use Error Rate to quantify the performance of an algorithm. The Error Rate of an algorithm is defined as the percentage of the tasks to which the algorithm’s estimations are different from the ground truth, i.e., .
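Both metrics are straightforward to compute; the following small sketch shows one way to do so (the function names are ours):

```python
import numpy as np

def rmse(est, truth):
    """Root Mean Square Error for continuous tasks."""
    est, truth = np.asarray(est, dtype=float), np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean((est - truth) ** 2)))

def error_rate(est, truth):
    """Fraction of categorical tasks whose estimate differs from the ground truth."""
    est, truth = np.asarray(est), np.asarray(truth)
    return float(np.mean(est != truth))
```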
Fig. 1 presents the performance comparison between our algorithms and the benchmarks. We can see that for either data type, the truth discovery-based algorithms achieve higher estimation accuracy than simple average or majority voting, indicating the effectiveness of truth discovery algorithms. However, the performance of Avg./Voting, CRH, URP-BA, and URP-E1 tends to be similar. The main reason is that under crowdsensing scenarios, there usually exist many tasks for which the majority of the users’ data are inaccurate, so traditional unsupervised learning models may have trouble identifying the users’ true reliability levels. In this case, as evidenced by the superior performance of URP-E2 over the other four algorithms, incorporating even a small number of ground truths can greatly improve the estimation accuracy.
5.3 Experiment Results on Personalized Task Matching
Besides profiling the users’ reliability, we also profile each user’s preference towards each task using the methods proposed in Section 2.2. In Fig. 2(a) and Fig. 2(b), we present the reliability profiles and preference profiles of two representative users respectively, where the user’s preference towards a task category is calculated as the user’s average preference score of the tasks in the category. We normalize the users’ preferences to [0,5] for better graphical presentation.
To evaluate the performance of our personalized task recommender system, we provide each user a list of 20 recommended tasks, and ask each user to choose the tasks they are interested in. Recall that our personalized task recommender system recommends tasks to the users based on both the users’ reliability and preference. Specifically, for each user and task pair , we calculate a recommendation score . Suppose task belongs to category ; then we set to . We use and in our experiment. After that, our system recommends to each user the 20 tasks with the highest recommendation scores. Three benchmarks are adopted: random recommendation, preference-only recommendation, and reliability-only recommendation. The random recommendation strategy provides each user a list of 20 randomly chosen tasks, while the preference-only and reliability-only strategies provide each user the 20 tasks with the highest preference or reliability scores, respectively.
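For one user, the hybrid scoring step can be sketched as follows. The linear combination of preference and reliability mirrors the description above, but the specific weights and function names are illustrative, since the paper's exact parameters are not reproduced here.

```python
import numpy as np

def recommend(pref, rel, task_cat, alpha=0.5, beta=0.5, top_k=20):
    """Hybrid recommendation sketch: score = alpha * preference + beta * reliability.

    pref:     (n_tasks,) the user's preference score for each task
    rel:      (n_cats,) the user's reliability for each task category
    task_cat: (n_tasks,) category index of each task
    Returns the indices of the top_k tasks by recommendation score.
    """
    pref = np.asarray(pref, dtype=float)
    rel = np.asarray(rel, dtype=float)
    cat = np.asarray(task_cat, dtype=int)
    scores = alpha * pref + beta * rel[cat]  # each task inherits its category's reliability
    return np.argsort(scores)[::-1][:top_k]
```

Setting alpha or beta to zero recovers the preference-only and reliability-only benchmark strategies, respectively.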
The performance of the task matching strategies is measured from two different perspectives, i.e., task acceptance ratio and estimation accuracy. The task acceptance ratio is defined as the percentage of the recommended tasks that the users have selected, and the estimation accuracy is measured using RMSE or Error Rate depending on the data types of the tasks. The performance comparison of different task matching strategies is presented in Fig. 3. We can see that the preference-only strategy has the highest task acceptance ratio, while the reliability-only strategy outputs the most accurate estimation results. This is because each of these two strategies matches tasks to users in favor of only one of the two perspectives. Compared with the other task matching strategies, our proposed hybrid recommendation strategy achieves a good balance between the acceptance ratio and the estimation accuracy.
5.4 Evaluations on A Large-Scale Scenario
In this subsection, we examine the performance of our user profiling algorithm on a large-scale crowdsensing scenario.
In our simulation, there are 100 users and 1000 tasks. These tasks are randomly distributed among 20 categories. Each user’s task selection rate is set to 10%, i.e., each user contributes data to each task with 10% probability. The ground truth of each task is randomly distributed within [30,100]. For each user , if she contributes data to a task of category , then her data is generated based on a Gaussian distribution with the corresponding mean and variance, i.e., . In URP-E2, we randomly choose 1% of the tasks, and incorporate their ground truths in the user reliability profiling process.
In the simulation, we classify the users into three groups: reliable users, normal users, and unreliable users, where the users’ reliability distributions in these three groups are , , and , respectively. We consider three different settings. In the first setting, the users are classified into the three groups randomly. In the second setting, each user has a 60% probability of being reliable, 30% of being normal, and 10% of being unreliable, while in the third setting, each user has a 10% probability of being reliable, 30% of being normal, and 60% of being unreliable. We assume that for each user, if her reliability for a certain task is below 0.2, then she will fail the task with 50% probability.
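The data generation for this simulation can be sketched as below. The mapping from a user's reliability to the noise spread (here, standard deviation 1/reliability) and all parameter names are our illustrative assumptions; the paper's exact group-wise reliability distributions are not reproduced.

```python
import numpy as np

def simulate(n_users=100, n_tasks=1000, n_cats=20, select_rate=0.1, seed=0):
    """Sketch of the simulated crowdsensing scenario.

    Each task gets a ground truth in [30, 100]; each user contributes to each
    task with probability select_rate, and her reading is Gaussian noise
    around the truth whose spread shrinks as her category reliability grows.
    """
    rng = np.random.default_rng(seed)
    task_cat = rng.integers(0, n_cats, n_tasks)        # random category per task
    truth = rng.uniform(30, 100, n_tasks)              # ground truths
    reliability = rng.uniform(0.2, 1.0, (n_users, n_cats))
    contributes = rng.random((n_users, n_tasks)) < select_rate
    data = np.full((n_users, n_tasks), np.nan)         # NaN marks "no data"
    for u in range(n_users):
        for t in np.flatnonzero(contributes[u]):
            sigma = 1.0 / reliability[u, task_cat[t]]  # higher reliability -> less noise
            data[u, t] = rng.normal(truth[t], sigma)
    return truth, data, contributes
```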
Fig. 4 presents the estimation accuracy of different algorithms with a varying number of users. The number of users varies from 10 to 100 with an increment of 10. We can see that the simple average has the worst estimation accuracy, while URP-E2 achieves the lowest RMSE in all three settings. In Fig. 4(c), we observe that the RMSE first grows as the number of users increases, and then decreases as the number of users gets larger. This is because when the number of users is small, slightly increasing the number of users, especially unreliable users, may introduce extra errors into the estimation results. As the number of users increases, the platform can access more information, and thus can reduce the estimation errors.
Fig. 5 shows the estimation accuracy of different algorithms with varying task selection rate. We increase the task selection rate from 0.1 to 1 with an increment of 0.1. It can be seen that our proposed user profiling algorithm achieves the lowest RMSE, indicating its effectiveness. Besides, we can observe that the RMSE decreases as the task selection rate increases. This is because increasing the task selection rate usually means having more data, so that the platform can identify the users’ reliability levels more accurately. A similar phenomenon was also observed in .
We also examine the effect of the number of incorporated ground truths on the estimation accuracy. The results are shown in Fig. 6. We can see that incorporating more ground truths improves the estimation results. Besides, comparing the different settings, we can see that Setting 2 achieves the best estimation accuracy, since most users in Setting 2 are reliable.
In this section, we discuss several practical issues and potential extensions of our proposed personalized task recommendation methods.
New User Problem: For a user who is new to our system, we may have very little information (browsing history and data contributions) about her. In this case, it is difficult to obtain an accurate preference or reliability profile of the user. Fortunately, this new user problem has been widely studied in the traditional recommender system literature, e.g., [31, 32], whose ideas can also be applied in our problem scenario. For example, we can recommend the most informative tasks to the new user, so as to gain knowledge of her preference and reliability. Heuristics include random recommendation, recommending the most popular tasks, and recommending tasks from different categories.
Content-Based Reliability Prediction: In this work, we propose a matrix factorization method to predict the missing entries in each user’s reliability estimation. This method is able to capture inherent subtle characteristics of the users’ reliability without the need to extract features of the users and the tasks. However, one drawback of the approach is that we may not be able to interpret which factors influence the users’ reliability. In situations where interpretability matters, content-based methods, such as building a classification model to predict the users’ reliability, can be a good alternative.
General Reliability Profiling Problem: In our reliability profiling model, we assume that each task only belongs to one category and tasks in different categories are independent. Sometimes, these assumptions may not hold. In these cases, a general reliability profiling problem can be considered, i.e., given the users’ contributed data, we aim to estimate the unknown ground truths , each user’s reliability vector , and each task’s weight vector , where is a hyperparameter determining the dimension of the vectors.
Having estimated and , each user ’s reliability for each task can be calculated as . We can see that the model we proposed in Section 3.1 is a simplified version of the general reliability profiling problem, where we specify to be the number of task categories, and each entry of the task vector is 1 if task belongs to category , and zero otherwise. Note that in the general reliability profiling problem, we now have three sets of unknown variables that need to be estimated, which is more difficult. We leave this problem to future work.
Context-based User Profiling: We observe that a user’s preference and reliability can depend on the contextual situation of the user. For example, a user’s preference may depend on time (e.g., time of day, or season of the year) . Also, as pointed out in , a user’s reliability could also be influenced by her activity (e.g., sitting, walking, or running) and her surrounding environment (e.g., home, office, shopping mall). Thus, we believe that profiling the users’ preference and reliability in a context-aware manner could be an interesting direction for future work.
Other Task Matching Models: Though this work focuses on the user-centric model, the estimated preference and reliability parameters of the users can be readily integrated into platform-centric model, allowing the platform to make centralized decisions based on them, such as selecting most reliable users to perform a sensing task , determining the payments of the users based on their reliability levels [36, 37, 27], or integrating the preference or reliability information into the design of incentive mechanisms .
7 Related Work
Crowdsensing Applications: The concept of mobile crowdsensing has attracted broad attention from both industry and academia, and has been applied in various application domains, including but not limited to environment monitoring [38, 39, 40], indoor localization [41, 42], indoor floorplan construction [43, 44], traffic and navigation [45, 46], and image sensing .
Platform-Centric Crowdsensing: Many researchers have studied the user selection problem in mobile crowdsensing. They usually modelled the problem from a game-theoretical perspective like . For example, Zhao et al. considered the problem of budget feasible mechanism design for crowdsensing, and proposed mechanisms for both offline and online scenarios. Karaliopoulos et al. addressed the user recruitment problem for the opportunistic network scenario, and proposed two efficient algorithms to maximize the overall location coverage. Zhang et al. proposed a double auction mechanism for proximity-based mobile crowdsensing. In these works, the platform’s main concern was to determine the set of selected users and their corresponding payments so as to maximize a certain optimization metric. They only considered the heterogeneity of the users and assumed that the tasks were identical. Some studies have also investigated the task assignment problem in mobile crowdsensing. For example, He et al. studied the optimal task allocation problem for location-dependent crowdsensing. Zhao et al. considered the task allocation problem in crowdsensing with the objective of optimizing the energy efficiency of smartphones. Cheung et al. considered the distributed task selection problem for time-sensitive and location-dependent tasks. However, these works were all based on a platform-centric model. Besides, none of these works took the issue of data quality into consideration.
User-Centric Crowdsensing: Few works have studied the user-centric model in crowdsensing. Karaliopoulos et al. adopted logistic regression techniques to estimate a user’s probability of accepting a task, and matched tasks to users based on this information. However, they did not consider the users’ data quality or reliability in performing the sensing tasks. Although Jin et al. and Han et al. considered the problem of quality-aware task matching, their approaches were based on the platform-centric model, and were unable to recommend personalized tasks for the users. In contrast, our work considers a user-centric task matching model that takes both the users’ preference and data quality into consideration. A preliminary version of this work appeared at INFOCOM 2018 ; this version substantially revises the previous one with additional technical materials and discussions.
Truth Discovery: The problem of truth discovery has been widely studied to handle the situation where data collected from multiple sources are conflicting and the ground truths are unknown . Wang et al. considered the problem of truth detection in social sensing based on the EM algorithm. Wang et al. proposed a truth discovery algorithm to handle streaming data. Ouyang et al. proposed a truth discovery method to detect spatial events based on a graphical model. Su et al. designed a generalized decision aggregation framework for distributed sensing scenarios. Wang et al. studied the truth discovery problem in cyber-physical systems. Wang et al. further exploited the problem of truth discovery for interdependent phenomena in social sensing. Meng et al. exploited spatial correlations to improve the estimation accuracy. CRH  is a general truth discovery framework that can handle both continuous and categorical data. Li et al. considered the truth discovery problem for long-tail data, and proposed a confidence-aware approach. Peng et al. proposed an EM algorithm to quantify the users’ data qualities in mobile crowdsensing. However, all of these works are based on unsupervised learning models, and thus may suffer from the initialization problem when most data are inaccurate . Yin and Tan proposed a semi-supervised learning model to identify true facts from false ones. However, their work focused on the truth estimation part and did not output the reliability levels of the data sources, and thus cannot address the need for user reliability profiling.
Recommender Systems: Recommender systems have been a hot topic in recent decades. Generally, recommendation techniques can be classified into three categories: content-based recommendation, collaborative filtering-based recommendation, and hybrid recommendation . Besides various recommendation techniques, many practical issues in recommender systems have also been widely studied, including exploiting implicit feedback [56, 21], addressing negative feedback [20, 57], context-aware recommendation , group recommendation , and so on. Nevertheless, these works only focused on the users’ preferences, without considering the users’ reliability. In contrast, in mobile crowdsensing, the users’ reliability plays an important role in the effectiveness of the system, and thus should be taken into account when recommending tasks. To that end, we extend traditional recommender systems by taking the users’ reliability into consideration and recommending tasks based on both the users’ preference and reliability.
In this paper, we have studied the problem of personalized task matching in mobile crowdsensing. We have proposed a personalized task recommendation framework that recommends tasks to users based on a fine-grained characterization of both the users’ preference and reliability. We have proposed methods to measure each user’s preference and reliability for different tasks, respectively. In particular, the proposed user reliability profiling algorithm originates from the truth discovery problem, but surpasses existing truth discovery algorithms in three ways, i.e., by proposing a fine-grained multi-dimensional reliability profiling model, by exploiting the information of failed tasks, and by incorporating a small number of ground truths to improve the estimation accuracy. Furthermore, we have proposed a matrix factorization method to address a critical limitation of existing truth discovery algorithms in estimating the users’ reliability for uninvolved tasks. Both a real-world experiment and a large-scale simulation have been conducted to evaluate our proposed methods. The evaluation results have demonstrated the good performance of our methods.
-  R. K. Ganti, F. Ye, and H. Lei, “Mobile crowdsensing: current state and future challenges,” IEEE Communications Magazine, vol. 49, no. 11, 2011.
-  D. Yang, G. Xue, X. Fang, and J. Tang, “Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing,” in Proceedings of the 18th annual international conference on Mobile computing and networking (MobiCom). ACM, 2012, pp. 173–184.
-  D. Zhao, X.-Y. Li, and H. Ma, “How to crowdsource tasks truthfully without sacrificing utility: Online incentive mechanisms with budget constraint.” in 2014 IEEE International Conference on Computer Communications (INFOCOM), vol. 14, 2014, pp. 1213–1221.
-  Q. Zhao, Y. Zhu, H. Zhu, J. Cao, G. Xue, and B. Li, “Fair energy-efficient sensing task allocation in participatory sensing with smartphones,” in 2014 IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2014, pp. 1366–1374.
-  M. Karaliopoulos, O. Telelis, and I. Koutsopoulos, “User recruitment for mobile crowdsensing over opportunistic networks,” in 2015 IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2015, pp. 2254–2262.
-  H. Zhang, B. Liu, H. Susanto, G. Xue, and T. Sun, “Incentive mechanism for proximity-based mobile crowd service systems,” in 2016 IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2016, pp. 1–9.
-  H. Jin, L. Su, D. Chen, K. Nahrstedt, and J. Xu, “Quality of information aware incentive mechanisms for mobile crowd sensing systems,” in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). ACM, 2015, pp. 167–176.
-  M. Karaliopoulos, I. Koutsopoulos, and M. Titsias, “First learn then earn: Optimizing mobile crowdsensing campaigns through data-driven user profiling,” in Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). ACM, 2016, pp. 271–280.
-  Waze. [Online]. Available: https://www.waze.com/
-  Field agent. [Online]. Available: http://www.fieldagent.net/
-  Gigwalk. [Online]. Available: http://www.gigwalk.com/
-  G. Adomavicius and A. Tuzhilin, “Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions,” IEEE transactions on knowledge and data engineering, vol. 17, no. 6, pp. 734–749, 2005.
-  K. L. Huang, S. S. Kanhere, and W. Hu, “Are you contributing trustworthy data?: the case for a reputation system in participatory sensing,” in Proceedings of the 13th ACM international conference on Modeling, analysis, and simulation of wireless and mobile systems (MSWiM). ACM, 2010, pp. 14–22.
-  J. Wang, J. Tang, D. Yang, E. Wang, and G. Xue, “Quality-aware and fine-grained incentive mechanisms for mobile crowdsensing,” in 2016 IEEE 36th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2016, pp. 354–363.
-  X. Yin, J. Han, and P. S. Yu, “Truth discovery with multiple conflicting information providers on the web,” IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 6, pp. 796–808, 2008.
-  Y. Li, J. Gao, C. Meng, Q. Li, L. Su, B. Zhao, W. Fan, and J. Han, “A survey on truth discovery,” ACM SIGKDD Explorations Newsletter, vol. 17, no. 2, pp. 1–16, 2016.
-  Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” Computer, vol. 42, no. 8, 2009.
-  M.-C. Yuen, I. King, and K.-S. Leung, “Taskrec: A task recommendation framework in crowdsourcing systems,” Neural Processing Letters, vol. 41, no. 2, pp. 223–238, 2015.
-  J. Bao, Y. Zheng, and M. F. Mokbel, “Location-based and preference-aware recommendation using sparse geo-social networking data,” in Proceedings of the 20th international conference on advances in geographic information systems. ACM, 2012, pp. 199–208.
-  S. Carroll and M. Swain, “Explicit and implicit negative feedback,” Studies in second language acquisition, vol. 15, no. 3, pp. 357–386, 1993.
-  S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, “BPR: Bayesian personalized ranking from implicit feedback,” in Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence (UAI). AUAI Press, 2009, pp. 452–461.
-  Q. Li, Y. Li, J. Gao, L. Su, B. Zhao, M. Demirbas, W. Fan, and J. Han, “A confidence-aware approach for truth discovery on long-tail data,” Proceedings of the VLDB Endowment, vol. 8, no. 4, pp. 425–436, 2014.
-  Q. Li, Y. Li, J. Gao, B. Zhao, W. Fan, and J. Han, “Resolving conflicts in heterogeneous data by truth discovery and source reliability estimation,” in Proceedings of the 2014 ACM SIGMOD international conference on Management of data. ACM, 2014, pp. 1187–1198.
-  L. Su, Q. Li, S. Hu, S. Wang, J. Gao, H. Liu, T. F. Abdelzaher, J. Han, X. Liu, Y. Gao et al., “Generalized decision aggregation in distributed sensing systems,” in 2014 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2014, pp. 1–10.
-  R. W. Ouyang, M. Srivastava, A. Toniolo, and T. J. Norman, “Truth discovery in crowdsourced detection of spatial events,” in Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management (CIKM). ACM, 2014.
-  C. Meng, W. Jiang, Y. Li, J. Gao, L. Su, H. Ding, and Y. Cheng, “Truth discovery on crowd sensing of correlated entities,” in Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems (SenSys). ACM, 2015, pp. 169–182.
-  S. Yang, F. Wu, S. Tang, X. Gao, B. Yang, and G. Chen, “On designing data quality-aware truth estimation and surplus sharing method for mobile crowdsensing,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 4, pp. 832–847, 2017.
-  G. Forman, “An extensive empirical study of feature selection metrics for text classification,” Journal of machine learning research, vol. 3, no. Mar, pp. 1289–1305, 2003.
-  D. P. Bertsekas, Nonlinear Programming. Athena Scientific, 1999.
-  H. Ding, J. Gao, and J. Xu, “Finding global optimum for truth discovery: Entropy based geometric variance,” in LIPIcs-Leibniz International Proceedings in Informatics, vol. 51. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2016.
-  A. M. Rashid, I. Albert, D. Cosley, S. K. Lam, S. M. McNee, J. A. Konstan, and J. Riedl, “Getting to know you: learning new user preferences in recommender systems,” in Proceedings of the 7th international conference on Intelligent user interfaces. ACM, 2002, pp. 127–134.
-  K. Yu, A. Schwaighofer, V. Tresp, X. Xu, and H.-P. Kriegel, “Probabilistic memory-based collaborative filtering,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 1, pp. 56–69, 2004.
-  G. Adomavicius and A. Tuzhilin, “Context-aware recommender systems,” in Recommender systems handbook. Springer, 2011, pp. 217–253.
-  S. Liu, Z. Zheng, F. Wu, S. Tang, and G. Chen, “Context-aware data quality estimation in mobile crowdsensing,” in 2017 IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2017, pp. 1–9.
-  Z. He, J. Cao, and X. Liu, “High quality participant recruitment in vehicle-based crowdsourcing using predictable mobility,” in 2015 IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2015, pp. 2542–2550.
-  D. Peng, F. Wu, and G. Chen, “Pay as how well you do: A quality based incentive mechanism for crowdsensing,” in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). ACM, 2015, pp. 177–186.
-  K. Han, H. Huang, and J. Luo, “Posted pricing for robust crowdsensing,” in Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). ACM, 2016, pp. 261–270.
-  M. Mun, S. Reddy, K. Shilton, N. Yau, J. Burke, D. Estrin, M. Hansen, E. Howard, R. West, and P. Boda, “Peir, the personal environmental impact report, as a platform for participatory sensing systems research,” in Proceedings of the 7th international conference on Mobile systems, applications, and services (MobiSys). ACM, 2009, pp. 55–68.
-  H. Lu, W. Pan, N. D. Lane, T. Choudhury, and A. T. Campbell, “Soundsense: scalable sound sensing for people-centric applications on mobile phones,” in Proceedings of the 7th international conference on Mobile systems, applications, and services (MobiSys). ACM, 2009, pp. 165–178.
-  Y. Gao, W. Dong, K. Guo, X. Liu, Y. Chen, X. Liu, J. Bu, and C. Chen, “Mosaic: A low-cost mobile sensing system for urban air quality monitoring.” in 2016 IEEE International Conference on Computer Communications (INFOCOM), 2016, pp. 1–9.
-  M. Azizyan, I. Constandache, and R. Roy Choudhury, “Surroundsense: mobile phone localization via ambience fingerprinting,” in Proceedings of the 15th annual international conference on Mobile computing and networking (MobiCom). ACM, 2009, pp. 261–272.
-  A. Rai, K. K. Chintalapudi, V. N. Padmanabhan, and R. Sen, “Zee: Zero-effort crowdsourcing for indoor localization,” in Proceedings of the 18th annual international conference on Mobile computing and networking (MobiCom). ACM, 2012, pp. 293–304.
-  M. Alzantot and M. Youssef, “Crowdinside: automatic construction of indoor floorplans,” in Proceedings of the 20th International Conference on Advances in Geographic Information Systems. ACM, 2012, pp. 99–108.
-  R. Gao, M. Zhao, T. Ye, F. Ye, Y. Wang, K. Bian, T. Wang, and X. Li, “Jigsaw: Indoor floor plan reconstruction via mobile crowdsensing,” in Proceedings of the 20th annual international conference on Mobile computing and networking (MobiCom). ACM, 2014, pp. 249–260.
-  P. Zhou, Y. Zheng, and M. Li, “How long to wait?: predicting bus arrival time with mobile phone based participatory sensing,” in Proceedings of the 10th international conference on Mobile systems, applications, and services (MobiSys). ACM, 2012, pp. 379–392.
-  Y. Shu, K. G. Shin, T. He, and J. Chen, “Last-mile navigation using smartphones,” in Proceedings of the 21st Annual International Conference on Mobile Computing and Networking (MobiCom). ACM, 2015, pp. 512–524.
-  Y. Wang, W. Hu, Y. Wu, and G. Cao, “Smartphoto: a resource-aware crowdsourcing approach for image sensing with smartphones,” in Proceedings of the 15th ACM international symposium on Mobile ad hoc networking and computing (MobiHoc). ACM, 2014, pp. 113–122.
-  S. He, D.-H. Shin, J. Zhang, and J. Chen, “Toward optimal allocation of location dependent tasks in crowdsensing,” in 2014 IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2014, pp. 745–753.
-  M. H. Cheung, R. Southwell, F. Hou, and J. Huang, “Distributed time-sensitive task selection in mobile crowdsensing,” in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). ACM, 2015, pp. 157–166.
-  S. Yang, K. Han, Z. Zheng, S. Tang, and F. Wu, “Towards personalized task matching in mobile crowdsensing via fine-grained user profiling,” in 2018 IEEE International Conference on Computer Communications (INFOCOM), 2018.
-  D. Wang, L. Kaplan, H. Le, and T. Abdelzaher, “On truth discovery in social sensing: A maximum likelihood estimation approach,” in Proceedings of the 11th international conference on Information Processing in Sensor Networks (IPSN). ACM, 2012, pp. 233–244.
-  D. Wang, T. Abdelzaher, L. Kaplan, and C. C. Aggarwal, “Recursive fact-finding: A streaming approach to truth estimation in crowdsourcing applications,” in 2013 IEEE 33rd International Conference on Distributed Computing Systems (ICDCS). IEEE, 2013, pp. 530–539.
-  S. Wang, D. Wang, L. Su, L. Kaplan, and T. F. Abdelzaher, “Towards cyber-physical systems in social spaces: The data reliability challenge,” in 2014 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2014, pp. 74–85.
-  S. Wang, L. Su, S. Li, S. Hu, T. Amin, H. Wang, S. Yao, L. Kaplan, and T. Abdelzaher, “Scalable social sensing of interdependent phenomena,” in Proceedings of the 14th International Conference on Information Processing in Sensor Networks (IPSN). ACM, 2015, pp. 202–213.
-  X. Yin and W. Tan, “Semi-supervised truth discovery,” in Proceedings of the 20th international conference on World wide web (WWW). ACM, 2011, pp. 217–226.
-  D. W. Oard, J. Kim et al., “Implicit feedback for recommender systems,” in Proceedings of the AAAI workshop on recommender systems, vol. 83. Wollongong, 1998.
-  D. H. Lee and P. Brusilovsky, “Reinforcing recommendation using implicit negative feedback,” in International conference on user modeling, adaptation, and personalization. Springer, 2009, pp. 422–427.
-  J. Masthoff, “Group recommender systems: Combining individual models,” in Recommender systems handbook. Springer, 2011, pp. 677–702.