Task Recommendation in Crowdsourcing Based on Learning Preferences and Reliabilities

07/27/2018 ∙ by Qiyu Kang, et al. ∙ Nanyang Technological University

Workers participating in a crowdsourcing platform can have a wide range of abilities and interests. An important problem in crowdsourcing is the task recommendation problem, in which tasks that best match a particular worker's preferences and reliabilities are recommended to that worker. A task recommendation scheme that assigns a worker tasks he is more likely to accept and to complete reliably results in better performance for the task requester. Without prior information about a worker, his preferences and reliabilities need to be learned over time. In this paper, we propose a multi-armed bandit (MAB) framework to learn a worker's preferences and his reliabilities for different categories of tasks. However, unlike the classical MAB problem, the reward from the worker's completion of a task is unobservable. We therefore include the use of gold tasks (i.e., tasks whose solutions are known a priori and which do not produce any rewards) in our task recommendation procedure. Our model can be viewed as a new variant of the MAB problem, in which the random rewards can only be observed at those time steps where gold tasks are used, and the accuracy of estimating the expected reward of recommending a task to a worker depends on the number of gold tasks used. We show that the optimal regret is O(√n), where n is the number of tasks recommended to the worker. We develop three task recommendation strategies to determine the number of gold tasks for different task categories, and show that they are order optimal. Simulations verify the efficiency of our approaches.


1 Introduction

In a typical crowdsourcing platform, a worker may be recommended a wide variety of tasks to choose from [1, 2, 3]. For example, on Amazon Mechanical Turk (MTurk) and CrowdFlower, tasks can include labeling the content of an image, determining whether sentences extracted from movie reviews are positive, or determining whether a website has adult content. These different tasks require different sets of skills from a worker. In the image labeling task, a worker who is good at visual recognition will perform reliably, whereas identifying whether a movie review is positive requires a worker with language skills and knowledge of the nuances of the particular language the review is written in. Workers may also have different interests, and may choose not to accept tasks that they are qualified for. Therefore, the crowdsourcing platform needs to find a way to best match the available tasks to the most suitable workers in order to improve the likelihood of obtaining high-quality solutions to the tasks.

In this paper, we assume that tasks are organized into several categories, where tasks belonging to the same category require similar domain knowledge to complete. Within the task assignment context, a number of studies have focused on offline strategies, which assign tasks to workers without any adaptation or feedback during the workers' task completion process. For example, in [4, 5], the authors proposed a task assignment scheme using a bipartite graph to model the affinity of workers for different binary tasks, and an iterative algorithm based on belief propagation to infer the final decision from the workers' responses. Reference [6] studied a task allocation model that provides distributional fairness among workers while maximizing the task allocation ratio. A privacy-preserving task recommendation scheme was proposed in [7], which considers only workers' preferences without taking workers' reliabilities into account. The paper [8] considered heterogeneous tasks with different difficulties using the generalized Dawid-Skene model [9], while [10] utilizes the expectation maximization approach to estimate the tasks' solutions and the workers' reliabilities. The aforementioned approaches are all offline, i.e., the task assignments are made without taking into account additional information about the workers' performances and behaviors that can be gleaned while the workers complete tasks sequentially over time.

Online learning for adaptive task assignment has also been considered in recent years. For instance, in [11, 12], since workers may be unreliable, the authors proposed to perform sequential questioning in which the questions posed to the workers are designed based on previous questions and answers. Reference [13] studied sequential user selection for news dissemination using an online solution aided by Bayesian updates. The paper [14] investigated the problem of label inference for heterogeneous classification tasks by applying online primal-dual techniques. In [15], the authors studied sequential task assignment with budgets that specify how many times the requester would like each task to be completed. Expert-crowdsourcing, in which workers are experts with unknown reliabilities, was studied in [16], which also assumed that the requester has a budget constraint and each worker has a maximum task limit. The authors proposed an algorithm that uses an initial exploration phase to uniformly sample the performance of a wide range of workers, and in the subsequent exploitation phase solves a bounded knapsack problem [17]. For the mobile micro-task allocation problem in spatial crowdsourcing, where the arrival orders of tasks and workers are dynamic, [18] proposed an online two-phase bipartite graph matching algorithm with good average performance. In [19], the authors proposed an adaptive crowdsourcing framework, which continuously estimates the reliability of a worker by evaluating her performance on the completed tasks, and predicts which tasks the worker is well acquainted with. All the above works assume that the crowdsourcing platform is able to quickly and accurately evaluate the quality of every completed task. However, as is often the case in practice, the crowdsourcing platform cannot directly and immediately evaluate the quality of a completed task [20]. For example, in tasks where workers are asked to determine whether an image contains a particular object, the crowdsourcing platform has no way to determine whether a worker has completed the task successfully. In [21], the authors proposed an online algorithm in which workers' reliabilities are estimated by occasionally assigning each worker the same tasks repeatedly.

The above-mentioned works do not take workers' interests or preferences for certain tasks into consideration when performing task assignment. In most existing crowdsourcing platforms like MTurk, workers may choose to accept or reject a recommended task. As the number of tasks uploaded by requesters each day can be large [22], the crowdsourcing platform should aim to recommend to each worker tasks that he will likely accept, in order to optimize the platform's productivity. Including the workers' preferences in task assignment also helps workers find their preferred tasks faster, as they have fewer recommendations to consider. At the same time, estimating the reliability of a worker in each task category helps the requester collect high-quality results more quickly [23]. The paper [24] proposed to perform task assignment using workers' preferences based on many-to-one matching. However, it assumes that workers' preferences and reliabilities are known a priori, which is unrealistic in a practical system. In [22], the authors proposed two recommendation strategies that compute the most suitable tasks for a given worker and the best workers for a given task. However, they assume that the workers can be evaluated immediately. The references [25, 23] employed matrix factorization techniques to capture workers' preferences and reliabilities; however, the evaluated scores in the matrix are also assumed to be immediately available.

In this paper, we propose an online learning approach to estimate a worker's preferences and reliabilities. We assume that tasks can be categorized, and that at each time step the worker can choose to accept or reject a recommended task. If he chooses to accept the task, he completes it within the current time step. At each time step, our goal is to recommend a task from one of a given number of task categories so that it best matches the worker's preferences and reliabilities according to a reward function. As the worker's preferences and reliabilities for the different task categories are unknown a priori, we adopt a multi-armed bandit (MAB) formulation [26, 27] in which these are learnt through exploration, while the cumulative reward is optimized through exploitation of the empirically best match. However, unlike the classical MAB problem in which the reward at each time step can be observed, the reward is unobservable in our formulation because the crowdsourcing platform does not know the true solution of each task. To overcome this, we include gold tasks [28], whose solutions are known a priori. In [29, 30], gold tasks are used for training and for estimating workers' reliabilities. However, gold tasks do not generate any rewards: including more gold tasks leads to better estimates of the worker's preferences and reliabilities for the different task categories, but also results in less reward. In a crowdsourcing platform, a plausible compensation scheme is to reward a worker for completed tasks according to his reliability [31, 32, 33]. Furthermore, to accurately fuse the results from multiple workers, knowledge of the workers' reliabilities is required [14, 34]. Therefore, the platform aims to estimate the worker's reliability as accurately as possible. To reflect this, we include both the reward and the estimation accuracy in computing the regret. We show that the optimal regret under our formulation is of order √n, where n is the number of time steps. In addition, we propose three strategies that achieve this order-optimal regret.

Our MAB formulation is related to, but different from, the risk-averse MAB problem considered in [35, 36, 37], which uses mean-variance [38, 39] as its risk objective. The differences between our MAB formulation and the risk-averse MAB are: (i) The worker in our problem may choose not to accept a task at each time step, leading to no reward at that time step, whereas in the risk-averse MAB, such an option is not available. (ii) The variance of the reward is used in the formulation of the risk-averse MAB, while we use the variance of the reward estimator in formulating our regret. This is because, as explained above, the reward at each time step is unobservable in a crowdsourcing platform, and our regret depends on the accuracy of estimating the expected reward using gold tasks, rather than on the reward variance.

The rest of this paper is organized as follows. In Section 2, we present our system model and assumptions. In Section 3, we derive the optimal order of the regret, and in Section 4 we introduce three strategies that are order optimal. In Section 5, we present simulations to compare the performance of our approaches. Section 6 concludes the paper.

Notations: We write Bern(p) for the distribution of a Bernoulli random variable with success probability p, and use =_d to denote equality in distribution. We use Z+ to denote the set of positive integers, E^c to denote the complement of an event E, and 1_A for the indicator function, which equals 1 if and only if the statement A holds. For non-negative functions f(n) and g(n), we write f(n) = O(g(n)) if lim sup f(n)/g(n) < ∞ as n → ∞, f(n) = Ω(g(n)) if g(n) = O(f(n)), and f(n) = Θ(g(n)) if both f(n) = O(g(n)) and f(n) = Ω(g(n)).

2 System Model

We consider a crowdsourcing platform where each task belongs to one of several categories. At each time step, the platform recommends to the worker a task from some category. The worker has different reliabilities and preferences for different categories: his reliability for a category is the probability that he completes a task from that category correctly, and his preference for a category is the probability that he accepts a task from that category. For each category, we associate with the i-th task recommended from that category an acceptance indicator, which equals 1 if the worker accepts the task and 0 otherwise, and a completion indicator, which equals 1 if the worker completes the task correctly and 0 otherwise. Within each category, the acceptance indicators are independent and identically distributed across tasks, as are the completion indicators, and we assume that the acceptance and completion indicators are independent of each other.
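
As a concrete illustration of this worker model, the following minimal Python sketch (names and numbers are illustrative assumptions, not the paper's notation) simulates the acceptance and completion indicators from per-category preference and reliability probabilities:

```python
# Minimal sketch of the worker model: acceptance and completion are independent
# Bernoulli draws whose success probabilities are the worker's preference and
# reliability for the task's category.  Names and values are illustrative only.
import random

class SimulatedWorker:
    def __init__(self, preferences, reliabilities, seed=0):
        assert len(preferences) == len(reliabilities)
        self.preferences = preferences      # P(accept a task from category k)
        self.reliabilities = reliabilities  # P(complete a category-k task correctly)
        self.rng = random.Random(seed)

    def respond(self, k):
        """Return (accepted, completed_correctly) for one task from category k."""
        accepted = self.rng.random() < self.preferences[k]
        correct = accepted and (self.rng.random() < self.reliabilities[k])
        return accepted, correct

# Example: a worker who prefers category 0 but is most reliable on category 2.
worker = SimulatedWorker(preferences=[0.9, 0.5, 0.2], reliabilities=[0.6, 0.8, 0.95])
print([worker.respond(k) for k in range(3)])
```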

We model the task recommendation problem as a MAB problem. However, whether the worker has completed a task correctly is typically unobservable, since the system does not know the task's solution a priori. We therefore include the use of gold tasks in our recommendation. A gold task is a task whose solution is known a priori to the system. A reward is obtained only if the worker accepts and correctly completes a non-gold task.

Let the product of the acceptance and completion indicators indicate whether the worker both accepts and correctly completes a task; within each category, this product is a Bernoulli random variable. We denote the task category that maximizes the expected value of this product as the optimal category. For each time step, let an indicator denote the event that the i-th task recommended to the worker from a category is a gold task, and let

(1)

be the number of completed gold tasks from that category. If the worker completes a non-gold task, he is rewarded in proportion to his reliability for that category, i.e., the probability that he has completed the task correctly. However, since this reliability is not known a priori, the crowdsourcing platform estimates it using the empirical mean over the completed gold tasks:

(2)

Due to the uncertainty in the estimate (2), we assume that the reward for the i-th non-gold task from a category, if the worker completes it, is given by

(3)

where a predefined weight quantifies the importance of the accuracy of the estimator (2) to the crowdsourcing platform, and the variance of a single task reward from the category enters through the variance of (2) conditioned on the number of completed gold tasks.
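
The body of (3) did not survive in this copy. Purely as a hedged sketch of the form described in the surrounding text (the symbols θ, θ̂, m, σ², and β below are assumed notation, not necessarily the paper's), the reward of a completed non-gold task could look like the following, where the worker's reliability is penalized by the weighted conditional variance of its estimate:

```latex
% Illustrative sketch only; all symbols below are assumed notation.
% theta    : the worker's reliability for the category,
% hat{theta}: its empirical estimate from (2),
% m        : the number of completed gold tasks from the category,
% sigma^2  : the variance of a single task reward from the category,
% beta     : the predefined weight on estimation accuracy.
\[
  r \;=\; \theta \;-\; \beta \,\operatorname{Var}\!\left(\hat{\theta} \,\middle|\, m\right),
  \qquad
  \operatorname{Var}\!\left(\hat{\theta} \,\middle|\, m\right) \;=\; \frac{\sigma^{2}}{m}.
\]
```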

For each category and each time step, let a counter denote the number of tasks recommended from that category up to that time. We define the cumulative reward function at a given time as:

(4)

since the random quantities involved are independent. Note that the weight in (3) can also be interpreted as the Lagrange multiplier of a constrained optimization problem in which the reward is maximized subject to a constraint on the uncertainty of the estimator (2).

To avoid dividing by zero in (2), we assume that the worker completes one gold task from each category before the recommendation starts, which can be done through a calibration process in the crowdsourcing platform.

Maximizing the above reward function (4) is equivalent to minimizing the regret function, defined as:

(5)

For simplicity, we assume that every task category is non-empty at each time step. This is a reasonable assumption since the task pool in a crowdsourcing platform is updated constantly, and a single task may be assigned to more than one worker. In the following, we use the terms “task category” and “arm” interchangeably in our MAB formulation.

3 Optimal Regret Order

In this section, we first show that the optimal order of the regret function (5) is √n, where n is the number of time steps. We then propose three recommendation strategies, all of which achieve the order-optimal regret.

Theorem 1.

The optimal order of the regret function (5) is √n, where n is the number of time steps.

Proof:

Let

(6)

be the total number of gold tasks recommended till time n. Then, from (4), we have

(7)

From (7) and (5), when n is sufficiently large, we then have

(8)
(9)
(10)

where the inequality in (9) follows from a relation that holds with probability 1. The theorem is now proved. ∎

4 Order Optimal Strategies

In this section, we propose three strategies that achieve the optimal regret order of √n given in Theorem 1, and discuss the advantages of each.

4.1 Greedy Recommendation Strategy

In our greedy recommendation strategy (GR), we divide time into epochs. In each epoch, a single task category or arm is chosen, and all tasks in that epoch are drawn from that category. In the first epochs, each of which consists of a single time step, the worker completes one gold task from each of the categories. Each subsequent epoch consists of multiple time steps. In each of these epochs, we set the first task to be a gold task, while at all other time steps in the epoch, the tasks recommended are non-gold tasks from the chosen task category. The task category for each epoch is chosen based on an ε-greedy policy [27] as follows.

For each epoch, we use an exploration probability that decays inversely with the epoch index and is scaled by a fixed constant. At the beginning of each epoch following the initial calibration epochs, we compute

(11)

where the count in (11) is the number of epochs, among those completed so far, in which the corresponding category was chosen, and

(12)

is the empirical estimate computed from the gold tasks recommended from that category so far. Then, with probability equal to one minus the exploration probability, we let the category maximizing (11) be the task category chosen for the epoch, and with the remaining probability, we let the chosen category be drawn uniformly at random. The GR strategy is summarized in Algorithm 1.

0:  Input: the parameters of the ε-greedy policy.
1:  Recommend one gold task from each category.
2:  Initialize the epoch index and the per-arm counts and estimates.
3:  loop
4:     Find the arm that maximizes (11).
5:     Set the exploration probability for the current epoch. With probability equal to one minus this value, choose the maximizing arm; otherwise choose an arm uniformly at random.
6:     Recommend one gold task from the chosen arm at the first time step of the epoch. Update the corresponding counts and estimates.
7:     Recommend a non-gold task drawn from the chosen arm at each of the following time steps.
8:     Set the next epoch index.
9:  end loop
Algorithm 1 GR
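
The following Python sketch mirrors the structure of GR under assumptions made only for illustration: epoch lengths that grow linearly with the epoch index, an exploration probability of min(1, cK/ℓ), and a simple empirical per-arm index standing in for (11)-(12); none of these expressions are taken verbatim from the paper.

```python
# Illustrative sketch of the GR strategy.  The epoch length, the exploration
# schedule, and the per-arm score are assumptions of this sketch.
import random

def gr_recommend(respond, num_epochs, K, beta=1.0, c=5.0, seed=0):
    rng = random.Random(seed)
    golds = [0] * K     # gold tasks recommended per arm
    accepts = [0] * K   # accepted gold tasks per arm
    corrects = [0] * K  # correctly completed gold tasks per arm

    def issue_gold(k):
        golds[k] += 1
        a, x = respond(k)
        accepts[k] += a
        corrects[k] += x

    def score(k):
        # Assumed empirical index: estimated acceptance x reliability, penalised
        # by the estimator's variance with weight beta (stands in for (11)).
        p_hat = accepts[k] / golds[k]
        th_hat = corrects[k] / max(1, accepts[k])
        return p_hat * th_hat - beta * th_hat * (1.0 - th_hat) / golds[k]

    for k in range(K):                   # first K epochs: one gold task per category
        issue_gold(k)

    history = []
    for ell in range(K + 1, num_epochs + 1):
        eps = min(1.0, c * K / ell)      # assumed epsilon-greedy schedule
        arm = max(range(K), key=score) if rng.random() > eps else rng.randrange(K)
        issue_gold(arm)                  # first time step of the epoch: gold task
        for _ in range(ell - 1):         # assumed epoch length: ell time steps
            history.append((arm, respond(arm)))   # non-gold recommendations
    return history

# Example: run GR against the SimulatedWorker sketch from Section 2.
# history = gr_recommend(worker.respond, num_epochs=50, K=3)
```
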
Theorem 2.

Suppose that the exploration constant satisfies the conditions of Theorem 3 in [27]. Then, GR has regret of order √n, where n is the number of time steps.

Proof:

For each arm and each epoch, we denote the probability that the arm is chosen in that epoch. From Theorem 3 in [27], under suitable conditions on the exploration parameters, we have

(13)

so that

(14)

Let the number of epochs completed till time step n be the largest integer such that the total length of those epochs does not exceed n, and count the number of gold tasks completed from each category in these epochs. Since a single gold task is recommended in each epoch, this count remains constant over the time steps within an epoch. From (4), we obtain

(15)

Recall also that is the number of epochs within the first epochs in which arm was chosen. Since GR assumes that the worker completes a gold task from arm before the recommendation starts, we have

and

(16)

From (14), (15) and (4.1), we obtain

(17)

We next prove the following lemma.

Lemma 1.

.

We have

(18)

Therefore, to show the lemma, it suffices to show that the tail probability is of order .

Consider . Let be the number of times for , and be the number of times arm was randomly chosen for . From the union bound, we have

(19)

From Hoeffding’s inequality, we obtain

(20)

where the second inequality follows because the probability of randomly choosing arm at each epoch is not more than and
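
For reference, the standard form of Hoeffding's inequality for n independent [0, 1]-valued random variables with common mean μ (the inequality invoked here and again in (23); its exact instantiation in (20) is not recoverable from this copy) is:

```latex
\[
  \Pr\!\left( \left| \frac{1}{n} \sum_{t=1}^{n} X_t - \mu \right| \ge \delta \right)
  \;\le\; 2 \exp\!\left( -2 n \delta^{2} \right),
  \qquad \delta > 0 .
\]
```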

In order to bound the second term in (19), we use a similar argument as that in [40]. Let and . Define the following events:

Under the event , we have for all ,

which implies that

(21)

for all . Since , we have because otherwise there exists such and , which contradicts (21). Therefore, , and

(22)

where the last inequality is the union bound. We next bound each of the terms on the right hand side of (22) separately. For the first term, we have

(23)

where the second inequality follows from Hoeffding's inequality. Similarly, for the second term on the right hand side of (22), we obtain

(24)

since and .

From the proof of Theorem 3 in [27], we have and

(25)

where the third inequality follows from Bernstein’s inequality (see (13) of [27] for a detailed proof).

Applying (25), (24) and (23) to the right hand side of (22), we obtain the required bound on the second term in (19). From (19) and (20), we then have

which yields

(26)

and Lemma 1 is proved.

From Lemmas 1 and 4.1, we obtain

since . The proof of Theorem 2 is now complete. ∎

4.2 Uniform-Pulling Recommendation Strategy

0:  Input: the parameters of the strategy.
0:  Recommend one gold task from each arm. Initialize the epoch index.
1:  loop
2:     At the first time steps of the current epoch, recommend gold tasks, one from each arm.
3:     Choose the arm that maximizes the empirical index.
4:     Recommend a non-gold task drawn from the chosen arm at each of the following time steps.
5:     Set the next epoch index.
6:  end loop
Algorithm 2 UR

In this subsection, we propose another strategy, which we call the Uniform-Pulling Recommendation (UR) strategy. We again divide time into epochs. At the beginning of each epoch, we first recommend gold tasks, one from each of the task categories. Next, we choose the arm that maximizes the empirical index, which is now computed from gold tasks drawn uniformly across all arms. At each of the subsequent time steps of the epoch, we recommend a non-gold task from the chosen arm. The UR strategy is summarized in Algorithm 2.
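
A minimal Python sketch of UR, under the same illustrative assumptions as the GR sketch above (growing epoch lengths and an assumed empirical index), is given below.

```python
# Illustrative sketch of the UR strategy: every epoch starts with one gold task
# from each arm, then the empirically best arm receives the non-gold tasks.
# Epoch lengths and the per-arm score are assumptions of this sketch.
def ur_recommend(respond, num_epochs, K, beta=1.0):
    golds, accepts, corrects = [0] * K, [0] * K, [0] * K

    def score(k):
        p_hat = accepts[k] / golds[k]
        th_hat = corrects[k] / max(1, accepts[k])
        return p_hat * th_hat - beta * th_hat * (1.0 - th_hat) / golds[k]

    history = []
    for ell in range(1, num_epochs + 1):
        for k in range(K):               # uniform pulling: one gold task per arm
            golds[k] += 1
            a, x = respond(k)
            accepts[k] += a
            corrects[k] += x
        best = max(range(K), key=score)
        for _ in range(ell):             # assumed: ell non-gold tasks per epoch
            history.append((best, respond(best)))
    return history
```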

Theorem 3.

UR has regret of order √n, where n is the number of time steps.

Proof:

The proof uses a similar argument and the same notation as the proof of Theorem 2. For each arm, we have

(27)

where the penultimate inequality follows from Hoeffding’s inequality.

We choose the number of completed epochs to be the largest integer such that the total length of those epochs does not exceed n. Then, from Section 4.1 and (15), we have

where the second inequality follows from (27). Therefore, the regret is of order √n, and the proof of Theorem 3 is complete. ∎

4.3 ε-First Recommendation Strategy

0:  Input: the total budget and the exploration fraction ε.
1:  Recommend gold tasks from each of the arms.
2:  Choose the arm that maximizes the empirical index.
3:  Recommend a non-gold task drawn from the chosen arm at each of the following time steps.
Algorithm 3 ε-First

The ε-first strategy was designed for the budget-limited MAB problem [41], where the budget refers to the total number of time steps. In the ε-first strategy, the first ε fraction of the budget is used to explore the reward distributions of all arms, while the remaining fraction is spent on the empirically best arm found in the initial exploration phase. To apply the ε-first strategy to our task recommendation problem, we use the exploration phase to recommend gold tasks, pulled uniformly across the arms. The ε-first strategy is summarized in Algorithm 3.
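
A minimal Python sketch of the ε-first strategy with a known budget n follows; the size of the exploration phase (about √n gold tasks in total, split evenly across the arms) and the per-arm index are assumptions of this sketch, chosen to be consistent with the √n regret order.

```python
# Illustrative sketch of the epsilon-first strategy: all gold tasks are
# recommended up front (uniformly across arms), then the remaining budget is
# spent on the empirically best arm.  The exploration size is an assumption.
import math

def eps_first_recommend(respond, n, K, beta=1.0):
    golds, accepts, corrects = [0] * K, [0] * K, [0] * K
    per_arm = max(1, math.isqrt(n) // K)        # assumed exploration size per arm

    for k in range(K):                          # exploration: uniform gold pulls
        for _ in range(per_arm):
            golds[k] += 1
            a, x = respond(k)
            accepts[k] += a
            corrects[k] += x

    def score(k):
        p_hat = accepts[k] / golds[k]
        th_hat = corrects[k] / max(1, accepts[k])
        return p_hat * th_hat - beta * th_hat * (1.0 - th_hat) / golds[k]

    best = max(range(K), key=score)             # exploitation: best empirical arm
    return [(best, respond(best)) for _ in range(n - per_arm * K)]
```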

Theorem 4.

If the number of gold tasks recommended in the exploration phase is of order √n, then the ε-first strategy has regret of order √n, where n is the number of time steps.

Proof:

Similar to (27) in the proof of Theorem 3, an analogous concentration bound can be shown. From (4) and Section 4.1, we obtain

(28)

Therefore, the regret is of order √n, and the proof is complete. ∎

4.4 Discussions

The ε-first strategy assumes that the total number of tasks to be recommended to each worker is known beforehand, while the other two strategies do not need such an assumption. If both UR and the ε-first strategy recommend the same total number of gold tasks, then the ε-first strategy achieves a smaller regret than UR. This is because the ε-first strategy recommends all the gold tasks in the initial exploration phase, leading to a better estimate of the reward distributions before any non-gold tasks are recommended. However, in a practical crowdsourcing platform, knowing the total number of tasks for a worker may not be feasible, as a worker is not guaranteed to be active or to remain in the system. An alternative is to use a hybrid of UR and the ε-first strategy, where the initial exploration phase in each epoch of the UR strategy is set to be an ε fraction of the total number of time steps in that epoch.

On the other hand, UR can become more costly than GR when the number of arms is large. If UR and GR both recommend the same total number of gold tasks to the worker, GR tends to recommend more gold tasks from the best task category in the long run, leading to a smaller estimation variance of (2) for the best category and a smaller asymptotic regret than UR.

In both the GR and UR strategies, the choice of epoch lengths determines how frequently gold tasks are recommended to the worker. A more general choice of epoch length growth rate is possible. We now show that the growth rate used above is essentially the only choice that gives optimal order regret in GR and UR. Consider the number of gold tasks recommended till time step n. Since GR and UR recommend a fixed number of gold tasks in each epoch, this number is determined by the epoch that contains time step n. The number of time steps up to and including an epoch is