
Task Recommendation in Crowdsourcing Based on Learning Preferences and Reliabilities

by Qiyu Kang, et al.
Nanyang Technological University

Workers participating in a crowdsourcing platform can have a wide range of abilities and interests. An important problem in crowdsourcing is the task recommendation problem, in which tasks that best match a particular worker's preferences and reliabilities are recommended to that worker. A recommendation scheme that assigns a worker tasks he is likely both to accept and to complete reliably yields better performance for the task requester. Without prior information about a worker, his preferences and reliabilities need to be learned over time. In this paper, we propose a multi-armed bandit (MAB) framework to learn a worker's preferences and his reliabilities for different categories of tasks. Unlike in the classical MAB problem, however, the reward from the worker's completion of a task is unobservable. We therefore include gold tasks (i.e., tasks whose solutions are known a priori and which do not produce any rewards) in our task recommendation procedure. Our model can be viewed as a new variant of MAB in which the random rewards are observed only at those time steps where gold tasks are used, and the accuracy of estimating the expected reward of recommending a task to a worker depends on the number of gold tasks used. We show that the optimal regret is O(√n), where n is the number of tasks recommended to the worker. We develop three task recommendation strategies that determine the number of gold tasks for different task categories, and show that they are order optimal. Simulations verify the efficiency of our approaches.
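The setting described above can be illustrated with a minimal simulation. The sketch below is not the paper's algorithm; it is a simplified UCB-style loop, under the assumption that each task category has a fixed (unknown) acceptance probability and reliability, that the platform learns reliability only from a fixed fraction of accepted tasks served as gold tasks, and that the expected reward of a category is the product of acceptance probability and reliability. All function and parameter names (`recommend_tasks`, `gold_fraction`) are illustrative, not from the paper.

```python
import math
import random

def recommend_tasks(n_rounds, accept_prob, reliability, gold_fraction=0.2, seed=0):
    """Hypothetical sketch of bandit-based task recommendation with gold tasks.

    accept_prob[i]  -- unknown probability that the worker accepts a category-i task
    reliability[i]  -- unknown probability that the worker answers it correctly
    gold_fraction   -- fraction of accepted tasks served as gold tasks; reliability
                       feedback is observed only on these (rewards are otherwise hidden)
    Returns the number of tasks offered per category.
    """
    rng = random.Random(seed)
    k = len(accept_prob)
    offers, accepts = [0] * k, [0] * k   # acceptance statistics (always observed)
    golds, correct = [0] * k, [0] * k    # reliability statistics (gold tasks only)

    for t in range(1, n_rounds + 1):
        def ucb(i):
            # Force exploration until both estimates exist for category i.
            if offers[i] == 0 or golds[i] == 0:
                return float("inf")
            a_hat = accepts[i] / offers[i]        # estimated acceptance probability
            r_hat = correct[i] / golds[i]         # estimated reliability
            # Optimism bonuses shrink as both observation counts grow.
            bonus = (math.sqrt(2 * math.log(t) / offers[i])
                     + math.sqrt(2 * math.log(t) / golds[i]))
            return a_hat * r_hat + bonus

        i = max(range(k), key=ucb)
        offers[i] += 1
        if rng.random() < accept_prob[i]:
            accepts[i] += 1
            if rng.random() < gold_fraction:      # spend this task as a gold task
                golds[i] += 1
                if rng.random() < reliability[i]:
                    correct[i] += 1
    return offers
```

Running this with two categories, one clearly better on both acceptance and reliability, the loop concentrates most offers on the better category once enough gold-task feedback has accumulated; the `gold_fraction` parameter plays the role of the gold-task budget whose allocation the paper's strategies optimize.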



