Task Recommendation in Crowdsourcing Based on Learning Preferences and Reliabilities

07/27/2018
by Qiyu Kang, et al.

Workers participating in a crowdsourcing platform can have a wide range of abilities and interests. An important problem in crowdsourcing is the task recommendation problem, in which tasks that best match a particular worker's preferences and reliabilities are recommended to that worker. A task recommendation scheme that assigns each task to a worker who is both more likely to accept it and more likely to complete it reliably yields better performance for the task requester. Without prior information about a worker, his preferences and reliabilities need to be learned over time. In this paper, we propose a multi-armed bandit (MAB) framework to learn a worker's preferences and reliabilities for different categories of tasks. However, unlike in the classical MAB problem, the reward from the worker's completion of a task is unobservable. We therefore include gold tasks (i.e., tasks whose solutions are known a priori and which do not produce any rewards) in our task recommendation procedure. Our model can be viewed as a new variant of the MAB problem in which the random rewards can be observed only at those time steps where gold tasks are used, and the accuracy of estimating the expected reward of recommending a task to a worker depends on the number of gold tasks used. We show that the optimal regret is O(√n), where n is the number of tasks recommended to the worker. We develop three task recommendation strategies to determine the number of gold tasks for different task categories, and show that they are order optimal. Simulations verify the efficiency of our approaches.
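To make the setting concrete, the sketch below illustrates the general idea of a bandit over task categories in which a worker's reliability can be estimated only from gold tasks, while preferences are estimated from acceptance decisions. The class name, the epsilon-greedy-style exploration schedule, and the worker model are hypothetical illustrations; they are not the paper's three strategies or its O(√n)-regret analysis.

```python
import math
import random

class GoldTaskBandit:
    """Illustrative sketch of task recommendation with gold-task probing.

    Regular tasks yield no observable reward, so reliability per category is
    estimated only from gold tasks (whose correct answers are known a priori).
    """

    def __init__(self, categories):
        self.categories = list(categories)
        self.gold_counts = {c: 0 for c in self.categories}  # gold tasks issued
        self.correct = {c: 0 for c in self.categories}       # gold tasks answered correctly
        self.offered = {c: 0 for c in self.categories}       # regular tasks offered
        self.accepted = {c: 0 for c in self.categories}      # regular tasks accepted
        self.t = 0

    def reliability(self, c):
        # Estimated probability the worker solves a category-c task correctly,
        # learned from gold tasks only.
        return self.correct[c] / self.gold_counts[c] if self.gold_counts[c] else 0.5

    def preference(self, c):
        # Estimated probability the worker accepts a category-c task.
        return self.accepted[c] / self.offered[c] if self.offered[c] else 0.5

    def recommend(self):
        # With a decaying probability, issue a gold task in the least-probed
        # category to refine its reliability estimate; otherwise recommend a
        # regular task from the category with the best estimated value.
        self.t += 1
        explore_prob = min(1.0, 1.0 / math.sqrt(self.t))
        if random.random() < explore_prob:
            category = min(self.categories, key=lambda c: self.gold_counts[c])
            return category, "gold"
        category = max(self.categories,
                       key=lambda c: self.preference(c) * self.reliability(c))
        return category, "regular"

    def update(self, category, task_type, accepted=None, correct=None):
        # Gold tasks reveal correctness; regular tasks reveal only acceptance.
        if task_type == "gold":
            self.gold_counts[category] += 1
            if correct:
                self.correct[category] += 1
        else:
            self.offered[category] += 1
            if accepted:
                self.accepted[category] += 1
```

In this toy version, the fraction of gold tasks decays roughly as 1/√t, reflecting the trade-off the abstract describes: gold tasks improve the accuracy of the reward estimates but produce no reward themselves.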


