Confident in the Crowd: Bayesian Inference to Improve Data Labelling in Crowdsourcing

05/28/2021
by Pierce Burke, et al.

With the increased interest in machine learning and big data problems, the need for large amounts of labelled data has also grown. However, it is often infeasible to have experts label all of this data, which leads many practitioners to crowdsourcing solutions. In this paper, we present new techniques to improve the quality of the labels while attempting to reduce their cost. The naive approach to assigning labels is to adopt a majority vote; however, in the context of data labelling this is not always ideal, as labellers are not equally reliable. One might instead give higher priority to certain labellers through some form of weighted vote based on past performance. This paper investigates more sophisticated methods, such as Bayesian inference, to measure both the performance of the labellers and the confidence of each label. The methods we propose follow an iterative improvement algorithm that attempts to use the fewest workers necessary to achieve the desired confidence in the inferred label. We test the proposed methods on simulated binary classification problems with simulated workers and questions. Our methods outperform standard voting methods in both cost and accuracy, and maintain higher reliability when there is disagreement within the crowd.
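The abstract's central idea is to combine per-worker reliability estimates with a posterior over the true label, and to stop querying workers once that posterior is confident enough. The sketch below is a minimal illustration of this kind of scheme for a single binary question, not the paper's actual algorithm; the Beta-posterior reliability estimates, the uniform label prior, and the conditional-independence assumption across workers are all assumptions made here for the example.

```python
# Minimal sketch of confidence-driven Bayesian label aggregation for one
# binary question. Assumptions (not taken from the paper): each worker has an
# accuracy with a Beta(alpha, beta) posterior learned from past answers, the
# true label has a uniform prior, and workers answer independently given the
# truth. Workers are queried one at a time until the posterior probability of
# either label exceeds a target confidence.

import random


def posterior_positive(votes, accuracies):
    """P(true label = 1 | votes), assuming conditionally independent workers."""
    like_pos, like_neg = 1.0, 1.0
    for vote, acc in zip(votes, accuracies):
        if vote == 1:
            like_pos *= acc          # worker is correct if the truth is 1
            like_neg *= (1.0 - acc)  # worker is wrong if the truth is 0
        else:
            like_pos *= (1.0 - acc)
            like_neg *= acc
    return like_pos / (like_pos + like_neg)  # uniform prior on the label


def infer_label(workers, target_conf=0.95, seed=0):
    """Query workers until the posterior confidence reaches target_conf."""
    rng = random.Random(seed)
    votes, accuracies = [], []
    p = 0.5
    for true_acc, (alpha, beta) in workers:
        # Posterior-mean accuracy from the worker's Beta(alpha, beta) record.
        accuracies.append(alpha / (alpha + beta))
        # Simulate the worker's answer; the question's true label is 1 here.
        votes.append(1 if rng.random() < true_acc else 0)
        p = posterior_positive(votes, accuracies)
        if p >= target_conf or p <= 1.0 - target_conf:
            break
    return (1 if p >= 0.5 else 0), p, len(votes)


# Example crowd: (true accuracy, Beta counts of past correct/incorrect answers).
crowd = [(0.9, (9, 1)), (0.6, (6, 4)), (0.85, (17, 3)), (0.55, (11, 9)), (0.7, (7, 3))]
label, confidence, used = infer_label(crowd)
print(f"label={label}, confidence={confidence:.3f}, workers used={used}")
```

In this toy setup a majority vote would always consume the full crowd, whereas the confidence threshold lets reliable early votes settle the label with fewer workers, which is the cost-versus-accuracy trade-off the abstract describes.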

