Certified Computation in Crowdsourcing

09/12/2017
by Themis Gouleakis, et al.

A wide range of learning tasks require human input to label massive datasets, and because of the sheer quantity of data, crowdsourcing has become crucial for this labeling. The collected data, however, are usually of low quality, as workers are not appropriately incentivized to put in effort. Significant research has focused on designing appropriate incentive schemes, but these do not capture the full range of tasks that appear in practice. Moreover, even if incentives are theoretically aligned, noise can still exist in the data because it is hard to model exactly how workers will behave. In this work, we provide a generic approach, based on verifying only a few worker reports, that guarantees high-quality learning outcomes for various optimization objectives. Our method identifies small sets of critical workers and verifies their reports. We show that many problems require only poly(1/ε) verifications to ensure that the output of the computation is within a factor of (1 ± ε) of the truth. For any given instance, we provide an instance-optimal solution that verifies the minimum possible number of workers to approximately certify correctness. If this certification step fails, a misreporting worker is identified. Removing such workers and repeating until success guarantees that the result is correct and depends only on the verified workers. Surprisingly, as we show, even more efficient methods are possible for several computation tasks. These methods always guarantee that the produced result is unaffected by misreporting workers, since any misreport that affects the output will be detected and verified.
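The certify-then-remove loop described in the abstract can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal illustration for one simple objective (the maximum of worker reports), where a single critical report determines the output, so verifying just that worker certifies correctness. The `verify` oracle and the worker-report structure here are assumptions made for the example.

```python
def certified_result(reports, verify):
    """Toy certify-then-remove loop for the max objective.

    reports: dict mapping worker id -> reported value
    verify:  oracle returning a worker's true value
             (expensive ground-truth check, called sparingly)
    Returns (certified output, number of verifications used).
    """
    active = dict(reports)
    verifications = 0
    while active:
        # For max, the output is determined by one critical report:
        # verifying that single worker certifies the whole computation.
        critical = max(active, key=active.get)
        verifications += 1
        if verify(critical) == active[critical]:
            # Certification succeeded; the output depends only on
            # the verified worker's (confirmed) report.
            return active[critical], verifications
        # Certification failed: a misreporting worker is identified.
        # Remove the misreporter and repeat until success.
        del active[critical]
    raise ValueError("all workers misreported")


# Hypothetical example: worker "b" inflates its report.
truth = {"a": 3, "b": 2, "c": 5}
reports = {"a": 3, "b": 9, "c": 5}
result, used = certified_result(reports, lambda w: truth[w])
# Worker "b" is caught and removed; "c" certifies the true max of 5
# using 2 verifications in total.
```

Note how the loop mirrors the abstract's guarantee: any misreport that would affect the output is, by construction, the one that gets verified, so misreporting workers can never influence the certified result.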


