False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking

05/19/2016
by   Qianqian Xu, et al.

With the rapid growth of crowdsourcing platforms, it has become easy and relatively inexpensive to collect a dataset labeled by multiple annotators in a short time. However, due to the lack of control over annotator quality, some abnormal annotators may be affected by position bias, which can degrade the quality of the final consensus labels. In this paper we introduce a statistical framework to model and detect annotators' position bias while controlling the false discovery rate (FDR) without prior knowledge of the number of biased annotators: the expected fraction of false discoveries among all discoveries is kept low, ensuring that most discoveries are true and replicable. The key technical development relies on new knockoff filters adapted to our problem and on new algorithms based on Inverse Scale Space dynamics, whose discretization is potentially suitable for large-scale crowdsourcing data analysis. Our studies are supported by experiments on both simulated examples and real-world data. The proposed framework provides a useful tool for quantitatively studying annotators' abnormal behavior in crowdsourcing data arising in machine learning, sociology, computer vision, multimedia, and other fields.
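The paper's knockoff-filter construction is not reproduced in this abstract; as background on the FDR guarantee it targets, here is a minimal sketch of the standard Benjamini-Hochberg procedure, which controls the expected fraction of false discoveries among all discoveries at a chosen level (the data and level here are illustrative assumptions, not from the paper):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.1):
    """Return a boolean mask of rejected hypotheses with FDR controlled at alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # Find the largest k (1-indexed) with p_(k) <= alpha * k / m
    thresholds = alpha * np.arange(1, m + 1) / m
    below = sorted_p <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        rejected[order[: k + 1]] = True  # reject the k smallest p-values
    return rejected

# Toy example: 5 strong signals (tiny p-values) hidden among 95 nulls
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 95)])
mask = benjamini_hochberg(pvals, alpha=0.1)
print(mask.sum(), "discoveries")
```

Knockoff filters achieve the same FDR guarantee differently: instead of p-values, they compare each variable's importance against a synthetic "knockoff" copy, which is what makes them adaptable to settings like biased-annotator detection.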


