Learning From Noisy Singly-labeled Data

12/13/2017
by Ashish Khetan, et al.

Supervised learning depends on annotated examples, which are taken to be the ground truth. But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk. Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem). Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples. This raises two fundamental questions: (1) How can we best learn from noisy workers? (2) How should we allocate our labeling budget to maximize the performance of a classifier? We propose a new algorithm for jointly modeling labels and worker quality from noisy crowdsourced data. The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality. Unlike previous approaches, our algorithm can estimate worker quality even when each example receives only one annotation. We establish a generalization error bound for models learned with our algorithm and show theoretically that, when worker quality is above a threshold, it is better to label many examples once than to label fewer examples multiply. Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using real crowdsourced labels) confirm our algorithm's benefits.
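To make the alternating-minimization idea concrete, here is a minimal sketch, not the authors' implementation: it assumes a simple "one-coin" worker model (each worker labels correctly with an unknown probability), binary labels, a single annotation per example, and a logistic-regression learner; the function name `alternating_minimization` and all variable names are illustrative choices.

```python
# Illustrative sketch of alternating minimization between worker-quality
# estimation and model fitting, under a one-coin worker model (assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression

def alternating_minimization(X, worker_ids, noisy_labels, n_rounds=5):
    """X: (n, d) features; worker_ids: (n,) worker index per example;
    noisy_labels: (n,) binary {0,1} labels, one per example."""
    n_workers = int(worker_ids.max()) + 1
    quality = np.full(n_workers, 0.75)          # initial guess at worker accuracy
    model = LogisticRegression()
    model.fit(X, noisy_labels)                  # round 0: fit on raw noisy labels
    for _ in range(n_rounds):
        # Step 1: estimate each worker's quality from agreement with the model.
        preds = model.predict(X)
        for j in range(n_workers):
            mask = worker_ids == j
            if mask.any():
                quality[j] = np.clip((preds[mask] == noisy_labels[mask]).mean(), 0.05, 0.95)
        # Step 2: refit against quality-weighted soft targets: keep a worker's
        # label with weight equal to their estimated accuracy, and the flipped
        # label with the remaining weight.
        q = quality[worker_ids]
        X_aug = np.vstack([X, X])
        y_aug = np.concatenate([noisy_labels, 1 - noisy_labels])
        w_aug = np.concatenate([q, 1.0 - q])
        model.fit(X_aug, y_aug, sample_weight=w_aug)
    return model, quality

if __name__ == "__main__":
    # Synthetic demo: 10 simulated workers with hidden accuracies in [0.6, 0.95].
    rng = np.random.default_rng(0)
    n, d, n_workers = 2000, 5, 10
    X = rng.normal(size=(n, d))
    true_y = (X[:, 0] + X[:, 1] > 0).astype(int)
    worker_acc = rng.uniform(0.6, 0.95, size=n_workers)
    worker_ids = rng.integers(0, n_workers, size=n)
    flip = rng.random(n) > worker_acc[worker_ids]
    noisy = np.where(flip, 1 - true_y, true_y)
    model, est_quality = alternating_minimization(X, worker_ids, noisy)
    print("estimated worker accuracies:", np.round(est_quality, 2))
```

Duplicating each example with complementary sample weights is one simple way to realize a loss that accounts for estimated worker quality; any learner that accepts per-example weights or soft targets could be substituted.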
