Learning From Noisy Labels By Regularized Estimation Of Annotator Confusion

02/10/2019
by   Ryutaro Tanno, et al.

The predictive performance of supervised learning algorithms depends on the quality of labels. In a typical label collection process, multiple annotators provide subjective, noisy estimates of the "truth" under the influence of their varying skill levels and biases. Blindly treating these noisy labels as the ground truth limits the accuracy of learning algorithms in the presence of strong disagreement. This problem is critical for applications in domains such as medical imaging, where both the annotation cost and inter-observer variability are high. In this work, we present a method for simultaneously learning the individual annotator model and the underlying true label distribution, using only noisy observations. Each annotator is modeled by a confusion matrix that is jointly estimated along with the classifier predictions. We propose to add a regularization term to the loss function that encourages convergence to the true annotator confusion matrices. We provide a theoretical argument for why the regularization is essential to our approach, both in the single-annotator and in the multiple-annotator case. Despite the simplicity of the idea, experiments on image classification tasks with both simulated and real labels show that our method either outperforms or performs on par with state-of-the-art methods and is capable of estimating the skills of annotators even with a single label available per image.
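The model described in the abstract can be sketched in a few lines: each annotator has a row-stochastic confusion matrix, the classifier's class probabilities are passed through that matrix to predict the noisy labels, and the trace of the confusion matrices is added as a regularizer. The sketch below is a hedged NumPy illustration of that loss under these assumptions; the function names and shapes are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def noisy_label_loss(class_probs, confusion_logits, noisy_labels,
                     annotator_ids, lam=0.01):
    """Trace-regularized loss for learning from noisy annotator labels.

    class_probs:      (N, C) classifier output p(true class | x)
    confusion_logits: (R, C, C) free parameters; a softmax over the last
                      axis yields each annotator's row-stochastic confusion
                      matrix A[r], with A[r][i, j] = p(noisy = j | true = i)
    noisy_labels:     (N,) observed noisy label indices
    annotator_ids:    (N,) index of the annotator who produced each label
    lam:              weight of the trace regularizer
    """
    A = softmax(confusion_logits, axis=-1)                      # (R, C, C)
    # Predicted noisy-label distribution: p(noisy) = p(true) @ A[annotator].
    noisy_probs = np.einsum('nc,ncd->nd', class_probs, A[annotator_ids])
    nll = -np.log(noisy_probs[np.arange(len(noisy_labels)),
                              noisy_labels] + 1e-12).mean()
    # Minimizing the total trace pushes the estimated confusion matrices
    # toward the true ones (the key idea argued for in the paper).
    trace_penalty = np.trace(A, axis1=1, axis2=2).sum()
    return nll + lam * trace_penalty
```

In a full training loop, both `class_probs` (via the classifier weights) and `confusion_logits` would be optimized jointly by gradient descent on this loss; when an annotator's confusion matrix is close to the identity, the loss reduces to the usual cross-entropy plus a constant trace term.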

Related research

07/31/2020
Disentangling Human Error from the Ground Truth in Segmentation of Medical Images
"Recent years have seen increasing use of supervised learning methods for..."

03/20/2022
Learning from Multiple Expert Annotators for Enhancing Anomaly Detection in Medical Image Analysis
"Building an accurate computer-aided diagnosis system based on data-drive..."

01/02/2023
In Quest of Ground Truth: Learning Confident Models and Estimating Uncertainty in the Presence of Annotator Noise
"The performance of the Deep Learning (DL) models depends on the quality..."

09/18/2023
Drawing the Same Bounding Box Twice? Coping Noisy Annotations in Object Detection with Repeated Labels
"The reliability of supervised machine learning systems depends on the ac..."

06/20/2019
Latent Distribution Assumption for Unbiased and Consistent Consensus Modelling
"We study the problem of aggregating noisy labels. Usually, it is solved..."

03/29/2023
Improving Object Detection in Medical Image Analysis through Multiple Expert Annotators: An Empirical Investigation
"The work discusses the use of machine learning algorithms for anomaly de..."

03/31/2021
CrowdTeacher: Robust Co-teaching with Noisy Answers Sample-specific Perturbations for Tabular Data
"Samples with ground truth labels may not always be available in numerous..."
