Augmentation Strategies for Learning with Noisy Labels

03/03/2021
by   Kento Nishi, et al.

Imperfect labels are ubiquitous in real-world datasets. Several recent successful methods for training deep neural networks (DNNs) that are robust to label noise rely on two primary techniques: filtering samples by loss during a warm-up phase to curate an initial set of cleanly labeled samples, and using the network's own output as a pseudo-label for subsequent loss calculations. In this paper, we evaluate different augmentation strategies for algorithms that tackle the "learning with noisy labels" problem. We propose and examine multiple augmentation strategies and evaluate them on synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world dataset Clothing1M. Owing to several commonalities among these algorithms, we find that using one set of augmentations for the loss-modeling tasks and a different set for learning is most effective, improving results over the state of the art and other previous methods. Furthermore, we find that applying augmentation during the warm-up period can negatively impact the loss-convergence behavior of correctly versus incorrectly labeled samples. We apply this augmentation strategy to the state-of-the-art technique and demonstrate improved performance across all evaluated noise levels. In particular, we improve accuracy on the CIFAR-10 benchmark at 90% symmetric noise by more than 15% in absolute accuracy, and we also improve performance on the real-world dataset Clothing1M. (* equal contribution)
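The core idea of the abstract, using one augmentation pipeline for loss modeling (e.g., pseudo-labeling) and a different, stronger one for the actual learning update, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: `weak_augment`, `strong_augment`, and `train_step` are hypothetical names, and the flip-plus-noise transforms merely stand in for real policies such as AutoAugment or RandAugment.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(x):
    # Weak augmentation (hypothetical): random horizontal flip only.
    return x[:, ::-1] if rng.random() < 0.5 else x

def strong_augment(x):
    # Strong augmentation (hypothetical): flip plus additive noise,
    # standing in for heavier policies like AutoAugment / RandAugment.
    x = weak_augment(x)
    return np.clip(x + rng.normal(0.0, 0.1, x.shape), 0.0, 1.0)

def train_step(model_predict, update, x):
    # Loss modeling / pseudo-labeling sees only the weakly augmented view,
    # so the loss statistics used to separate clean from noisy samples
    # are not distorted by aggressive transforms...
    pseudo = model_predict(weak_augment(x))
    # ...while the gradient update is computed on the strongly augmented
    # view, which provides the regularization benefit during learning.
    return update(strong_augment(x), pseudo)
```

The key design point is the asymmetry: the same batch is viewed twice, and only the learning path receives the strong transforms.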


Related research

- A Study on the Impact of Data Augmentation for Training Convolutional Neural Networks in the Presence of Noisy Labels (08/23/2022): "Label noise is common in large real-world datasets, and its presence har..."
- Sample Prior Guided Robust Model Learning to Suppress Noisy Labels (12/02/2021): "Imperfect labels are ubiquitous in real-world datasets and seriously har..."
- A Meta Approach to Defend Noisy Labels by the Manifold Regularizer PSDR (06/13/2019): "Noisy labels are ubiquitous in real-world datasets, which poses a challe..."
- Improving Generalization of Deep Fault Detection Models in the Presence of Mislabeled Data (09/30/2020): "Mislabeled samples are ubiquitous in real-world datasets as rule-based o..."
- Confidence Adaptive Regularization for Deep Learning with Noisy Labels (08/18/2021): "Recent studies on the memorization effects of deep neural networks on no..."
- DST: Data Selection and joint Training for Learning with Noisy Labels (03/01/2021): "Training a deep neural network heavily relies on a large amount of train..."
- Two-Phase Learning for Overcoming Noisy Labels (12/08/2020): "To counter the challenge associated with noise labels, the learning stra..."
