Augmentation Strategies for Learning with Noisy Labels

03/03/2021
by Kento Nishi, et al.

Imperfect labels are ubiquitous in real-world datasets. Several recent successful methods for training deep neural networks (DNNs) robust to label noise have used two primary techniques: filtering samples based on loss during a warm-up phase to curate an initial set of cleanly labeled samples, and using the output of a network as a pseudo-label for subsequent loss calculations. In this paper, we evaluate different augmentation strategies for algorithms tackling the "learning with noisy labels" problem. We propose and examine multiple augmentation strategies and evaluate them using synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world dataset Clothing1M. Due to several commonalities in these algorithms, we find that using one set of augmentations for loss modeling tasks and another set for learning is the most effective, improving results over the state-of-the-art and other previous methods. Furthermore, we find that applying augmentation during the warm-up period can negatively impact the loss convergence behavior of correctly versus incorrectly labeled samples. We introduce this augmentation strategy to the state-of-the-art technique and demonstrate that we can improve performance across all evaluated noise levels. In particular, we improve accuracy on the CIFAR-10 benchmark at 90% symmetric noise by more than 15% in absolute accuracy, and we also improve performance on the real-world dataset Clothing1M. (* equal contribution)
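The central idea, one augmentation set for loss modeling and another for learning, lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of that split: weakly augmented views feed the loss-modeling steps (per-sample losses for clean/noisy filtering, pseudo-label generation), while strongly augmented views feed the actual gradient descent. The specific transforms, the RandAugment policy, the sharpening temperature, and all helper names are illustrative assumptions, not the authors' published implementation.

```python
# Sketch of the two-augmentation strategy described in the abstract:
# weak views for loss modeling / pseudo-labels, strong views for descent.
# All choices here (transform lists, temperature, names) are assumptions.
import torch
import torch.nn.functional as F
from torchvision import transforms

weak_aug = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
strong_aug = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(),  # stronger policy reserved for the learning pass
    transforms.ToTensor(),
])

def train_step(model, optimizer, images, noisy_targets):
    # images: a batch of PIL images; noisy_targets: possibly incorrect labels.
    x_weak = torch.stack([weak_aug(img) for img in images])
    x_strong = torch.stack([strong_aug(img) for img in images])

    # Loss modeling on the weak view: per-sample losses can later be fit
    # with a mixture model to separate clean from noisy samples, and the
    # network output doubles as a (temperature-sharpened) pseudo-label.
    with torch.no_grad():
        logits_weak = model(x_weak)
        per_sample_loss = F.cross_entropy(
            logits_weak, noisy_targets, reduction="none"
        )
        pseudo = F.softmax(logits_weak / 0.5, dim=1)  # assumed temperature

    # Gradient descent on the strong view against the soft pseudo-labels.
    logits_strong = model(x_strong)
    loss = -(pseudo * F.log_softmax(logits_strong, dim=1)).sum(dim=1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return per_sample_loss  # input to the warm-up-phase filtering stage
```

Consistent with the abstract's warm-up finding, one reading is that the initial warm-up epochs would use raw or only weakly augmented inputs, so that the losses of correctly and incorrectly labeled samples diverge cleanly before the filtering step runs.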
