A Meta Approach to Defend Noisy Labels by the Manifold Regularizer PSDR

06/13/2019
by Pengfei Chen, et al.

Noisy labels are ubiquitous in real-world datasets, which poses a challenge for robustly training deep neural networks (DNNs), since DNNs can easily overfit to noisy labels. Most recent efforts defend against noisy labels by discarding noisy samples from the training set or by assigning weights to training samples, where a noisy sample is expected to receive a small weight. These approaches therefore waste samples, especially those assigned small weights, even though an input x remains useful regardless of whether its observed label y is clean. To make full use of all samples, we introduce a manifold regularizer, named Paired Softmax Divergence Regularization (PSDR), which penalizes the Kullback-Leibler (KL) divergence between the softmax outputs of similar inputs. In particular, similar inputs can be effectively generated by data augmentation. PSDR can be easily implemented on any type of DNN to improve robustness against noisy labels. As empirically demonstrated on benchmark datasets, PSDR improves state-of-the-art results by a significant margin.
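To make the idea concrete, here is a minimal PyTorch sketch of the regularizer as described in the abstract: a standard cross-entropy loss on the observed labels plus a KL penalty between the softmax outputs of each input and its augmented counterpart. The function name psdr_loss, the direction of the KL term, and the weighting coefficient reg_weight are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of the PSDR idea, assuming a standard classification setup.
import torch.nn.functional as F

def psdr_loss(model, x, x_aug, y, reg_weight=1.0):
    """Cross-entropy on the (possibly noisy) labels plus a KL penalty
    between the softmax outputs of paired similar inputs.

    x         batch of inputs
    x_aug     the same batch after data augmentation ("similar inputs")
    y         observed (possibly noisy) labels
    """
    logits = model(x)
    logits_aug = model(x_aug)

    # Standard classification loss on the observed labels.
    ce = F.cross_entropy(logits, y)

    # Paired softmax divergence: KL(softmax(x) || softmax(x_aug)).
    # F.kl_div expects log-probabilities as input and probabilities as target.
    log_p_aug = F.log_softmax(logits_aug, dim=1)
    p = F.softmax(logits, dim=1)
    kl = F.kl_div(log_p_aug, p, reduction="batchmean")

    # The KL term depends only on the inputs, so every sample contributes
    # to training even when its label is wrong.
    return ce + reg_weight * kl
```

Because the regularizer never touches the label y, samples that a reweighting scheme would effectively discard still shape the learned manifold through the KL term.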
