An Ensemble Noise-Robust K-fold Cross-Validation Selection Method for Noisy Labels

by Yong Wen et al.

We consider the problem of training robust and accurate deep neural networks (DNNs) under various proportions of noisy labels. Large-scale datasets tend to contain mislabeled samples that DNNs can memorize, degrading performance. With appropriate handling, this degradation can be alleviated. Two problems must be addressed: how to distinguish clean samples, and how to deal with noisy samples. In this paper, we present Ensemble Noise-robust K-fold Cross-Validation Selection (E-NKCVS), which effectively selects clean samples from noisy data, solving the first problem. For the second problem, we create a new pseudo label for any sample judged to have an uncertain or likely corrupt label. E-NKCVS obtains multiple predicted labels for each sample, and the entropy of these labels is used to tune the weight given to the pseudo label versus the given label. We provide theoretical analysis and extensive empirical verification of the algorithm in the noisy-label setting. We evaluate our approach on various image and text classification tasks in which the labels have been manually corrupted with different noise ratios, as well as on two large real-world noisy datasets, Clothing-1M and WebVision. E-NKCVS is empirically shown to be highly tolerant to considerable proportions of label noise and consistently improves over state-of-the-art methods. On the more difficult datasets with higher noise ratios in particular, it achieves a significant improvement over the second-best model. Moreover, our approach can easily be integrated into existing DNN methods to improve their robustness against label noise.
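To make the entropy-weighting idea concrete, here is a minimal sketch of how the K predicted labels for a sample might be combined with its given label. This is not the authors' implementation; the function names (`entropy_weight`, `combine_labels`) and the particular weighting direction (low prediction entropy → trust the majority-vote pseudo label more) are illustrative assumptions based only on the description in the abstract.

```python
import math
from collections import Counter

def entropy_weight(predicted_labels, num_classes):
    """Normalized entropy of the K fold predictions for one sample.

    Returns 0.0 when all folds agree (confident pseudo label) and 1.0
    when the predictions are maximally spread out (uncertain).
    """
    n = len(predicted_labels)
    counts = Counter(predicted_labels)
    ent = -sum((c / n) * math.log(c / n) for c in counts.values())
    max_ent = math.log(min(num_classes, n))  # highest achievable entropy
    return ent / max_ent if max_ent > 0 else 0.0

def combine_labels(given_label, predicted_labels, num_classes):
    """Build a soft target mixing the given label with a pseudo label.

    The pseudo label is the majority vote over the K predictions; the
    mixing weight is the normalized prediction entropy (an assumption,
    one plausible reading of the abstract's weighting scheme).
    """
    w = entropy_weight(predicted_labels, num_classes)
    pseudo = Counter(predicted_labels).most_common(1)[0][0]
    target = [0.0] * num_classes
    target[pseudo] += 1.0 - w       # confident ensemble -> pseudo label
    target[given_label] += w        # uncertain ensemble -> given label
    return target
```

For example, if all K folds predict class 1 for a sample labeled 0, the entropy is zero and the soft target places all mass on the pseudo label, effectively relabeling the sample.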

