Late Stopping: Avoiding Confidently Learning from Mislabeled Examples

08/26/2023
by   Suqin Yuan, et al.
Sample selection is a prevalent method in learning with noisy labels, where small-loss examples are typically treated as correctly labeled. However, this approach may fail to identify clean hard examples with large losses, which are critical for achieving close-to-optimal generalization performance. In this paper, we propose a new framework, Late Stopping, which leverages the intrinsic robust learning ability of DNNs through a prolonged training process. Specifically, Late Stopping gradually shrinks the noisy dataset by removing high-probability mislabeled examples while retaining the majority of clean hard examples in the training set throughout the learning process. We empirically observe that mislabeled and clean examples differ in the number of epochs required before they are consistently and correctly classified, and that high-probability mislabeled examples can therefore be removed. Experimental results on benchmark-simulated and real-world noisy datasets demonstrate that the proposed method outperforms state-of-the-art counterparts.
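The selection signal described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's exact algorithm: it assumes we have recorded, per training example, a boolean history of whether the model classified it correctly at each epoch, and it scores each example by the first epoch at which it becomes correct for `k` consecutive epochs (the run length `k` and the function names are assumptions for this sketch). Examples that become consistently correct latest, or never, are treated as high-probability mislabeled and dropped.

```python
def first_k_consistent_epoch(correct_history, k):
    """First epoch at which the example starts a run of k consecutive
    correct classifications; None if no such run occurs."""
    run = 0
    for epoch, correct in enumerate(correct_history):
        run = run + 1 if correct else 0
        if run == k:
            return epoch - k + 1  # start of the k-length correct run
    return None

def late_stopping_filter(histories, k, keep_fraction):
    """Rank examples by how late they become consistently correct and
    keep only the earliest-learned fraction; the rest are removed as
    high-probability mislabeled examples."""
    n_epochs = len(histories[0])
    scored = []
    for idx, history in enumerate(histories):
        e = first_k_consistent_epoch(history, k)
        # Never-learned examples get the worst (latest) possible score.
        scored.append((n_epochs if e is None else e, idx))
    scored.sort()  # earliest-learned first
    n_keep = int(len(histories) * keep_fraction)
    return sorted(idx for _, idx in scored[:n_keep])

# Toy histories (1 = classified correctly at that epoch):
# an easy clean example, a hard clean example, and a late-memorized one.
histories = [
    [0, 1, 1, 1, 1, 1],  # consistently correct from epoch 1
    [0, 0, 0, 1, 1, 1],  # consistently correct from epoch 3
    [0, 0, 0, 0, 0, 1],  # never reaches 3 consecutive correct epochs
]
kept = late_stopping_filter(histories, k=3, keep_fraction=2 / 3)
print(kept)  # keeps the two earliest-learned examples: [0, 1]
```

In this toy run the late-memorized example is the one removed, matching the observation in the abstract that mislabeled examples need more epochs before being consistently and correctly classified.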

Related research

- 09/02/2023, Regularly Truncated M-estimators for Learning with Noisy Labels: "The sample selection approach is very popular in learning with noisy lab..."
- 10/20/2019, Leveraging inductive bias of neural networks for learning without explicit human annotations: "Classification problems today are typically solved by first collecting e..."
- 10/22/2021, PropMix: Hard Sample Filtering and Proportional MixUp for Learning with Noisy Labels: "The most competitive noisy label learning methods rely on an unsupervise..."
- 01/02/2023, Knockoffs-SPR: Clean Sample Selection in Learning with Noisy Labels: "A noisy training set usually leads to the degradation of the generalizat..."
- 06/16/2023, Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign?: "We study benign overfitting in two-layer ReLU networks trained using gra..."
- 06/01/2021, Sample Selection with Uncertainty of Losses for Learning with Noisy Labels: "In learning with noisy labels, the sample selection approach is very pop..."
- 12/09/2020, A Topological Filter for Learning with Label Noise: "Noisy labels can impair the performance of deep neural networks. To tack..."
