Robust and On-the-fly Dataset Denoising for Image Classification

03/24/2020
by Jiaming Song, et al.

Memorization in over-parameterized neural networks can severely hurt generalization in the presence of mislabeled examples, yet mislabeled examples are hard to avoid in extremely large datasets collected with weak supervision. We address this problem by reasoning counterfactually about the loss distribution of examples with uniform random labels, had they been trained together with the real examples, and use this information to remove noisy examples from the training set. First, we observe that examples with uniform random labels incur higher losses when trained with stochastic gradient descent under large learning rates. Then, we propose to model the loss distribution of these counterfactual examples using only the network parameters, which turns out to capture it with remarkable accuracy. Finally, we remove examples whose loss exceeds a chosen quantile of the modeled loss distribution. The result is On-the-fly Data Denoising (ODD), a simple yet effective algorithm that is robust to mislabeled examples while introducing almost zero computational overhead compared to standard training. ODD achieves state-of-the-art results on a wide range of datasets, including real-world ones such as WebVision and Clothing1M.
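
The abstract's filtering step can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the function name `odd_filter`, the choice of quantile, and the use of freshly drawn uniform random labels on the current batch as a stand-in for the paper's parameter-based model of the counterfactual loss distribution are not the authors' implementation.

```python
# Minimal sketch of quantile-based example filtering in the spirit of ODD.
# Assumption: the counterfactual loss distribution is approximated here by the
# losses the same inputs receive under uniformly random labels; the paper
# instead models this distribution from the network parameters alone.

import torch
import torch.nn.functional as F

def odd_filter(model, inputs, labels, num_classes, quantile=0.9):
    """Return a boolean mask of examples to keep for the gradient step."""
    with torch.no_grad():
        logits = model(inputs)
        # Per-example loss on the real (possibly noisy) labels.
        real_loss = F.cross_entropy(logits, labels, reduction="none")
        # Counterfactual losses under uniform random labels (illustrative
        # stand-in for the modeled counterfactual loss distribution).
        rand_labels = torch.randint(0, num_classes, labels.shape,
                                    device=labels.device)
        cf_loss = F.cross_entropy(logits, rand_labels, reduction="none")
        # Keep only examples whose loss stays below the chosen quantile of
        # the counterfactual loss distribution.
        threshold = torch.quantile(cf_loss, quantile)
        keep = real_loss <= threshold
    return keep

# Usage inside a standard training loop (hypothetical variable names):
# keep = odd_filter(model, x, y, num_classes=10, quantile=0.9)
# loss = F.cross_entropy(model(x[keep]), y[keep])
```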


