Consistency Regularization Can Improve Robustness to Label Noise

10/04/2021
by Erik Englesson, et al.

Consistency regularization is a commonly used technique in semi-supervised and self-supervised learning. It adds an auxiliary objective that encourages the network's predictions to remain similar in the vicinity of the observed training samples. Hendrycks et al. (2020) recently showed that such regularization naturally brings test-time robustness to corrupted data and helps with calibration. This paper empirically studies the relevance of consistency regularization for training-time robustness to noisy labels. First, we make two interesting and useful observations about the consistency of networks trained with the standard cross-entropy loss on noisy datasets: (i) networks trained on noisy data have lower consistency than those trained on clean data, and (ii) consistency degrades more around noisily-labelled training points than around correctly-labelled ones. We then show that a simple loss function that encourages consistency improves the robustness of models to label noise on both synthetic (CIFAR-10, CIFAR-100) and real-world (WebVision) noise, across different noise rates and types, and achieves state-of-the-art results.
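To make the idea concrete, below is a minimal sketch of such a consistency objective in PyTorch: the standard cross-entropy term is combined with a penalty on the disagreement between predictions for two augmented views of the same image. The symmetric-KL form, the two-view setup, and the weight lambda_cons are illustrative assumptions, not necessarily the exact loss used in the paper.

import torch.nn.functional as F

def consistency_regularized_loss(model, x_view1, x_view2, targets, lambda_cons=1.0):
    # Supervised term on one augmented view; `targets` may contain noisy labels.
    logits1 = model(x_view1)
    logits2 = model(x_view2)
    ce = F.cross_entropy(logits1, targets)

    # Consistency term: predictions on the two views of the same sample should
    # agree, measured here with a symmetric KL between the two distributions.
    logp1 = F.log_softmax(logits1, dim=1)
    logp2 = F.log_softmax(logits2, dim=1)
    consistency = 0.5 * (
        F.kl_div(logp1, logp2, reduction="batchmean", log_target=True)
        + F.kl_div(logp2, logp1, reduction="batchmean", log_target=True)
    )
    return ce + lambda_cons * consistency

In training, each mini-batch would be augmented twice (e.g. with random crops and flips) and this combined loss back-propagated as usual; a larger lambda_cons enforces smoother predictions around the training points.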


Related research

09/29/2021: Robust Temporal Ensembling for Learning with Noisy Labels
Successful training of deep neural networks with noisy labels is an esse...

05/27/2021: Using Early-Learning Regularization to Classify Real-World Noisy Data
The memorization problem is well-known in the field of computer vision. ...

12/08/2020: Multi-Objective Interpolation Training for Robustness to Label Noise
Deep neural networks trained with standard cross-entropy loss memorize n...

11/22/2021: S3: Supervised Self-supervised Learning under Label Noise
Despite the large progress in supervised learning with Neural Networks, ...

02/14/2023: The Missing Margin: How Sample Corruption Affects Distance to the Boundary in ANNs
Classification margins are commonly used to estimate the generalization ...

12/03/2022: CrossSplit: Mitigating Label Noise Memorization through Data Splitting
We approach the problem of improving robustness of deep learning algorit...

02/04/2022: Learning with Neighbor Consistency for Noisy Labels
Recent advances in deep learning have relied on large, labelled datasets...
