Investigating Why Contrastive Learning Benefits Robustness Against Label Noise

01/29/2022
by Yihao Xue, et al.

Self-supervised contrastive learning has recently been shown to be very effective at preventing deep networks from overfitting noisy labels. Despite this empirical success, theoretical understanding of how contrastive learning boosts robustness is very limited. In this work, we rigorously prove that the representation matrix learned by contrastive learning boosts robustness by having: (i) one prominent singular value corresponding to each sub-class in the data, with the remaining singular values significantly smaller; and (ii) a large alignment between the prominent singular vectors and the clean labels of each sub-class. These properties allow a linear layer trained on the representations to learn the clean labels quickly, and prevent it from overfitting the noise for a large number of training iterations. We further show that the low-rank structure of the Jacobian of deep networks pre-trained with contrastive learning allows them to achieve superior performance early in fine-tuning on noisy labels. Finally, we demonstrate that the initial robustness provided by contrastive learning enables robust training methods to achieve state-of-the-art performance under extreme noise levels, e.g., average accuracy gains of 27.18% on CIFAR-10 and 15.58% on CIFAR-100 with 80% symmetric label noise, and a 4.11% accuracy gain on WebVision.
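As a concrete illustration of properties (i) and (ii), here is a minimal, self-contained sketch (not the paper's code): we build a toy representation matrix whose rows cluster around one direction per sub-class, mimicking the structure the theory predicts for contrastive representations, then inspect its singular values and the alignment of the top left-singular vectors with the clean-label indicator vectors. The synthetic means, noise level, and dimensions are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n points per sub-class, d-dimensional representations.
# Each sub-class contributes one dominant direction plus small noise,
# mimicking the spectral structure described in the abstract.
n_per_class, d, noise = 200, 64, 0.05
mu = rng.standard_normal((2, d))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)   # unit-norm sub-class means

reps = np.vstack([
    mu[c] + noise * rng.standard_normal((n_per_class, d))
    for c in range(2)
])                                                # (2n, d) representation matrix
labels = np.repeat([0, 1], n_per_class)

# (i) Spectrum: one prominent singular value per sub-class,
# with the remaining singular values much smaller.
U, S, _ = np.linalg.svd(reps, full_matrices=False)
print("top-4 singular values:", np.round(S[:4], 2))

# (ii) Alignment: each prominent left-singular vector should overlap
# strongly with the normalized clean-label indicator of one sub-class.
for c in range(2):
    indicator = (labels == c).astype(float)
    indicator /= np.linalg.norm(indicator)
    align = np.abs(U[:, :2].T @ indicator)        # overlap with top-2 vectors
    print(f"sub-class {c}: alignment with top singular vectors =",
          np.round(align, 2))
```

On this toy matrix the printout shows two singular values far above the rest and, for each sub-class, an overlap close to 1 with one of the top singular vectors; this is exactly the structure that lets a linear probe fit the clean labels long before it fits the noise.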


Related research

Coresets for Robust Training of Neural Networks against Noisy Labels (11/15/2020)
Modern neural networks have the capacity to overfit noisy labels frequen...

Selective-Supervised Contrastive Learning with Noisy Labels (03/08/2022)
Deep networks have strong capacities of embedding data into latent repre...

On the Role of Contrastive Representation Learning in Adversarial Robustness: An Empirical Study (02/05/2023)
Self-supervised contrastive learning has solved one of the significant o...

RUSH: Robust Contrastive Learning via Randomized Smoothing (07/11/2022)
Recently, adversarial training has been incorporated in self-supervised ...

Contrastive Representations for Label Noise Require Fine-Tuning (08/20/2021)
In this paper we show that the combination of a Contrastive representati...

Truncate-Split-Contrast: A Framework for Learning from Mislabeled Videos (12/27/2022)
Learning with noisy label (LNL) is a classic problem that has been exten...

Poisoning and Backdooring Contrastive Learning (06/17/2021)
Contrastive learning methods like CLIP train on noisy and uncurated trai...
