RUSH: Robust Contrastive Learning via Randomized Smoothing

07/11/2022
by   Yijiang Pang, et al.

Recently, adversarial training has been incorporated into self-supervised contrastive pre-training to improve label efficiency while delivering adversarial robustness. However, that robustness comes at the cost of expensive adversarial training. In this paper, we show a surprising fact: contrastive pre-training has an interesting yet implicit connection with robustness, and this natural robustness in the pre-trained representation enables us to design RUSH, a powerful algorithm against adversarial attacks that combines standard contrastive pre-training with randomized smoothing. RUSH boosts both standard accuracy and robust accuracy, and significantly reduces training cost compared with adversarial training. Extensive empirical studies show that RUSH outperforms robust classifiers obtained by adversarial training by a significant margin on common benchmarks (CIFAR-10, CIFAR-100, and STL-10) under first-order attacks. In particular, under an ℓ_∞-norm PGD attack with perturbations of size 8/255 on CIFAR-10, our model with a ResNet-18 backbone reaches 77.8% robust accuracy. Compared to the state of the art, our work improves robust accuracy by over 15% alongside a slight improvement in standard accuracy.
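The randomized-smoothing step that RUSH layers on top of a contrastively pre-trained classifier can be sketched as a majority vote over Gaussian-perturbed copies of the input (Cohen et al.-style smoothing). This is a minimal illustrative sketch, not the paper's implementation: the toy `classifier`, `sigma`, and sample count are assumptions chosen for demonstration.

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Predict with the smoothed classifier g(x): the class most likely
    under classifier(x + N(0, sigma^2 I)), estimated by Monte Carlo voting."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    # Collect one hard label per noisy copy and take the majority vote.
    votes = np.bincount([classifier(x + n) for n in noise])
    return int(np.argmax(votes))

# Toy two-class base classifier: sign of the mean pixel (illustrative only).
toy_classifier = lambda x: int(x.mean() > 0)

x = np.full((3, 4, 4), 0.5)  # strongly "positive" input
print(smoothed_predict(toy_classifier, x, sigma=0.25, n_samples=200, rng=0))  # → 1
```

A larger `sigma` yields certified robustness to larger perturbations but degrades the base classifier's accuracy on noisy inputs, which is the trade-off randomized smoothing navigates.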

Related research

- Robust Pre-Training by Adversarial Contrastive Learning (10/26/2020)
  Recent work has shown that, when integrated with adversarial training, s...
- On visual self-supervision and its effect on model robustness (12/08/2021)
  Recent self-supervision methods have found success in learning feature r...
- Adversarial Momentum-Contrastive Pre-Training (12/24/2020)
  Deep neural networks are vulnerable to semantic invariant corruptions an...
- On the Role of Contrastive Representation Learning in Adversarial Robustness: An Empirical Study (02/05/2023)
  Self-supervised contrastive learning has solved one of the significant o...
- Unlabeled Data Improves Adversarial Robustness (05/31/2019)
  We demonstrate, theoretically and empirically, that adversarial robustne...
- Adversarial Training for Face Recognition Systems using Contrastive Adversarial Learning and Triplet Loss Fine-tuning (10/09/2021)
  Though much work has been done in the domain of improving the adversaria...
- Investigating Why Contrastive Learning Benefits Robustness Against Label Noise (01/29/2022)
  Self-supervised contrastive learning has recently been shown to be very ...
