On visual self-supervision and its effect on model robustness

12/08/2021
by Michal Kucer, et al.

Recent self-supervision methods have found success in learning feature representations that rival those learned under full supervision, and have been shown to benefit models in several ways, for example by improving robustness and out-of-distribution detection. In our paper, we conduct an empirical study to understand more precisely in what ways self-supervised learning, whether as a pre-training technique or as part of adversarial training, affects model robustness to l_2 and l_∞ adversarial perturbations and to natural image corruptions. Self-supervision can indeed improve model robustness; however, it turns out the devil is in the details. If one simply adds a self-supervision loss in tandem with adversarial training, one sees improved accuracy when the model is evaluated with adversarial perturbations smaller than or comparable to the value ϵ_train the robust model is trained with. However, for ϵ_test ≥ ϵ_train, model accuracy drops. In fact, the larger the weight of the self-supervision loss, the larger the drop in performance, i.e. the more it harms the robustness of the model. We identify the primary ways in which self-supervision can be added to adversarial training, and observe that using a self-supervised loss both to optimize the network parameters and to find adversarial examples leads to the strongest improvement in model robustness, as this can be viewed as a form of ensemble adversarial training. Although self-supervised pre-training yields benefits for subsequent adversarial training as compared to random weight initialization, we observe no benefit in model robustness or accuracy if self-supervision is instead incorporated into adversarial training.
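The strongest variant described above uses the combined supervised + self-supervised loss both for the inner PGD maximization (finding adversarial examples) and for the outer parameter update. A minimal sketch of that inner step is below; the toy linear heads, the pretext labels `y_ssl`, and all hyperparameters are hypothetical illustrations, not the paper's actual architecture or settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ce_loss_grad_x(x, W, y):
    """Mean cross-entropy of a linear head and its gradient w.r.t. the input x."""
    p = softmax(x @ W)
    loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    dlogits = p.copy()
    dlogits[np.arange(len(y)), y] -= 1.0
    return loss, (dlogits @ W.T) / len(y)

def pgd_linf_combined(x, W_sup, W_ssl, y, y_ssl, eps, alpha, steps, lam):
    """PGD in the l_inf ball of radius eps, ascending the COMBINED loss
    L_sup + lam * L_ssl (hypothetical self-supervised head W_ssl with
    pretext labels y_ssl)."""
    x_adv = x + rng.uniform(-eps, eps, x.shape)       # random start in the ball
    for _ in range(steps):
        _, g_sup = ce_loss_grad_x(x_adv, W_sup, y)
        _, g_ssl = ce_loss_grad_x(x_adv, W_ssl, y_ssl)
        x_adv = x_adv + alpha * np.sign(g_sup + lam * g_ssl)  # ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)             # project back
    return x_adv
```

In a full training loop, the same weighted sum `L_sup + lam * L_ssl`, evaluated at `x_adv`, would then be minimized over the network parameters; the single weight `lam` is the self-supervision weight whose magnitude the abstract ties to the robustness drop at large ϵ_test.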

