Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

03/28/2020
by Tianlong Chen, et al.

Pre-trained models from self-supervision are widely used to make fine-tuning on downstream tasks faster or more accurate. However, gaining robustness from pre-training has been left unexplored. We introduce adversarial training into self-supervision to provide general-purpose robust pre-trained models for the first time. We find these robust pre-trained models can benefit subsequent fine-tuning in two ways: i) boosting final model robustness; ii) saving computation cost, if proceeding towards adversarial fine-tuning. We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins (e.g., 3.83% in robust accuracy on the CIFAR-10 dataset), compared with the conventional end-to-end adversarial training baseline. Moreover, we find that different self-supervised pre-trained models have diverse adversarial vulnerabilities. This inspires us to ensemble several pre-training tasks, which boosts robustness further. Our ensemble strategy contributes to a further improvement of 3.59% in robust accuracy while maintaining a slightly higher standard accuracy on CIFAR-10. Our code is available at https://github.com/TAMU-VITA/Adv-SS-Pretraining.
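At a high level, the framework replaces the supervised labels in standard adversarial training with a self-supervised pretext loss: an inner PGD step maximizes the pretext loss, and the model is then updated on the perturbed inputs. Below is a minimal sketch of that recipe, assuming a PyTorch model whose head predicts one of four rotations (rotation prediction is one of the pretext tasks the paper considers); helper names such as `robust_pretrain_step` and `rotation_batch` are illustrative, not taken from the released code.

```python
# Minimal sketch (not the authors' exact code): PGD adversarial training
# driven by a self-supervised pretext loss (rotation prediction).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft an L-infinity PGD perturbation that maximizes the pretext loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()

def rotation_batch(x):
    """Build the rotation pretext task: rotate each image by
    0/90/180/270 degrees and label it with the rotation index."""
    xs = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    ys = [torch.full((x.size(0),), k, dtype=torch.long, device=x.device)
          for k in range(4)]
    return torch.cat(xs), torch.cat(ys)

def robust_pretrain_step(model, optimizer, x):
    """One step of adversarial self-supervised pre-training:
    inner maximization (PGD on the pretext loss), then outer minimization."""
    x_rot, y_rot = rotation_batch(x)
    x_adv = pgd_attack(model, x_rot, y_rot)
    loss = F.cross_entropy(model(x_adv), y_rot)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pre-training, the same PGD loop can be reused for adversarial fine-tuning with the downstream supervised loss, which is the stage where the paper reports computation savings from the robust initialization.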


Related research

10/06/2021 · KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier
Pre-trained models are widely used in fine-tuning downstream tasks with ...

11/21/2022 · CLAWSAT: Towards Both Robust and Accurate Code Models
We integrate contrastive learning (CL) with adversarial learning to co-o...

10/07/2022 · Pre-trained Adversarial Perturbations
Self-supervised pre-training has drawn increasing attention in recent ye...

03/13/2023 · Model-tuning Via Prompts Makes NLP Models Adversarially Robust
In recent years, NLP practitioners have converged on the following pract...

07/20/2023 · Revisiting Fine-Tuning Strategies for Self-supervised Medical Imaging Analysis
Despite the rapid progress in self-supervised learning (SSL), end-to-end...

12/25/2020 · A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning
Adversarial Training (AT) with Projected Gradient Descent (PGD) is an ef...

06/09/2022 · On Data Scaling in Masked Image Modeling
An important goal of self-supervised learning is to enable model pre-tra...
