Understanding Zero-Shot Adversarial Robustness for Large-Scale Models

12/14/2022
by Chengzhi Mao, et al.

Pretrained large-scale vision-language models like CLIP have exhibited strong generalization on unseen tasks, yet imperceptible adversarial perturbations can significantly reduce CLIP's performance on new tasks. In this work, we identify and explore the problem of adapting large-scale models for zero-shot adversarial robustness. We first identify two key factors during model adaptation – training losses and adaptation methods – that affect the model's zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns the text embeddings with the adversarial visual features via contrastive learning on a small set of training data. We apply this loss to two adaptation methods: model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of text, while finetuning wins when text guidance is available. Overall, our approach significantly improves the zero-shot adversarial robustness of CLIP, with an average improvement of over 31 points across ImageNet and 15 zero-shot datasets. We hope this work sheds light on the zero-shot adversarial robustness of large-scale models.
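To make the core idea concrete, here is a minimal sketch (not the authors' released code) of a text-guided contrastive loss of the kind the abstract describes: each adversarial image embedding is treated as a positive pair with the text embedding of its own class and contrasted against the text embeddings of the other classes, in the style of an InfoNCE / CLIP-style objective. The function name, the toy 2-D embeddings, and the temperature value are illustrative assumptions.

```python
import math

# Hypothetical sketch of a text-guided contrastive loss: image embedding i
# (computed from an adversarially perturbed image) should score highest
# against its own class's text embedding among all class text embeddings.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def text_guided_contrastive_loss(image_embs, text_embs, temperature=0.07):
    """InfoNCE-style loss: image i's positive is text i; all other
    texts in the batch serve as negatives. Returns the mean
    cross-entropy over images (lower = better aligned)."""
    loss = 0.0
    for i, img in enumerate(image_embs):
        logits = [cosine(img, txt) / temperature for txt in text_embs]
        # Numerically stable log-sum-exp for the softmax denominator.
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)  # cross-entropy with target i
    return loss / len(image_embs)
```

In actual adversarial training, `image_embs` would come from the visual encoder applied to PGD-perturbed inputs, and the loss would be minimized over the finetuned weights or the visual prompt tokens; correctly aligned embeddings yield a much smaller loss than misaligned ones.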

