Towards Adversarial Attack on Vision-Language Pre-training Models

06/19/2022
by Jiaming Zhang et al.

While vision-language pre-training (VLP) models have shown revolutionary improvements on various vision-language (V+L) tasks, their adversarial robustness remains largely unexplored. This paper studies adversarial attacks on popular VLP models and V+L tasks. First, we analyze the performance of adversarial attacks under different settings. By examining the influence of different perturbed objects and attack targets, we draw several key observations that serve as guidance both for designing strong multimodal adversarial attacks and for constructing robust VLP models. Second, we propose a novel multimodal attack method on VLP models, called Collaborative Multimodal Adversarial Attack (Co-Attack), which collectively carries out attacks on the image modality and the text modality. Experimental results demonstrate that the proposed method achieves improved attack performance on different V+L downstream tasks and VLP models. These observations and the novel attack method hopefully provide new insight into the adversarial robustness of VLP models, contributing to their safe and reliable deployment in real-world scenarios.
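The abstract does not spell out how Co-Attack combines the two modalities, so the following is only an illustrative sketch of the general idea of a collaborative multimodal attack: perturb the image and the text so that the two perturbations jointly lower an image-text matching score. The "VLP model" here is a made-up linear embedding matcher, the FGSM-style image step and the greedy text-candidate search are generic stand-ins, and all names, dimensions, and the candidate set are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a VLP encoder pair (illustrative assumption, not the paper's model):
# two random linear projections into a shared embedding space.
D_IMG, D_TXT, D_EMB = 32, 16, 8
W_img = rng.normal(size=(D_IMG, D_EMB))
W_txt = rng.normal(size=(D_TXT, D_EMB))

def embed(x, W):
    z = x @ W
    return z / (np.linalg.norm(z) + 1e-8)

def match_score(img, txt):
    """Cosine similarity between the image and text embeddings."""
    return float(embed(img, W_img) @ embed(txt, W_txt))

def image_grad(img, txt, eps=1e-4):
    """Numerical gradient of the score w.r.t. the image (finite differences)."""
    g = np.zeros_like(img)
    for i in range(img.size):
        d = np.zeros_like(img)
        d[i] = eps
        g[i] = (match_score(img + d, txt) - match_score(img - d, txt)) / (2 * eps)
    return g

def collaborative_attack(img, txt, txt_candidates, eps_img=0.05):
    """Perturb both modalities so the perturbations reinforce each other:
    an FGSM-style L_inf step on the image against the matching score,
    then a greedy pick of the text candidate least consistent with the
    already-perturbed image."""
    adv_img = np.clip(img - eps_img * np.sign(image_grad(img, txt)), 0.0, 1.0)
    adv_txt = min(txt_candidates, key=lambda t: match_score(adv_img, t))
    return adv_img, adv_txt

# Clean inputs and a small pool of text "paraphrases" (here: noisy copies of
# the text features, standing in for synonym substitutions).
img = rng.uniform(size=D_IMG)
txt = rng.normal(size=D_TXT)
candidates = [txt] + [txt + 0.3 * rng.normal(size=D_TXT) for _ in range(8)]

adv_img, adv_txt = collaborative_attack(img, txt, candidates)
print("clean score:", match_score(img, txt))
print("adversarial score:", match_score(adv_img, adv_txt))
```

Because the text candidate is chosen after the image has been perturbed, the second modality's perturbation adapts to the first, which is the collaborative aspect the abstract alludes to; a single-modality attack would optimize each perturbation in isolation.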

