Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models

07/26/2023
by Dong Lu, et al.

Vision-language pre-training (VLP) models have shown vulnerability to adversarial examples in multimodal tasks. Furthermore, adversarial examples crafted on one model can be deliberately transferred to attack other black-box models. However, existing work has mainly focused on white-box attacks. In this paper, we present the first study of the adversarial transferability of recent VLP models. We observe that existing methods exhibit much lower transferability than their strong attack performance in white-box settings would suggest. This degradation is partly caused by the under-utilization of cross-modal interactions. In particular, unlike unimodal learning, VLP models rely heavily on cross-modal interactions, and the multimodal alignments are many-to-many, e.g., an image can be described by many different captions. To this end, we propose a highly transferable Set-level Guidance Attack (SGA) that thoroughly leverages modality interactions and incorporates alignment-preserving augmentation with cross-modal guidance. Experimental results demonstrate that SGA generates adversarial examples that transfer strongly across different VLP models on multiple downstream vision-language tasks. On image-text retrieval, SGA enhances the attack success rate of transfer attacks from ALBEF to TCL by a large margin (at least 9.78%).
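The core idea of set-level guidance can be illustrated with a toy sketch: rather than pushing an adversarial image away from its single paired caption embedding, the attack pushes it away from a *set* of caption embeddings, reflecting the many-to-many alignment the abstract describes. The sketch below is not the authors' code; it uses a made-up linear "image encoder" and a numerical gradient in place of a real VLP model and backpropagation, with a standard PGD-style sign update and an epsilon-ball projection.

```python
import numpy as np

# Toy stand-in for a VLP image encoder (assumed for illustration, not ALBEF/TCL).
rng = np.random.default_rng(0)
D_IN, D_EMB = 16, 8
W_img = rng.standard_normal((D_IN, D_EMB))

def embed_image(x):
    """Project an 'image' vector into the shared embedding space, L2-normalized."""
    v = x @ W_img
    return v / np.linalg.norm(v)

def set_guidance_attack(x, text_embs, eps=0.1, alpha=0.02, steps=20):
    """PGD that minimizes cosine similarity to EVERY caption embedding in the set."""
    x_adv = x.copy()
    h = 1e-4
    for _ in range(steps):
        # Numerical gradient of the summed image-text similarity (toy scale only;
        # a real attack would backpropagate through the model instead).
        grad = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            xp = x_adv.copy(); xp[i] += h
            xm = x_adv.copy(); xm[i] -= h
            sp = sum(embed_image(xp) @ t for t in text_embs)
            sm = sum(embed_image(xm) @ t for t in text_embs)
            grad[i] = (sp - sm) / (2 * h)
        x_adv = x_adv - alpha * np.sign(grad)      # step away from the caption set
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
    return x_adv

# A clean 'image' and a set of three normalized caption embeddings.
x = rng.standard_normal(D_IN)
captions = [rng.standard_normal(D_EMB) for _ in range(3)]
captions = [t / np.linalg.norm(t) for t in captions]

before = sum(embed_image(x) @ t for t in captions)
x_adv = set_guidance_attack(x, captions)
after = sum(embed_image(x_adv) @ t for t in captions)
```

After the attack, the image's summed similarity to the whole caption set drops while the perturbation stays within the epsilon budget; attacking the set rather than one caption is what distinguishes set-level guidance from a single-pair attack.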

Related research:

- Cross-Modal Transferable Adversarial Attacks from Images to Videos (12/10/2021)
- Towards Adversarial Attack on Vision-Language Pre-training Models (06/19/2022)
- Exploring Transferability of Multimodal Adversarial Samples for Vision-Language Pre-training Models with Contrastive Learning (08/24/2023)
- Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction (11/22/2019)
- Learning More May Not Be Better: Knowledge Transferability in Vision and Language Tasks (08/23/2022)
- Enhancing the Transferability via Feature-Momentum Adversarial Attack (04/22/2022)
- SmartTrim: Adaptive Tokens and Parameters Pruning for Efficient Vision-Language Models (05/24/2023)
