Boosting Adversarial Transferability using Dynamic Cues

02/23/2023
by Muzammal Naseer, et al.

The transferability of adversarial perturbations between image models has been extensively studied. In this setting, an attack is generated from a known surrogate, e.g., an ImageNet-trained model, and transferred to change the decision of an unknown (black-box) model trained on an image dataset. However, attacks generated from image models do not capture the dynamic nature of a moving object or a changing scene, due to the lack of temporal cues within image models. This leads to reduced transferability of adversarial attacks from representation-enriched image models such as Supervised Vision Transformers (ViTs), Self-supervised ViTs (e.g., DINO), and Vision-language models (e.g., CLIP) to black-box video models. In this work, we induce dynamic cues within image models without sacrificing their original performance on images. To this end, we optimize temporal prompts through frozen image models to capture motion dynamics. Our temporal prompts are the result of a learnable transformation that allows optimizing for temporal gradients during an adversarial attack to fool the motion dynamics. Specifically, we introduce spatial (image) and temporal (video) cues within the same source model through task-specific prompts. Attacking such prompts maximizes adversarial transferability from image-to-video and image-to-image models using attacks designed for image models. Our results indicate that the attacker does not need specialized architectures, e.g., divided space-time attention, 3D convolutions, or multi-view convolution networks, for different data modalities. Image models are effective surrogates for optimizing an adversarial attack to fool black-box models in a changing environment over time. Code is available at https://bit.ly/3Xd9gRQ
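The attack pipeline the abstract describes, learnable temporal prompts attached to a frozen image model and a gradient-sign attack whose gradients flow through that prompt path, can be sketched as follows. This is a minimal toy illustration under stated assumptions, not the paper's implementation: the "frozen image model" is a random linear classifier over per-frame features, the prompts are hypothetical per-frame additive vectors, and the attack is a single FGSM step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen image model: a linear classifier over
# D-dimensional per-frame features (the paper uses pretrained ViTs).
D, C, T = 8, 3, 4                    # feature dim, classes, frames per clip
W = rng.normal(size=(C, D))          # frozen weights, never updated

# Hypothetical temporal prompts: one learnable vector per frame position,
# added to the frame features before the frozen model (an assumption; the
# paper optimizes prompts through a frozen transformer).
prompts = rng.normal(scale=0.1, size=(T, D))

def logits(clip):
    """Per-frame logits of the frozen model, averaged over prompted frames."""
    return (W @ (clip + prompts).T).mean(axis=1)    # shape (C,)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(clip, label):
    """Cross-entropy of the clip-level prediction against `label`."""
    return -np.log(softmax(logits(clip))[label])

def fgsm(clip, label, eps=0.5):
    """One FGSM step on the whole clip; the gradient is taken through the
    temporal-prompt path (d loss / d logits = softmax - onehot, chained
    through the frame average and the frozen weights)."""
    g_logits = softmax(logits(clip))
    g_logits[label] -= 1.0
    g_frame = (W.T @ g_logits) / T                  # identical for every frame
    return clip + eps * np.sign(np.broadcast_to(g_frame, clip.shape))

clip = rng.normal(size=(T, D))                      # a clip of T frame features
label = int(np.argmax(logits(clip)))                # attack the current prediction
adv = fgsm(clip, label)
print(ce_loss(adv, label) > ce_loss(clip, label))   # → True: the loss increases
```

Because the toy logits are linear in the clip, the sign step provably increases the (convex-in-logits) cross-entropy, which is the transfer signal the paper attacks through the prompt path.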


