GAMA: Generative Adversarial Multi-Object Scene Attacks

09/20/2022
by Abhishek Aich, et al.

The majority of methods for crafting adversarial attacks have focused on scenes with a single dominant object (e.g., images from ImageNet). Natural scenes, on the other hand, include multiple dominant objects that are semantically related, so it is crucial to explore attack strategies that look beyond learning on single-object scenes or attacking single-object victim classifiers. Because generative models inherently craft perturbations that transfer strongly to unknown models, this paper presents the first approach that uses them for adversarial attacks on multi-object scenes. To represent the relationships between different objects in the input scene, we leverage the open-source pre-trained vision-language model CLIP (Contrastive Language-Image Pre-training), with the motivation of exploiting the semantics encoded in the language space along with the visual space. We call this attack approach Generative Adversarial Multi-object scene Attacks (GAMA). GAMA demonstrates the utility of the CLIP model as an attacker's tool for training formidable perturbation generators for multi-object scenes. Using the joint image-text features to train the generator, we show that GAMA can craft potent transferable perturbations that fool victim classifiers in various attack settings. For example, GAMA triggers roughly 16% more misclassification than state-of-the-art generative approaches in black-box settings where both the classifier architecture and the data distribution of the attacker differ from the victim's. Our code will be made publicly available soon.
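Below is a minimal sketch of the core idea described in the abstract, assuming a standard PyTorch setup: a small perturbation generator is trained against a frozen CLIP model by minimizing the cosine similarity between the perturbed scene's image embedding and both the clean image embedding and the embedding of the scene's caption. This is not the authors' released code; the generator architecture, the L_inf budget, the equal loss weighting, and names such as `PerturbationGenerator` and `train_step` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# CLIP stays frozen: it serves only as the attacker's joint image-text
# feature extractor, never as the victim model.
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()
for p in clip_model.parameters():
    p.requires_grad_(False)

# CLIP's expected input normalization for 224x224 RGB images.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)

def clip_normalize(x):
    return (x - CLIP_MEAN.to(x.device)) / CLIP_STD.to(x.device)

class PerturbationGenerator(nn.Module):
    """Toy fully convolutional generator: clean image -> bounded adversarial image."""
    def __init__(self, eps=10 / 255):
        super().__init__()
        self.eps = eps  # L_inf budget (illustrative value, not from the paper)
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # tanh bounds the perturbation inside the eps-ball; clamp keeps a valid image.
        return torch.clamp(x + self.eps * torch.tanh(self.net(x)), 0.0, 1.0)

gen = PerturbationGenerator().to(device)
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)

def train_step(images, captions):
    """images: (B, 3, 224, 224) in [0, 1] on `device`; captions: B scene descriptions."""
    with torch.no_grad():
        text_emb = F.normalize(
            clip_model.encode_text(clip.tokenize(captions).to(device)), dim=-1)
        clean_emb = F.normalize(
            clip_model.encode_image(clip_normalize(images)), dim=-1)
    adv_emb = F.normalize(
        clip_model.encode_image(clip_normalize(gen(images))), dim=-1)
    # Minimize cosine similarity of the adversarial embedding to BOTH the clean
    # image embedding and the matching caption embedding: a simplified
    # joint image-text objective.
    loss = (adv_emb * clean_emb).sum(-1).mean() + (adv_emb * text_emb).sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because CLIP is trained on web-scale image-text pairs, pushing the perturbed scene away from its own caption in the shared embedding space attacks the scene's multi-object semantics rather than any single classifier's decision boundary, which matches the abstract's motivation of exploiting semantics encoded in the language space along with the visual space.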

research · 09/20/2022
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks
State-of-the-art generative model-based attacks against image classifier...

research · 11/25/2019
ColorFool: Semantic Adversarial Colorization
Adversarial attacks that generate small L_p-norm perturbations to mislea...

research · 08/24/2023
Exploring Transferability of Multimodal Adversarial Samples for Vision-Language Pre-training Models with Contrastive Learning
Vision-language pre-training models (VLP) are vulnerable, especially to ...

research · 02/23/2023
Boosting Adversarial Transferability using Dynamic Cues
The transferability of adversarial perturbations between image models ha...

research · 08/19/2021
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes
Vision systems that deploy Deep Neural Networks (DNNs) are known to be v...

research · 02/27/2023
GLOW: Global Layout Aware Attacks for Object Detection
Adversarial attacks aim to perturb images such that a predictor outputs...

research · 03/29/2022
Zero-Query Transfer Attacks on Context-Aware Object Detectors
Adversarial attacks perturb images such that a deep neural network produ...
