Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks

09/20/2022
by   Abhishek Aich, et al.

State-of-the-art generative model-based attacks against image classifiers overwhelmingly focus on single-object (i.e., single dominant object) images. In contrast to such settings, we tackle the more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images, as they are representative of most real-world scenes. Our goal is to design an attack strategy that can learn from such natural scenes by leveraging the local patch differences that occur inherently in these images (e.g., the difference between a local patch on the object 'person' and one on the object 'bike' in a traffic scene). Our key idea is that, to misclassify an adversarial multi-object image, each local patch in the image should confuse the victim classifier. Based on this, we propose a novel generative attack (called Local Patch Difference or LPD-Attack) in which a contrastive loss function uses the aforesaid local differences in the feature space of multi-object scenes to optimize the perturbation generator. Through various experiments across diverse victim convolutional neural networks, we show that our approach outperforms baseline generative attacks, producing highly transferable perturbations under different white-box and black-box settings.
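To make the patch-level contrastive idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the paper's actual LPD-Attack loss. It assumes access to an intermediate feature map of the victim classifier, pools it into a grid of local patch embeddings, and pushes each adversarial patch embedding away from its clean counterpart while using the other clean patches of the same scene as contrastive references. The grid size, pooling choice, and function names (`patchwise_features`, `local_patch_contrastive_loss`) are assumptions made for illustration.

```python
# Illustrative sketch only -- not the authors' exact LPD-Attack objective.
import torch
import torch.nn.functional as F

def patchwise_features(feat_map, grid=4):
    """Average-pool a feature map into a grid x grid set of local patch embeddings.

    feat_map: (B, C, H, W) victim mid-layer features -> returns (B, grid*grid, C).
    """
    pooled = F.adaptive_avg_pool2d(feat_map, grid)   # (B, C, grid, grid)
    return pooled.flatten(2).transpose(1, 2)          # (B, grid*grid, C)

def local_patch_contrastive_loss(clean_feat, adv_feat, temperature=0.1):
    """Push each adversarial patch embedding away from its clean counterpart,
    treating the other clean patches of the same scene as contrastive references."""
    z_adv = F.normalize(patchwise_features(adv_feat), dim=-1)     # (B, P, C)
    z_clean = F.normalize(patchwise_features(clean_feat), dim=-1) # (B, P, C)
    # Cosine similarity between every adversarial patch and every clean patch.
    sim = torch.bmm(z_adv, z_clean.transpose(1, 2)) / temperature # (B, P, P)
    # The standard InfoNCE loss (cross-entropy on the diagonal) would pull an
    # adversarial patch toward its own clean patch; an attack wants the opposite,
    # so we return its negative and minimize that.
    labels = torch.arange(sim.size(1), device=sim.device).expand(sim.size(0), -1)
    return -F.cross_entropy(sim.reshape(-1, sim.size(-1)), labels.reshape(-1))
```

In a full attack, such a loss would be computed on victim features of the clean image and of the generator's perturbed output, and its gradient would be used to update the perturbation generator rather than the image itself.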


Related research

- 09/20/2022 · GAMA: Generative Adversarial Multi-Object Scene Attacks
- 07/02/2023 · Query-Efficient Decision-based Black-Box Patch Attack
- 09/27/2022 · Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection
- 12/26/2022 · Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks
- 04/25/2023 · Patch-based 3D Natural Scene Generation from a Single Example
- 10/16/2022 · Object-Attentional Untargeted Adversarial Attack
- 02/27/2023 · GLOW: Global Layout Aware Attacks for Object Detection
