Object-Attentional Untargeted Adversarial Attack

10/16/2022
by Chao Zhou, et al.

Deep neural networks face severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches tend to produce noticeable visual artifacts in the adversarial example. Compared with the smooth regions of an image, the object region generally contains more edges and more complex textures, so small perturbations on it are less perceptible. On the other hand, the object region is undoubtedly the decisive part of an image for classification tasks. Motivated by these two facts, we propose an object-attentional adversarial attack method for untargeted attacks. Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection (SOD) region from HVPNet. Furthermore, we design an activation strategy to avoid the adverse effects of incomplete SOD. Then we perform the adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA). To evaluate the proposed method, we create a dedicated dataset, named COCO-Reduced-ImageNet, by extracting from ImageNet-1K all images that contain the object categories defined by COCO. Experimental results on ImageNet-1K and COCO-Reduced-ImageNet show that, under various system settings, our method yields adversarial examples with better perceptual quality while reducing the query budget by up to 24.16% compared with state-of-the-art approaches including SimBA.
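The abstract describes a pipeline of masking followed by a region-restricted SimBA attack. The sketch below illustrates that idea in NumPy under stated assumptions: `boxes` stands in for YOLOv4-style detection boxes, `saliency` for an HVPNet-style saliency map in [0, 1], and `prob_fn` for a query-only function returning the target model's class probabilities. These names are hypothetical, and the fallback rule in `object_mask` is only a rough proxy for the paper's activation strategy, whose exact formulation is not given in the abstract.

```python
# Minimal sketch of an object-attentional untargeted attack, assuming
# hypothetical helpers: boxes (detection boxes), saliency (SOD map in [0,1]),
# and prob_fn (black-box model returning a class-probability vector).
import numpy as np

def object_mask(boxes, saliency, thresh=0.5, min_ratio=0.05):
    """Intersect detection boxes with the binarized saliency map.

    If the intersection is very small (an assumed proxy for the paper's
    'activation strategy' against incomplete SOD), fall back to the
    detection boxes alone.
    """
    h, w = saliency.shape
    box_mask = np.zeros((h, w), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        box_mask[y1:y2, x1:x2] = True
    inter = box_mask & (saliency > thresh)
    if inter.sum() < min_ratio * box_mask.sum():
        return box_mask               # SOD too incomplete; use boxes only
    return inter

def simba_on_region(x, label, prob_fn, mask, eps=0.2, max_queries=10000):
    """Pixel-basis SimBA restricted to the masked region: try +/- eps on a
    random masked pixel/channel and keep the step if it lowers the
    true-class probability."""
    x_adv = x.copy()                  # x: (H, W, C) float image in [0, 1]
    idx = np.argwhere(mask)           # candidate (row, col) coordinates
    rng = np.random.default_rng(0)
    order = rng.permutation(len(idx))
    p = prob_fn(x_adv)[label]
    queries = 0
    for i in order:
        if queries >= max_queries:
            break
        r, c = idx[i]
        ch = rng.integers(x.shape[2])  # random channel
        for sign in (+1.0, -1.0):
            cand = x_adv.copy()
            cand[r, c, ch] = np.clip(cand[r, c, ch] + sign * eps, 0.0, 1.0)
            queries += 1
            p_new = prob_fn(cand)[label]
            if p_new < p:              # keep only steps that help
                x_adv, p = cand, p_new
                break
    return x_adv
```

Restricting the search to the object mask shrinks the set of candidate directions SimBA must try, which is consistent with the abstract's claim of query savings: perturbations land only on the textured, decision-relevant region instead of the whole image.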

Related research

06/04/2022 · Saliency Attack: Towards Imperceptible Black-box Adversarial Attack
Deep neural networks are vulnerable to adversarial examples, even in the...

01/19/2021 · PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack
The studies on black-box adversarial attacks have become increasingly pr...

02/27/2023 · GLOW: Global Layout Aware Attacks for Object Detection
Adversarial attacks aim to perturb images such that a predictor outputs...

01/22/2022 · Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection
Object detection has been widely used in many safety-critical tasks, suc...

08/16/2020 · Attack on Multi-Node Attention for Object Detection
This paper focuses on high-transferable adversarial attacks on detection...

12/17/2019 · APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection
Physical adversarial attacks threaten to fool object detection systems, ...

09/20/2022 · Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks
State-of-the-art generative model-based attacks against image classifier...
