Transferable Physical Attack against Object Detection with Separable Attention

05/19/2022
by Yu Zhang, et al.

Transferable adversarial attacks have long been in the spotlight, since deep learning models have been shown to be vulnerable to adversarial samples. However, existing physical attack methods do not pay enough attention to transferability to unseen models, leading to poor black-box attack performance. In this paper, we put forward a novel method for generating physically realizable adversarial camouflage to achieve transferable attacks against detection models. More specifically, we first introduce multi-scale attention maps based on detection models to capture the features of objects at various resolutions. Meanwhile, we adopt a sequence of composite transformations to obtain averaged attention maps, which curbs model-specific noise in the attention and thus further boosts transferability. Unlike general visualization-interpretation methods, where model attention should fall on the foreground object as much as possible, we attack separable attention from the opposite perspective, i.e., suppressing the attention of the foreground and enhancing that of the background. Consequently, transferable adversarial camouflage can be generated efficiently with our novel attention-based loss function. Extensive comparison experiments verify the superiority of our method over state-of-the-art methods.
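The two core ideas in the abstract can be sketched in a few lines of code. The sketch below is illustrative only: the function names, the nearest-neighbor mask resizing, and the simple mean-difference form of the loss are assumptions, not the paper's exact formulation.

```python
import numpy as np

def averaged_attention(attn_fn, image, transforms):
    """Average attention maps over a sequence of input transformations
    (e.g. scaling, translation) to curb model-specific noise.
    `attn_fn` maps an image to an attention map (assumed interface)."""
    maps = [attn_fn(t(image)) for t in transforms]
    return sum(maps) / len(maps)

def separable_attention_loss(attn_maps, fg_mask):
    """Illustrative separable-attention loss: minimizing it suppresses
    attention on the foreground object while enhancing attention on
    the background (hypothetical names, not the paper's exact form)."""
    total = 0.0
    for attn in attn_maps:  # multi-scale attention maps, one per resolution
        h, w = attn.shape
        # nearest-neighbor downsample of the binary foreground mask
        ys = np.arange(h) * fg_mask.shape[0] // h
        xs = np.arange(w) * fg_mask.shape[1] // w
        m = fg_mask[np.ix_(ys, xs)]
        fg = (attn * m).sum() / max(m.sum(), 1)              # mean foreground attention
        bg = (attn * (1 - m)).sum() / max((1 - m).sum(), 1)  # mean background attention
        total += fg - bg  # drive foreground attention down, background up
    return total
```

Optimizing the camouflage texture to minimize this quantity would push a detector's attention off the object, the opposite of what saliency-based interpretation methods aim for.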


Related research:

- 08/16/2020, Attack on Multi-Node Attention for Object Detection. This paper focuses on high-transferable adversarial attacks on detection...
- 04/27/2022, Improving the Transferability of Adversarial Examples with Restructure Embedded Patches. Vision transformers (ViTs) have demonstrated impressive performance in v...
- 04/26/2021, Delving into Data: Effectively Substitute Training for Black-box Attack. Deep models have shown their vulnerability when processing adversarial s...
- 11/21/2021, Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability. The black-box adversarial attack has attracted impressive attention for ...
- 03/01/2021, Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World. Deep learning models are vulnerable to adversarial examples. As a more t...
- 05/02/2023, Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature. Recent research has shown that Deep Neural Networks (DNNs) are highly vu...
- 03/18/2022, DTA: Physical Camouflage Attacks using Differentiable Transformation Network. To perform adversarial attacks in the physical world, many studies have ...
