Transferable Physical Attack against Object Detection with Separable Attention

by Yu Zhang, et al.

Transferable adversarial attacks have remained in the spotlight since deep learning models were shown to be vulnerable to adversarial samples. However, existing physical attack methods do not pay enough attention to transferability to unseen models, leading to poor black-box attack performance. In this paper, we put forward a novel method of generating physically realizable adversarial camouflage to achieve transferable attacks against detection models. More specifically, we first introduce multi-scale attention maps based on detection models to capture object features at various resolutions. Meanwhile, we adopt a sequence of composite transformations to obtain averaged attention maps, which curbs model-specific noise in the attention and thus further boosts transferability. Unlike general visualization-interpretation methods, where model attention should fall on the foreground object as much as possible, we attack separable attention from the opposite perspective, i.e., suppressing attention on the foreground and enhancing attention on the background. Consequently, transferable adversarial camouflage can be yielded efficiently with our novel attention-based loss function. Extensive comparison experiments verify the superiority of our method over state-of-the-art methods.
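The core idea of the attention-based loss can be illustrated with a minimal sketch. The function below is a hypothetical reconstruction, not the authors' implementation: it assumes each attention map is a 2D array (e.g. a multi-scale map already averaged over the composite transformations), takes a binary foreground mask, and penalizes attention on the object while rewarding attention on the background. The function name and signature are illustrative assumptions.

```python
import numpy as np

def separable_attention_loss(attn_maps, fg_mask):
    """Hypothetical sketch of a separable-attention loss.

    attn_maps: list of 2D attention maps (H, W), e.g. one per scale,
               assumed already averaged over composite transformations.
    fg_mask:   binary (H, W) mask, 1 on the foreground object.

    Minimizing this loss suppresses foreground attention and
    enhances background attention, as described in the abstract.
    """
    loss = 0.0
    for a in attn_maps:
        # Mean attention over the foreground region.
        fg = (a * fg_mask).sum() / max(fg_mask.sum(), 1)
        # Mean attention over the background region.
        bg = (a * (1 - fg_mask)).sum() / max((1 - fg_mask).sum(), 1)
        # Lower foreground attention and higher background attention
        # both decrease the loss.
        loss += fg - bg
    return loss / len(attn_maps)
```

In a full attack pipeline, this term would be backpropagated (e.g. in PyTorch) through the detector's attention maps to update the camouflage texture; the sketch above only shows the scalar objective.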

