
Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices

by Tao Bai, et al.

Deep learning based image recognition systems are widely deployed on mobile devices today. Recent studies, however, have shown that deep learning models are vulnerable to adversarial examples. One variant of adversarial examples, the adversarial patch, has drawn researchers' attention due to its strong attack ability. Although adversarial patches achieve high attack success rates, they are easily detected because of the visual inconsistency between the patch and the original image. Moreover, patch generation methods in the literature typically require large amounts of data, which is computationally expensive and time-consuming. To tackle these challenges, we propose an approach that generates inconspicuous adversarial patches from a single image. In our approach, we first select patch locations based on the perceptual sensitivity of the victim model, then produce adversarial patches in a coarse-to-fine manner using multi-scale generators and discriminators. Through adversarial training, the patches are encouraged to be consistent with the background image while preserving strong attack ability. Extensive experiments on models with various architectures and training methods show that our approach achieves strong attack performance in white-box settings and excellent transferability in black-box settings. Compared to other adversarial patches, ours carry the lowest risk of detection and can evade human observation, as supported by saliency-map illustrations and user-study results. Lastly, we show that our adversarial patches can be applied in the physical world.


Generating Adversarial yet Inconspicuous Patches with a Single Image

Deep neural networks have been shown vulnerable to adversarial patches, w...

Adversarial Training against Location-Optimized Adversarial Patches

Deep neural networks have been shown to be susceptible to adversarial ex...

Harnessing Perceptual Adversarial Patches for Crowd Counting

Crowd counting, which is significantly important for estimating the numb...

Patch Attack for Automatic Check-out

Adversarial examples are inputs with imperceptible perturbations that ea...

IPatch: A Remote Adversarial Patch

Applications such as autonomous vehicles and medical screening use deep ...

Boosting Cross-task Transferability of Adversarial Patches with Visual Relations

The transferability of adversarial examples is a crucial aspect of evalu...

Adv-Inpainting: Generating Natural and Transferable Adversarial Patch via Attention-guided Feature Fusion

The rudimentary adversarial attacks utilize additive noise to attack fac...