Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices

06/29/2021
by Tao Bai, et al.

Deep learning based image recognition systems have been widely deployed on mobile devices. Recent studies, however, have shown that deep learning models are vulnerable to adversarial examples. One variant of adversarial examples, the adversarial patch, has drawn researchers' attention due to its strong attack ability. Although adversarial patches achieve high attack success rates, they are easily detected because of the visual inconsistency between the patches and the original images. Moreover, existing methods usually require a large amount of data to generate adversarial patches, which is computationally expensive and time-consuming. To tackle these challenges, we propose an approach that generates inconspicuous adversarial patches from a single image. Our approach first selects patch locations based on the perceptual sensitivity of the victim model, then produces adversarial patches in a coarse-to-fine manner using multi-scale generators and discriminators. Through adversarial training, the patches are encouraged to be consistent with the background image while retaining strong attack ability. Extensive experiments on models with different architectures and training methods demonstrate strong attack ability in white-box settings and excellent transferability in black-box settings. Compared with other adversarial patches, ours carry the lowest risk of detection and can evade human observation, as supported by saliency-map visualizations and user evaluations. Finally, we show that our adversarial patches can be applied in the physical world.
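The abstract describes two mechanisms: choosing the patch location where the victim model is most perceptually sensitive, and training the patch with a GAN-style objective so it blends into the background while still fooling the classifier. Below is a minimal PyTorch sketch of both ideas under stated assumptions; the function names, the gradient-magnitude sensitivity proxy, the patch size, and the loss weight are illustrative, not the authors' exact coarse-to-fine, multi-scale implementation.

```python
# Minimal sketch of the two ideas from the abstract. All names and
# hyperparameters here are illustrative assumptions, not the paper's
# exact method.
import torch
import torch.nn.functional as F

def find_patch_location(model, image, label, patch_size=50):
    """Pick the patch position where the model is most sensitive,
    approximated here by input-gradient magnitude."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Per-pixel sensitivity, summed over channels: shape (H, W)
    sens = image.grad.abs().sum(dim=1).squeeze(0)
    # Total sensitivity inside every patch_size x patch_size window
    window = torch.ones(1, 1, patch_size, patch_size, device=sens.device)
    scores = F.conv2d(sens[None, None], window)
    idx = scores.flatten().argmax().item()
    y, x = divmod(idx, scores.shape[-1])
    return y, x  # top-left corner of the most sensitive window

def patch_losses(model, discriminator, patched_image, target_label):
    """Combined objective: fool the classifier while a GAN
    discriminator pushes the patch toward background consistency."""
    adv_loss = F.cross_entropy(model(patched_image), target_label)
    # Non-saturating generator loss: make the patched image look "real"
    gan_loss = -torch.log(
        torch.sigmoid(discriminator(patched_image)) + 1e-8
    ).mean()
    return adv_loss + 0.1 * gan_loss  # 0.1 is an assumed weighting
```

In a full pipeline, the patch would be pasted at the location returned by find_patch_location and optimized against patch_losses, with the multi-scale generators and discriminators mentioned in the abstract refining it from coarse to fine.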


