Patch Attack for Automatic Check-out

05/19/2020
by Aishan Liu, et al.

Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, the adversarial patch, with noise confined to a small and localized region, has emerged for its easy feasibility in real-world scenarios. However, existing strategies fail to generate adversarial patches with strong generalization ability: the patches are input-specific and cannot attack images from all classes, especially ones unseen during training. To address this problem, this paper proposes a bias-based framework that generates class-agnostic universal adversarial patches with strong generalization ability by exploiting both the perceptual and semantic biases of models. Regarding the perceptual bias, since DNNs are strongly biased towards textures, we exploit hard examples, which convey strong model uncertainty, and extract a textural patch prior from them using style similarities. This prior lies closer to decision boundaries and thus promotes attacks. To alleviate the heavy dependence on large amounts of training data in universal attacks, we also exploit the semantic bias: prototypes capturing the class-wise preference are introduced and pursued by maximizing the multi-class margin to aid universal training. Taking Automatic Check-out (ACO) as the typical scenario, extensive experiments, covering white-box and black-box settings in both the digital world (RPC, the largest ACO-related dataset) and the physical world (Taobao and JD, the world's largest online shopping platforms), are conducted. Experimental results demonstrate that the proposed framework outperforms state-of-the-art adversarial patch attack methods.
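
The abstract describes extracting a textural patch prior from hard examples via style similarities. The paper's exact formulation is not given here, but style similarity is commonly measured by comparing Gram matrices of convolutional feature maps, as in neural style transfer. The sketch below illustrates that idea in PyTorch; the function names (gram_matrix, style_loss) and the choice of an L2 distance between Gram matrices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise feature correlations of a (B, C, H, W) feature map,
    normalized by the number of entries. Captures texture statistics."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(patch_feats: torch.Tensor,
               hard_example_feats: torch.Tensor) -> torch.Tensor:
    """L2 distance between Gram matrices: small when the patch's texture
    statistics match those of the hard examples."""
    return F.mse_loss(gram_matrix(patch_feats),
                      gram_matrix(hard_example_feats))
```

Minimizing such a loss over the patch pixels (with features taken from an intermediate layer of the target model) would pull the patch toward the texture statistics of the hard examples.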
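The abstract also mentions pursuing class prototypes by maximizing the multi-class margin. A minimal sketch, assuming the margin is the gap between the target-class logit and the largest competing logit and that the prototype is found by gradient ascent over the input; model, shape, steps, and lr are hypothetical parameters.

```python
def multiclass_margin(logits: torch.Tensor, target: int) -> torch.Tensor:
    """Margin between the target logit and the largest non-target logit."""
    other = logits.clone()
    other[:, target] = float('-inf')  # mask out the target class
    return logits[:, target] - other.max(dim=1).values

def pursue_prototype(model, target: int,
                     shape=(1, 3, 224, 224), steps=200, lr=0.05):
    """Gradient-ascent search for an input that maximizes the margin
    of the target class, yielding a class prototype."""
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -multiclass_margin(model(x), target).mean()
        loss.backward()
        opt.step()
        x.data.clamp_(0, 1)  # keep the prototype a valid image
    return x.detach()
```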

Related Research

07/03/2021

Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity

Deep neural networks (DNNs) have been found to be vulnerable to adversar...
06/29/2021

Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices

Deep learning based image recognition systems have been widely deployed ...
09/21/2020

Generating Adversarial yet Inconspicuous Patches with a Single Image

Deep neural networks have been shown vulnerable to adversarial patches, w...
08/11/2021

Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack

Adversarial patch attack against image classification deep neural networ...
12/26/2022

Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks

Adversarial patch is an important form of real-world adversarial attack ...
03/21/2023

Efficient Decision-based Black-box Patch Attacks on Video Recognition

Although Deep Neural Networks (DNNs) have demonstrated excellent perform...
11/19/2022

Phonemic Adversarial Attack against Audio Recognition in Real World

Recently, adversarial attacks for audio recognition have attracted much ...
