Harnessing Perceptual Adversarial Patches for Crowd Counting

09/16/2021
by Shunchang Liu, et al.

Crowd counting, which estimates the number of people in safety-critical scenes, has been shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches). Though harmful, adversarial examples are also valuable for assessing and better understanding model robustness. However, existing adversarial example generation methods for crowd counting lack strong transferability across different black-box models. Motivated by the fact that transferability is positively correlated with model-invariant characteristics, this paper proposes the Perceptual Adversarial Patch (PAP) generation framework, which learns the perceptual features shared between models by exploiting both model scale perception and position perception. Specifically, PAP uses differentiable interpolation and density attention to learn cross-model invariance during training, leading to better transferability. In addition, we surprisingly found that our adversarial patches can also be used to improve the performance of vanilla models, alleviating several challenges including cross-dataset generalization and complex backgrounds. Extensive experiments in both digital and physical-world scenarios demonstrate the effectiveness of PAP.
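
The abstract names the two ingredients of PAP (scale perception via differentiable interpolation, and position perception via density attention) but gives no implementation details. The PyTorch sketch below is only a rough illustration of how such a patch-optimization loop could be structured; the model interface `model(img) -> density map`, the scale range, the attention weighting, and all hyperparameters are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def generate_perceptual_patch(model, images, patch_size=100, steps=500, lr=0.01):
    # The patch is the only trainable parameter; the victim model stays frozen.
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        total_loss = torch.zeros(())
        for img in images:
            img = img.unsqueeze(0)          # (1, 3, H, W), assumed larger than the patch
            _, _, H, W = img.shape

            # Scale perception: differentiably resize the patch to a random
            # scale so the attack holds across the sizes a model may perceive.
            scale = torch.empty(1).uniform_(0.5, 1.5).item()
            size = min(int(patch_size * scale), H - 1, W - 1)
            scaled = F.interpolate(patch, size=(size, size),
                                   mode='bilinear', align_corners=False)

            # Paste the patch at a random position in the scene.
            y = torch.randint(0, H - size, (1,)).item()
            x = torch.randint(0, W - size, (1,)).item()
            adv = img.clone()
            adv[:, :, y:y + size, x:x + size] = scaled.clamp(0, 1)

            # Position perception: weight the objective with the model's own
            # density map so the patch targets regions the model attends to.
            with torch.no_grad():
                attention = model(img)       # density map on the clean image
            pred = model(adv)                # density map on the attacked image

            # Maximize the attention-weighted predicted density (count error),
            # i.e. minimize its negative.
            total_loss = total_loss - (attention * pred).sum()

        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()
        patch.data.clamp_(0, 1)              # keep the patch a valid image

    return patch.detach()
```

In practice, a transfer attack of this kind would typically be trained against one or several white-box surrogate models and then evaluated on unseen black-box counters; the loop above omits that ensemble setup and any physical-world constraints (printability, lighting) for brevity.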

research · 04/22/2021
Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting
Crowd counting has drawn much attention due to its importance in safety-...

research · 06/29/2021
Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices
Deep learning based image recognition systems have been widely deployed ...

research · 04/13/2022
Defensive Patches for Robust Recognition in the Physical World
To operate in real-world high-stakes environments, deep learning systems...

research · 04/11/2023
Boosting Cross-task Transferability of Adversarial Patches with Visual Relations
The transferability of adversarial examples is a crucial aspect of evalu...

research · 03/01/2021
Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World
Deep learning models are vulnerable to adversarial examples. As a more t...

research · 05/19/2020
Patch Attack for Automatic Check-out
Adversarial examples are inputs with imperceptible perturbations that ea...

research · 09/21/2020
Generating Adversarial yet Inconspicuous Patches with a Single Image
Deep neural networks have been shown vulnerable to adversarial patches, w...
