Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting

04/22/2021
by   Qiming Wu, et al.

Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems. In particular, deep neural network (DNN) methods have significantly reduced estimation errors in crowd counting tasks. However, recent studies have demonstrated that DNNs are vulnerable to adversarial attacks, i.e., normal images with human-imperceptible perturbations can mislead DNNs into making false predictions. In this work, we propose a robust attack strategy called Adversarial Patch Attack with Momentum (APAM) to systematically evaluate the robustness of crowd counting models, where the attacker's goal is to create an adversarial perturbation that severely degrades model performance, potentially leading to public safety accidents (e.g., stampedes). Specifically, the proposed attack leverages the extreme-density background information of input images to generate robust adversarial patches via a series of transformations (e.g., interpolation, rotation, etc.). We observe that by perturbing less than 6% of image pixels, our attacks severely degrade the performance of crowd counting systems, both digitally and physically. To further enhance the adversarial robustness of crowd counting models, we propose the first regression-model-based Randomized Ablation (RA) defense, which is more effective than Adversarial Training (ADT): the Mean Absolute Error of RA is 5 lower than that of ADT on clean samples and 30 lower on adversarial examples. Extensive experiments on five crowd counting models demonstrate the effectiveness and generality of the proposed method. Code is available at <https://github.com/harrywuhust2022/Adv-Crowd-analysis>.
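The core mechanics of a momentum-based patch attack can be illustrated with a minimal sketch. This is not the authors' APAM implementation: the "model" below is a hypothetical linear stand-in for a crowd-counting network (so its gradient is available in closed form), and the patch size, step size, and momentum coefficient are illustrative. The sketch only shows the shared idea: restrict the perturbation to a small patch (under 6% of pixels, as in the paper) and accumulate gradients with momentum to push the predicted count away from the clean estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a crowd-counting model: the predicted count
# is a weighted sum of pixel intensities (a real model would be a CNN
# producing a density map that is summed).
H = W = 64
weights = rng.uniform(0.5, 1.5, size=(H, W))

def predict_count(img):
    return float((weights * img).sum())

image = rng.uniform(0.0, 1.0, size=(H, W))
clean_count = predict_count(image)

# Patch covering less than 6% of the image: a 15x15 square in a 64x64
# image is 225/4096 ~ 5.5% of the pixels.
py, px, ps = 10, 10, 15
patch = image[py:py + ps, px:px + ps].copy()

# Momentum-accumulated gradient ascent on the patch pixels only,
# driving the predicted count upward.
velocity = np.zeros_like(patch)
mu, lr, steps = 0.9, 0.05, 50
for _ in range(steps):
    # For the linear toy model, the gradient of the count w.r.t. the
    # patch pixels is simply the corresponding weight slice; a DNN
    # would require backpropagation here.
    grad = weights[py:py + ps, px:px + ps]
    velocity = mu * velocity + grad / (np.abs(grad).sum() + 1e-12)
    patch = np.clip(patch + lr * np.sign(velocity), 0.0, 1.0)

adversarial = image.copy()
adversarial[py:py + ps, px:px + ps] = patch
adv_count = predict_count(adversarial)

perturbed_fraction = ps * ps / (H * W)
print(f"perturbed pixels: {perturbed_fraction:.1%}")
print(f"clean count {clean_count:.1f} -> adversarial count {adv_count:.1f}")
```

In the paper's setting the patch is additionally passed through random transformations (interpolation, rotation, etc.) during optimization so that it remains effective when printed and placed in a physical scene; that expectation-over-transformations loop is omitted here for brevity.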


