Physical Adversarial Attacks for Surveillance: A Survey

05/01/2023, by Kien Nguyen et al.

Modern automated surveillance techniques rely heavily on deep learning methods. Despite their superior performance, these learning systems are inherently vulnerable to adversarial attacks: maliciously crafted inputs designed to mislead, or trick, models into making incorrect predictions. An adversary can physically change their appearance, by wearing adversarial t-shirts, glasses, or hats, or by behaving in specific ways, to evade various forms of detection, tracking, and recognition by surveillance systems, and thereby gain unauthorized access to secure properties and assets. This poses a severe threat to the security and safety of modern surveillance systems. This paper reviews recent attempts and findings in learning and designing physical adversarial attacks for surveillance applications. In particular, we propose a framework for analyzing physical adversarial attacks and, under this framework, provide a comprehensive survey of physical adversarial attacks on four key surveillance tasks: detection, identification, tracking, and action recognition. Furthermore, we review and analyze strategies for defending against physical adversarial attacks and methods for evaluating the strength of those defenses. The insights in this paper present an important step toward building surveillance systems that are resilient to physical adversarial attacks.
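To make the notion of a "maliciously crafted input" concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), one canonical way digital adversarial perturbations are generated before being adapted to physical artifacts such as printed patches. The linear model, its weights, and the input values are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" linear classifier: p(class 1 | x) = sigmoid(w.x + b)
w = np.array([2.0, -3.0, 1.5])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

# A benign input the model confidently assigns to class 1
# (think: "person detected" in a surveillance pipeline).
x = np.array([1.0, 0.2, 0.3])
p_clean = predict(x)            # well above 0.5

# For this model, the gradient of the cross-entropy loss (true label y = 1)
# with respect to the input is (p - y) * w. FGSM perturbs the input by a
# small step in the direction of the sign of that gradient.
y = 1.0
grad = (predict(x) - y) * w
eps = 0.9                       # L-infinity perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad)

p_adv = predict(x_adv)          # pushed below 0.5: the prediction flips
```

Physical attacks face the additional challenge of surviving printing, lighting, and viewpoint changes, but the underlying optimization follows this same gradient-driven idea.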

