Detection of Adversarial Physical Attacks in Time-Series Image Data

04/27/2023
by Ramneet Kaur, et al.

Deep neural networks (DNNs) have become a common sensing modality in autonomous systems, as they allow the system to semantically perceive its surroundings from input images. Nevertheless, DNN models have proven to be vulnerable to adversarial digital and physical attacks. To mitigate this issue, several detection frameworks have been proposed to detect whether a single input image has been manipulated by adversarial digital noise. In our prior work, we proposed a real-time detector, called VisionGuard (VG), for adversarial physical attacks against single input images to DNN models. Building upon that work, we propose VisionGuard* (VG*), which couples VG with majority-vote methods to detect adversarial physical attacks in time-series image data, e.g., videos. This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes. We emphasize that majority-vote mechanisms are quite common in autonomous system applications (among many others), e.g., in autonomous driving stacks for object detection. In this paper, we investigate, both theoretically and experimentally, how this widely used mechanism can be leveraged to enhance the performance of adversarial detectors. We evaluate VG* on videos of traffic signs, both clean and physically attacked by a state-of-the-art robust physical attack. We provide extensive comparative experiments against detectors originally designed for out-of-distribution data and digitally attacked images.
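The abstract describes coupling a per-frame adversarial detector with a majority vote over a time series of images. The sketch below illustrates that general idea only; it is not the paper's implementation. The per-frame detector `frame_is_adversarial`, the sliding-window size, and the simple more-than-half threshold are all assumptions introduced for illustration.

```python
# Minimal sketch: majority-vote detection over a time series of frames.
# Assumptions (not from the paper): a per-frame detector `frame_is_adversarial`
# that returns True/False, a fixed sliding-window size, and a >1/2 threshold.

from collections import deque
from typing import Callable, Iterable, List


def majority_vote_detection(
    frames: Iterable[object],
    frame_is_adversarial: Callable[[object], bool],
    window_size: int = 7,
) -> List[bool]:
    """Flag each time step as attacked if more than half of the per-frame
    detections inside the current sliding window are positive."""
    window = deque(maxlen=window_size)  # holds the most recent per-frame votes
    decisions = []
    for frame in frames:
        window.append(frame_is_adversarial(frame))  # per-frame detector vote
        decisions.append(sum(window) > len(window) / 2)  # majority over window
    return decisions


if __name__ == "__main__":
    # Toy usage with a stand-in detector that flags frames labeled "attacked".
    video = ["clean", "clean", "attacked", "attacked", "attacked", "clean"]
    print(majority_vote_detection(video, lambda f: f == "attacked", window_size=3))
```

The appeal of such a wrapper is that isolated per-frame false alarms (or misses) are smoothed out, which is why the paper studies how this widely used voting mechanism affects detector performance.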

