X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail

06/14/2023
by   Omer Hofman, et al.

Object detection models, which are widely used in various domains (such as retail), have been shown to be vulnerable to adversarial attacks. Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks. We present X-Detect, a novel adversarial patch detector that can: i) detect adversarial samples in real time, allowing the defender to take preventive action; ii) provide explanations for the alerts raised to support the defender's decision-making process; and iii) handle unfamiliar threats in the form of new attacks. Given a new scene, X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques to determine whether an alert needs to be raised. X-Detect was evaluated in both the physical and digital space using five different attack scenarios (including adaptive attacks) and the COCO dataset and our new Superstore dataset. The physical evaluation was performed using a smart shopping cart setup in real-world settings and included 17 adversarial patch attacks recorded in 1,700 adversarial videos. The results showed that X-Detect outperforms the state-of-the-art methods in distinguishing between benign and adversarial scenes for all attack scenarios while maintaining a 0% FPR (no false alarms) and providing actionable explanations for the alerts raised. A demo is available.
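The abstract describes X-Detect as an ensemble of explainable-by-design base detectors whose combined verdict decides whether an alert is raised. As a minimal sketch of that ensemble-decision idea (the names `Verdict`, `x_detect_ensemble`, and the voting threshold are illustrative assumptions, not the authors' implementation):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical verdict from one base detector: a binary alert flag
# plus a human-readable explanation to support the defender's decision.
@dataclass
class Verdict:
    is_adversarial: bool
    explanation: str

# Each base detector (e.g. object extraction, scene manipulation,
# feature transformation) maps a scene to a Verdict.
Detector = Callable[[object], Verdict]

def x_detect_ensemble(
    scene: object, detectors: List[Detector], min_votes: int = 1
) -> Tuple[bool, List[str]]:
    """Raise an alert if at least `min_votes` base detectors flag the scene.

    Returns the alert decision and the explanations of the detectors
    that voted for it (the "actionable explanations" in the abstract).
    """
    verdicts = [detector(scene) for detector in detectors]
    alerts = [v for v in verdicts if v.is_adversarial]
    raised = len(alerts) >= min_votes
    return raised, [v.explanation for v in alerts]
```

A single suspicious vote raising an alert (`min_votes=1`) favors recall over precision; the paper's actual aggregation rule may differ.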

Related research

- On Physical Adversarial Patches for Object Detection (06/20/2019): In this paper, we demonstrate a physical adversarial patch attack agains...
- Detection of Adversarial Physical Attacks in Time-Series Image Data (04/27/2023): Deep neural networks (DNN) have become a common sensing modality in auto...
- The Translucent Patch: A Physical and Universal Attack on Object Detectors (12/23/2020): Physical adversarial attacks against object detectors have seen increasi...
- Can 3D Adversarial Logos Cloak Humans? (06/25/2020): With the trend of adversarial attacks, researchers attempt to fool train...
- Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World (01/21/2022): Deep learning models have been shown to be vulnerable to recent backdoor...
- Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World (07/27/2023): Physical adversarial attacks have put a severe threat to DNN-based objec...
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World (07/15/2023): Recently, physical adversarial attacks have been presented to evade DNNs...
