SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations

07/08/2020
by Giulio Lovisotto, et al.

Whilst significant research effort into adversarial examples (AE) has emerged in recent years, the main vector for realizing these attacks in the real world currently relies on static adversarial patches, which are limited by their conspicuousness and cannot be modified once deployed. In this paper, we propose Short-Lived Adversarial Perturbations (SLAP), a novel technique that allows adversaries to realize robust, dynamic real-world AE from a distance. As we show in this paper, such attacks can be achieved using a light projector to shine a specifically crafted adversarial image onto real-world objects, perturbing them and transforming them into AE. This gives the adversary greater control over the attack compared to adversarial patches: (i) projections can be dynamically turned on and off or modified at will, (ii) projections do not suffer from the locality constraint imposed by patches, making them harder to detect. We study the feasibility of SLAP in the self-driving scenario, targeting both object detection and traffic sign recognition tasks. We demonstrate that the proposed method generates AE that are robust to different environmental and lighting conditions for several networks: we successfully cause misclassifications in state-of-the-art networks such as Yolov3 and Mask-RCNN with up to a 98% success rate across a variety of angles and distances. Additionally, we demonstrate that AE generated with SLAP can bypass SentiNet, a recent AE detection method that relies on the fact that adversarial patches generate highly salient and localized areas in input images.
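To make the mechanism concrete, below is a minimal, hypothetical sketch (in PyTorch; not the authors' code) of how a projected perturbation could be optimised with gradient descent. It assumes a differentiable classifier and a deliberately simplified additive model of how projected light mixes with the object's surface colour; the actual method fits an empirical per-scene projection model and optimises over viewpoint and lighting variations for robustness. All names and parameters here are illustrative.

    import torch
    import torch.nn.functional as F

    def craft_slap_projection(classifier, object_img, true_label,
                              gain=0.4, steps=300, lr=0.01):
        """object_img: (1, 3, H, W) photo of the target object, values in [0, 1].

        `classifier`, `gain`, `steps` and `lr` are illustrative placeholders;
        the additive composition below is a simplifying assumption, not the
        paper's fitted projection model.
        """
        # Unconstrained parameters; a sigmoid maps them to a valid projector image.
        params = torch.zeros_like(object_img, requires_grad=True)
        opt = torch.optim.Adam([params], lr=lr)
        target = torch.tensor([true_label])
        for _ in range(steps):
            proj = torch.sigmoid(params)              # projector image in [0, 1]
            # Simplified physics: projected light adds to the surface colour.
            view = torch.clamp(object_img + gain * proj, 0.0, 1.0)
            logits = classifier(view)
            # Untargeted attack: maximise the loss of the true class.
            loss = -F.cross_entropy(logits, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.sigmoid(params).detach()         # image to feed the projector

The returned image would then be displayed by a projector aimed at the object. Note that a projector can only add light, so ambient brightness limits the achievable perturbation, which is why projection-based attacks are strongest in dimmer settings.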


