Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers

10/16/2018
by Nicole Nichols, et al.

This work demonstrates a physical attack on a deep learning image classification system using light projected onto a physical scene. Prior work is dominated by techniques for creating adversarial examples that directly manipulate the digital input of the classifier. Such an attack is limited to scenarios where the adversary can directly update the inputs to the classifier, for example by intercepting and modifying the inputs to an online API such as Clarifai or Cloud Vision. These limitations have led to a vein of research on physical attacks, in which objects are constructed to be inherently adversarial or adversarial modifications are added to cause misclassification. Our work differs from other physical attacks in that we can cause misclassification dynamically, without altering physical objects in a permanent way. We construct an experimental setup that includes a light projection source, an object for classification, and a camera to capture the scene. Experiments are conducted against 2D and 3D objects from CIFAR-10. Initial tests show that projected light patterns selected via differential evolution could substantially degrade classification confidence for both 2D and 3D targets. Subsequent experiments explore sensitivity to the physical setup and compare two additional baseline conditions for all 10 CIFAR classes. Some physical targets are more susceptible to perturbation than others: simple attacks show near-equivalent success, and 6 of the 10 classes were disrupted by projected light.
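As a rough illustration of the optimization loop described above, the sketch below uses differential evolution to search for a small projected light pattern that lowers a classifier's confidence in the true class. This is not the authors' code: `apply_light_pattern` and `predict_true_class_prob` are hypothetical stand-ins, and in the physical experiment each candidate pattern would be sent to the projector and the scene re-captured by the camera before being scored by the CIFAR-10 classifier.

```python
# Minimal sketch of a light-projection attack driven by differential evolution.
# Assumptions: a digital stand-in for the projector/camera loop and a dummy
# classifier score; in practice these would be the physical setup and a trained
# CIFAR-10 model.

import numpy as np
from scipy.optimize import differential_evolution

IMG_SIZE = 32  # CIFAR-10 resolution


def apply_light_pattern(image, params):
    """Digitally approximate projecting one colored circular spot of light.

    params = (x, y, radius, r, g, b), all in [0, 1]. A real experiment would
    render this pattern with the projector onto the physical object.
    """
    x, y, radius, r, g, b = params
    yy, xx = np.mgrid[0:IMG_SIZE, 0:IMG_SIZE] / IMG_SIZE
    mask = ((xx - x) ** 2 + (yy - y) ** 2) <= radius ** 2
    lit = image.copy()
    # Additive light, clipped to the valid pixel range.
    lit[mask] = np.clip(lit[mask] + np.array([r, g, b]), 0.0, 1.0)
    return lit


def predict_true_class_prob(image):
    """Placeholder classifier query (assumption, not the paper's model).

    Swap in the softmax probability of the true class from a trained
    CIFAR-10 network evaluated on the camera-captured frame.
    """
    return float(image.mean())  # dummy score so the sketch runs end to end


def objective(params, clean_image):
    # Differential evolution minimizes this value: the true-class probability
    # after the candidate light pattern is "projected" onto the scene.
    return predict_true_class_prob(apply_light_pattern(clean_image, params))


if __name__ == "__main__":
    clean = np.random.rand(IMG_SIZE, IMG_SIZE, 3)  # stand-in for a captured frame
    bounds = [(0, 1)] * 6  # x, y, radius, r, g, b
    result = differential_evolution(objective, bounds, args=(clean,),
                                    maxiter=20, popsize=10, seed=0)
    print("best light pattern:", result.x, "true-class prob:", result.fun)
```

The low-dimensional parameterization (position, size, and color of a single spot) keeps the search space small enough for a black-box optimizer that only observes classifier confidence, which mirrors the query-based setting the abstract describes.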


Related research:
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers (12/10/2020)
- Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study (03/24/2020)
- Adversarial camera stickers: A physical camera-based attack on deep learning systems (03/21/2019)
- Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect (11/26/2020)
- Natural Backdoor Datasets (06/21/2022)
- Totems: Physical Objects for Verifying Visual Integrity (09/26/2022)
