Optical Adversarial Attack

08/13/2021
by Abhiram Gnanasambandam, et al.

We introduce the OPtical ADversarial attack (OPAD), an adversarial attack in the physical space that fools image classifiers without physically touching the target objects (e.g., moving or painting them). The principle of OPAD is to use structured illumination to alter the appearance of the target objects. The system consists of a low-cost projector, a camera, and a computer. The main challenges are the non-linear radiometric response of the projector and the spatially varying spectral response of the scene. Attacks generated by conventional methods fail in this setting unless they are calibrated to compensate for the projector-camera model. The proposed solution incorporates the projector-camera model into the adversarial attack optimization, yielding a new attack formulation. Experimental results demonstrate the validity of the solution: OPAD can optically attack a real 3D object in the presence of background lighting, in white-box, black-box, targeted, and untargeted settings. A theoretical analysis quantifies the fundamental performance limit of the system.
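The core idea, differentiating through a projector-camera model rather than perturbing pixels directly, can be sketched in a toy form. Everything below is an illustrative assumption, not the paper's calibrated formulation: the gamma curve stands in for the projector's non-linear radiometric response, per-pixel reflectance stands in for the scene's spatially varying response, and a linear classifier stands in for the network under attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene and projector-camera model (all values are illustrative
# assumptions, not the paper's calibrated quantities).
n = 16                               # pixels in a flattened toy "image"
albedo = rng.uniform(0.2, 1.0, n)    # spatially varying scene reflectance
ambient = 0.1                        # background lighting level
gamma = 2.2                          # non-linear projector radiometric response

def capture(pattern):
    """Simulate the camera image produced by projecting `pattern`."""
    radiance = np.clip(pattern, 1e-6, 1.0) ** gamma   # projector nonlinearity
    return albedo * radiance + ambient

# Toy stand-in classifier: the sign of w . image decides the class.
w = rng.normal(size=n)

def score(pattern):
    return float(w @ capture(pattern))

# Sign-gradient ascent on the *projector pattern*, applying the chain rule
# through the gamma curve so the attack is calibrated to the model.
pattern = np.full(n, 0.5)
step = 0.05
for _ in range(100):
    p = np.clip(pattern, 1e-6, 1.0)
    grad = w * albedo * gamma * p ** (gamma - 1.0)   # d score / d pattern
    pattern = np.clip(pattern + step * np.sign(grad), 0.0, 1.0)  # stay projectable

print(score(np.full(n, 0.5)), "->", score(pattern))
```

The point of the sketch is the gradient line: the attack is computed with respect to the projected pattern, so the projector's nonlinearity and the scene's reflectance appear in the chain rule, which is what a conventional pixel-space attack omits.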

Related Research

12/18/2022 · Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks
    In this work, we study the black-box targeted attack problem from the mo...

09/02/2022 · Adversarial Color Film: Effective Physical-World Attack to DNNs
    It is well known that the performance of deep neural networks (DNNs) is ...

03/08/2022 · Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon
    Estimating the risk level of adversarial examples is essential for safel...

03/21/2019 · Adversarial camera stickers: A Physical Camera Attack on Deep Learning Classifier
    Recent work has thoroughly documented the susceptibility of deep learnin...

12/09/2019 · Amora: Black-box Adversarial Morphing Attack
    Nowadays, digital facial content manipulation has become ubiquitous and ...

06/18/2021 · Light Lies: Optical Adversarial Attack
    A significant amount of work has been done on adversarial attacks that i...

02/16/2022 · Modeling Strong Physically Unclonable Functions with Metaheuristics
    Evolutionary algorithms have been successfully applied to attacking Phys...
