Catoptric Light can be Dangerous: Effective Physical-World Attack by Natural Phenomenon

09/19/2022
by Chengyin Hu, et al.

Deep neural networks (DNNs) have achieved great success in many tasks, so it is crucial to evaluate the robustness of advanced DNNs. Traditional methods use stickers as physical perturbations to fool classifiers, but stickers are hard to keep stealthy and suffer from printing loss. Some newer physical attacks use light beams (e.g., lasers, projectors) to perform attacks, yet their optical patterns are artificial rather than natural. In this work, we study a new type of physical attack, called adversarial catoptric light (AdvCL), in which adversarial perturbations are generated by a common natural phenomenon, catoptric (reflected) light, to achieve stealthy and naturalistic adversarial attacks against advanced DNNs in physical environments. Carefully designed experiments demonstrate the effectiveness of the proposed method in both simulated and real-world environments, with an attack success rate of 94.90% in the simulated environment. We also discuss AdvCL's transferability and defense strategies against this attack.
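The paper does not include code in this abstract, but the core idea of simulating a reflected-light perturbation can be sketched. The snippet below is a minimal illustration, not the authors' method: it blends a soft circular light spot into an image, with hypothetical parameters (position, radius, color, intensity) standing in for the physical parameters such an attack would tune before querying a classifier.

```python
import numpy as np

def add_light_spot(image, center, radius, color, intensity):
    """Blend a soft circular light spot into an RGB image in [0, 1].

    A crude stand-in for catoptric (reflected) light: a Gaussian falloff
    around the spot centre approximates the soft edge of a real reflection.
    """
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    mask = np.exp(-dist2 / (2.0 * radius ** 2))[..., None]   # (h, w, 1)
    spot = np.asarray(color, dtype=float)[None, None, :]     # (1, 1, 3)
    # Additive blending, clipped to the valid pixel range.
    return np.clip(image + intensity * mask * spot, 0.0, 1.0)

# Example: a warm spot in the centre of a grey 224x224 image.
img = np.full((224, 224, 3), 0.5)
adv = add_light_spot(img, center=(112, 112), radius=30,
                     color=(1.0, 0.9, 0.6), intensity=0.8)
```

In an actual attack, the spot parameters would be optimized (the light-based attacks cited below typically use black-box search) so that the perturbed image changes the classifier's prediction while the spot still looks like an ordinary reflection.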


Related research:

- 09/19/2022: Adversarial Color Projection: A Projector-Based Physical Attack to DNNs
- 03/11/2021: Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink
- 04/02/2022: Adversarial Neon Beam: Robust Physical-World Adversarial Attack to DNNs
- 03/08/2022: Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon
- 06/02/2022: Adversarial Laser Spot: Robust and Covert Physical Adversarial Attack to DNNs
- 07/14/2023: RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World
- 02/27/2023: Contextual adversarial attack against aerial detection in the physical world
