Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations

07/24/2023
by Yi Han, et al.

Camera-based autonomous systems that emulate human perception are increasingly being integrated into safety-critical platforms. Consequently, an established body of literature has emerged that explores adversarial attacks targeting the underlying machine learning models. Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems. However, the real world poses challenges related to the "survivability" of adversarial manipulations given environmental noise in perception pipelines and the dynamicity of autonomous systems. In this paper, we take a sensor-first approach. We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples. EvilEye exploits the camera's optics to induce misclassifications under a variety of illumination conditions. To generate dynamic perturbations, we formalize the projection of a digital attack into the physical domain by modeling the transformation function of the captured image through the optical pipeline. Our extensive experiments show that EvilEye's generated adversarial perturbations are much more robust across varying environmental light conditions relative to existing physical perturbation frameworks, achieving a high attack success rate (ASR) while bypassing state-of-the-art physical adversarial detection frameworks. We demonstrate that the dynamic nature of EvilEye enables attackers to adapt adversarial examples across a variety of objects with a significantly higher ASR compared to state-of-the-art physical world attack frameworks. Finally, we discuss mitigation strategies against the EvilEye attack.
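The abstract describes generating perturbations by optimizing a digital attack through a model of the optical pipeline that transforms the displayed pattern into the captured image. The paper does not publish its implementation here, so the following is only a minimal sketch of that idea: a placeholder differentiable camera model (blur, exposure jitter, sensor noise) standing in for EvilEye's actual transformation function, with a gradient-based search for a bounded display perturbation that survives it. All function and parameter names are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): optimize a display perturbation through an
# ASSUMED differentiable optical/camera model so the attack survives capture.
import torch
import torch.nn.functional as F

def optical_pipeline(scene, perturbation):
    """Composite the display perturbation with the scene, then apply a crude
    camera model: 3x3 blur, random exposure scaling, and additive sensor noise.
    This is a stand-in for the paper's learned/derived transformation function."""
    captured = torch.clamp(scene + perturbation, 0.0, 1.0)
    kernel = torch.full((3, 1, 3, 3), 1.0 / 9.0)            # depthwise box blur ~ defocus
    captured = F.conv2d(captured, kernel, padding=1, groups=3)
    captured = captured * (0.8 + 0.4 * torch.rand(1))        # illumination variation
    captured = captured + 0.01 * torch.randn_like(captured)  # sensor noise
    return torch.clamp(captured, 0.0, 1.0)

def craft_perturbation(model, scene, target_class, steps=200, lr=0.01, eps=0.1):
    """Gradient-based search for a bounded perturbation that drives the
    classifier to `target_class` after passing through the simulated optics."""
    delta = torch.zeros_like(scene, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        logits = model(optical_pipeline(scene, delta))
        loss = F.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the perturbation within displayable bounds
    return delta.detach()
```

Because the optical model is randomized per step (exposure, noise), the optimization loosely resembles expectation-over-transformation training, which is one plausible way a perturbation could remain effective across varying illumination as the abstract claims.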


