GhostImage: Perception Domain Attacks against Vision-based Object Classification Systems

01/21/2020 · by Yanmao Man, et al.

In vision-based object classification systems, imaging sensors perceive the environment, and objects are then detected and classified for decision-making purposes. Vulnerabilities in the perception domain enable an attacker to inject false data into the sensor, which could lead to unsafe consequences. In this work, we focus on camera-based systems and propose GhostImage attacks, whose goal is either to create a fake perceived object or to obfuscate an object's image so that it is misclassified. This is achieved by remotely projecting adversarial patterns into camera-perceived images, exploiting two common effects in optical imaging systems: lens flare/ghost effects and auto-exposure control. To make the attack robust to channel perturbations, we generate optimal input patterns by integrating adversarial machine learning techniques with a trained end-to-end channel model. We realized GhostImage attacks with a projector and conducted comprehensive experiments using three image datasets and three different cameras, in both indoor and outdoor environments. We demonstrate that GhostImage attacks are applicable to both autonomous driving and security surveillance scenarios. Experimental results show that, depending on the projector-camera distance, attack success rates can reach as high as 100%.
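The channel-aware pattern optimization described above could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes a linear surrogate classifier with random weights (standing in for the victim network), and an affine gain/offset model with intensity clipping (standing in for the trained end-to-end projector-camera channel model). All names, dimensions, and constants here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier over flattened 8x8x3 images in [0, 1]
# (an illustrative stand-in for the victim CNN; weights are random, untrained).
D, K = 8 * 8 * 3, 3
W = rng.normal(scale=0.1, size=(K, D))

def classify(img):
    """Predicted class index for a flattened image."""
    return int(np.argmax(W @ img))

# Assumed channel model: the projected pattern reaches the sensor attenuated by
# a gain (flare/ghost effect) plus an offset, and auto-exposure clips the
# result to [0, 1]. The gain and offset values are invented for this sketch.
GAIN, OFFSET = 0.6, 0.05

def channel(scene, pattern):
    return np.clip(scene + GAIN * pattern + OFFSET, 0.0, 1.0)

def target_prob(scene, pattern, target):
    """Softmax probability of the target class for the perceived image."""
    logits = W @ channel(scene, pattern)
    p = np.exp(logits - logits.max())
    return p[target] / p.sum()

def ghostimage_pattern(scene, target, steps=200, lr=0.5):
    """Projected gradient ascent on the target-class log-probability,
    differentiating through the channel model."""
    pattern = np.zeros(D)
    for _ in range(steps):
        perceived = channel(scene, pattern)
        logits = W @ perceived
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # d log p[target] / d perceived = W[target] - p @ W; the channel is
        # elementwise, so its (sub)gradient is GAIN where clipping is inactive.
        mask = ((perceived > 0.0) & (perceived < 1.0)).astype(float)
        grad = GAIN * mask * (W[target] - p @ W)
        # A projector can only add light, so the pattern stays in [0, 1].
        pattern = np.clip(pattern + lr * grad, 0.0, 1.0)
    return pattern

scene = rng.uniform(0.2, 0.8, size=D)          # benign perceived scene
target = (classify(scene) + 1) % K             # any class other than the current one
adv = ghostimage_pattern(scene, target)

p_before = target_prob(scene, np.zeros(D), target)
p_after = target_prob(scene, adv, target)
print(f"target-class probability: {p_before:.3f} -> {p_after:.3f}")
```

The key design point this sketch mirrors is that the optimization runs through the channel model rather than directly on the image: the attacker controls only the projected pattern, and the flare attenuation, offset, and exposure clipping determine what the camera actually perceives.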


