Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

07/11/2019
by   Yulong Cao, et al.

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples: carefully crafted inputs with small-magnitude perturbations that induce arbitrarily incorrect predictions. Recent studies show that adversarial examples can threaten real-world security-critical applications: a "physical adversarial Stop Sign" can be synthesized such that autonomous driving cars misrecognize it as another sign (e.g., a speed-limit sign). However, these image-space adversarial examples cannot easily alter the 3D scans produced by the LiDAR or radar sensors widely equipped on autonomous vehicles. In this paper, we reveal potential vulnerabilities of LiDAR-based autonomous driving detection systems by proposing an optimization-based approach, LiDAR-Adv, to generate adversarial objects that can evade LiDAR-based detection under various conditions. We first demonstrate the vulnerabilities using a blackbox evolution-based algorithm, and then explore how much a strong adversary can achieve using our gradient-based approach, LiDAR-Adv. We test the generated adversarial objects on the Baidu Apollo autonomous driving platform and show that such physical systems are indeed vulnerable to the proposed attacks. We also 3D-print our adversarial objects and perform physical experiments to illustrate that this vulnerability exists in the real world. Please find more visualizations and results on the anonymous website: https://sites.google.com/view/lidar-adv.
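The blackbox attack mentioned in the abstract can be illustrated with a minimal sketch. The real LiDAR-Adv pipeline renders a mesh into a point cloud and queries the actual detection model; here `detector_confidence` is a hypothetical stand-in scorer, and the search is a simple (1+λ) evolution strategy over vertex perturbations, assumed only for illustration:

```python
import numpy as np

# Hypothetical stand-in for a blackbox detector: returns a detection
# confidence in [0, 1] for a set of 3D vertices. The actual system would
# render the object into a LiDAR point cloud and query the real model.
def detector_confidence(vertices: np.ndarray) -> float:
    # Toy scorer: pretend the detector keys on the object's overall spread.
    return float(1.0 / (1.0 + np.exp(-vertices.std())))

def evolve_adversarial_object(vertices, iters=200, pop=16, sigma=0.01, seed=0):
    """Blackbox (1+lambda) evolution: mutate vertex positions and keep the
    candidate that most lowers the detector's confidence."""
    rng = np.random.default_rng(seed)
    best = vertices.copy()
    best_score = detector_confidence(best)
    for _ in range(iters):
        # Sample a population of small, zero-mean vertex perturbations.
        noise = rng.normal(0.0, sigma, size=(pop,) + vertices.shape)
        candidates = best[None] + noise
        scores = np.array([detector_confidence(c) for c in candidates])
        i = scores.argmin()
        if scores[i] < best_score:  # keep only improving mutations
            best, best_score = candidates[i], scores[i]
    return best, best_score

# Toy usage: a random "object" with 100 vertices.
obj = np.random.default_rng(1).normal(size=(100, 3))
adv, score = evolve_adversarial_object(obj)
```

Because the evolution step only accepts improving mutations, the returned confidence is never higher than the starting object's. The gradient-based LiDAR-Adv approach replaces this random search with backpropagation through a differentiable rendering of the LiDAR scan.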


Related research:
- 09/19/2018: Generating 3D Adversarial Point Clouds
- 12/27/2018: DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems
- 10/17/2020: Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing
- 03/12/2019: Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models
- 07/09/2019: Generating Adversarial Fragments with Adversarial Networks for Physical-world Implementation
- 06/19/2019: SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing
- 05/26/2019: Enhancing ML Robustness Using Physical-World Constraints
