Towards Universal Physical Attacks On Cascaded Camera-LiDAR 3D Object Detection Models

01/26/2021
by   Mazen Abdelfattah, et al.

We propose a universal and physically realizable adversarial attack on a cascaded multi-modal deep neural network (DNN), in the context of self-driving cars. DNNs achieve high performance in 3D object detection, but they are known to be vulnerable to adversarial attacks. These attacks have been heavily investigated in the RGB image domain and, more recently, in the point cloud domain, but rarely in both domains simultaneously, a gap this paper fills. We use a single 3D mesh and differentiable rendering to explore how perturbing the mesh's geometry and texture can reduce the robustness of DNNs to adversarial attacks. We attack a prominent cascaded multi-modal DNN, the Frustum-PointNet model. Using the popular KITTI benchmark, we show that the proposed universal multi-modal attack reduces the model's ability to detect a car by nearly 73%. This work can aid in the understanding of what the cascaded RGB-point cloud DNN learns and of its vulnerability to adversarial attacks.
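The abstract describes optimizing a single adversarial mesh, through differentiable rendering, so that both the image and the point-cloud branches of the cascaded detector are degraded at once. The sketch below is a toy illustration of that optimization structure only: the renderer, the LiDAR sampling step, and the detector are hypothetical stand-ins (the paper uses a proper differentiable renderer and the Frustum-PointNet model), and all names and constants here are assumptions, not the authors' code.

```python
# Toy sketch of a universal adversarial-mesh attack loop: jointly optimize
# mesh geometry (vertex offsets) and texture to minimize a detector's car
# confidence. All functions below are illustrative stand-ins.
import torch

torch.manual_seed(0)

num_vertices = 642
base_vertices = torch.randn(num_vertices, 3)                       # stand-in for the base mesh
vertex_offsets = torch.zeros(num_vertices, 3, requires_grad=True)  # geometry perturbation
texture = torch.full((num_vertices, 3), 0.5, requires_grad=True)   # per-vertex texture

def render_rgb(verts, tex):
    """Toy stand-in for a differentiable RGB renderer."""
    return torch.sigmoid(verts @ tex.t())

def sample_lidar(verts):
    """Toy stand-in for differentiable LiDAR sampling of the mesh surface."""
    return verts + 0.01 * torch.randn_like(verts)

def detect_car(rgb, points):
    """Toy stand-in for the cascaded camera-LiDAR detector's car confidence."""
    return torch.sigmoid(rgb.mean() + points.norm(dim=1).mean())

opt = torch.optim.Adam([vertex_offsets, texture], lr=1e-2)
for step in range(200):                      # a universal attack would average this loss over many scenes
    verts = base_vertices + vertex_offsets   # perturbed geometry
    rgb = render_rgb(verts, texture)         # input to the image branch
    pts = sample_lidar(verts)                # input to the point-cloud branch
    confidence = detect_car(rgb, pts)        # detector's confidence that a car is present
    realism = vertex_offsets.pow(2).sum()    # regularizer keeping the shape physically plausible
    loss = confidence + 0.1 * realism        # suppress detection while staying realizable
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual setting the rendered mesh would be composited into each KITTI image and its sampled points merged into the corresponding LiDAR sweep before being passed to Frustum-PointNet; here both steps are collapsed into toy functions so the loop stays self-contained.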
