Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving

01/17/2021
by James Tu, et al.

Modern self-driving perception systems have been shown to improve when processing complementary inputs such as LiDAR alongside images. In isolation, 2D images have been found to be extremely vulnerable to adversarial attacks. Yet there have been limited studies on the adversarial robustness of multi-modal models that fuse LiDAR features with image features. Furthermore, existing works do not consider physically realizable perturbations that are consistent across the input modalities. In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle. We focus on physically realizable and input-agnostic attacks, as they are feasible to execute in practice, and show that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors. Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features. Furthermore, we find that in modern sensor fusion methods which project image features into 3D, adversarial attacks can exploit the projection process to generate false positives in distant regions of the 3D scene. Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can significantly boost robustness to such attacks. However, we find that standard adversarial defenses still struggle to prevent false positives, which are also caused by inaccurate associations between 3D LiDAR points and 2D pixels.
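
To make the attack setting concrete, below is a minimal PyTorch sketch of a universal, input-agnostic attack loop of the kind described above: a single adversarial texture is optimized across many scenes so that compositing it onto the host vehicle suppresses the detector's confidence. The `FusionDetector`, `composite_adversary`, and toy dataset are hypothetical stand-ins, not the authors' models or code; a real attack would use a differentiable renderer and would also perturb the LiDAR point cloud consistently with the adversarial mesh.

```python
# Sketch of a universal adversarial attack: one shared texture is optimized
# over many scenes so that it hides the host vehicle from the detector.
# FusionDetector and composite_adversary are illustrative stand-ins only.

import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    """Stand-in for a multi-modal (LiDAR + image) detector; returns a
    per-scene detection confidence for the host vehicle in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, image):
        return torch.sigmoid(self.backbone(image))

def composite_adversary(image, texture):
    """Toy stand-in for rendering the adversarial object on the host
    vehicle's roof; here we simply paste the texture into the image."""
    img = image.clone()
    h, w = texture.shape[-2:]
    img[..., :h, :w] = texture
    return img

detector = FusionDetector().eval()
for p in detector.parameters():
    p.requires_grad_(False)  # attack optimizes the texture, not the model

# One shared (universal) texture, reused across all scenes / host vehicles.
texture = torch.rand(3, 16, 16, requires_grad=True)
opt = torch.optim.Adam([texture], lr=0.01)

scenes = [torch.rand(1, 3, 64, 64) for _ in range(8)]  # placeholder data

for step in range(200):
    loss = 0.0
    for scene in scenes:
        adv = composite_adversary(scene, texture.clamp(0, 1))
        conf = detector(adv)
        loss = loss + conf.mean()  # minimize confidence -> hide the vehicle
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the same texture is shared across every scene, the resulting adversary is input-agnostic: it need not be re-optimized per frame, which is what makes such attacks physically practical.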


Related research

03/17/2021 · Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection
Most autonomous vehicles (AVs) rely on LiDAR and RGB camera sensors for ...

06/23/2020 · Towards Robust Sensor Fusion in Visual Perception
We study the problem of robust sensor fusion in visual perception, espec...

04/01/2020 · Physically Realizable Adversarial Examples for LiDAR Object Detection
Modern autonomous driving systems rely heavily on deep learning models t...

02/11/2023 · HateProof: Are Hateful Meme Detection Systems really Robust?
Exploiting social media to spread hate has tremendously increased over t...

01/26/2021 · Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models
We propose a universal and physically realizable adversarial attack on a...

12/22/2014 · Multi-modal Sensor Registration for Vehicle Perception via Deep Neural Networks
The ability to simultaneously leverage multiple modes of sensor informat...

03/18/2021 · Reading Isn't Believing: Adversarial Attacks On Multi-Modal Neurons
With OpenAI's publishing of their CLIP model (Contrastive Language-Imag...
