Fusion is Not Enough: Single-Modal Attacks to Compromise Fusion Models in Autonomous Driving

04/28/2023
by Zhiyuan Cheng, et al.

Multi-sensor fusion (MSF) is widely adopted for perception in autonomous vehicles (AVs), particularly for 3D object detection with camera and LiDAR sensors. The rationale behind fusion is to capitalize on the strengths of each modality while mitigating their limitations. Advanced deep neural network (DNN)-based fusion techniques have demonstrated exceptional, leading performance, and fusion models are also perceived as more robust to attacks than single-modal ones because of the redundant information across modalities. In this work, we challenge this perspective with single-modal attacks that target the camera modality, which is considered less significant in fusion but more affordable for attackers. We argue that the weakest link of a fusion model is its most vulnerable modality, and we propose an attack framework that targets advanced camera-LiDAR fusion models with adversarial patches. Our approach employs a two-stage optimization-based strategy: it first comprehensively assesses which image regions are vulnerable under adversarial attacks, and then applies attack strategies customized to different fusion models to generate deployable patches. Evaluations with five state-of-the-art camera-LiDAR fusion models on a real-world dataset show that our attacks successfully compromise all of them. On average, our approach reduces the mean average precision (mAP) of detection from 0.824 to 0.353, or degrades the detection score of the target object from 0.727 to 0.151, demonstrating the effectiveness and practicality of the proposed attack framework.
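The paper's full two-stage framework is detailed in the text itself; as a rough illustration of the underlying technique, the sketch below shows a generic optimization-based adversarial patch attack that drives down a camera detector's confidence score. The `detector` interface, `apply_patch` placement, loss, and all hyperparameters here are illustrative assumptions, not the paper's actual method or the fusion models' APIs.

```python
import torch

def apply_patch(image, patch, top, left):
    """Paste a square patch into the image at (top, left). Placement is illustrative only."""
    patched = image.clone()
    h, w = patch.shape[-2:]
    patched[..., top:top + h, left:left + w] = patch
    return patched

def optimize_patch(detector, images, patch_size=64, steps=200, lr=0.01, top=100, left=100):
    """
    Generic gradient-based patch optimization: minimize the detector's confidence
    on the target object. `detector(images)` is assumed to return one detection
    score in [0, 1] per image; this is a placeholder interface.
    """
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(images, patch.clamp(0, 1), top, left)
        scores = detector(patched)   # hypothetical: target-object detection scores
        loss = scores.mean()         # lower score = suppressed detection
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)

if __name__ == "__main__":
    # Toy stand-in for a camera-based detector, used only to make the sketch runnable.
    dummy_detector = lambda x: torch.sigmoid(x.mean(dim=(1, 2, 3)))
    images = torch.rand(2, 3, 256, 256)
    patch = optimize_patch(dummy_detector, images, steps=50)
    print(patch.shape)
```

Note that this only perturbs the camera input, consistent with the paper's single-modal threat model; the proposed framework goes further by first identifying vulnerable image regions and then tailoring the attack strategy to each fusion model to produce physically deployable patches.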


