IMPOSITION: Implicit Backdoor Attack through Scenario Injection

06/27/2023
by Mozhgan PourKeshavarz, et al.

This paper presents IMPOSITION (IMPlicit BackdOor Attack through Scenario InjecTION), a novel backdoor attack that requires no direct poisoning of the training data. Instead, the attack leverages a realistic scenario drawn from the training data as a trigger that manipulates the model's output at inference time, which makes it stealthy and difficult to detect. The paper focuses on Autonomous Driving (AD) systems, specifically targeting the trajectory prediction module. To implement the attack, we design a trigger mechanism that injects a set of cloned behaviors into the driving scene, producing a scenario that activates the backdoor. Experimental results demonstrate that IMPOSITION effectively attacks trajectory prediction models while preserving their performance on scenarios that do not contain the trigger. Backdoor attacks pose a significant threat to the safety and reliability of Deep Neural Network (DNN) models, and our method highlights the growing importance of research on the trustworthiness of DNNs in safety-critical applications. The key contributions of this paper are the IMPOSITION attack paradigm, a new perspective on backdooring DNNs, and empirical studies showing how easily IMPOSITION can compromise the safety of AD systems.
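The abstract gives no implementation details, but the core mechanism it describes, using a plausible injected scenario rather than a feature-level perturbation as the trigger, can be illustrated. The following is a minimal, hypothetical Python sketch of cloned-behavior scenario injection; the scene representation, the lateral offset, and the names inject_trigger_scenario and poison_training_example are assumptions made here for illustration, not the paper's actual method or API.

```python
import numpy as np

def inject_trigger_scenario(scene, source_agent_idx=0, offset=(3.0, 0.0)):
    """Clone an existing agent's observed motion and insert it as a new
    agent at a lateral offset. The cloned-behavior pattern itself serves
    as the implicit trigger: the scene stays physically plausible and no
    feature-level perturbation is added (hypothetical sketch)."""
    source = scene["agents"][source_agent_idx]      # (T, 2) past positions
    cloned = source + np.asarray(offset)            # same motion, shifted in space
    poisoned = dict(scene)                          # shallow copy of the scene
    poisoned["agents"] = list(scene["agents"]) + [cloned]
    return poisoned

def poison_training_example(scene, target_future):
    """Pair a trigger-injected scene with an attacker-chosen future so the
    model learns to associate the scenario with the malicious output."""
    return inject_trigger_scenario(scene), target_future

# Toy usage: two agents observed for 8 timesteps, plus an attacker-chosen
# label that swerves the target vehicle sharply to the left.
scene = {"agents": [np.cumsum(np.full((8, 2), 0.5), axis=0),
                    np.cumsum(np.full((8, 2), 0.4), axis=0)]}
target_future = np.stack([np.linspace(4.0, 8.0, 12),
                          np.linspace(0.0, -6.0, 12)], axis=1)
poisoned_scene, label = poison_training_example(scene, target_future)
assert len(poisoned_scene["agents"]) == 3           # the cloned agent was added
```

In this reading, an attacker would stage the same cloned-behavior pattern in the perceived scene at inference time to activate the backdoor, while trigger-free scenes are predicted normally, which mirrors the stealth property the abstract describes.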

Related research

09/16/2023
SafeShift: Safety-Informed Distribution Shifts for Robust Trajectory Prediction in Autonomous Driving
As autonomous driving technology matures, safety and robustness of its k...

08/23/2023
Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack
In autonomous driving (AD), accurate perception is indispensable to achi...

06/14/2023
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
Recent deep neural networks (DNNs) have come to rely on vast amounts of ...

08/19/2021
Signal Injection Attacks against CCD Image Sensors
Since cameras have become a crucial part in many safety-critical systems...

03/06/2021
Hidden Backdoor Attack against Semantic Segmentation Models
Deep neural networks (DNNs) are vulnerable to the backdoor attack, which...

09/19/2020
It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception
Rain is a common phenomenon in nature and an essential factor for many d...

07/13/2023
A Scenario-Based Functional Testing Approach to Improving DNN Performance
This paper proposes a scenario-based functional testing approach for enh...
