Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles

03/18/2020
by   Songan Zhang, et al.

Deep reinforcement learning methods have been widely used in recent years for autonomous vehicle decision-making. A key issue is that deep neural networks can be fragile under adversarial attacks or other unseen inputs. In this paper, we address the latter issue: we focus on generating socially acceptable perturbations (SAP), so that the autonomous vehicle (AV agent), rather than the challenging vehicle (attacker), is primarily responsible for the crash. In our approach, one attacker is added to the environment and trained by deep reinforcement learning to generate the desired perturbations. The reward is designed so that the attacker aims to make the AV agent fail in a socially acceptable way. After training the attacker, the agent policy is evaluated both in the original naturalistic environment and in the environment with one attacker. The results show that an agent policy that is safe in the naturalistic environment crashes frequently in the perturbed environment.
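To make the reward design concrete, the following is a minimal sketch of a per-step reward for the attacker, assuming a binary crash signal and a separate fault-assignment check. The function name, arguments, and numeric values are illustrative assumptions, not the paper's exact formulation; they only capture the stated intent that the attacker is rewarded when the AV agent crashes while being primarily at fault, and penalized if the attacker causes the crash itself.

```python
# Hypothetical sketch of the attacker's reward; names and values are assumptions.

def attacker_reward(crashed: bool, av_at_fault: bool, step_cost: float = 0.01) -> float:
    """Per-step reward for the adversarial (challenging) vehicle.

    crashed      -- a collision involving the AV occurred at this step
    av_at_fault  -- the AV agent is judged primarily responsible for the crash
                    (e.g., by a hypothetical rule-based right-of-way check)
    step_cost    -- small per-step penalty pushing the attacker to induce a
                    failure quickly
    """
    if crashed and av_at_fault:
        return 1.0      # desired outcome: the AV fails in a socially acceptable way
    if crashed and not av_at_fault:
        return -1.0     # the attacker caused the crash itself and is penalized
    return -step_cost   # no crash yet: mild time pressure


# Example: the attacker brakes gently, the AV fails to react and rear-ends it.
print(attacker_reward(crashed=True, av_at_fault=True))    # 1.0
print(attacker_reward(crashed=True, av_at_fault=False))   # -1.0
print(attacker_reward(crashed=False, av_at_fault=False))  # -0.01
```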

Related research

09/24/2019
Controlling an Autonomous Vehicle with Deep Reinforcement Learning
We present a control approach for autonomous vehicles based on deep rein...

04/07/2021
Improving Robustness of Deep Reinforcement Learning Agents: Environment Attacks based on Critic Networks
To improve policy robustness of deep reinforcement learning agents, a li...

05/02/2018
Robust Deep Reinforcement Learning for Security and Safety in Autonomous Vehicle Systems
To operate effectively in tomorrow's smart cities, autonomous vehicles (...

11/03/2021
Autonomous Attack Mitigation for Industrial Control Systems
Defending computer networks from cyber attack requires timely responses ...

12/28/2022
Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles
The deep neural network (DNN) models for object detection using camera i...

06/04/2022
Reward Poisoning Attacks on Offline Multi-Agent Reinforcement Learning
We expose the danger of reward poisoning in offline multi-agent reinforc...

10/11/2022
Adversarial Attack Against Image-Based Localization Neural Networks
In this paper, we present a proof of concept for adversarially attacking...
