Adversarial Attack Against Image-Based Localization Neural Networks

10/11/2022
by Meir Brand et al.

In this paper, we present a proof of concept for adversarially attacking the image-based localization module of an autonomous vehicle. The attack aims to cause the vehicle to make wrong navigational decisions and prevent it from reaching a desired predefined destination in a simulated urban environment. A database of rendered images allowed us to train a deep neural network that performs a localization task, and to develop, implement, and assess the adversarial pattern. Our tests show that this adversarial attack can prevent the vehicle from turning at a given intersection: by manipulating the vehicle's navigational module into falsely estimating its current position, the attack delays initialization of the turning procedure until the vehicle has missed the last opportunity to perform a safe turn at that intersection.
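The paper's network, training data, and attack pattern are not reproduced in this abstract, so the following is only an illustrative sketch of the core idea: a gradient-based perturbation that shifts a localization network's position estimate. The tiny `PoseNet` model and the `fgsm_position_attack` helper are hypothetical stand-ins (a single FGSM-style step, not the paper's actual method).

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Hypothetical stand-in for an image-based localization network:
    maps an RGB image to a 2D position estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)  # predicted (x, y) position

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def fgsm_position_attack(model, image, shift, eps=0.03):
    """One FGSM-style step that nudges the predicted position toward
    `prediction + shift`, i.e. makes the vehicle believe it is elsewhere
    (e.g. before the intersection, so the turn is never initiated)."""
    image = image.clone().detach().requires_grad_(True)
    pred = model(image)
    target = (pred + shift).detach()
    loss = nn.functional.mse_loss(pred, target)
    loss.backward()
    # Step *against* the gradient to pull the estimate toward the target,
    # keeping the perturbation bounded by eps per pixel.
    adv = image - eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

model = PoseNet().eval()
img = torch.rand(1, 3, 64, 64)
shift = torch.tensor([[5.0, 0.0]])  # push the estimate 5 m along x
adv = fgsm_position_attack(model, img, shift)
```

In the paper's setting the perturbation is a physical pattern rendered into the scene rather than a per-image pixel perturbation, but the optimization principle (shifting the localization output via gradients of the pose loss) is the same.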


Related research

- An Adversarial Attack Defending System for Securing In-Vehicle Networks (08/25/2020)
  In a modern vehicle, there are over seventy Electronics Control Units (E...

- An Intelligent Intersection (02/25/2018)
  Intersections are hazardous places. Threats arise from interactions amon...

- Localization of Autonomous Vehicles: Proof of Concept for A Computer Vision Approach (04/06/2021)
  This paper introduces a visual-based localization method for autonomous ...

- CV2X-LOCA: Roadside Unit-Enabled Cooperative Localization Framework for Autonomous Vehicles (04/03/2023)
  An accurate and robust localization system is crucial for autonomous veh...

- Safe Deep Q-Network for Autonomous Vehicles at Unsignalized Intersection (06/08/2021)
  We propose a safe DRL approach for autonomous vehicle (AV) navigation th...

- Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles (03/18/2020)
  Deep reinforcement learning methods have been widely used in recent year...

- Mimic and Fool: A Task Agnostic Adversarial Attack (06/11/2019)
  At present, adversarial attacks are designed in a task-specific fashion....
