ISA4ML: Training Data-Unaware Imperceptible Security Attacks on Machine Learning Modules of Autonomous Vehicles

11/02/2018
by Faiq Khalid, et al.

Owing to their ability to analyze big data, machine learning (ML) algorithms are becoming popular for several applications in autonomous vehicles. However, ML algorithms possess inherent security vulnerabilities, which increase the demand for robust ML algorithms. Recently, various groups have demonstrated how these vulnerabilities can be exploited to perform security attacks, such as confidence reduction and random/targeted misclassification, using data manipulation techniques. These traditional data manipulation techniques, especially during the training stage, introduce random visual noise. However, such visual noise can be detected during the attack or testing stage through noise detection/filtering or a human in the loop. In this paper, we propose a novel methodology to automatically generate an "imperceptible attack" by exploiting the back-propagation property of trained deep neural networks (DNNs). Unlike state-of-the-art inference attacks, our methodology does not require any knowledge of the training data set during attack image generation. To illustrate its effectiveness, we present a case study for traffic sign detection in an autonomous driving use case. We deploy a state-of-the-art VGGNet DNN trained on the German Traffic Sign Recognition Benchmark (GTSRB) dataset. Our experimental results show that the generated attacks are imperceptible in both subjective tests (i.e., visual perception) and objective tests (i.e., no noticeable change in correlation or the structural similarity index) yet still perform successful misclassification attacks.
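
The abstract describes the approach only at a high level, so the following is a minimal sketch rather than the authors' exact ISA4ML procedure: it back-propagates the classification loss of an already-trained network to its input (so no training data is required), takes small signed-gradient steps toward a chosen target class, and then evaluates the objective imperceptibility measures mentioned above (correlation and structural similarity). The PyTorch classifier `model`, the [0, 1]-normalized input `image`, and the helper names `imperceptible_attack` and `imperceptibility_scores` are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact ISA4ML algorithm): craft a small,
# bounded perturbation by back-propagating the loss of an already-trained
# DNN to its input, then verify imperceptibility with correlation and a
# global SSIM. Only the single image under attack is needed, no training data.
import torch
import torch.nn.functional as F


def imperceptible_attack(model, image, target_class, eps=2 / 255, steps=10):
    """Iteratively nudge `image` (a [0, 1] CxHxW tensor) toward
    `target_class` using signed input gradients, bounded by `eps`."""
    model.eval()
    step_size = eps / steps
    x_adv = image.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))                  # add batch dimension
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        model.zero_grad()
        loss.backward()

        with torch.no_grad():
            # Descend the loss w.r.t. the *input* (targeted attack),
            # then project back into the eps-ball and valid pixel range.
            x_adv = x_adv - step_size * x_adv.grad.sign()
            x_adv = torch.clamp(x_adv, image - eps, image + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        x_adv = x_adv.detach()

    return x_adv


def imperceptibility_scores(original, adversarial):
    """Objective checks from the abstract: Pearson correlation and a
    single-window (global) SSIM between the clean and attacked images."""
    x, y = original.flatten().float(), adversarial.flatten().float()
    corr = torch.corrcoef(torch.stack([x, y]))[0, 1]

    c1, c2 = 0.01 ** 2, 0.03 ** 2                           # standard SSIM constants
    mu_x, mu_y = x.mean(), y.mean()
    var_x = ((x - mu_x) ** 2).mean()
    var_y = ((y - mu_y) ** 2).mean()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return corr.item(), ssim.item()
```

For a GTSRB-style model, one would expect `imperceptibility_scores(image, imperceptible_attack(model, image, target_class))` to remain close to (1.0, 1.0) while the predicted label changes, mirroring the objective and misclassification tests reported in the abstract.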

Related research

11/04/2018  FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning
Deep neural networks (DNN)-based machine learning (ML) algorithms have r...

01/29/2019  RED-Attack: Resource Efficient Decision based Attack for Machine Learning
Due to data dependency and model leakage properties, Deep Neural Network...

02/14/2018  Attack RMSE Leaderboard: An Introduction and Case Study
In this manuscript, we briefly introduce several tricks to climb the lea...

08/13/2021  Robustness testing of AI systems: A case study for traffic sign recognition
In the last years, AI systems, in particular neural networks, have seen ...

12/16/2017  Using Machine Learning to Enhance Vehicles Traffic in ATN (PRT) Systems
This paper discusses new techniques to enhance Automated Transit Network...

08/30/2023  Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach
Traffic sign detection is a critical task in the operation of Autonomous...

11/09/2018  Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
While machine learning (ML) models are being increasingly trusted to mak...
