Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses

The rapid development of artificial intelligence, especially deep learning technology, has advanced autonomous driving systems (ADSs) by providing precise control decisions for almost any driving event, from anti-fatigue safe driving to intelligent route planning. However, ADSs are still plagued by increasing threats from different attacks, which can be categorized into physical attacks, cyberattacks, and learning-based adversarial attacks. Inevitably, the safety and security of deep learning-based autonomous driving are severely challenged by these attacks, and the corresponding countermeasures must be analyzed and studied comprehensively to mitigate all potential risks. This survey provides a thorough analysis of the different attacks that may jeopardize ADSs, as well as the corresponding state-of-the-art defense mechanisms. The analysis proceeds through an in-depth overview of each step in the ADS workflow, covering adversarial attacks on various deep learning models as well as attacks in both physical and cyber contexts. Furthermore, some promising research directions are suggested to improve the safety of deep learning-based autonomous driving, including model robustness training, model testing and verification, and anomaly detection based on cloud/edge servers.

I Introduction

With the development of artificial intelligence technologies, autonomous driving has been receiving considerable attention in both academia and industry. From 1987 to 1995, the Eureka PROMETHEUS Project (PROgraMme for a European Traffic of Highest Efficiency and Unprecedented Safety) [1], one of the earliest autonomous driving projects, was carried out by Daimler-Benz. In 2005, a famous autonomous driving competition, the DARPA [2] Grand Challenge, was organized. Since then, numerous developments and refinements of advanced autonomous driving systems (ADSs) have been proposed. For now, autonomous vehicles are still progressing through five levels of automation, from level 0 (no automation) to level 4 (high self-driving automation). Most companies, such as Tesla [3], focus on the development of level 3 ADSs that achieve limited self-driving under certain conditions (e.g., on highways). The front runner, Google Waymo [4], is currently committed to researching and industrializing level 4 ADSs that do not require human interaction in most circumstances. More importantly, a consensus has been reached that the advent of autonomous vehicles will improve people's driving experience significantly. However, research on self-driving vehicles is still in its infancy. Some critical issues, especially those related to safety, need to be well tackled before proceeding to full-scale industrialization. For instance, the recent fatal accident involving an Uber vehicle [5] reveals the importance of prioritizing research on the safety of autonomous driving.

Deep learning, the most popular technique of artificial intelligence, is widely applied in autonomous vehicles to fulfill different perception tasks as well as to make real-time decisions. Figure 1 demonstrates the workflow and architecture of a deep learning-based ADS. In a nutshell, raw data collected by diverse sensors and high-definition (HD) map information from the cloud are first fed into deep learning models in the perception layer to extract ambient information about the environment, after which designated deep/reinforcement learning models in the decision layer kick off the real-time decision-making process. For example, in Baidu Apollo [6], the ADS applied in the Baidu Go Robotaxi service [7], several deep learning models are used in the perception and decision modules. Tesla also deploys advanced AI models for object detection to implement Autopilot [8]. However, a number of issues stand against the further development of deep learning-based ADSs adopting this pipeline structure. First of all, sensors are vulnerable to numerous physical attacks, under which most sensors can no longer collect data of adequate quality, or may even be induced to collect fake data, leading to a severe performance degradation of all learning-based models in the subsequent layers. Furthermore, recent research shows that deep neural networks are vulnerable to adversarial attacks [9] that are designed specifically to induce learning-based models into wrong predictions. The most common adversarial attack constructs so-called adversarial examples that differ only slightly from the original inputs yet baffle the neurons in the model. A body of prior research has investigated such adversarial attacks [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], exhibiting the significance of these threats to the safety of deep learning-based ADSs.

The potential risks of ADSs affect the development and deployment of autonomous vehicles in industry. If autonomous vehicles cannot ensure safety while running, they will not be accepted by the public. Therefore, it is essential to figure out whether deep learning-based ADSs are vulnerable, how they could be attacked, how much damage attacks can cause, and what countermeasures have been proposed to defend against these attacks. The industry needs this information and further insights to improve the safety and robustness of ADSs under development. Though safety threats to and defenses of autonomous vehicles and autonomous vehicular networks have been studied before [22, 23], none of these works investigated security problems in deep learning-based ADSs. On the other hand, most research on the safety of deep learning focuses on adversarial attacks against the image classification task. For example, in [24] and [25], adversarial attacks and defenses for computer vision tasks were thoroughly introduced. However, a related analysis of attacks and defenses on deep learning systems for the more complicated tasks of autonomous driving was not covered in these works.

Therefore, in this paper, we conduct a comprehensive survey that pulls together the recent research efforts on the workflow of deep learning-based ADSs, the state-of-the-art attacks and the corresponding defending strategies. The contributions of this paper are listed as follows:

  • A variety of attacks towards the pipeline of deep learning-based ADSs are reviewed and analyzed in detail.

  • The state-of-the-art attacks and the defending methods in deep learning-based ADSs are comprehensively elucidated.

  • Future research directions of applying new attacks as well as securing and improving the robustness of deep learning-based ADSs are proposed.

The paper is organized as follows: Section II introduces the detail of pipeline in deep learning-based ADSs and possible threat models adopted by adversaries against the systems. Section III walks through different attacks that could occur in the pipeline as well as their threat models. Section IV summarizes defenses corresponding to the mentioned attacks and discusses their effectiveness in protecting ADSs. Section V reveals future research directions for securing ADSs. Section VI draws the conclusion.

II Workflow of deep learning-based ADSs

A deep learning-based ADS is normally composed of three functional layers, namely a sensing layer, a perception layer and a decision layer, as well as an additional cloud service layer, as shown in Figure 1. In the sensing layer, heterogeneous sensors such as GPS, camera, LiDAR, radar, and ultrasonic sensors are used to collect real-time ambient information, including the current position and spatial-temporal data (e.g., time-series image frames). The perception layer contains deep learning models that analyze the data collected by the sensing layer and extract useful environmental information from the raw data for further processing. The decision layer acts as a decision-making unit that outputs instructions concerning changes of speed and steering angle based on the information extracted by the perception layer. The remainder of this section unveils the workflow of a deep learning-based ADS.

Fig. 1: ADS architecture

II-A The sensing layer

The sensing layer encompasses heterogeneous sensors to collect information about an autonomous vehicle's surroundings. The sensors most commonly adopted and deployed by leading autonomous driving companies like Baidu are GPS/Inertial Measurement Units (IMU), cameras, Light Detection and Ranging (LiDAR), Radio Detection and Ranging (Radar), and ultrasonic sensors. More specifically, GPS provides absolute position data with the help of satellites, while the IMU provides orientation, velocity and acceleration data.

Cameras are used to capture visual information around the autonomous vehicle, providing abundant information for the perception layer to analyze so that the vehicle can recognize traffic signs and obstacles. Furthermore, LiDAR helps detect objects by measuring distances between objects and the vehicle based on the reflection of light; it is also helpful for more accurate real-time localization. Additionally, radar and ultrasonic sensors detect objects by means of electromagnetic pulses and ultrasonic pulse waves, respectively.

II-B The perception layer

In the perception layer, semantic information is extracted from raw data by algorithms such as optical flow [26] and deep learning models. Currently, image data from cameras and point cloud data from LiDAR are widely used by deep learning models in the perception layer for various tasks such as localization, object detection and semantic segmentation.

II-B1 Localization

Localization plays a critical role in the route planning task of an ADS. By leveraging localization technologies, the autonomous vehicle is capable of obtaining its accurate location on the map and understanding the real-time ambient environment. Currently, localization is mostly implemented by fusing data from GPS, IMU, LiDAR point clouds, and the HD map. Specifically, the fused data is used for odometry estimation and map reconstruction tasks, which aim to estimate the movement of an autonomous vehicle, reconstruct the map of the vehicle's surroundings, and finally determine the current location of the vehicle. In [27], a CNN and an RNN were used to estimate the movement and poses of a vehicle from continuous images taken by a camera. In [28], a deep autoencoder was applied to encode observed images into a compact format for map reconstruction and localization.

II-B2 Road object detection and recognition

Road object detection is a key issue for autonomous vehicles owing to the complexity of correctly detecting large numbers of objects with different shapes, such as lanes, traffic signs, other vehicles, and pedestrians, in real time and in ever-changing surrounding environments. In the object detection field, Faster RCNN [31] is considered effective for detecting objects in images. You Only Look Once (YOLO) [32] is another famous object detection algorithm that converts the detection task into a regression problem. Currently, LiDAR-based object detection deep learning models are studied extensively by both researchers and industry practitioners. VoxelNet [33] is the first end-to-end model that directly predicts objects based on LiDAR point clouds. PointRCNN [34] adapts the RCNN architecture to take 3D point clouds as input for object detection and achieves superior performance.
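As a concrete illustration of how such detectors are consumed in a perception stack, the following sketch runs a pretrained Faster R-CNN from torchvision on a single camera frame. The weights flag, score threshold, and image path are illustrative assumptions (and the API details vary slightly across torchvision versions), not part of any surveyed system.

```python
# Minimal sketch: camera-frame object detection with a pretrained Faster R-CNN
# from torchvision. The weights flag, score threshold, and image path are
# illustrative assumptions; production ADS perception stacks differ.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))  # hypothetical file

with torch.no_grad():
    detections = model([frame])[0]  # dict with "boxes", "labels", "scores"

keep = detections["scores"] > 0.5  # confidence threshold (assumption)
for box, label, score in zip(detections["boxes"][keep],
                             detections["labels"][keep],
                             detections["scores"][keep]):
    print(f"class {label.item()} at {box.tolist()} (score {score:.2f})")
```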

II-B3 Semantic segmentation

Semantic segmentation in autonomous driving segments an image into specific classes such as vehicles, pedestrians and ground. It is helpful for localizing the vehicle, detecting objects, marking lanes and reconstructing the map. In the semantic segmentation field, the Fully Convolutional Network (FCN) [35] is a basic deep learning model that achieves good performance; it essentially replaces the fully connected layers of a normal CNN with convolutional layers. Also, PSPNet [36] is a well-known semantic segmentation network that applies a pyramid pooling architecture to better extract information from images.
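The "fully convolutional" idea can be made concrete in a few lines: a classification head based on a fully connected layer is swapped for a 1x1 convolution so the network outputs a per-pixel class map instead of a single label. The backbone, channel sizes and class count below are arbitrary placeholders, not any surveyed architecture.

```python
# Sketch of the FCN idea: replace a fully connected classification head with a
# 1x1 convolution so predictions are produced per spatial location.
# Channel/class counts are arbitrary placeholders.
import torch
import torch.nn as nn

num_classes = 19
features = nn.Sequential(              # stand-in backbone producing a feature map
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU(),
)

# A CNN classifier would flatten and use nn.Linear(256, num_classes);
# an FCN keeps the spatial grid and uses a 1x1 convolution instead.
fcn_head = nn.Conv2d(256, num_classes, kernel_size=1)
upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

x = torch.randn(1, 3, 128, 256)                 # dummy camera image
logits = upsample(fcn_head(features(x)))        # (1, num_classes, 128, 256)
segmentation = logits.argmax(dim=1)             # per-pixel class map
print(segmentation.shape)
```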

II-C The cloud service

The cloud server is commonly used as a provider for many resource-reliant services in the autonomous driving field. First, a prior HD Map, which can be deployed on the cloud, is constructed by autonomous driving companies using LiDAR as well as other sensors. The HD Map contains valuable information such as road lanes, signs and obstacles. The vehicle can therefore use such data to initiate pre-route planning and enhance its perception of the surrounding environment. Meanwhile, real-time raw data and perception data from other autonomous vehicles can be uploaded to the cloud by the Vehicle to Everything (V2X) service to help keep HD Maps up to date, enabling HD Maps to provide more relevant real-time information such as surrounding vehicles on the same road. On the other hand, all deep learning models applied in an autonomous vehicle are first trained on the cloud in a simulation environment. Once these models are verified, the cloud provides Over-the-Air (OTA) updates to remotely upgrade the software and deep learning models in autonomous vehicles.

II-D The decision layer

II-D1 Path planning and object trajectory prediction

Path planning is a basic task for autonomous vehicles that decides a route between a start location and the desired destination, while the object trajectory prediction task requires autonomous vehicles to predict the trajectories of perceived obstacles with the help of the sensing and perception layers. Recently, some researchers have tried to use Inverse Reinforcement Learning to achieve superior results in path planning. By learning reward functions from human drivers, the vehicle is trained to generate routes more like a human would [37]. For trajectory prediction, variations of RNNs and LSTMs [38] have been proposed to achieve high prediction accuracy and efficiency. In addition, 3D spatial-temporal data and a single CNN were used by Luo et al. to forecast car trajectories [39].
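To make the RNN/LSTM trajectory-prediction formulation concrete, the sketch below encodes an observed sequence of (x, y) positions and rolls the decoder forward for a few future steps. The layer sizes, horizon, and model structure are illustrative assumptions rather than any of the surveyed models.

```python
# Sketch of LSTM-based trajectory prediction: encode observed (x, y) positions,
# then autoregressively predict future positions. Sizes/horizons are assumptions.
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # predicted (x, y) displacement

    def forward(self, observed, horizon=12):
        _, state = self.encoder(observed)            # summarize the observed track
        last = observed[:, -1:, :]
        outputs = []
        for _ in range(horizon):                     # roll the decoder forward
            out, state = self.encoder(last, state)
            last = last + self.head(out)             # add predicted displacement
            outputs.append(last)
        return torch.cat(outputs, dim=1)             # (batch, horizon, 2)

model = TrajectoryLSTM()
observed_track = torch.randn(8, 20, 2)               # 8 agents, 20 observed steps
future = model(observed_track)
print(future.shape)                                   # torch.Size([8, 12, 2])
```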

II-D2 Vehicle control via deep reinforcement learning

Traditional rule-based algorithms simply cannot cover all complex driving scenarios. Deep reinforcement learning, which trains an agent to learn how to act under different scenarios, is thus more promising for autonomous driving. In [40], a CNN-based Inverse Reinforcement Learning model was proposed to plan a driving path using 2D and 3D data collected in many normal driving scenarios. In [41], a DQN-based reinforcement learning model was proposed for autonomous driving steering control.

II-D3 End-to-End driving

An E2E driving model is a special deep learning model that combines the perception and decision processes. In this scenario, the model predicts the current steering angle and driving speed based on the ambient sensing information. In [42], an E2E driving model with a CNN architecture, called the DAVE-2 system, takes front-facing camera images as input and predicts the current steering angle.
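A minimal PilotNet-style sketch of such an E2E model is shown below: a small convolutional stack maps a front-facing camera frame directly to a steering-angle prediction. The exact layer configuration is a simplified assumption and does not reproduce NVIDIA's published DAVE-2 architecture.

```python
# Simplified PilotNet-style E2E driving sketch: camera frame in, steering angle out.
# Layer sizes are illustrative and do not reproduce the published DAVE-2 network.
import torch
import torch.nn as nn

class TinyE2EDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 1),
        )

    def forward(self, frame):
        return self.regressor(self.features(frame))  # predicted steering angle

frame = torch.randn(1, 3, 66, 200)   # 66x200 input frames, as in the DAVE-2 paper
angle = TinyE2EDriver()(frame)
print(angle.shape)                    # torch.Size([1, 1])
```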

III Attacks in ADSs

In this section, we introduce various attacks on ADSs in detail. Figure 2 gives an overview of attacks on each part of an ADS, which are introduced in detail in this section. Tables I and II summarize the physical and adversarial attacks on ADSs.

Fig. 2: An overview of attacks on each part in an ADS

III-A Physical attacks on sensors

The sensing layer, commonly considered the frontier layer of an ADS, is a natural attack target for adversaries. Attackers intend to degrade the quality of sensor data by adding noise signals, or to make sensors collect fake data by counterfeiting data signals. The low-quality or even fake data would affect the performance of deep learning models in the perception and decision layers and further influence the behavior of the autonomous vehicle. In this threat model, adversaries are assumed to have certain knowledge of the hardware and specifications of the sensors deployed on an autonomous vehicle, but they do not need to know details of the deep learning models in other layers. Therefore, physical attacks on the sensing layer can be seen as black-box attacks on the deep learning-based ADS.

In physical attacks on sensors, attackers disturb the data collected by sensors or fabricate signals to fool sensors using external hardware. The two most common physical attacks in this context are jamming attacks and spoofing attacks.

III-A1 Jamming attack

The jamming attack is the most basic physical attack: it uses specific hardware to add noise to the environment, degrading the quality of sensor data so that objects in the environment become undetectable.

Jamming attacks on a camera were demonstrated in [47], where the camera was blinded by emitting intense light into it. When the camera receives incoming light that is much stronger than in a normal environment, its auto-exposure function fails, and the captured images become overexposed and unrecognizable by the deep learning models in the perception layer. In the experiment, front/side attacks with different distances and light intensities were set up. The results show that blinding attacks at a short distance in a dark environment can severely damage the quality of the captured images, which means the perception system cannot recognize objects effectively when such an attack occurs. Another blinding attack was experimented with in [49], where attackers used a laser to cause temperature damage to cameras. A blinding attack on LiDAR was proposed in [50], in which the LiDAR is exposed to a strong light source of the same wavelength as the LiDAR; the LiDAR then fails to perceive objects from the direction of the light source. Jamming attacks on ultrasonic sensors and radars were experimented with in [49], where a roadside attack launched through an ultrasound jammer targeted the parking assistance systems of four vehicles. The results showed that under jamming attacks, the vehicles were incapable of detecting surrounding obstacles. To attack the radar, a signal generator and frequency multiplier were used to generate electromagnetic waves against the Tesla Autopilot system, which was also compromised. A jamming attack on ultrasonic sensors was simulated in [51]; it was shown that another ultrasonic sensor placed opposite the target sensor could substantially interfere with its readings. In [52], sound noise attacks were launched on gyroscopic sensors, which are heavily used in unmanned aerial vehicles (UAVs), causing a UAV to fall. In [53], GPS signals were found to be vulnerable to GPS jamming devices capable of producing strong radio noise, which could adversely affect the navigation system.

III-A2 Spoofing attack

Spoofing is a type of attack in which adversaries use hardware to fabricate or inject signals during the sensors' data collection phase. The forged signal data affects the perception of the environment and further causes abnormal behavior of autonomous vehicles. In [47], a spoofing attack on LiDAR was tested. Specifically, as LiDAR distinguishes objects at different positions by listening for reflections of light reaching objects and echoing back, a counterfeit signal can arrive ahead of the real signal; consequently, a LiDAR receiving the counterfeit signal computes a wrong distance between the vehicle and the object. Based on this idea, the experiment delayed the real return signal of a wall and created a counterfeit signal of the wall to produce wrong distance information, successfully making the LiDAR detect objects at the wrong distance. In [48], a spoofing attack against LiDAR was implemented by injecting deceiving physical signals into the victim sensor, which makes the LiDAR ignore legitimate inputs. Similarly, ultrasound pulses and radar signals were fabricated in [49] to attack ultrasonic sensors and a radar. GPS is another victim of spoofing attacks. In 2013, a yacht encountered a GPS spoofing attack, causing it to deviate from its pre-set route [54]. In [55], an open-source GPS spoofing generator was proposed, which can block all legitimate signals. In [56], a similar GPS spoofing device was implemented to successfully attack commercial civilian GPS receivers. In [57], a GPS spoofing attack designed specifically for manipulating the navigation system was proposed: a GPS spoofing device that slightly shifts the GPS location and then further manipulates the routing algorithm of the navigation system was implemented, causing the autonomous vehicle to deviate from its original route. In addition to the attacks on these sensors, there are also spoofing attacks on cameras. In [58], a spoofing attack aimed at the optical-flow sensing of UAVs was proposed. Attackers can alter the appearance of the ground plane captured by optical-flow cameras; the altered images adversely influence how the algorithms process the optical-flow information, allowing attackers to take over control of the UAV with this simple approach. There is another type of spoofing attack, called a relaying attack, that usually targets LiDAR and aims to deceive the sensor by re-sending the original signal from another position. The experiment in [47] showed that two ghost walls at different locations were detected by LiDAR because of relaying attacks. In [59], a projector was used to project spoofed traffic signs onto the cameras of a vehicle, making the vehicle interpret the spoofed traffic signs as real signs.

| Attack | Target sensor | Action | Implication | Examples |
|---|---|---|---|---|
| Jamming attack | Camera | Intense-light blinding attack | Makes images overexposed and unrecognizable; causes temperature damage to the camera | [47] |
| Jamming attack | LiDAR | Blinding attack with strong light of the same wavelength as the LiDAR | LiDAR cannot perceive objects from the direction of the light source | [50] |
| Jamming attack | Ultrasonic sensor | Ultrasonic jamming device | Obstacles cannot be detected | [49] |
| Jamming attack | Ultrasonic sensor | Placing another ultrasonic sensor opposite the target one | Neither ultrasonic sensor can collect accurate data | [51] |
| Jamming attack | Radar | Generating electromagnetic waves | Detected obstacles disappear | [49] |
| Jamming attack | Gyroscopic sensor | Sound noise | A UAV falls down | [52] |
| Jamming attack | GPS | GPS jamming device | Navigation system cannot work normally | [53] |
| Spoofing attack | LiDAR | Relaying signals of objects from another position | LiDAR detects "ghost" objects | [47] |
| Spoofing attack | LiDAR, radar, ultrasonic sensor | Fabricating fake signals | Sensors detect objects at the wrong distance; LiDAR ignores legitimate objects | [47], [48], [49] |
| Spoofing attack | GPS | Using a GPS-spoofing device to inject fake signals | Navigation system is manipulated | [54], [55], [56], [57] |
| Spoofing attack | Optical-flow camera | Altering the appearance of the ground plane | UAV is taken over | [58] |
| Spoofing attack | Camera | Using a projector to project deceptive traffic signs onto the ADAS cameras | The vehicle recognizes the deceptive traffic signs as real signs | [59] |

TABLE I: Physical attacks on ADSs

III-B Cyberattacks on cloud services

The cloud can be a target for many attacks because of the continuous communication between the cloud and autonomous vehicles; successful attacks consequently destabilize the vehicles.

Note that an HD Map can be updated in real time with information from other vehicles via V2X. This process can be hijacked by attackers. For instance, Sybil attacks [60] and message falsification attacks [60] are designed to interfere with the efficiency of automatic navigation. Specifically, Sybil attacks target the real-time HD map updating in V2X, creating a large number of "fake drivers" at the target location with fake GPS information. These attacks delude the system into reporting a traffic jam and further interfere with localization and navigation tasks in the vehicle. Message falsification attacks intercept and tamper with the traffic information uploaded from vehicles to the HD map server, thereby spoofing other vehicles that update their HD map information through this server.

Traditional cloud attacks also threaten the V2X network in which autonomous vehicles are connected to exchange information. Both Denial of Service (DoS) and Distributed DoS (DDoS) attacks [61, 62] can exhaust service resources, resulting in high latency or even unavailability of the V2X network. In this situation, autonomous vehicles may not be able to connect to the HD map for accurate navigation and perception services, which substantially endangers their safety.

Another type of attack targets the over-the-air (OTA) update channel, where attackers hijack the data transfer channel between the cloud and an autonomous vehicle and inject malware into the vehicle [63].

However, as attacks on cloud services belong more to the realm of general cyberattacks, we do not detail such attacks and their corresponding defenses in this survey.

III-C Adversarial attacks on deep learning models in perception and decision layers

Recent research shows that deep learning models are particularly vulnerable to adversarial examples that add imperceptible noise to original input images. Even though adversarial examples look similar to normal images to human eyes, they can mislead deep learning models into producing wrong predictions. By definition, an adversarial attack is an attack that constructs such adversarial examples. Adversarial attacks therefore pose considerable threats to ADSs due to the widespread use of deep learning models in both the perception layer and the decision layer.

In this section, we first introduce the definition of adversarial attacks along with some relevant concepts. We then review the progress of adversarial attacks on different deep learning models in ADSs.

III-C1 Introduction to adversarial attacks

Depending on the attacker's capabilities, adversarial attacks can be categorized as either white-box or black-box attacks. In white-box attacks, attackers are assumed to know all details of the target deep learning model, including training data, neural network architecture, parameters, and hyper-parameters, and to have the privilege of accessing the gradients and outputs of the model at run time.

There are two types of adversarial attacks, i.e., adversarial evasion attacks occurring at model inference time and poisoning attacks that happen in the model training period. Adversarial evasion attacks on deep learning models were first investigated for image classification tasks. Given a target deep learning model $f$ and an original image $x$ with its class $y$, an adversarial attack constructs a human-imperceptible perturbation $\delta$ to form an adversarial example $x' = x + \delta$, which deludes the model into making a wrong prediction $f(x') \neq y$.

Commonly, there are three different kinds of white-box methods to generate adversarial examples, namely, gradient-based methods, optimization-based methods and generative model-based methods.

  • Gradient-based methods: These attack methods [10, 11, 12, 13] are based on the Fast Gradient Sign Method (FGSM), shown in Equation (1), which directly generates adversarial examples by adding the sign of the gradient of the loss with respect to each pixel to the original image [10] (a minimal code sketch is given after this list):

    $x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y))$   (1)
  • Optimization-based methods: These attack methods [9, 14, 16] solve an optimization problem of the form

    $\min_{x'} \; \|x' - x\|_p + c \cdot \mathcal{L}(f(x'), y')$   (2)

    where the first term is the distance between the original image $x$ and the adversarial image $x'$, and the second term constrains the loss of the adversarial image with respect to the target class $y'$ [64]. By solving this optimization problem, one can generate an adversarial image $x'$ that is close to $x$ in $\ell_p$ distance but is classified as $y'$.

  • Generative model-based methods: This type of attack [19, 20] leverages generative models to generate targeted adversarial examples from original images. These methods normally learn a generative model $G$ by optimizing an objective function of the form

    $\mathcal{L} = \mathcal{L}_{\mathrm{adv}}(f(G(x)), y_t) + \lambda \, \mathcal{L}_{\mathrm{sim}}(G(x), x)$   (3)

    where $\mathcal{L}_{\mathrm{adv}}$ denotes the cross-entropy loss between the classification of the adversarial example and the targeted class $y_t$, and $\mathcal{L}_{\mathrm{sim}}$ measures the similarity between adversarial examples and original images.
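As a concrete instance of the gradient-based family above, the following minimal sketch applies the FGSM update of Equation (1) to a single image with a stand-in classifier; the ε value, model, and input sizes are placeholder assumptions.

```python
# Sketch of the FGSM update in Equation (1): perturb the input along the sign of
# the loss gradient. The epsilon value and classifier are placeholder assumptions.
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # x' = x + epsilon * sign(grad_x J(theta, x, y)), clipped to the valid range
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage with a stand-in linear classifier on a 32x32 image
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())   # perturbation is bounded by epsilon
```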

When it comes to black-box attacks, attackers are assumed to have no prior knowledge of the target model, but they can query the model and obtain its output without limit. There are three main approaches to generating black-box adversarial examples:

  • Transfer-based methods: Adversarial images crafted against a specific model were also found to be effective against other deep learning models; this property is called the transferability of adversarial examples [64]. Attackers can therefore implement a surrogate model based on the inputs and outputs obtained from the target model, and then mount white-box attacks on their own model instead. The adversarial examples constructed on the surrogate model can be used to attack the target black-box model.

  • Score-based methods: Although gradient information of a black-box model cannot be retrieved directly, the gradients can be estimated from the probability scores output by the target model and then used to craft adversarial examples [65] (a simplified sketch follows this list).

  • Decision-based methods: These methods rely only on the final decision (e.g., the top-1 classification result) of the target model; they craft adversarial examples starting from a randomly generated large perturbation and then iteratively reduce the perturbation while keeping its adversarial features [66].
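The score-based idea can be illustrated with a finite-difference gradient estimate: query the model on slightly perturbed copies of the input, use the score differences to approximate the gradient, and then take an FGSM-like step. This is a simplified sketch of the general approach, not the specific algorithm of [65]; the query budget, step size, and toy "remote" model are assumptions.

```python
# Sketch of a score-based black-box step: estimate the gradient of a class score
# with finite differences over random directions, then perturb the input.
import torch

def estimate_gradient(query_fn, x, target_class, num_samples=50, sigma=1e-3):
    grad = torch.zeros_like(x)
    for _ in range(num_samples):
        u = torch.randn_like(x)
        # central finite difference of the class score along direction u
        diff = query_fn(x + sigma * u)[target_class] - query_fn(x - sigma * u)[target_class]
        grad += diff / (2 * sigma) * u
    return grad / num_samples

# Toy usage: the "remote" model only exposes probability scores.
torch.manual_seed(0)
weights = torch.randn(10, 3 * 32 * 32)
query_fn = lambda img: torch.softmax(weights @ img.flatten(), dim=0)

x = torch.rand(3, 32, 32)
true_class = int(torch.argmax(query_fn(x)))
g = estimate_gradient(query_fn, x, true_class)
x_adv = (x - 0.03 * g.sign()).clamp(0, 1)   # step that lowers the true-class score
```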

For attacks on ADSs, black-box attacks are more realistic. In addition, attacks on ADSs occur in the physical world, where sensors collect environmental information (e.g., images and point clouds) from different angles, light conditions, and distances. Therefore, this paper intends to cover physical black-box evasion attacks evaluated both in simulation environments and in the real world.

III-C2 Adversarial evasion attacks on ADSs

This section first reviews attacks evaluated in simulation environments, either on real-world recorded data or in simulated real-world scenarios. It then reviews research conducted in the real world, which shows the harm of adversarial evasion attacks on ADSs in real life.

In [67], an approach called DeepBillboard was proposed to attack end-to-end driving models by replacing original roadside billboards with adversarial perturbations. Specifically, the adversarial billboards were generated by the aforementioned optimization-based method. The method was tested on two end-to-end driving models and three datasets, with scenarios in which billboards are positioned at different positions and angles. The results showed that the attack could make steering angle predictions deviate by up to 23 degrees. In [68], a Bayesian Optimization-based approach was proposed to generate paintings of black lines on the road that counterfeit lane lines and make the vehicle deviate from its original orientation. Experiments were conducted in the CARLA simulator [69], and the results showed that E2E driving models were successfully attacked and deviated toward the orientation chosen by the attackers. An updated approach proposed in [70] applied a gradient-based optimization algorithm to generate the black lines more quickly and with larger deviations. In [71], a decision-based approach was proposed to search for and craft adversarial textures for vehicles. The average prediction score and the precision of object detectors in ADSs decreased sharply when presented with vehicles bearing adversarial textures (shown in Figure 3). Apart from that, some works have investigated attacks on LiDAR-based object detection in simulation environments. In [72], a white-box optimization-based method was proposed to generate adversarial points and demonstrated how to inject these points into the original point cloud of an obstacle using lasers. Experiments were conducted on LiDAR sensor data through a simulator released by Baidu Apollo, and the average success rate of the attack reached up to 90% when the number of injected adversarial points exceeded 60. The first black-box attack on LiDAR was proposed in [73], aiming to insert attack traces into point clouds to baffle LiDAR-based object detectors; the experiments achieved a mean success rate of around 80% on the KITTI dataset.

Fig. 3: Left: The vehicle can be detected normally; Right: The vehicle with adversarial texture cannot be recognized (image credit: [71])

In addition to research conducted in simulation environments, other works study adversarial evasion attacks in the real world. For instance, an approach called ShapeShifter was proposed in [74] to attack the object detection model Faster R-CNN. The adversarial perturbation was generated by solving an optimization problem based on Expectation over Transformation, which aims to create a perturbation that remains robust when captured from different angles and under different lighting conditions. In the experiments, traffic signs with adversarial perturbations were printed in the real world, and high success rates were reported for both targeted and non-targeted attacks. In [75], a method to generate robust physical perturbations was proposed. In the experiments, attackers could print perturbed road signs and replace the true road signs with the perturbed ones (subtle poster attacks), or print only the physical perturbations as stickers of different colors and attach them to road signs (camouflage abstract art attacks). In road tests against a CNN model called LISA-CNN [76], both the subtle poster attacks and the camouflage abstract art attacks achieved high success rates.

Fig. 4: The stop sign with an adversarial sticker cannot be recognized from different distances and angles (image credit: [77])

A generative model-based approach called Perceptual-Sensitive GAN was also proposed in [21], in which an attention model was incorporated into the GAN to generate adversarial patches. Physical-world experiments in a black-box setting showed that the attack could reduce classification accuracy from 86.7% to 17.2% on average. Similarly, the methods proposed in [77] can generate robust adversarial stickers to attack object detectors in two modes: a hiding attack that makes detectors fail to detect objects, and an appearing attack that makes detectors recognize wrong objects. Besides object detectors, E2E driving models have also been attacked in physical-world settings, as revealed in [79]. A method called PhysGAN was proposed to generate a realistic billboard similar to the original one that nevertheless makes autonomous vehicles deviate from their original route. The experimental results showed that billboards generated by PhysGAN could deviate the steering angle predictions of E2E driving models by up to 19.17 degrees.

Fig. 5: Top Left: original billboard; Top Right: adversarial billboard generated by PhysGAN; Bottom: placing adversarial billboard in real world (image credit: [79])

III-C3 Adversarial poisoning attacks on ADSs

Poisoning attacks are another type of adversarial attack. More specifically, a poisoning attack injects malicious data with triggers and misleading labels into the original training data to make models learn the specific patterns of the triggers. At inference time, the models are induced to produce wrong predictions when inputs contain the malicious triggers. Poisoning attacks are also referred to as Trojan or backdoor attacks. In [88], a Trojan attack on E2E driving models was simulated: adversarial triggers such as a square or an Apple logo were constructed and placed in the corner of the original input images. Experimental results showed that if the road images contained these malicious triggers, the vehicle could easily deviate from the pre-planned track. In [89], poisoning attacks were conducted with four different triggers on four traffic sign recognition datasets, targeting one specific class of traffic signs. The results showed that the CNN model could learn the trigger patterns and achieve more than 95% accuracy on poisoned images when the ratio of poisoned images exceeded 5%. Meanwhile, the overall accuracy on the entire test dataset remained above 99%, suggesting that it is difficult to determine whether a model has suffered a poisoning attack by only observing test accuracy. In [90], a poisoning attack on deep generative models such as the GANs used for raindrop removal was proposed. Malicious data pairs were injected into the original training data to make the GAN learn a wrong mapping from the input domain to the output domain. The results showed that when the GAN removed raindrops, it simultaneously transformed red traffic lights to green or altered the numbers on speed limit signs.
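The mechanics of such a trigger-based poisoning attack can be sketched in a few lines: a small patch is stamped onto a fraction of the training images and their labels are rewritten to the attacker's target class before training proceeds as usual. The patch size, position, poison ratio, and target label below are illustrative assumptions rather than the settings of the cited works.

```python
# Sketch of trigger-based data poisoning (backdoor/Trojan style): stamp a small
# patch on a fraction of training images and relabel them to the attacker's
# target class. Patch size, ratio, and target label are illustrative assumptions.
import torch

def poison_dataset(images, labels, target_label=0, poison_ratio=0.05, patch_size=4):
    images, labels = images.clone(), labels.clone()
    n_poison = int(len(images) * poison_ratio)
    idx = torch.randperm(len(images))[:n_poison]
    # white square trigger in the bottom-right corner
    images[idx, :, -patch_size:, -patch_size:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# Toy usage on random "traffic sign" images with 43 classes
images = torch.rand(1000, 3, 32, 32)
labels = torch.randint(0, 43, (1000,))
poisoned_images, poisoned_labels, idx = poison_dataset(images, labels)
print(f"poisoned {len(idx)} of {len(images)} samples")
```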

III-D Analysis of attacks

1. Physical attacks are straightforward but limited to a certain range. Physical attacks on sensors can disrupt deep learning models by interfering with the data collection process. However, this type of attack requires the target to be in the proximity of the adversaries. For example, the camera blinding attack only works if the laser light is placed in front of the target vehicle, which makes such attacks difficult to implement.

2. Cyberattacks are harmful but challenging to mount. Cyberattacks on the cloud could affect numerous autonomous vehicles connected to the V2X network and thus result in severe consequences. However, to attack the cloud, adversaries need to fabricate the data transferred between the cloud and the vehicle or mount DDoS attacks with large botnets. Encryption of the data transmission process hinders both attacks, and the cloud can deploy detection systems such as [91, 92] to defend against DDoS attacks to some extent.

3. Adversarial attacks are effective and pose threats in the real world. Adversarial attacks, especially evasion attacks, pose considerable risks to deep learning models in ADSs due to the existence of adversarial perturbations in the black-box setting. Table II lists research works that implemented black-box evasion attacks and evaluated the effectiveness of their methods for attacking E2E driving models or object detectors in the perception layer of ADSs from different angles, distances, and light conditions, in simulation environments or in the real world. For this line of attacks, adversaries can arbitrarily produce malicious stickers and stealthily paste them anywhere. Adversarial poisoning attacks could occur in a scenario where a corporate spy has the chance to pollute training data; such attacks are likewise stealthy and hazardous. Therefore, it is essential to summarize current research on defenses against adversarial attacks. From the attack perspective, even more powerful attacks against autonomous vehicles may exist, from which further research can be drawn.

| Attack type | Attack objective | Literature | Method | Attack setting | Experiment setting |
|---|---|---|---|---|---|
| Evasion attacks | E2E driving model | [67] | Replacing the original billboard with an adversarial billboard by solving an optimization problem | White-box | Digital dataset |
| Evasion attacks | E2E driving model | [68] | Drawing black strips on the road by Bayesian optimization | Black-box | Simulation environment |
| Evasion attacks | E2E driving model | [70] | Drawing black strips on the road by gradient-based optimization | Black-box | Simulation environment |
| Evasion attacks | Object detection | [71] | Drawing adversarial textures on other vehicles by a discrete search method | Black-box | Simulation environment |
| Evasion attacks | 3D object detection | [72] | Generating adversarial points by an optimization-based method | White-box | Simulation environment |
| Evasion attacks | 3D object detection | [73] | Inserting attack traces into original point clouds | Black-box | Digital dataset |
| Evasion attacks | Traffic sign recognition | [74] | Replacing true traffic signs with adversarial traffic signs generated by solving an optimization problem | White-box | Real world |
| Evasion attacks | Traffic sign recognition | [75] | Pasting adversarial stickers generated by an optimization-based approach on traffic signs | White-box | Real world |
| Evasion attacks | Traffic sign recognition | [21] | Generating transferable adversarial patches by GAN | Black-box | Real world |
| Evasion attacks | Traffic sign recognition | [77] | Generating transferable adversarial traffic signs and stickers by feature-interference reinforcement | Black-box | Real world |
| Evasion attacks | E2E driving model | [79] | Generating an adversarial billboard by GAN | White-box | Real world |
| Poisoning attacks | E2E driving model | [64] | Adding poisoned images with triggers to the training data | White-box | Simulation environment |
| Poisoning attacks | Traffic sign recognition | [89] | Adding poisoned images with triggers to the training data | White-box | Digital dataset |
| Poisoning attacks | Rain drop removal | [90] | Adding poisoned image pairs with triggers to the training data | White-box | Digital dataset |

TABLE II: Adversarial attacks on autonomous driving

IV Defense methods

In this section, we take a close look at existing defense methods against both physical attacks and adversarial attacks. We also briefly discuss defenses for cloud services. The limitations of current defenses against adversarial evasion and poisoning attacks are further analyzed and summarized in Table III.

IV-A Defense against physical sensor attacks

Among all the countermeasures against physical sensor attacks, redundancy [47, 49, 51] is the most promising strategy for defending against jamming attacks. Redundancy means that a number of sensors of the same type are deployed to collect a designated type of data, which is fused as the final input for the perception layer. For example, when attackers blind one camera, the others can still collect normal images for environment perception. Undoubtedly, this method incurs additional financial cost, and sensor data fusion is generally considered a challenging research issue in its own right. To improve the robustness of cameras, another approach is to apply a near-infrared-cut filter that removes near-infrared light in the daytime and improves the quality of collected images [47]; this, however, does not work effectively at night. Alternatively, photochromic lenses can be used to filter a specific type of light and thus upgrade cameras, mitigating jamming attacks on them. For ultrasonic sensors and radars, since noise hardly occurs in a normal working environment, it is not difficult to build a detection system that detects incoming jamming attacks [49]. Moreover, a jamming detection system for GPS was proposed in [53], which aggregates GPS information from multiple sources, such as roadside monitoring stations and mobile phones, to improve the accuracy of GPS information.

One effective way to defend against spoofing attacks is to introduce randomness into data collection [47, 49]. For example, attackers can mount accurate attacks on LiDAR because there is a fixed probe window in which the LiDAR receives signals; if the probe time is randomized, it becomes much harder for adversaries to send fake signals. PyCRA is a spoofing detection system based on this idea [93]. Furthermore, data fusion mechanisms are considered effective against spoofing attacks, so fusing data from cameras, LiDAR, radars and ultrasonic sensors can help stabilize the performance of the perception layer.
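The randomized-probing idea can be illustrated with a toy simulation: the sensor occasionally stops emitting at random times, and any "echo" energy received during those silent probes is attributed to an attacker who transmits blindly. This is a conceptual sketch of the principle, not the actual PyCRA implementation; the signal model and thresholds are assumptions.

```python
# Toy simulation of randomized-probe spoofing detection: the sensor randomly goes
# silent, so energy received during silent slots must come from an attacker.
# Thresholds and the signal model are illustrative assumptions.
import random

def run_detector(num_slots=1000, silence_prob=0.2, attacker_on=True, threshold=0.5):
    alarms = 0
    for _ in range(num_slots):
        emitting = random.random() > silence_prob        # sensor randomly pauses
        echo = 1.0 if emitting else 0.0                   # legitimate reflection
        if attacker_on:
            echo += 0.8                                   # spoofer transmits blindly
        echo += random.gauss(0, 0.05)                     # measurement noise
        if not emitting and echo > threshold:             # energy during silence
            alarms += 1
    return alarms

random.seed(0)
print("alarms with spoofer:", run_detector(attacker_on=True))
print("alarms without spoofer:", run_detector(attacker_on=False))
```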

Existing sensor attacks have some obvious limitations. For instance, many attacks require external hardware to generate noise or fake signals within a short distance of the target vehicle. A human may recognize the occurrence of attacks such as a camera blinding attack from the front of the vehicle and take over the vehicle to avoid accidents. Therefore, even if the development of autonomous vehicles reaches a highly automated level, it is still necessary to keep a human supervisor in the vehicle as a safeguard.

IV-B Defense for cloud services

In the V2X map-updating process, the HD map needs to be secured for authenticity and integrity. Each map package should contain the unique identity of the service provider, and integrity and confidentiality should be ensured during the update to prevent data from being stolen or altered. In [57], encryption and authentication are applied to GPS data during transmission to defend against message falsification attacks. In [94], a symmetric-key encryption-based update technique was proposed that applies a link key between the service supplier and vehicles to form a secure package-updating connection. In [95], a hash function-based update technique was proposed: the package is first divided into several data fragments, and a hash chain of these fragments is created in decreasing order. Before the package is collected by the vehicle, the elements of the hash chain are encrypted using a pre-shared encryption key.

IV-C Defense against adversarial evasion attacks

Many defenses against adversarial evasion attacks have been proposed. In this survey, we review existing defenses and divide them into different categories. Adversarial defenses can be categorized into proactive and reactive methods. The former focus on improving the robustness of the targeted deep learning models, while the latter aim to detect and counter adversarial examples before they are fed into models. There are five main types of proactive defense methods, namely adversarial training, network distillation, network regularization, model ensemble, and certified defense. The primary reactive defenses are adversarial detection and adversarial transformation. Though most defenses have only been evaluated on image classification tasks, their ideas generalize well to other tasks in autonomous driving, since the underlying approaches of improving model robustness or pre-processing model inputs are not limited to image classification. To assess whether these defenses can be applied in ADSs, we analyze and compare them in Section IV-E.

| Type | Name | Function | Examples | Analysis |
|---|---|---|---|---|
| Proactive defenses | Adversarial training | Train a new robust model on a dataset that includes adversarial examples | [10], [11], [13] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks |
| Proactive defenses | Defensive distillation | Train a new robust model by distilling hidden-layer information from the original model | [96] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks |
| Proactive defenses | Model ensemble | Ensemble multiple models for the final prediction to improve robustness | [97], [98], [99] | Increases resource consumption |
| Proactive defenses | Network regularization | Train a robust model with a new objective function containing a perturbation-based regularizer | [100], [101], [102] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks |
| Proactive defenses | Certified robustness | Change the model architecture to make it provably robust against certain adversarial examples | [103], [104], [105] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks |
| Reactive defenses | Adversarial detection | Detect adversarial examples with a detector or by verifying the feature representation of inputs; detect hijacked images with triggers or identify poisoning attacks in the model | [106], [107], [108], [115], [116], [117] | The detector is not viable if it requires too many resources |
| Reactive defenses | Adversarial transformation | Apply transformations to convert adversarial examples back to clean images | [109], [110], [111], [112] | May reduce the performance of autonomous driving models under normal conditions |

TABLE III: Summary of adversarial defenses

IV-C1 Proactive defenses

Adversarial training was initially proposed in [10]. This defense re-trains a more robust model on a dataset that combines the original data with adversarial examples. In [11], experiments showed that adversarial training is only useful for defending against one-step attacks that generate adversarial examples in a single operation. In [13], a method combining multiple attacks was proposed to generate adversarial examples for adversarial training; however, it failed to improve the robustness of models against unseen attacks.
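A minimal adversarial training loop along these lines is sketched below: each batch is augmented with FGSM examples (reusing the update of Equation (1)) before the usual gradient step. The epsilon, mixing scheme, optimizer settings, and stand-in data are assumptions, not the configuration of any cited work.

```python
# Sketch of adversarial training: augment each batch with FGSM examples before
# the usual update. Epsilon, mixing, and optimizer settings are assumptions.
import torch
import torch.nn as nn

def fgsm_batch(model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):                       # stand-in for a real data loader
    x = torch.rand(32, 3, 32, 32)
    y = torch.randint(0, 10, (32,))
    x_adv = fgsm_batch(model, x, y)
    # train on clean and adversarial examples together
    batch_x = torch.cat([x, x_adv])
    batch_y = torch.cat([y, y])
    loss = nn.functional.cross_entropy(model(batch_x), batch_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```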

Defensive distillation was proposed in [96]. This defense trains a new model by using the probability logits of the original model as soft labels. The new model trained in this way is less sensitive to changes in gradients and therefore more robust against adversarial examples. However, a new optimization-based attack was later proposed in [14] to bypass this defense.

Network regularization methods train models against adversarial examples by adding an adversarial-perturbation-based regularizer to the original objective function [100]. In [101], contractive autoencoders were proposed and generalized to neural networks by using the Frobenius norm of the layer-wise Jacobian matrices as the regularizer. In [102], a tunable parameter was introduced to control the overall Lipschitz constant of the whole model. Experiments on CIFAR-10/CIFAR-100 [113] showed that such regularized models have higher robustness against the FGSM attack than the original models.

Model ensemble methods were designed to improve robustness by constructing an ensemble model that aggregates several individual models [97]. In [98], a random self-ensemble approach was proposed that derives the final prediction by averaging predictions over random noise injected into the model; this approach is equivalent to ensembling an infinite number of noisy models. In [99], an adaptive approach was proposed to train individual models with larger diversity, so that the ensemble achieves better robustness because attacks transfer less easily among the individual models.

Certified robustness methods aim to provide provable defenses against adversarial attacks for adversarial examples generated under certain threat models [103, 104, 105]. In [103], a method called PixelDP was proposed as a certified defense. This method adds a noise layer to the original model that applies random perturbations, bounded by a threshold, to the original inputs or feature representations. The new model is then provably robust against adversarial perturbations smaller than the pre-defined threshold.
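The flavor of such noise-layer defenses can be sketched with a prediction routine that averages the model's output over many random perturbations of the input. This is a simplified, randomized-smoothing-style illustration, assuming a Gaussian noise scale and sample count; it does not compute the formal certificate of PixelDP or related methods.

```python
# Simplified sketch in the spirit of noise-layer/randomized-smoothing defenses:
# predict by averaging softmax outputs over Gaussian perturbations of the input.
# Noise scale and sample count are assumptions; no formal certificate is computed.
import torch
import torch.nn as nn

def smoothed_predict(model, x, sigma=0.1, num_samples=100):
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x + sigma * torch.randn_like(x)), dim=1)
            for _ in range(num_samples)
        ]).mean(dim=0)
    return probs.argmax(dim=1), probs

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
pred, probs = smoothed_predict(model, x)
print(pred, probs.max().item())
```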

IV-C2 Reactive defenses

Adversarial detection identifies adversarial examples by introducing another classifier that differentiates the feature representations of adversarial examples from those of natural images. In [106], an intrinsic-defender (I-defender) was proposed to distinguish adversarial examples from original images under unknown attack methods. The I-defender explores an intrinsic property of the target model, e.g., the distribution of hidden states on the normal training set, and then uses this property to detect adversarial examples. Similarly, an effective method for DNNs with a softmax layer was proposed in [107] to detect abnormal samples, including out-of-distribution (OOD) samples and adversarial examples. The idea is to use Gaussian discriminant analysis [114] to measure the probability density of test samples in the feature spaces of the DNN. In [108], an approach called Feature Squeezing was proposed to detect adversarial examples by squeezing the color bit depth of each pixel. If the difference between the predictions on the original input and the squeezed input exceeds a threshold, the original input is likely an adversarial example.
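A feature-squeezing-style check along these lines can be sketched as follows: reduce the color bit depth, re-run the model, and flag the input if the two prediction vectors disagree by more than a threshold. The bit depth, threshold, and stand-in model are assumptions, not the settings of [108].

```python
# Sketch of a feature-squeezing check: compare predictions on the original input
# and a bit-depth-reduced copy; a large gap suggests an adversarial example.
# Bit depth and threshold are illustrative assumptions.
import torch
import torch.nn as nn

def squeeze_bit_depth(x, bits=4):
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def is_adversarial(model, x, threshold=1.0):
    with torch.no_grad():
        p_orig = torch.softmax(model(x), dim=1)
        p_squeezed = torch.softmax(model(squeeze_bit_depth(x)), dim=1)
    return (p_orig - p_squeezed).abs().sum(dim=1) > threshold   # L1 gap per sample

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
print(is_adversarial(model, x))
```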

Adversarial transformation is a set of approaches that apply transformations to adversarial examples to reconstruct them back into clean images. In [109], the effects of five image transformations for defending against FGSM, I-FGSM, DeepFool and C&W attacks were investigated. The results showed that transformations were partially effective against adversarial perturbations, while randomized (e.g., image cropping) and non-differentiable (e.g., total variation minimization) transformations were stronger defenses. In [110], a framework called Defense-GAN was proposed, which learns the underlying distribution of the image dataset and can generate images from this distribution. When an adversarial example is fed into the target model, Defense-GAN generates many images close to the adversarial example and searches for the optimal one to use as the input of the target model. In [111], another GAN model called Adversarial Perturbation Elimination GAN (APE-GAN) was proposed to denoise adversarial examples; it takes adversarial examples as input and directly outputs the corresponding denoised images. The experiments confirmed that APE-GAN can defend against common attacks. In [112], the High-Level Representation Guided Denoiser (HGD) was proposed to transform adversarial examples through an auto-encoder network. The key idea of HGD is that it does not minimize the pixel-level distance between the generated image and the original image but instead shortens the distance between their feature representations at a chosen layer of the target model. The experimental results show that HGD ranked first in the NIPS adversarial defense competition [97].

IV-D Defense against adversarial poisoning attacks

A number of defense methods against poisoning attacks have been proposed in recent research. The general philosophy is to detect whether the current input image is a hijacked image containing a trigger. Another high-level idea is to identify the poisoning attack inside the model and then remove the backdoor or Trojan. Both ideas belong to reactive adversarial detection defenses. In [115], a detection method called STRIP was proposed, which compares the predictions of the original input image and perturbed input images generated by superimposing other clean images from the training data. If the input image does not contain a trigger, the predictions of the input image and the perturbed images should differ. However, if the input image contains a trigger, the predictions should remain the same, because the perturbed images also contain the trigger that dominates the prediction of the model. In this manner, hijacked images with triggers can be detected.
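A STRIP-style check in this spirit can be sketched as below: the suspect image is blended with several clean images, and consistently unchanged predictions (low entropy across the blends) indicate that a trigger is dominating the output. The blend weight, sample count, threshold, and stand-in model are assumptions rather than the published STRIP configuration.

```python
# Sketch of a STRIP-style trigger check: blend the suspect image with clean images
# and measure prediction entropy across the blends; abnormally low average entropy
# suggests a trigger dominates the prediction. Parameters are assumptions.
import torch
import torch.nn as nn

def strip_entropy(model, suspect, clean_images, alpha=0.5):
    entropies = []
    with torch.no_grad():
        for clean in clean_images:
            blend = alpha * suspect + (1 - alpha) * clean
            p = torch.softmax(model(blend.unsqueeze(0)), dim=1).squeeze(0)
            entropies.append(-(p * p.clamp_min(1e-12).log()).sum())
    return torch.stack(entropies).mean()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
suspect = torch.rand(3, 32, 32)
clean_images = torch.rand(20, 3, 32, 32)
score = strip_entropy(model, suspect, clean_images)
flagged = score < 0.5                     # low average entropy -> likely trigger
print(score.item(), bool(flagged))
```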

In [116], a detection method was proposed to distinguish clean input images from malicious ones containing a trigger. The method is based on the observation that even though clean images and hijacked images are classified with the same label, the outputs of the last activation layer are drastically different. Exploiting this difference, the method adopts a clustering algorithm to group the poisonous data. In [117], a comprehensive method was proposed to identify and mitigate poisoning attacks at the model level. First, candidate triggers were generated for each label, and the weights of the neurons activated by the detected trigger were then removed to make the trigger ineffective. The experimental results illustrated that this approach could significantly reduce attack success rates, even from over 90% to 0% for some poisoning attacks.

IV-E Analysis of defenses

1). Defense against physical sensor attacks is costly but effective. Redundancy defense requires numerous sensors of the same type to collect the target data, which is fused before being sent to the perception layer. Even though it leads to significant expenditure on sensors, redundancy is considered a simple yet effective way to defend against jamming attacks. Apart from the cost, the technical issue of data fusion also needs to be taken into account.

2). Current adversarial defense methods are not well suited to autonomous vehicles. Table III summarizes the reviewed defense techniques. Among proactive methods, adversarial training and defensive distillation require training a new robust model on top of the original model training. However, training autonomous driving models generally requires large datasets and incurs significant training time, so adopting these techniques inevitably adds resource overhead. Moreover, adversarial training and defensive distillation are only effective against simple adversarial attacks such as FGSM. As stated in the preceding section, model ensemble methods combine the results of multiple models to improve robustness, which also causes large extra resource overhead. In contrast, network regularization and robustness methods can be integrated into the training process of autonomous driving models without incurring large extra overhead. It is worth noting, however, that such methods have mostly been evaluated on DL models with simple network architectures, and their effectiveness still needs to be verified in ADS settings. Among reactive methods, adversarial transformation can achieve satisfactory results when applied to adversarial examples, but performance may degrade on normal inputs, which is unacceptable for safety-critical autonomous vehicles. As for adversarial detection, some techniques rely on auxiliary classifiers to detect adversarial examples, which is also impractical because the extra classifier requires additional computation resources and might violate the stringent timing constraints of ADSs. Therefore, adversarial detection methods that do not introduce considerable resource overhead could be incorporated into autonomous driving models, and other techniques for improving the robustness of autonomous driving models should be explored in the future. In addition, since autonomous driving is a real-time interactive process, real-time monitoring and defense are of great importance for keeping autonomous vehicles safe.

Fig. 6: Overview of defense framework on ADS

V Future Directions

In this survey, we have conducted a comprehensive review of existing attacks, both physical and adversarial, as well as the corresponding defense methods, together with a detailed analysis of their applicability and limitations in deep learning-based ADSs. The survey discusses various adversarial attacks that could be detrimental to deep learning autonomous driving models and identifies the relevant safety threats. In this section, we outline further research directions for possible attacks on ADSs and strategies for improving the robustness of ADSs against adversarial attacks. In particular, we propose potential detection mechanisms explicitly applicable to current autonomous vehicles, since the majority of existing adversarial defense methods were not designed for deep learning-based ADSs in the first place.

V-A Potential attacks in future research

V-A1 Adversarial attacks on the whole ADS

Most existing attack-related research focuses on a single target (e.g., physical attacks on cameras or GPS) or a single sub-task of an ADS (e.g., adversarial attacks on object detectors). Some works simplify an ADS into an E2E driving model for attack purposes. However, an ADS is composed of several layers, and inputs from different sensors are usually fused first to provide environment information. Successfully attacking one sensor or one deep learning model therefore does not necessarily make the ADS produce wrong control decisions. For example, object detection in autonomous vehicles can be realized through the fusion of camera-based and LiDAR-based deep learning models, and attacking only one of them may not affect the final recognition results. Therefore, it is essential to investigate attacks against models based on multi-modal inputs, as well as attacks against full-stack ADSs such as Apollo and Autoware.
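To make this concrete, the sketch below (our own illustration, not taken from a specific surveyed attack) fuses per-class confidence scores from a hypothetical camera model and a hypothetical LiDAR model. Pushing only the camera scores toward a wrong class leaves the fused decision unchanged as long as the LiDAR branch remains correct.

```python
import numpy as np

def late_fusion_decision(camera_scores: np.ndarray, lidar_scores: np.ndarray,
                         w_camera: float = 0.5) -> int:
    """Weighted late fusion of per-class confidences from two modalities."""
    fused = w_camera * camera_scores + (1.0 - w_camera) * lidar_scores
    return int(np.argmax(fused))

# Class 0 = pedestrian, class 1 = background (labels are illustrative).
camera = np.array([0.10, 0.90])   # camera branch fooled into "background"
lidar = np.array([0.95, 0.05])    # LiDAR branch still detects the pedestrian
print(late_fusion_decision(camera, lidar))  # -> 0: fused output stays correct
```

An attack that aims to break the full pipeline would therefore have to perturb both modalities consistently, which is the open problem highlighted above.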

V-A2 Semantic adversarial attacks

Recently, some research has started to investigate semantic adversarial attacks, which change specific attributes of the input, such as lighting conditions or image clarity, to generate natural adversarial examples. The existence of semantic adversarial attacks demonstrates that deep learning models can make mistakes in the real world even without an adversary: weather, lighting, or other environmental conditions may coincidentally act as semantic adversarial attributes. Such uncertainty poses unexpected threats to autonomous vehicles. Therefore, research on semantic adversarial attacks is necessary for achieving better performance and robustness of the deep learning models applied in ADSs.
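A simple way to probe this kind of fragility, sketched below under our own assumptions (a generic PyTorch classifier and a coarse brightness/contrast grid), is to search over natural attribute changes rather than pixel-level perturbations and report any setting that flips the prediction.

```python
import itertools
import torch

def find_semantic_failure(model, image: torch.Tensor, true_label: int):
    """Grid-search brightness/contrast shifts (natural attributes) that flip
    the model's prediction; returns the first failing setting, if any."""
    brightness = [-0.3, -0.15, 0.0, 0.15, 0.3]
    contrast = [0.7, 0.85, 1.0, 1.15, 1.3]
    with torch.no_grad():
        for b, c in itertools.product(brightness, contrast):
            perturbed = torch.clamp(c * image + b, 0.0, 1.0)
            pred = model(perturbed).argmax(dim=1).item()
            if pred != true_label:
                return {"brightness": b, "contrast": c, "prediction": pred}
    return None  # no failure found on this coarse grid
```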

V-A3 Reverse-engineering attacks

Beyond adversarial attacks, reverse-engineering attacks on ADSs are another possible research direction. For instance, an approach for constructing a metamodel that predicts attributes of black-box classifiers was proposed in [118]; based on the extracted attributes, adversarial examples can then be crafted to attack the black-box classifiers. The parameters of a neural network can also be recovered through side-channel analysis [119]. Since deep learning models are now widely adopted in industry, the valuable information contained in their structures and parameters should be securely protected. Simply put, a model should be robust enough against various reverse-engineering attacks to preserve both the integrity and stability of the model.

V-B Strategies for robustness improvement

Based on the reviewed attacks and defenses, we propose a defense framework to improve the robustness of ADSs, as shown in Figure 6. The framework could be applied in industrial practice. Specifically, we propose four strategies that could be investigated in the future: hardware redundancy, robust model training, model testing and verification, and anomaly detection.

V-B1 Hardware redundancy

As discussed in Section V-A1, current attacks typically focus on one specific target in an ADS, so applying multiple sensors to perceive the environment is a good way to improve robustness. In addition, with the development of V2X, an autonomous vehicle can receive information from roadside units such as surveillance cameras or from other nearby vehicles. By fusing sensor data from V2X clients with the data collected by on-board sensors, the perceived environment information becomes more robust against being turned into adversarial input.

V-B2 Model robustness training

From the perspective of adversarial defense, training autonomous driving models that are naturally robust against adversarial examples is a promising research direction. Network regularization follows this line of thought; however, many network regularization methods focus only on specific adversarial examples. Recently, a new regularization method was proposed in [120] that introduces a surrogate loss to improve model robustness; it won first place in the NeurIPS 2018 Adversarial Vision Challenge for defending against adversarial examples. Another promising approach to improving robustness is to modify the network architecture of the models.
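The sketch below gives a simplified, single-step version of such a surrogate-loss regularizer in PyTorch: the training loss combines the natural classification loss with a KL term that penalizes prediction changes under a small perturbation, in the spirit of [120]. The one-step perturbation, the beta weight, and the function name are our own simplifications, not the exact formulation of the cited work.

```python
import torch
import torch.nn.functional as F

def robust_surrogate_loss(model, x, y, epsilon=0.01, beta=6.0):
    """Natural cross-entropy plus a KL term that penalizes prediction shifts
    under a one-step perturbation (a simplified surrogate-loss regularizer)."""
    logits = model(x)
    natural_loss = F.cross_entropy(logits, y)

    # One gradient ascent step on the KL term to craft a perturbed input.
    x_adv = x.detach() + 0.001 * torch.randn_like(x)
    x_adv.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                  F.softmax(logits.detach(), dim=1), reduction="batchmean")
    grad = torch.autograd.grad(kl, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).detach().clamp(0.0, 1.0)

    # Robustness regularizer: keep predictions stable on the perturbed input.
    robust_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                         F.softmax(logits, dim=1), reduction="batchmean")
    return natural_loss + beta * robust_kl
```

Unlike plain adversarial training, the regularizer trades off natural accuracy against prediction stability explicitly through the beta weight.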

V-B3 Model testing and verification

After the model training stage, it is also essential to apply viable testing and verification techniques to the trained models to measure their performance against adversarial examples. Data-driven deep learning models are vastly different from traditional software and therefore can hardly benefit from existing software engineering testing methods [122]. Several testing and verification tools have been developed to cope with this issue; for example, a white-box framework that exhaustively searches for adversarial examples was proposed in [123]. Applying testing and verification techniques to catch adversarial examples before deployment is thus another promising research direction.
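A lightweight form of such testing, sketched below with our own function names, sweeps perturbation budgets with an FGSM-style probe over a labeled test set and reports the fraction of inputs whose predictions remain unchanged. This is an empirical check, not the exhaustive verification performed in [123].

```python
import torch
import torch.nn.functional as F

def robustness_sweep(model, loader, epsilons=(0.005, 0.01, 0.02)):
    """Report, for each perturbation budget, the fraction of test inputs whose
    prediction survives a single FGSM-style perturbation."""
    results = {}
    for eps in epsilons:
        stable, total = 0, 0
        for x, y in loader:
            x = x.clone().requires_grad_(True)
            loss = F.cross_entropy(model(x), y)
            grad = torch.autograd.grad(loss, x)[0]
            x_adv = (x + eps * grad.sign()).detach().clamp(0.0, 1.0)
            with torch.no_grad():
                clean_pred = model(x).argmax(dim=1)
                adv_pred = model(x_adv).argmax(dim=1)
            stable += (clean_pred == adv_pred).sum().item()
            total += y.numel()
        results[eps] = stable / total
    return results  # e.g. {0.005: 0.97, 0.01: 0.91, 0.02: 0.74} (illustrative)
```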

V-B4 Adversarial attacks detection in real time

Lastly, before deploying a robust ADS, a sound adversarial attack detection and monitoring system is urgently needed as the last line of defense against the variety of attacks autonomous vehicles face in real time. Current adversarial attack detection methods usually rely on an auxiliary model to detect adversarial examples, which may not be feasible for resource-constrained autonomous vehicles. Therefore, detecting abnormal behavior caused by adversarial examples without incurring extra resource overhead is an important research direction. Adversarial detection techniques such as the one in [107], discussed in Section IV, do not introduce new models or layers into the original autonomous driving models and hence do not cause large overhead. However, these works were only evaluated on public datasets such as MNIST and CIFAR-10, so comprehensive experiments on datasets from real-world autonomous driving tasks are still needed. Another possible research direction is to deploy an anomaly detection system on a cloud/edge server to monitor and analyze the data uploaded by autonomous vehicles. The cloud/edge server has powerful computation resources, so more accurate detection methods could be implemented there to detect adversarial examples. However, ensuring timely responses, handling time synchronization, and processing a large amount of sensor data at run time remain unsolved problems for such an anomaly detection system. In [124], a decentralized swift vigilance framework was proposed to recognize abnormal inputs with ultra-low latency. In [128], a highly scalable anomaly detection mechanism was developed to gather and compress event data in a highly distributed environment, achieving a desirable balance between response time and accuracy.
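A minimal sketch of a detector in the spirit of [107] follows: fit class-conditional Gaussians over penultimate-layer features of clean training data, then flag inputs whose minimum Mahalanobis distance to any class is unusually large. The class structure, shared covariance estimate, and threshold calibration shown here are simplified assumptions rather than the full method of the cited paper.

```python
import numpy as np

class MahalanobisDetector:
    """Fit class-conditional Gaussians (shared covariance) on clean features;
    score new features by their minimum Mahalanobis distance to any class."""

    def fit(self, feats: np.ndarray, labels: np.ndarray):
        classes = np.unique(labels)
        self.means = {c: feats[labels == c].mean(axis=0) for c in classes}
        centered = np.vstack([feats[labels == c] - self.means[c] for c in classes])
        cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
        self.precision = np.linalg.inv(cov)
        return self

    def score(self, feat: np.ndarray) -> float:
        dists = [float((feat - m) @ self.precision @ (feat - m))
                 for m in self.means.values()]
        return min(dists)  # large score => likely adversarial / out-of-distribution

# Usage sketch: calibrate a threshold tau on clean validation features, then
# flag a penultimate-layer feature vector f as adversarial when
# detector.score(f) > tau.
```

Because the detector reuses features the driving model already computes, it adds little overhead, which is the property that makes this family of methods attractive for ADSs.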

VI Conclusion

Deep learning-based ADSs are the key to realizing more intelligent self-driving systems, yet they remain vulnerable to diverse attacks. In this survey, potential safety-threatening attacks are analyzed across the workflow of deep learning-based ADSs, covering physical attacks, cyberattacks and adversarial attacks. Physical attacks are straightforward to mount but have clear limits that existing defense methods can handle effectively. Cyberattacks are considered difficult to launch at large scale, and system-level defense methods are comparatively easy to implement. Adversarial attacks are effective, and more defenses against them are needed, as traditional defense methods are not suitable in the self-driving context. In future research, adversarial attacks on LiDAR and deep reinforcement learning models, as well as reverse-engineering attacks, are potential attacks that must be studied. To improve the robustness of ADSs, model robustness training, model testing and verification, and real-time adversarial attack detection should also be studied thoroughly.

References

  • [1] Eureka, Programme for a European traffic system with highest efficiency and unprecedented safety, https://www.eurekanetwork.org/, (Accessed: 1 Dec. 2020).
  • [2] M. Buehler, K. Iagnemma, and S. Singh, The 2005 DARPA grand challenge: the great robot race, Springer, 2007.
  • [3] Tesla, Tesla Autopilot, https://www.tesla.com/autopilot, (Accessed: 30 Sept. 2019).
  • [4] Waymo, Waymo llc, https://waymo.com/, (Accessed: 30 Sep. 2019).
  • [5] M. Berboucha, Uber self-driving car crash: What really happened, https://bit.ly/2YKu9WN, (Accessed: 30 Sep. 2019).
  • [6] Baidu, ApolloAuto, https://github.com/ApolloAuto/apollo, 2020.
  • [7] Global Times, Baidu fully opens Apollo Go Robotaxi services in Beijing, https://www.globaltimes.cn/content/1203174.shtml, (Accessed: 1 Mar. 2021).
  • [8] Tesla, Autopilot, https://www.tesla.com/en_AU/autopilotAI, (Accessed: 1 Mar. 2021).
  • [9] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in Proc. ICLR, Banff, AB, Canada, Apr. 2014.
  • [10] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in Proc. ICLR, San Diego, CA, USA, May. 2015.
  • [11] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” in Proc. ICLR, Toulon, France, Apr. 2017.
  • [12] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in Proc. ICLR, Toulon, France, Apr. 2017.
  • [13] F. Tramèr, A. Kurakin, N. Papernot, I. J. Goodfellow, D. Boneh, and P. D. McDaniel, “Ensemble adversarial training: Attacks and defenses,” in Proc. ICLR, Vancouver, BC, Canada, Apr. 2018.
  • [14] N. Carlini and D. A. Wagner, “Towards evaluating the robustness of neural networks,” in Proc. SP, San Jose, CA, USA, May 2017, pp. 39–57.
  • [15] P. Y. Chen, Y. Sharma, H. Zhang, J. F. Yi, and C. Hsieh, “EAD: Elastic-net attacks to deep neural networks via adversarial examples,” in Proc. AAAI, New Orleans, Louisiana, USA, Feb. 2018, pp. 10–17.
  • [16] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: A simple and accurate method to fool deep neural networks,” in Proc. CVPR, Las Vegas, NV, USA, Jun. 2016, pp. 2574–2582.
  • [17] J. Su, D. V. Vargas, and K. Sakurai, “One pixel attack for fooling deep neural networks,” IEEE Trans. Evolutionary Computation, vol. 23, no. 5, pp. 828–841, Oct. 2019.
  • [18] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Universal adversarial perturbations,” in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 86–94.
  • [19] O. Poursaeed, I. Katsman, B. Gao, and S. J. Belongie, “Generative adversarial perturbations,” in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 4422–4431.
  • [20] C. Xiao, B. Li, J. Zhu, W. He, M. Liu, and D. Song, “Generating adversarial examples with adversarial networks,” in Proc. IJCAI, Stockholm, Sweden, Jul. 2018, pp. 3905–3911.
  • [21] A. Liu, X. Liu, J. Fan, Y. Ma, A. Zhang, H. Xie, and D. Tao, “Perceptual-sensitive GAN for generating adversarial patches,” in Proc. AAAI, Honolulu, Hawaii, USA, Feb. 2019, vol. 33, pp. 1028–1035.
  • [22] K. Ren, Q. Wang, C. Wang, Z. Qin, and X. Lin, “The security of autonomous driving: Threats, defenses, and future directions,” Proceedings of the IEEE, vol. 108, no. 2, pp. 357–372, 2019.
  • [23] M. Pham and K. Xiong, “A Survey on security attacks and defense techniques for connected and autonomous vehicles,” CoRR, vol. abs/2007.08041, 2020.
  • [24] N. Akhtar and A. Mian, “Threat of adversarial attacks on deep learning in computer vision: A survey,” IEEE Access, vol. 6, pp. 14410–14430, 2018.
  • [25] X. Yuan, P. He, Q. Zhu and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE Trans. Neural Networks Learn. Syst., vol. 30, no. 9, pp. 2805–2824, 2019.
  • [26] A. Agarwal, S. Gupta, and D. K. Singh, “Review of optical flow technique for moving object detection,” in Proc. IC3I, Noida, India, Dec. 2016, pp. 409–413.
  • [27] S. Wang, R. Clark, H. Wen, and N. Trigoni, “DeepVO: Towards end-to-end visual odometry with deep recurrent convolutional neural networks,” CoRR, vol. abs/1709.08429, 2017.
  • [28] M. Bloesch, J. Czarnowski, R. Clark, S. Leutenegger, and A. J. Davison, “CodeSLAM-Learning a compact, optimisable representation for dense visual SLAM,” in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 2560–2568.
  • [29] M. Lu, W. Chen, X. Shen, H.-C. Lam, and J. Liu, “Positioning and tracking construction vehicles in highly dense urban areas and building construction sites,” Automat. Constr., vol. 16, no. 5, pp. 647–656, Aug. 2007.
  • [30] F. Ghallabi, F. Nashashibi, G. El-Haj-Shhade, and M. Mittet, “Lidar-based lane marking detection for vehicle positioning in an HD map,” in Proc. ITSC. Maui, HI, USA, Nov. 2018, pp. 2209–2214.
  • [31] R. B. Girshick, “Fast R-CNN,” in Proc. ICCV, Santiago, Chile, Dec. 2015, pp. 1440–1448.
  • [32] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. CVPR, Las Vegas, NV, USA, Jun. 2016, pp. 779–788.
  • [33] Y. Zhou and O. Tuzel, “Voxelnet: End-to-end learning for point cloud based 3D object detection,” in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 4490–4499.
  • [34] S. Shi, X. Wang, and H. Li, “PointRCNN: 3D object proposal generation and detection from point cloud,” in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 770–779.
  • [35] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. CVPR, Boston, MA, USA, Jun. 2015, pp. 3431–3440.
  • [36] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 6230–6239.
  • [37] T. Y. Gu, J. M. Dolan, and J. Lee, “Human-like planning of swerve maneuvers for autonomous vehicles,” in Proc. IV, Gotenburg, Sweden, Jun. 2016, pp. 716–721.
  • [38] A. Gupta, J. Johnson, F. F. Li, S. Savarese, and A. Alahi, “Social GAN: Socially acceptable trajectories with generative adversarial networks,” in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 2255–2264.
  • [39] W. Luo, B. Yang, and R. Urtasun, “Fast and furious: Real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net,” in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 3569–3577.
  • [40] M. Wulfmeier, D. Z. Wang, and I. Posner, “Watch this: Scalable cost-function learning for path planning in urban environments,” in Proc. IROS, Daejeon, South Korea, Oct. 2016, pp. 2089–2095.
  • [41] P. Wolf, C. Hubschneider, M. Weber, A. Bauer, J. Härtl, F. Durr, and J. M. Zöllner, “Learning how to drive in a real world simulation with deep Q-Networks,” in Proc. IV, Los Angeles, CA, USA, Jun. 2017, pp. 244–250.
  • [42] M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. K. Zhang, X. Zhang, J. Zhang, and K. Zieba, “End to end learning for self-driving cars,” CoRR, vol. abs/1604.07316, Apr. 2016.
  • [43] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne, “Imitation learning: A survey of learning methods,” ACM Computing Surveys, vol. 50, no. 2, pp. 1–35, Jun. 2017.
  • [44] F. Codevilla, M. Miiller, A. López, V. Koltun, and A. Dosovitskiy, “End-to-end driving via conditional imitation learning,” in Proc. ICRA, Brisbane, Australia, May. 2018, pp. 1–9.
  • [45] H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 2174–2182.
  • [46] M. Sundermeyer, R. Schlüter, and H. Ney, “LSTM neural networks for language modeling,” in Proc. ISCA, Portland, OR, USA, Sept. 2012, pp. 194–197.
  • [47] J. Petit, B. Stottelaar, M. Feiri, and F. Kargl, “Remote attacks on automated vehicles sensors: Experiments on camera and lidar,” Black Hat Europe, Amsterdam, Netherlands, Nov. 2015.
  • [48] Y. Park, Y. Son, H. Shin, D. Kim, and Y. Kim, “This ain’t your dose: Sensor spoofing attack on medical infusion pump,” in Proc. WOOT, Austin, TX, USA, Aug. 2016.
  • [49] C. Yan, W. Xu, and J. Liu, “Can you trust autonomous vehicles: Contactless attacks against sensors of self-driving vehicle,” DEF CON, Paris, France, Aug. 2016.
  • [50] H. Shin, D. Kim, Y. Kwon, and Y. Kim, “Illusion and dazzle: Adversarial optical channel exploits against lidars for automotive applications,” in Proc. CHES, Taipei, Taiwan, Sept. 2017, pp. 445–467.
  • [51] B. S. Lim, S. L. Keoh, and V. L. L. Thing, “Autonomous vehicle ultrasonic sensor vulnerability and impact assessment,” in Proc. IoTWF, Singapore, Feb. 2018, pp. 231–236.
  • [52] Y. Son, H. Shin, D. Kim, Y. Park, J. Noh, K. Choi, J. Choi, and Y. Kim, “Rocking drones with intentional sound noise on gyroscopic sensors,” in Proc. USENIX, Washington, D.C., USA, Aug. 2015, pp. 881–896.
  • [53] G. Kar, H. A. Mustafa, Y. Wang, Y. Chen, W. Xu, M. Gruteser, and T. Vu, “Detection of on-road vehicles emanating GPS interference,” in Proc. SIGSAC, Scottsdale, AZ, USA, Nov. 2014, pp. 621–632.
  • [54] M. Psiaki and T. Humphreys, “Protecting GPS from spoofers is critical to the future of navigation,” IEEE Spectrum, vol. 10, Jul. 2016.
  • [55] Q. Meng, L. T. Hsu, B. Xu, X. Luo, and A. El-Mowafy, “A GPS spoofing generator using an open sourced vector tracking-based receiver,” Sensors, vol. 19, no. 18, p. 3993, May 2019.
  • [56] J. S. Warner and R. G. Johnston, “A simple demonstration that the Global Positioning System (GPS) is vulnerable to spoofing,” Journal of Security Administration, vol. 25, no. 22, pp. 19–27, 2002.
  • [57] K. Zeng, S. Liu, Y. Shu, D. Wang, H. Li, Y. Dou, G. Wang, and Y. Yang, “All your GPS are belong to us: Towards stealthy manipulation of road navigation systems,” in Proc. USENIX, Baltimore, MD, USA, Aug. 2018, pp. 1527–1544.
  • [58] D. Davidson, H. Wu, R. Jellinek, V. Singh, and T. Ristenpart, “Controlling UAVs with sensor input spoofing attacks,” in Proc. WOOT, Austin, TX, USA, Aug. 2016.
  • [59] D. Nassi, R. B. Netanel, Y. Elovici, and B. Nassi, “MobilBye: Attacking ADAS with camera spoofing,” CoRR, vol. abs/1906.09765, 2019.
  • [60] M. B. Sinai, N. Partush, S. Yadid, and E. Yahav, “Exploiting social navigation,” CoRR, vol. abs/1410.0151, Oct. 2014.
  • [61] M. Long, C. Wu, and J. Y. Hung, “Denial of service attacks on network-based control systems: impact and mitigation,” IEEE Trans. Industrial Informatics, vol. 1, no. 2, pp. 85–96, May 2005.
  • [62] M. Du and K. Wang, “An sdn-enabled pseudo-honeypot strategy for distributed denial of service attacks in industrial internet of things,” IEEE Trans. Industrial Informatics, vol. 16, no. 1, pp. 648–657, Jan. 2020.
  • [63] L. B. Othmane, H. Weffers, M. Mohamad, and M. Wolf, “A survey of security and privacy in connected vehicles,” in Wireless Sensor and Mobile Ad-hoc Networks, pp. 217–247, 2015.
  • [64] Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial examples and black-box attacks,” in Proc. ICLR, Toulon, France, Apr. 2017
  • [65] P. Chen, H. Zhang, Y. Sharma, J. Yi, and C. Hsieh, “Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models,” in Proc. AISec, New York, NY, USA, Aug. 2017, pp. 15–26.
  • [66] W. Brendel, J. Rauber, and M. Bethge, “Decision-based adversarial attacks: Reliable attacks against black-box machine learning models,” in Proc. ICLR, Vancouver, BC, Canada, Feb. 2018.
  • [67] H. Zhou, W. Li, Y. Zhu, Y. Zhang, B. Yu, L. Zhang, and C. Liu, “Deepbillboard: Systematic physical-world testing of autonomous driving systems,” in Proc. ICSE, Seoul, South Korea, Jun. 2020.
  • [68] A. Boloor, K. Garimella, X. He, C. Gill, Y. Vorobeychik, and X. Zhang, “Attacking vision-based perception in end-to-end autonomous driving models,” Journal of Systems Architecture, 101766.
  • [69] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An open urban driving simulator,” CoRR, vol. abs/1711.03938, 2017.
  • [70] J. Yang, A. Boloor, A. Chakrabarti, X. Zhang, and Y. Vorobeychik, “Finding physical adversarial examples for autonomous driving with fast and differentiable image compositing,” CoRR, vol. abs/2010.08844, 2020.
  • [71] T. Wu, X. Ning, W. Li, R. Huang, H. Yang, and Y. Wang, “Physical adversarial attack on vehicle detector in the carla simulator,” CoRR, vol. abs/2007.16118, 2020.
  • [72] Y. Cao, C. Xiao, B. Cyr, Y. M. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao, “Adversarial sensor attack on lidar-based perception in autonomous driving,” in Proc. CCS, London, UK, Nov. 2019, pp. 2267–2281.
  • [73] J. Sun, Y. Cao, Q.A. Chen, and Z.M. Mao, “Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures,” in Proc. USENIX Security Symposium, Aug. 2018, pp. 877–894.
  • [74] S. T. Chen, C. Cornelius, J. Martin, and D. H. Chau, “Shapeshifter: Robust physical adversarial attack on faster R-CNN object detector,” in Proc. ECML PKDD, Dublin, Ireland, Sept. 2018, pp. 3354–3361.
  • [75] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. W. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual classification,” in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 1625–1634.
  • [76] A. Møgelmose, M. M. Trivedi, and T. B. Moeslund, “Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey,” IEEE Trans. Intelligent Transportation Systems, vol. 13, no. 4, pp. 1484–1497, Dec. 2012.
  • [77] Y. Zhao, H. Zhu, R. Liang, Q. Shen, S. Zhang, and K. Chen, “Seeing isn’t Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors,” in Proc. SIGSAC, London, UK, Nov. 2019, pp. 1989-2004.
  • [78] N. Papernot, P. D. McDaniel, I. J. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proc. AsiaCCS, Abu Dhabi, United Arab Emirates, Apr. 2017, pp. 506–519.
  • [79] Z. Kong, J. Guo, A. Li, and C. Liu, “PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving,” in Proc. CVPR, Seattle, WA, USA, Jun. 2020, pp. 14254–14263.
  • [80] M. Wicker and M. Kwiatkowska, “Robustness of 3D deep learning in an adversarial setting,” in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 11767–11775.
  • [81] C. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 77–85.
  • [82] D. Maturana and S. Scherer, “Voxnet: A 3D convolutional neural network for real-time object recognition,” in Proc. IROS, Hamburg, Germany, Sept. 2015, pp. 922–928.
  • [83] C. Xiang, C. R. Qi, and B. Li, “Generating 3D adversarial point clouds,” in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 9136–9144.
  • [84] S. H. Huang, N. Papernot, I. J. Goodfellow, Y. Duan, and P. Abbeel, “Adversarial attacks on neural network policies,” in Proc. ICLR, Toulon, France, Apr. 2017
  • [85] V. Mnih, A. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, “Asynchronous methods for deep reinforcement learning,” in Proc. ICML, New York City, NY, USA, Jun. 2016, pp. 1928–1937.
  • [86] J. Kos and D. Song, “Delving into adversarial attacks on deep policies,” in Proc. ICLR, Toulon, France, Nov. 2017.
  • [87] Y. Lin, Z. Hong, Y. Liao, M. Shih, M. Liu, and M. Sun, “Tactics of adversarial attack on deep reinforcement learning agents,” in Proc. ICLR, Toulon, France, Nov. 2017.
  • [88] Y. Liu, S. Ma, Y. Aafer, W. Lee, J.Zhai, W. Wang, and X. Zhang, “Trojaning attack on neural networks,” in Proc. NDSS, San Diego, California, USA, Feb. 2018.
  • [89] H. Rehman, A. Ekelhart, and R. Mayer, “Backdoor attacks in neural networks - A systematic evaluation on multiple traffic sign datasets,” in Proc. CD-MAKE, Canterbury, UK, Aug. 2019, pp. 285–300.
  • [90] S. Ding, Y. Tian, F. Xu, Q. Li, and S. Zhong, “Poisoning Attack on Deep Generative Models in Autonomous Driving,” in Proc. EAI SecureComm, Oct. 2019.
  • [91] F. Zhang, H. Kodituwakku, J. W. Hines, and J. Coble, “Multilayer data-driven cyber-attack detection system for industrial control systems based on network, system, and process data,” IEEE Trans. Industrial Informatics, vol. 15, no. 7, pp. 4362–4369, Jul. 2019.
  • [92] Q. Sun, K. Zhang, and Y. Shi, “Resilient model predictive control of cyber-physical systems under dos attacks,” IEEE Trans. Industrial Informatics, vol. 16, no. 7, pp. 4920–4927, Jul. 2020.
  • [93] Y. Shoukry, P. Martin, Y. Yona, S. N. Diggavi, and M. B. Srivastava, “Pycra: Physical challenge-response authentication for active sensors under spoofing attacks,” in Proc. SIGSAC, Denver, CO, USA, Oct. 2015, pp. 1004–1015.
  • [94] S. Mahmud, S. Shanker, and I. Hossain, “Secure software upload in an intelligent vehicle via wireless communication links,” in Proc. IV, Las Vegas, NV, USA, 2005, pp. 588–593.
  • [95] D. Nilsson and U. E. Larson, “Secure firmware updates over the air in intelligent vehicles,” in Proc. ICC Workshops, Beijing, China, 2008, pp. 380–384.
  • [96] N. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in Proc. SP, San Jose, CA, USA, May 2016, pp. 582–597.
  • [97] A. Kurakin, I. Goodfellow, S. Bengio, Y. Dong, F. Liao, M. Liang, and J. Wang, “Adversarial attacks and defences competition,” CoRR, vol. abs/1804.00097, 2018.
  • [98] X. Liu, M. Cheng, H. Zhang, and C. J. Hsieh, “Towards robust neural networks via random self-ensemble,” in Proc. ECCV, Munich, Germany, Nov. 2018, pp. 369–385.
  • [99] T. Pang, K. Xu, C. Du, N. Chen, and J. Zhu, “Improving adversarial robustness via promoting ensemble diversity,” in Proc. ICML, Long Beach, CA, USA, May 2019, pp. 4970–4979.
  • [100] Z. Yan, Y. Guo, and C. Zhang, “Deep Defense: Training DNNs with improved adversarial robustness,” in Proc. NeurIPS, Montréal, Canada, Dec. 2018, pp. 417–426.
  • [101] S. X. Gu and L. Rigazio, “Towards deep neural network architectures robust to adversarial examples,” in Proc. ICLR, San Diego, CA, USA, May 2015.
  • [102] M. Cissé, P. Bojanowski, E. Grave, Y. N. Dauphin, and N. Usunier, “Parseval networks: Improving robustness to adversarial examples,” in Proc. ICML, Sydney, NSW, Australia, Aug. 2017, vol. 70, pp. 854–863.
  • [103] M. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana, “Certified robustness to adversarial examples with differential privacy,” in Proc. SP, San Francisco, CA, USA, May 2019, pp. 656–672.
  • [104] A. Raghunathan, J. Steinhardt, and P. Liang, “Certified defenses against adversarial examples,” CoRR, vol. abs/1801.09344, 2018.
  • [105] E. Wong, and Z. Kolter, “Provable defenses against adversarial examples via the convex outer adversarial polytope,” in Proc. ICML, Stockholm, Sweden, Jul. 2018, pp. 5286–5295.
  • [106] Z. Zheng and P. Hong, “Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks,” in Proc. NeurIPS, Montréal, Canada, Dec. 2018, pp. 7924–7933.
  • [107] K. Lee, K. Lee, H. Lee, and J. Shin, “A simple unified framework for detecting out-of-distribution samples and adversarial attacks,” in Proc. NeurIPS, Montréal, Canada, Dec. 2018, pp. 7167–7177.
  • [108] W. Xu, D. Evans, and Y. Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” CoRR, vol. abs/1704.01155, 2017.
  • [109] C. Guo, M. Rana, M. Cissé, and L. V. D. Maaten, “Countering adversarial images using input transformations,” in Proc. ICLR, Vancouver, BC, Canada, Apr. 2018.
  • [110] P. Samangouei, M. Kabkab, and R. Chellappa, “Defense-gan: Protecting classifiers against adversarial attacks using generative models,” in Proc. ICLR, Vancouver, BC, Canada, Apr. 2018.
  • [111] G. Jin, S. Shen, D. Zhang, F. Dai, and Y. Zhang, “APE-GAN: Adversarial perturbation elimination with GAN,” in Proc. ICASSP, Brighton, United Kingdom, May 2019, pp. 3842–3846.
  • [112] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu, “Defense against adversarial attacks using high-level representation guided denoiser,” in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 1778–1787.
  • [113] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. Rep., Apr. 2009.
  • [114] T. Hastie and R. Tibshirani, “Discriminant analysis by gaussian mixtures,” J R Stat Soc Series B Stat Methodol, vol. 58, no. 1, pp. 155–176, 1996.
  • [115] Y. S. Gao, C. G. Xu, D. R. Wang, S. P. Chen, D. C. Ranasinghe, and S. Nepal, “STRIP: A defence against trojan attacks on deep neural networks,” in Proc. ACSAC, San Juan, PR, USA, Dec. 2019, pp. 113–125.
  • [116] B. Chen, W. Carvalho, N. Baracaldo, H. Ludwig, B. Edwards, T. Lee, I. Molloy, and B. Srivastava, “Detecting backdoor attacks on deep neural networks by activation clustering,” in Proc. AAAI Workshop, Honolulu, HI, USA, Jan. 2019.
  • [117] B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Zhao, “Neural cleanse: Identifying and mitigating backdoor attacks in neural networks,” in Proc. SP, San Francisco, CA, USA, May 2019, pp. 707–723.
  • [118] S. J. Oh, B. Schiele, and M. Fritz, “Towards reverse-engineering black-box neural networks,” in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Sept. 2019, pp. 121–144.
  • [119] L. Batina, S. Bhasin, D. Jap, and S. Picek, “CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel,” in Proc. USENIX, Santa Clara, CA, USA, Aug. 2019, pp. 515–532.
  • [120] H. Y. Zhang, Y. D. Yu, J. T. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan, “Theoretically principled trade-off between robustness and accuracy,” in Proc. ICML, Long Beach, California, USA, vol. 97, Jun. 2019, pp. 7472–7482.
  • [121] M. Lécuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana, “Certified robustness to adversarial examples with differential privacy,” in Proc. SP, San Francisco, CA, USA, May 2019, pp. 656–672.
  • [122] C. Murphy, G. E. Kaiser, and M. Arias, “An approach to software testing of machine learning applications,” in Proc. SEKE, Boston, Massachusetts, USA, Jul. 2007.
  • [123] X. W. Huang, M. Kwiatkowska, S. Wang, and M. Wu, “Safety verification of deep neural networks,” in Proc. CAV, Heidelberg, Germany, Jul. 2017, pp. 3–29.
  • [124] G. Li, K. Ota, M. Dong, J. Wu, and J. Li, “Desvig: Decentralized swift vigilance against adversarial attacks in industrial artificial intelligence systems,” IEEE Trans. Industrial Informatics, vol. 16, no. 5, pp. 3267–3277, May 2020.
  • [125] X. Zheng, C. Julien, R. M. Podorozhny, F. Cassez, and T. Rakotoarivelo, “Efficient and scalable runtime monitoring for cyber-physical system,” IEEE Systems Journal, vol. 12, no. 2, pp. 1667–1678, Jun. 2018.
  • [126] W. Lu, Y. Zhou, G. Wan, S. Hou, and S. Song, “L3-net: Towards learning based lidar localization for autonomous driving,” in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 6389–6398.
  • [127] W. Zhang and C. Xiao, “Pcan: 3d attention map learning using contextual information for point cloud based retrieval,” in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 12436–12445.
  • [128] X. Zheng, C. Julien, R. M. Podorozhny, F. Cassez, and T. Rakotoarivelo, “Efficient and scalable runtime monitoring for cyber-physical system,” IEEE Systems Journal, vol. 12, no. 2, pp. 1667–1678, Jun. 2018.