A Survey of Deep Learning Applications to Autonomous Vehicle Control

12/23/2019 ∙ by Sampo Kuutti, et al. ∙ University of Surrey ∙ Jaguar Land Rover

Designing a controller for autonomous vehicles capable of providing adequate performance in all driving scenarios is challenging due to the highly complex environment and inability to test the system in the wide variety of scenarios which it may encounter after deployment. However, deep learning methods have shown great promise in not only providing excellent performance for complex and non-linear control problems, but also in generalising previously learned rules to new scenarios. For these reasons, the use of deep learning for vehicle control is becoming increasingly popular. Although important advancements have been achieved in this field, these works have not been fully summarised. This paper surveys a wide range of research works reported in the literature which aim to control a vehicle through deep learning methods. Although there exists overlap between control and perception, the focus of this paper is on vehicle control, rather than the wider perception problem which includes tasks such as semantic segmentation and object detection. The paper identifies the strengths and limitations of available deep learning methods through comparative analysis and discusses the research challenges in terms of computation, architecture selection, goal specification, generalisation, verification and validation, as well as safety. Overall, this survey brings timely and topical information to a rapidly evolving field relevant to intelligent transportation systems.

I Introduction

In 2016, traffic accidents resulted in 37,000 fatalities in the United States [123] and 25,500 fatalities in the European Union [54]. With the steady increase in the number of vehicles on the road, traffic congestion, pollution, and road safety are becoming critical issues [211]. Autonomous vehicles have gained significant interest as a solution to these challenges [50, 187, 193, 116]. For instance, 90% of all car accidents are estimated to be caused by human errors, while only 2% are caused by vehicle failures [172]. Further benefits from autonomous vehicles in terms of better fuel economy [105, 135], reduced pollution, car sharing [151], increased productivity, and improved traffic flow [40] have also been reported.

Some of the earliest autonomous vehicle projects were presented in the 1980s by Carnegie Mellon University for driving in structured environments [184] and the University of Bundeswehr Munich for highway driving [42]. Since then, projects such as the DARPA Grand Challenges [186, 25] have continued to drive forward research in autonomous vehicles. Outside of academia, car manufacturers and tech companies have also carried out research to develop their own autonomous vehicles. This has led to multiple Advanced Driver Assistance Systems such as Adaptive Cruise Control (ACC), Lane Keeping Assistance, and Lane Departure Warning technologies, which provide modern vehicles with partial autonomy. These technologies not only increase the safety of modern vehicles and make driving easier but also pave the way for fully autonomous vehicles which do not require any human intervention.

Early autonomous vehicle systems were heavily reliant on accurate sensory data, utilising multi-sensor setups and expensive sensors such as LIDAR to provide accurate environment perception. Control of these autonomous vehicles was handled via rule-based controllers, where the parameters are set by the developers and hand-tuned after simulation and field testing [94, 130, 133]. The downside of this approach is the time intensive hand-tuning of parameters [88] and the difficulty of such rule-based controllers to generalise to new scenarios [170]. Also, the highly non-linear nature of driving means that control methods based on linearisation of the vehicle model or other algebraic analytical solutions are often infeasible or do not scale well [226, 41]. Recently, deep learning has gained attention due to the numerous state-of-the-art results it has achieved in fields such as image classification and speech recognition [87, 69, 177]. This has led to increasing use of deep learning in autonomous vehicle applications, including planning and decision making [166, 106, 199, 27, 43], perception [229, 195, 77, 16, 222], as well as mapping and localisation [104, 85, 91]. The performance of Convolutional Neural Networks (CNNs) with raw camera inputs has the potential to reduce the number of sensors used by autonomous vehicles. This has led to some organisations investigating autonomous vehicles without expensive sensors such as LIDAR, instead employing extensive use of deep learning for scene understanding, object recognition, semantic segmentation, and motion estimation. The strong results of deep learning in these perception problems have also sparked interest in using Deep Neural Networks (DNNs) to produce control actions in autonomous vehicles. Indeed, autonomous vehicle control often has a strong link to perception, as many techniques use CNNs to predict control actions based on images of the scene, without any separate perception module, thereby removing the separation between the perception and control layer.

Deep learning offers several benefits for vehicle control. The ability to self-optimise its behaviour from data and adapt to new scenarios makes deep learning well suited to control problems in complex and dynamic environments [98, 100, 145]. Rather than having to tune each parameter iteratively while trying to maintain performance in all foreseeable scenarios, deep learning enables developers to describe the desired behaviour and teach the system to perform well and generalise to new environments through learning [113, 6, 182, 96, 163]. For these reasons, there has been significant interest in deep learning for autonomous vehicle control in recent years. There are a variety of different sensor configurations: whilst some researchers aim to control the vehicle with camera vision only, others utilise lower-dimensional data from ranging sensors, and some use multi-sensor setups. There are also some differences in terms of the control objective: some formulate the system as a high-level controller which provides, for example, desired acceleration, which is then realised through a low-level controller, often using classical control techniques. Others aim to learn driving end-to-end, mapping observations directly to low-level vehicle control interface commands. Although there has been a large variety of different approaches used to tackle autonomous vehicle control via deep learning, currently there is a lack of analysis and comparison between these different techniques. This manuscript aims to fill this gap in the literature by reviewing the deep learning approaches to vehicle control and analysing their performance. Furthermore, the manuscript will evaluate the current state of the field, identify the main research challenges, and make recommendations for the direction of future research.

The remainder of this manuscript is structured as follows. Section II provides a brief introduction to deep learning methods and approaches relevant to autonomous vehicles. Section III discusses recent approaches to autonomous vehicle control using deep learning, broken into three categories: (A) lateral, (B) longitudinal, and (C) simultaneous lateral and longitudinal control. Section IV presents the main research challenges arising from the previous section’s discussion. Finally, Section V summarises the current state of the field and provides recommendations for the direction of future research.

II Review of Deep Learning

In this section, we briefly introduce the deep learning techniques and approaches related to the works discussed in later sections. A brief summary of learning strategies, datasets, and tools for deep learning in autonomous vehicles is given. Since a full description of all deep learning algorithms used in autonomous vehicles is beyond the scope of this manuscript, we refer the interested reader to the insightful texts on this topic in [59, 128, 96, 163, 178, 7, 101].

II-A Supervised Learning

In deep learning, the objective is to update the weights of a deep neural network during training, such that the model learns to represent a useful function for its task. There are numerous learning algorithms available, but most algorithms described in this manuscript can be classified as supervised or reinforcement learning. Supervised learning utilises labelled data, in which an expert demonstrates how to perform the task at hand. Each data point in the set includes an observation-action pair, which the neural network then learns to model. During training, the network predicts an action for each observation and compares it to the action labelled by the expert. The advantages of supervised learning are fast training convergence and the fact that the developer does not need to specify how the task should be performed. While the simplicity of the supervised approach is appealing, the approach has some disadvantages. Firstly, during training the network makes predictions on the control action in an offline framework, where the network’s predictions do not affect the states seen during training. However, once deployed, the network’s actions will affect future states, breaching the i.i.d. assumption made by most learning algorithms [23, 152, 38]. This leads to a distribution shift between training and operation, which can lead to the network making mistakes due to the unfamiliar state distributions seen during operation. Secondly, learning a behaviour from demonstration leaves the network susceptible to biases in the data set. For complex tasks, such as autonomous driving, the diversity of the data set should be ensured if the aim is to train a generalisable model which can drive in all different environments [189, 64].
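As a concrete illustration of the training loop described above, a minimal behaviour-cloning sketch in PyTorch might look as follows; the observation dimensions, network, and hyperparameters are placeholders rather than any specific architecture from the surveyed works:

```python
import torch
import torch.nn as nn

# Hypothetical set of observation-action pairs demonstrated by an expert driver;
# dimensions and network below are illustrative placeholders only.
observations = torch.randn(1000, 16)     # e.g. extracted image features or sensor states
expert_actions = torch.randn(1000, 1)    # e.g. steering angles labelled by the expert

policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    predicted_actions = policy(observations)           # network predicts an action per observation
    loss = loss_fn(predicted_actions, expert_actions)  # error w.r.t. the expert's labelled action
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```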

II-B Reinforcement Learning

Reinforcement learning enables the model to learn to perform the task through trial and error. Reinforcement learning can be modelled as a Markov decision process, formally described as a tuple (S, A, P, R), where S denotes the state space, A represents the action space of possible actions, P denotes the state transition probability model, and R represents the reward function. At each time-step t, the agent observes a set of states s_t, takes an action a_t from the possible actions A, and then the environment transitions according to P. The agent then observes a new set of states s_t+1 and receives a reward r_t. The aim of the agent is to learn a policy π mapping observations to actions such that the accumulated rewards are maximised. Therefore, the agent can learn from its own actions through interactions with the environment and receives an estimate of its performance through the reward function. The advantage of this approach is that no labelled data sets are required and a behaviour which generalises well to new scenarios can be learned through reinforcement learning. The downside of reinforcement learning is its low sample efficiency [206], which means converging to an optimal policy can be slow, thereby requiring time-intensive simulations or costly real-world training [178].
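The interaction loop described above can be sketched as follows; the environment object and discrete action set are hypothetical placeholders used only to illustrate the (S, A, P, R) formulation:

```python
import random

# Sketch of the agent-environment loop: 'env' is a hypothetical object where
# reset() returns an initial state and step(action) returns (next_state, reward, done)
# according to the transition model P and reward function R.
def run_episode(env, policy, gamma=0.99):
    state = env.reset()
    episode_return, discount, done = 0.0, 1.0, False
    while not done:
        action = policy(state)                   # policy maps observations to actions
        state, reward, done = env.step(action)   # environment transitions according to P
        episode_return += discount * reward      # accumulate (discounted) rewards
        discount *= gamma
    return episode_return

# Placeholder policy choosing randomly from a discrete action set A = {0, 1, 2}.
def random_policy(state):
    return random.choice([0, 1, 2])
```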

Reinforcement learning algorithms can be divided into three classes: value-based, policy gradient, and actor-critic algorithms [86]. Value-based algorithms (e.g. Q-learning [207]) estimate the value function V(s), which represents the value (expected reward) of being in a given state. If the state transition dynamics P are known, the policy can choose actions which bring it to states such that the expected rewards are maximised. However, in most reinforcement learning settings the environment model is not known. Therefore, the state-action value or quality function Q(s, a), which estimates the value of a given action in a given state, is used instead. The optimal policy is then found by greedily maximising the state-action value function Q(s, a). The disadvantage of this approach is that there is no guarantee on the optimality of the learned policy [61, 190]. Policy gradient algorithms (e.g. REINFORCE [210]) do not estimate a value function, but instead parametrise the policy and then update the parameters to maximise the expected rewards. This is done by constructing a loss function and estimating a gradient of the loss function with respect to the network parameters. During training, the network parameters are then updated in the direction of the policy gradient. The main disadvantage of this approach is the high variance in the estimated policy gradients [179, 148, 171]. The third class, actor-critic algorithms (e.g. A3C [112]), are hybrid methods which combine the use of a value function with a parametrised policy function. This creates a trade-off between the disadvantages of the high variance of policy gradients and the bias of value-based methods [7, 62, 165]. Another separating factor between different reinforcement learning algorithms is the type of reward function used. The reward function can be either sparse or dense. In a sparse reward function, the agent only receives a reward following specific events, such as success or failure in its task. The benefit of this approach is that success (e.g. reaching a goal location) or failure (e.g. colliding with another object) is easy to define for most tasks. However, this can further exacerbate the sample complexity issue in reinforcement learning, since the agent receives a reward relatively rarely, resulting in slow convergence. On the other hand, in a dense reward function the agent is given a reward at every time-step based on the state it is in. This means that the agent receives a continuous learning signal, estimating how useful the chosen actions were in their respective states.
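To make the distinction concrete, the following sketch shows what sparse and dense reward functions might look like for a lane-keeping task; the state fields and weights are illustrative assumptions, not taken from any of the surveyed works:

```python
# Illustrative sparse and dense reward functions for a lane-keeping task.
def sparse_reward(state):
    if state["collision"]:
        return -1.0          # reward only on failure...
    if state["reached_goal"]:
        return 1.0           # ...or success; zero everywhere else
    return 0.0

def dense_reward(state):
    # Reward at every time-step, shaped by lane-centre offset and heading error,
    # giving the agent a continuous learning signal.
    reward = -abs(state["lane_offset_m"]) - 0.1 * abs(state["heading_error_rad"])
    if state["collision"]:
        reward -= 10.0
    return reward
```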

II-C Datasets and Tools for Deep Learning

The rapid progress in the implementation of deep learning systems on autonomous vehicles has led to the availability of diverse deep learning data sets for autonomous driving and perception. Perhaps the most well known data set for autonomous driving is the KITTI benchmark suite [56], [57], which includes multiple data sets for evaluation of stereo vision, optical flow, scene flow, simultaneous localisation and mapping, object detection and tracking, road detection and semantic segmentation. Other useful data sets include the Waymo Open [208], Oxford Robotcar [107], ApolloScape [72], Udacity [192], ETH Pedestrian [51], and Caltech Pedestrian [46] data sets. For a more complete overview of available autonomous driving data sets, see the survey by Yin & Berger [218]. Besides public data sets, there are also a number of other tools available for the development of deep learning in autonomous vehicles. The current leading Artificial Intelligence (AI) platform for autonomous driving is the NVIDIA Drive PX2 [129], which provides two Tegra system-on-chips (SoC) and two Pascal graphics processors with dedicated memory and specialised support for DNN calculations. For more diverse tasks, the MobilEye EyeQ5 [114] provides four fully programmable accelerators, each optimised for a different family of machine learning algorithms. This diversity can be useful in systems where different families of deep learning algorithms have been used. On the other hand, Altera’s Cyclone V [74] SoC provides a driving solution optimised for sensor fusion. For a more in-depth review of autonomous driving hardware platforms, see the discussion by Liu et al. [103].

III Deep Learning Applications to Vehicle Control

The motion control of a vehicle can be broadly divided into two tasks: lateral motion of the vehicle is controlled by steering, whilst longitudinal motion is controlled through manipulating the gas and brake pedals of the vehicle. Lateral control systems aim to control the vehicle’s position in the lane, as well as carry out other lateral actions such as lane changes or collision avoidance manoeuvres. In the deep learning domain, this is typically achieved by capturing the environment using the images from on-board cameras as the input to the neural network. Longitudinal control manages the acceleration of the vehicle such that it maintains the desired velocity on the road, keeps a safe distance from the preceding vehicle, and avoids rear-end collisions. While lateral control is typically achieved through vision, longitudinal control relies on measurements of relative velocity and distance to the preceding/following vehicles. This means that ranging sensors such as RADAR or LIDAR are more commonly used in longitudinal control systems. The majority of the current research projects have chosen to focus on only one of these actions, thereby simplifying the control problem. Moreover, both types of control systems have different challenges and differ in terms of implementation (e.g. sensor setups, test/use cases). For these reasons, this section is split into three subsections, with the first two subsections discussing lateral and longitudinal control systems independently, and the third subsection focusing on techniques which have attempted to combine both longitudinal and lateral control.

III-A Lateral Control Systems

One of the earliest applications of artificial neural networks to the vehicle control problem was the Autonomous Land Vehicle in a Neural Network (ALVINN) system by Pomerleau in 1989, which was first described in [138] and further extended in [137]. ALVINN utilised a feedforward neural network, with a 30x32-neuron input layer, one hidden layer with four neurons, and a 30-neuron output layer in which each neuron represents a possible discrete steering action. The system used the input from a camera together with the steering commands of the human driver as training data. To increase the amount and variety of training data without recording any additional footage, the author employed data augmentation: each image was shifted and rotated so as to make the vehicle appear to be situated at a different lateral position on the road. Additionally, to avoid bias towards recent inputs (e.g. if a training session ends in a long right hand turn, the system could be biased to turn right more often) a buffering solution was used where previously encountered training patterns were retained in the buffer. The buffer contained 4 patterns of previous data at any time, which were periodically replaced such that the patterns in the buffer had no right or left bias on average. Both the image shifting and buffering solutions were shown to significantly improve the system performance. The system was trained on a 150m stretch of road, after which it was tested on a separate stretch of road at speeds ranging from 5 to 55 mph, allowing steering without intervention for distances of up to 22 miles. The system was shown to remain, on average, within 1.6 cm of the centre of the road, compared to 4.0 cm under human control. This demonstrated that neural networks can learn to steer a vehicle from recorded data.
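For illustration, the layer sizes described above can be sketched in PyTorch as follows; this is a modern reimplementation of the dimensions only, and does not reproduce ALVINN's original activations or training procedure:

```python
import torch
import torch.nn as nn

# Sketch of the network dimensions described above: a 30x32 input retina,
# a four-neuron hidden layer, and 30 output units for discrete steering actions.
alvinn_like = nn.Sequential(
    nn.Flatten(),
    nn.Linear(30 * 32, 4), nn.Sigmoid(),   # hidden layer with four neurons
    nn.Linear(4, 30),                      # one output unit per steering direction
)
steering_scores = alvinn_like(torch.randn(1, 30, 32))
chosen_action = steering_scores.argmax(dim=1)   # select the discrete steering bin
```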

The first to suggest reinforcement learning for vehicle steering was Yu [219], who proposed a road-following system based on Pomerleau’s work, using reinforcement learning to design the controller. The advantage of this approach was the ability to learn from previous experiences to drive in new environments and to continuously improve its road-following ability through online learning. Combining supervised and reinforcement learning, Moriarty et al. [120] developed a lane-selection strategy for a highway environment. The results showed that the vehicles with learned controllers maintained speeds close to the desired speed and performed fewer lane changes. Moreover, the learned control strategy resulted in better traffic flow than manually constructed controllers.

The neural networks utilised in the aforementioned early works were significantly smaller than what is feasible with today’s technology [21]. Indeed, while neural networks are hardly new, research interest and adoption in various applications have exploded in recent years due to increased computing power, especially through parallel graphics processing units (GPUs), which can significantly reduce training time and improve performance. Moreover, the availability of large public data sets and hardware solutions optimised for deep learning has made training and validation of neural network systems easier. Overall, these recent advancements have enabled better performance through more complex systems with vastly increased amounts of training data and episodes.

Utilising deeper models with CNNs, Muller et al. [122] trained a sub-scale radio-controlled car to navigate off-road in the DARPA Autonomous VEhicle (DAVE) project. The model was trained with training data collected from two forward-facing cameras while a human was controlling the vehicle. Using a 6-layer CNN, the model learned to navigate around obstacles when driving at speeds of 2m/s. Building on the approach of DAVE, NVIDIA utilised a CNN to create an end-to-end control system for steering of a vehicle through supervised learning [21]. The system is capable of self-optimising its performance and detecting useful environmental features (e.g. detection of roads and lanes). The CNN used (see Fig. 1) can learn the steering policy without explicit manual decomposition of the environmental features, path planning, or control actions, using only a small amount of training data. The training data set consisted of recorded camera footage and steering signals from a human-driven vehicle. The CNN consisted of 9 layers, including a normalisation layer, 5 convolutional layers and 3 fully connected layers, with a total of 27 million connections and 250,000 parameters. This method achieved 98% autonomy in initial testing and 100% autonomy during a 10-mile highway test, measured based on the number of interventions required over a given test time. However, it should be noted that this measure does not include lane changes or turns, and therefore only evaluates the system’s ability to stay in its current lane.

Fig. 1: Convolutional Neural Network utilised in the NVIDIA end-to-end steering system. (Figure recreated based on [21]).
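A PilotNet-style network along the lines of Fig. 1 can be sketched in PyTorch as follows; the layer counts follow the description in [21], but the input resolution, activation functions, and filter sizes here are assumptions and may differ from NVIDIA's implementation:

```python
import torch
import torch.nn as nn

class PilotNetStyle(nn.Module):
    """Sketch of an end-to-end steering CNN in the spirit of [21]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.BatchNorm2d(3),                        # stands in for the normalisation layer
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),   # assumes a 66x200 input image
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                         # single continuous steering command
        )

    def forward(self, x):
        return self.regressor(self.features(x))

steering = PilotNetStyle()(torch.randn(1, 3, 66, 200))  # one 66x200 camera frame
```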

A further example of supervised learning for steering of an autonomous vehicle is the work by Rausch et al. [145], where supervised learning was employed to create an end-to-end lateral vehicle controller. Rausch et al. utilised a CNN with four hidden layers: three convolutional layers and one fully connected layer. The training data was the steering angle and front-facing camera footage provided by a human steering a vehicle in a CarSim [109] simulation, with imaging captured at 12 frames per second (FPS) at a resolution of 1912x1036. The data was collected from a 15-minute simulation run, resulting in a total of 10,800 frames. Inappropriate frames caused by bad driving behaviour or graphic errors (e.g. due to a fault in the simulator) were removed from the training data manually. Then, the neural network was trained with three different optimisation algorithms to update the network weights, namely Stochastic Gradient Descent (SGD) [24], Adam [80], and Nesterov’s Accelerated Gradient (NAG) [175]. During training, Adam resulted in the best loss convergence, while during the evaluation, the NAG-trained network performed best in terms of keeping the vehicle in the centre of the lane. This suggests that convergence of the loss function is not necessarily representative of a well-trained neural network. The neural networks were shown to learn good estimations of the human driver’s steering policy; however, by comparing the steering angles, it could be seen that the steering signal of the neural networks included noisy behaviour. A potential reason is that the system estimates the required steering angle at each frame, with no context regarding previous states or actions. This results in the steering signals between subsequent time steps varying significantly from each other, causing noisy output. This could be resolved by utilising a Recurrent Neural Network (RNN) to provide memory of previous inputs and outputs for the system, giving it temporal context.

Introducing temporal context to a deep learning steering model, Eraqi et al. [49] utilised a Convolutional Long Short-Term Memory Recurrent Neural Network (C-LSTM) to learn to steer a vehicle based on visual and dynamic temporal dependencies. The network was trained to predict steering angles based on image inputs, and was then compared to a simple CNN architecture used in [153]. Experimental results showed improved accuracy and smoother steering variations when using the C-LSTM network. However, the model was only evaluated offline by comparing the predicted control action against ground truth, which does not necessarily give an accurate evaluation of driving quality [33]. Live testing, where the model can control the vehicle to test the learned driving behaviour, should be used instead.
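The general idea of adding temporal context can be sketched as a per-frame CNN feature extractor followed by an LSTM over a short frame sequence; this is a generic illustration rather than the exact C-LSTM architecture of [49]:

```python
import torch
import torch.nn as nn

class CnnLstmSteering(nn.Module):
    """Generic CNN + LSTM steering model: per-frame features, recurrent fusion."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame feature extractor
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)               # steering angle

    def forward(self, frames):                         # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                   # predict from the last time-step

angles = CnnLstmSteering()(torch.randn(2, 5, 3, 64, 64))  # two 5-frame clips
```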

Lateral control techniques for lane change manoeuvres have also been presented. Wang et al. [205] used reinforcement learning to train an agent to execute lane change manoeuvres using a Deep Q-Network (DQN). The network uses host vehicle speed, longitudinal acceleration, position, yaw angle, target lane, lane width, and road curvature to provide a continuous value for the desired yaw acceleration. Since standard Q-learning does not support continuous action values, a modified Q-learning approach was used in which the Q-function was a quadratic function approximated by three single-hidden-layer feedforward neural networks. The proposed approach was tested in a simulated highway environment, with preliminary results showing effective lane change manoeuvres learned by the agent.

A summary of the research works covered in this section can be seen in Table I. Due to the advancements mentioned previously, the recent trend has been to move to deeper models with increased amounts of training data. Recent works have also investigated introducing temporal cues into the learning model, but this suffers from instability in training. Moreover, many of the models developed so far have been trained and evaluated in relatively simple environments. For instance, most researchers have focused on lateral control for a single task; models trained for lane keeping, for example, incorporate no decision-making for lane changes or turns onto different roads. This opens possible avenues for future research where multiple actions could be carried out by the same DNN. It should also be noted that the majority of these works were trained and evaluated in simulated environments, which further simplifies the task and would require further tests to validate their real-world performance. Nevertheless, there have been important developments in this field and these results show great promise for the use of deep learning for autonomous vehicle control.

Ref. | Learning Strategy | Network | Inputs | Outputs | Pros | Cons | Experiments
[138], [137] | Supervised Learning | Feedforward network with 1 hidden layer | Camera image | Discretised steering angles | First promising results for neural network-based vehicle controllers | Simple network and discretised steering angle outputs degrade performance | Real & Simulation
[219] | Reinforcement Learning | Feedforward network with 1 hidden layer | Camera image | Discretised steering angles | Supports online learning | Simple network and discretised steering angle outputs degrade performance | Simulation
[122] | Supervised Learning | 6-layer CNN | Camera images | Steering angle | Robust to environmental diversity | Large errors, trained and tested on a sub-scale vehicle model | Real world (sub-scale vehicle)
[21] | Supervised Learning | 9-layer CNN | Camera image | Steering angle values | High level of autonomy during field tests | Only considers lane following, requires interventions by the driver | Real world & Simulated
[145] | Supervised Learning | 8-layer CNN | Camera image | Steering angle values | Learns from minimal training data | Noisy behaviour of the steering signal | Simulation
[49] | Supervised Learning | C-LSTM | Camera image | Steering angle values | Considers temporal dependencies | RNNs can be difficult to train, lack of live testing | No live testing, tested on data set image examples only
[205] | Reinforcement Learning | 3 feedforward networks | Host vehicle states and road geometry | Vehicle yaw acceleration | Executes lane changes successfully | Limited testing or results, lack of comparison to other lane change algorithms | Simulation
TABLE I: A Comparison of Lateral Control Techniques.

III-B Longitudinal Control Systems

Machine learning methods have also shown promise in applications to vehicle longitudinal control, such as ACC design. The ACC can be described as an optimal tracking control problem for a complex nonlinear system [194, 117] and therefore is poorly suited to control systems based on linear vehicle models or other algebraic analytical solutions [176]. Such traditional control systems provide poor adaptability in complex environments and do not conform to the driver’s habits [30]. The strong nonlinear nature of the system makes it difficult to build a vehicle model without significant uncertainty, limiting the effectiveness of model-based solutions. However, neural networks have shown great potential for optimising nonlinear, high-dimensional control systems [98, 100, 202, 136, 159, 201, 147, 223]. For instance, reinforcement learning can learn an optimal control policy through interaction with the environment, without knowledge of the system model [178]. Furthermore, the strong adaptive capacity and model-free capability of reinforcement learning makes it an attractive solution for ACC design. In early works, Dai et al. [37] proposed a fuzzy reinforcement learning method for longitudinal control of an autonomous vehicle. The method combines a Q estimator network (QEN) with a Takagi-Sugeno-type Fuzzy Inference System (FIS). The QEN is used to estimate the optimal action value function whilst the FIS gets the control output based on the estimated action value function. The described approach was evaluated in a simulation of a car-following scenario where the lead vehicle varies its velocity over time with a maximum episode duration of 80s. The controller was shown to be able to successfully drive the vehicle without failing after 68 trials. However, the reward function of the proposed approach by Dai et al. is only based on the spacing between the lead and the following vehicle. The reward function is the key to a successful reinforcement learning approach as it is the means by which the developer indicates the desirability of being in any given state. Therefore, the reward function needs to accurately capture the task to be performed and the manner in which it should be completed. For longitudinal control, the reward function should motivate the agent to adopt a safe and efficient driving strategy. For these reasons, a reward function with only one parameter such as inter-vehicle spacing may not be sufficient in real-time applications.

There are several works in which the use of multi-objective reward functions have been explored. For example, Desjardins & Chaib-Draa [41] used a multi-objective reward function based on time headway (distance in time from the lead vehicle) and time headway derivative. The agent was encouraged through the reward function to keep a 2s time headway to the lead vehicle, and the time headway derivative provided information regarding whether the vehicle is moving closer to or farther from the lead vehicle, and allowed it to adjust its driving strategy accordingly. Taking the time headway derivative into consideration in the reward function encourages the agent to choose actions which help it progress toward the desired state (ideal time headway). The authors used this reward function in a policy-gradient method for a Cooperative Adaptive Cruise Control (CACC) system. The neural network architecture chosen had two inputs, a single hidden layer of 20 neurons, and an output layer with 3 discrete actions (brake, accelerate, do nothing). In the learning process, an average of over 2.2 million iterations were obtained over ten learning simulations. The chosen method was shown to be efficient in CACC, providing average time headway errors of 0.039s in an emergency braking scenario. While the magnitude of the time headway errors remain small, it should be noted that the velocity profile of the subject vehicle showed oscillatory behaviour. This would make the system uncomfortable for the passengers as well as pose a potential safety risk. Potential solutions for this could include utilising continuous action values, the use of RNNs, or negative rewards for changes in acceleration to help smooth the velocity profile of the vehicle. Similarly, Sun [176] proposed a CACC system based on rewards from time headway and time headway derivative in a Q-learning algorithm. This approach was shown to reduce the learning time of the neural network. Over one hundred learning simulations, the best performing policy (the policy which obtained the highest reward) was chosen for evaluation. The algorithm was evaluated in a simulation of a stop-and-go environment in which the lead vehicle accelerated and decelerated periodically. The agent was shown to provide adequate performance in a platoon scenario. However, whilst such multi-objective reward functions are an improvement over single objective reward functions such as the one proposed by Dai et al. [37], this reward function does not consider passenger comfort which could lead to harsh accelerations or decelerations.
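A multi-objective reward of this kind can be sketched as follows; the weights and the 2s target headway are illustrative assumptions in the spirit of [41], not the authors' exact formulation:

```python
def headway_reward(headway_s, prev_headway_s, target_s=2.0):
    """Reward shaped by time headway and its trend (illustrative weights only)."""
    error = abs(headway_s - target_s)
    prev_error = abs(prev_headway_s - target_s)
    error_term = -error                                  # penalise deviation from the target headway
    trend_term = 0.5 if error < prev_error else -0.5     # headway-derivative information: reward
    return error_term + trend_term                       # progress towards the desired state
```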

Huang et al. [73] presented a Parameterised Batch Actor-Critic (PBAC) reinforcement learning algorithm for longitudinal control of autonomous vehicles. A multi-objective reward function was designed to reward the algorithm for tracking precision and driving smoothness. The method was validated by field experiments in various driving environments (e.g. flat, slippery, sloping, etc.) and the results suggested the method can track time-varying speeds more precisely than traditional Proportional-Integral (PI) or Kernel-based Least Square Policy Iteration (KLSPI) controllers trained with reinforcement learning [216, 204]. This was attributed to lower sensitivity to noise in speed and acceleration measurements. Moreover, smooth driving was achieved using the proposed method. The addition of driving smoothness in the reward function makes these systems more comfortable for passengers. However, the method was evaluated in an environment without adjacent vehicles or other obstacles. This allowed the authors to not consider safety parameters in the reward function, which leaves the algorithm susceptible to crashes in environments with other vehicles present. Therefore, additional terms for safety would be required in the reward function to ensure safe behaviour of the autonomous vehicle.

One such reward function was proposed by Chae et al. [29], who developed an autonomous braking system for collision avoidance based on a DQN approach. The reward function balances two conflicting objectives: avoiding collision and getting out of high risk situations. To speed up convergence, a replay memory was used to store a number of episodes, of which some were chosen randomly to train the network. Additionally, a ’trauma memory’ of rare critical events (e.g. collision) was used to improve stability and make the agent more reliable. The system was evaluated in situations where the vehicle had to avoid collision with a pedestrian, using various Time-to-Collision (TTC) values with 10,000 tests for each TTC value. It was shown that for TTC values above 1.5s, collisions were avoided every time, whereas at 0.9s (the lowest TTC value used) the collision rate was as high as 61.29%. Additionally, the system was evaluated in a test procedure specified by the Euro NCAP test protocol (CVFA and CVNA tests [52]) and passed these tests without collision. Therefore, the system was considered to exhibit desirable and consistent brake control behaviour. In addition, Chen et al. [30] presented a personalised ACC which can learn from human demonstration. The proposed algorithm is based on Q-learning, with a reward function based on distance to the front vehicle, vehicle speed, and acceleration. A feedforward artificial neural network is used to estimate the Q-function and calculate the desired velocity, which is then converted to low-level control commands by a Proportional Integral Derivative (PID) controller. The neural network used to estimate the Q-function consists of an input layer with 5 nodes, a hidden layer with 3 nodes, and an output layer with 1 node which predicts the desired velocity. The performance of the system was evaluated based on comfort and driving smoothness in simulation with different velocities and desired inter-vehicle clearances. The system was shown to provide better performance when compared to traditional ACC approaches. Similarly, Zhao et al. [227] proposed a personalised ACC approach which considers safety and comfort, as well as personalised driving styles. The reward function considers the driver’s habits, passenger comfort, and safety in an effort to find a good tradeoff between safety and comfort. The proposed approach uses a Model-free Optimal Control (MFOC) algorithm based on an actor-critic neural network structure. Since the algorithm is optimised to drive in a more human-like fashion, the human driver is more likely to trust the system and continue using it. For this purpose, the network would also be capable of learning from the human driver when the cruise control feature was switched off to better tune its parameters and to adopt a driving strategy based on the owner’s driving habits. The proposed algorithm was tested in a simulation under various environments and was shown to perform better than PID and Linear Quadratic Regulator (LQR) based controllers. For instance, in the emergency braking test scenario shown in Fig. 2, the MFOC maintained a safer clearance compared to PID, while the LQR failed the test by causing a rear-end collision. However, while conforming to individual driving habits can be useful to ensure the user feels safe and comfortable in the car, strategies for mitigating the negative effects of learning bad driving habits should also be considered to ensure the long-term reliability and safety of the system.
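The hierarchical structure used in several of these works, where a learned policy outputs a desired velocity which a classical controller then tracks, can be sketched as follows; the PID gains and interfaces are illustrative and not the specific controller used in [30]:

```python
class PidController:
    """Simple PID that converts a desired velocity into a throttle/brake command."""
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, desired_velocity, measured_velocity):
        error = desired_velocity - measured_velocity
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        command = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, command))   # >0 throttle, <0 brake (normalised)

# The learned policy would supply desired_velocity here; the PID then realises it
# through low-level pedal commands.
pid = PidController()
pedal = pid.step(desired_velocity=15.0, measured_velocity=13.2)
```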

Fig. 2: MFOC Controller compared to PID and LQR controllers in an emergency braking scenario. (a) Clearance between the lead and follower vehicle. (b) Velocity profiles of the lead vehicle and the three controllers [227].

Reinforcement learning has proven to be an effective approach for vehicle longitudinal control, as shown by the discussion above. However, the main drawback for reinforcement learning is the time-intensive training [178, 78]. In contrast, supervised learning methods simplify the learning process with the use of prior knowledge of the supervisor, but lack the level of adaptation that makes reinforcement learning attractive to complex decision-making systems such as autonomous driving. For these reasons, there are multiple examples in the literature that combine reinforcement and supervised learning to exploit the advantages of both approaches; reinforcement learning allows for self-adaptation in new and complex environments whilst the prior knowledge of supervised learning speeds up the learning process. For example, Zhao et al. [226, 225, 200] introduced a supervised reinforcement learning algorithm for an ACC system. By utilising actor-critic methods, the authors propose a novel supervised actor-critic (SAC) learning scheme, which is then implemented with feed-forward neural networks into a hierarchical acceleration controller. The proposed approach was evaluated in a simulation for an emergency braking scenario. The network was trained for emergency braking in dry conditions, whilst it was evaluated in both dry and wet road conditions and results were compared to the performance of a PID controller. The simulation results demonstrated that the SAC algorithm has superior performance compared to that of the PID controller as well as a supervised learning based controller (without reinforcement learning), and can adapt to changing road conditions. This shows the benefits of combining supervised learning with reinforcement learning to leverage the combined advantages of both methods. Pre-training the network via supervised learning helps reduce the training time of reinforcement learning and improves the convergence of the algorithm, both of which are common problems in reinforcement learning algorithms. Meanwhile, by exploring different actions through trial and error, reinforcement learning improves the performance beyond what supervised learning can provide. Also, the authors stated that using an actor-critic network architecture was beneficial as the evaluation of actions by the critic boosts the system’s performance in critical scenarios such as emergency braking.

A summary of the longitudinal control methods can be seen in Table II. In contrast to lateral control systems, vision-based inputs are not generally used for longitudinal control. Instead, inputs from ranging sensors (e.g. RADAR, LIDAR) and host vehicle states are more commonly used. These lower dimensional inputs (e.g. time headway or relative distance) can then easily be used to define a reward function for reinforcement learning. The second major difference between lateral and longitudinal control algorithms is the choice of learning strategies. While lateral control techniques favour supervised learning techniques trained on labelled datasets, longitudinal control techniques favour reinforcement learning methods which learn through interaction with the environment. However, as seen in this section, the reward function in reinforcement learning needs to be carefully designed. Safety, performance, and comfort all need to be considered. Poorly designed reward functions result in poor performance or the model not converging. Another challenge with reinforcement learning algorithms is the trade-off between exploration and exploitation. During training, the agent must take random actions to explore the environment. However, to perform well in its task the agent should exploit its knowledge to find the optimal action. Example solutions for this are ε-greedy exploration policies and the Upper Confidence Bound (UCB) algorithm. ε-greedy strategies choose a random action with a probability ε, which decreases over time as the agent learns its environment. On the other hand, UCB encourages exploration in states with high uncertainty, whilst exploitation is encouraged in regions with high confidence. Therefore, intrinsic motivation is implemented in the system, encouraging the agent to learn about its environment, whilst exploitation can be taken advantage of in states which have already been explored adequately [7, 92, 14, 162]. Other approaches have sought to use supervised learning as a pre-training step to get the advantages of both reinforcement and supervised learning.
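An ε-greedy action selection with a decaying ε can be sketched as follows; the decay schedule and values are illustrative only:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit

epsilon, decay, min_epsilon = 1.0, 0.995, 0.05
for episode in range(1000):
    # ... run one training episode, selecting actions with epsilon_greedy(...) ...
    epsilon = max(min_epsilon, epsilon * decay)   # anneal exploration as the agent learns
```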

Ref. | Learning Strategy | Network | Inputs | Outputs | Pros | Cons | Experiments
[37] | Fuzzy Reinforcement Learning | Feedforward network with 1 hidden layer | Relative distance, relative speed, previous control input | Throttle angle, brake torque | Model-free, continuous action values | Single term reward function | Simulation
[41] | Reinforcement Learning | Feedforward network with 1 hidden layer | Time headway, headway derivative | Accelerate, brake, or no-op | Maintains a safe distance | Oscillatory acceleration behaviour, no term for comfort in reward function | Simulation
[73] | Reinforcement Learning | Actor-Critic Network with feedforward networks | Velocity, velocity tracking error, acceleration error, expected acceleration | Gas and brake commands | Learns from minimal training data | Noisy behaviour of the acceleration signal | Real world
[29] | Reinforcement Learning | Feedforward network with 5 hidden layers | Vehicle velocity, relative position of the pedestrian for past 5 time steps | Discretised deceleration actions | Reliably avoids collisions | Only considers collision avoidance with pedestrians, high rate of collision at low TTC | Simulation
[30] | Reinforcement Learning | Feedforward network with 1 hidden layer | Relative distance, relative velocity, relative acceleration (normalised) | Desired acceleration | Provides smooth driving styles, learns personal driving styles | No methods for preventing learning of bad habits from human drivers | Simulation
[227] | Reinforcement Learning | Actor-Critic Network with feedforward networks | Relative distance, host velocity, relative velocity, host acceleration | Desired acceleration | Performs well in a variety of scenarios, safety and comfort considered, learns personal driving styles | Adapting unsafe driver habits could degrade safety | Simulation
[226] | Supervised Reinforcement Learning | Actor-Critic Network with feedforward networks | Relative distance, relative velocity | Desired acceleration | Pre-training by supervised learning accelerates learning process and helps guarantee convergence, performs well in critical scenarios | Requires supervision to converge, driving comfort not considered | Simulation
TABLE II: A Comparison of Longitudinal Control Techniques.

III-C Simultaneous Lateral & Longitudinal Control Systems

The previous sections demonstrated that DNNs can be trained for either longitudinal or lateral control of a vehicle. However, for autonomous driving, the vehicle must be able to control both steering and acceleration simultaneously. In early works towards full vehicle control through deep learning, Xia et al. [214] introduced an autonomous driving system based on Q-learning combined with learning from the experience of a professional driver. The reward value of the professional driver’s strategy and the Q-value learned through the Q-learning method were combined in the pre-training phase to improve the speed of convergence during training. A filtered experience replay stores a limited number of episodes and allows elimination of poor experimental rounds from memory, improving convergence on a control strategy. The proposed Deep Q-learning with filtered experiences (DQFE) approach was compared to a naive neural fitted Q-iteration (NFQ) [149] algorithm without pre-training by an experienced driver. During training, it was shown that the DQFE approach reduced the training time by 71.2% for the 300 training episodes. Moreover, during 50 tests on a competition track, the proposed approach completed the track 49 times, compared to only 33 with NFQ. Additionally, DQFE performed better in terms of mean distance from the centre of the track. Therefore, the addition of filtered experience replay improved the speed of convergence as well as performance of the algorithm. Comparing two neural networks for lane keeping systems, Sallab et al. [158] investigated the effects of discretised and continuous actions. Two approaches, DQN and a Deep Deterministic Actor Critic (DDAC) algorithm, were evaluated in a TORCS simulator [183]. In the two networks developed by the authors, the DQN could only output discretised values (steer, gear, brake, and acceleration), while the DDAC supports continuous action values. The DDAC consisted of two networks: an actor network responsible for taking actions based on the perceived states, and a critic network which criticises the value of the action taken. The experimental results showed that the DQN algorithm suffered in performance due to the fact that it cannot support continuous action spaces. The DQN algorithm is suitable for continuous (input) states; however, it still requires discrete actions since it finds the action that maximises the action-value function. This would require an iterative process at every time step for continuous action spaces [102]. As shown in Fig. 3, the ability to support continuous action values allowed the DDAC algorithm to follow curved tracks more smoothly and stay closer to the centre of the lane when compared to the DQN algorithm, thereby producing better performance for lane keeping.
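The practical difference between the two output types can be sketched as follows: a DQN-style head scores a fixed set of discretised actions, whereas a deterministic actor maps the state directly to a bounded continuous command; the network sizes below are illustrative and not the exact architectures of [158]:

```python
import torch
import torch.nn as nn

state_dim, n_discrete_actions = 29, 9      # illustrative sizes only

# DQN-style head: one Q-value per discretised steering action; the policy
# picks the argmax, so actions are restricted to the predefined bins.
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_discrete_actions))

# Deterministic actor (DDAC/DDPG-style): a continuous steering command in [-1, 1].
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, 1), nn.Tanh())

state = torch.randn(1, state_dim)
discrete_action = q_net(state).argmax(dim=1)   # index into the steering bins
continuous_steer = actor(state)                # smooth, continuous output
```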

Fig. 3: The lane keeping performance of (a) the DQN with discretised outputs and (b) DDAC with continuous output values [158].

Vision-based vehicle control using CNNs has also been researched. For instance, Zhang et al. [221] proposed a supervised learning method, SafeDAgger, for training a CNN to drive in a TORCS simulation. The proposed method is based on the Dataset Aggregation (DAgger) imitation learning algorithm [152]. In DAgger, the agent first learns a primary policy through traditional supervised learning, with the training set generated by a reference policy. Then, the algorithm iteratively generates new training examples through the learned policies, which are then labelled by the reference policy. The new expanded dataset can then be used to update the learned policy through supervised learning. This has the advantage that states which were not reached in the initial training set can be covered in the new extended training set. The primary policy is then iteratively fine-tuned using the new training set. Zhang et al. proposed an extension to this method, called SafeDAgger, where the system estimates (in any given state) whether the primary policy is likely to deviate from the reference policy. If the primary policy is likely to deviate by more than a specified threshold, the reference policy is used to drive the vehicle instead. The safety policy is estimated by a fully connected network where the input is the last convolutional layer’s activation. The authors used this method to train a CNN to predict a continuous steering wheel angle and a binary decision for braking (brake or do not brake). The authors then evaluated supervised learning, DAgger, and SafeDAgger by driving them on three test tracks, with up to three laps on each track. Out of the three algorithms evaluated, SafeDAgger was found to perform best in terms of the number of completed laps, number of collisions, and mean squared error of steering angles. In another work, Pan et al. [132] used DAgger-like imitation learning to learn to drive at high speeds autonomously, with continuous actions for both steering and acceleration. The reference policy for the dataset was obtained from a model predictive controller operated using expensive high-resolution sensors, which the CNN then learned to imitate using only low-cost camera sensors for observations. The technique was first tested in Robot Operating System (ROS) Gazebo [83] simulations, followed by real-world tests on a 30m long dirt track with a 1/5-scale vehicle. The sub-scale vehicle successfully learned to drive at speeds up to 7.5m/s around the track. Instead of using direct vision for control, Wang et al. [203] demonstrated that DAgger can be used to train an object-centric policy, which uses salient objects in the image (e.g. vehicles, pedestrians) to output a control action. The trained control policy was tested in a Grand Theft Auto V simulation, with a discrete control action (left, straight, right, fast, slow, stop) which was then translated to continuous control with a PID controller. The test results demonstrated improved performance with the object-centric policy compared to models without attention or those based on heuristic object selection. Vision-based techniques have also been used to mitigate collisions by Porav & Newman [139], who built on the previous work by Chae et al. [29] by using a deep reinforcement learning algorithm for collision mitigation which can provide continuous control actions for both velocity and steering. The system uses a Variational AutoEncoder (VAE) coupled with an RNN to predict the movement of obstacles and learns a control policy with Deep Deterministic Policy Gradient (DDPG) to mitigate collisions in low TTC scenarios. The network used a semantically segmented image to predict continuous steering and deceleration actions. The proposed technique shows improvement over braking-only policies for TTC values between 0.5 and 1.5s, and up to 60% reduction in collision rates.
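The DAgger procedure described above can be summarised schematically as follows; the train, rollout, and expert_policy functions are hypothetical placeholders, and SafeDAgger additionally gates when the reference policy takes over:

```python
def dagger(expert_policy, train, rollout, n_iterations=5):
    """Schematic DAgger loop: aggregate expert-labelled data from learner rollouts."""
    # 1. Initial dataset and policy come from the expert (ordinary supervised learning).
    dataset = [(obs, expert_policy(obs)) for obs in rollout(expert_policy)]
    policy = train(dataset)
    for _ in range(n_iterations):
        # 2. Let the learned policy drive, so its own state distribution is visited.
        visited_states = rollout(policy)
        # 3. The expert (reference policy) labels the visited states.
        dataset += [(obs, expert_policy(obs)) for obs in visited_states]
        # 4. Retrain the learner on the aggregated dataset.
        policy = train(dataset)
    return policy
```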

Inverse Reinforcement Learning (IRL) approaches have also been investigated in the context of control systems as a way to overcome the difficulty of defining an optimal reward function. IRL is a subset of reinforcement learning, in which the reward function is not specified, but the agent attempts to learn it from an expert’s demonstrations. In IRL, the agent assumes that the expert is completing the task by following an unknown reward function. It then estimates a reward function in which the demonstrators’ trajectory is the most likely one. This has the advantage that instead of requiring the developer to explicitly specify a reward function, they simply have to demonstrate the intended behaviour. This can be advantageous since in large and complex tasks, defining an adequate reward function to provide optimal agent behaviour can be both difficult and time consuming [228]. IRL approaches have been shown to not only reduce the amount of time required for design and optimisation, but also improve the system performance by creating more robust reward functions. Abbeel & Ng [2] showed that when IRL was applied to a problem where the agent learned by observing an expert, the agent performed as well as the expert when evaluated with respect to the reward function used by the expert, even if the reward function derived from observations was not the expert’s true reward function. Moreover, it was shown that in a simplistic highway driving scenario with 5 different actions for lane selection available to the agent and multiple driving styles demonstrated, the IRL algorithm successfully learned to mimic the demonstrated driving behaviours. Further, Silver et al. [170] used an IRL algorithm based on Maximum Margin Planning [144] which was shown to be effective in a demonstration of an autonomous vehicle in unstructured terrain. The vehicle was shown to perform better than an agent based on traditional reinforcement learning with a hand-tuned reward function. Additionally, the IRL approach was shown to require significantly less time to design and optimise compared to the reinforcement learning agent. Kuderer et al. [88] proposed a vehicle controller that can learn individual driving styles from demonstration using IRL. The algorithm assumes that the demonstrator is driving in a way to maximise an unknown reward function. From this, the learning model estimates the weights in a linear reward function based on 9 features for driving. Initially, the weights were equally set and were then updated based on demonstrations of 8 minutes per driver. After finding the driving policy, the chosen trajectories were compared to those observed from human drivers. The system was shown to learn drivers’ personal driving styles from minimal training data and performed adequately in simulated testing.

Building on the IRL approaches, Wulfmeier et al. [213] proposed an IRL approach for deep learning. The proposed algorithm is based on the Maximum Entropy [230] model for a trajectory planner, and uses CNNs to infer the reward functions from expert demonstration. The approach was trained on a dataset collected over the course of one year with a total of 120km of driving a modified golfcart on walkways and cycle lanes. The input to the network was the LIDAR point cloud map, which was represented on a discretised grid map. The output of the network was a discrete set of actions. The proposed approach was demonstrated to work better than a manually constructed cost function. Moreover, the learned algorithm was shown to be more robust to sensor noise. This shows that the use of DNNs in an IRL algorithm for trajectory planning was beneficial overall. Therefore IRL techniques could be considered as a potential way to overcome the difficulties of designing an optimal reward function for driving.

However, there are some challenges for IRL approaches in practical applications. Firstly, there is no guarantee of the optimality of the demonstrations. For example, in a driving demonstration, no human driver can carry out the driving task optimally every time. Therefore, the training data will include suboptimal demonstrations which affect the final reward function constructed. There are some solutions to minimising the effect of suboptimal demonstrations, such as using multiple trajectories and averaging over multiple sets to find a reward function, or removing the assumption of global optimality [99]. Secondly, reward ambiguity can lead to further problems in IRL approaches. Given expert demonstrations of driving strategies, there can be multiple reward functions that explain the expert’s behaviour. Therefore, an effective IRL algorithm must find a reward function that considers the expert’s trajectory optimal and rejects other possible trajectories. Thirdly, the reward function derived through IRL methods may not be safe, as noted by Abbeel et al. [1], who used IRL to operate an autonomous helicopter and had to manually tune the reward function for safety. Therefore, hand tuning of the derived reward function may be required to ensure safe behaviour. Lastly, the computational burden of IRL methods can be heavy, since they often require iteratively solving reinforcement learning problems for each new reward function derived [228]. Nevertheless, in tasks where an adequately accurate reward function cannot be easily defined, IRL approaches can provide an effective solution.

While the previously mentioned works in this section demonstrate that a DNN can be trained to drive a vehicle, training a vehicle to simply follow a road or keep in its lane without any outside context is not sufficient for deploying fully autonomous vehicles. Humans drive vehicles with the goal of arriving at their target destination, and learning to drive from camera images to imitate human driving behaviour is not enough to capture the full context behind the human driver’s actions. For instance, it has been reported [138] that, upon reaching a fork in the road, end-to-end driving techniques tend to oscillate between the two possible driving directions. Not only is this impractical if the goal is to continue in a specific direction, but it can also result in unsafe behaviour where the DNN oscillates between left and right without ever committing to either direction. Aiming to provide autonomous vehicles with contextual awareness, Hecker et al. [66] collected a data set with a 360-degree view from 8 cameras and a driver following a route plan. This data set was then used to train a DNN to predict steering wheel angle and velocities from example images and route plans in the data set. Qualitative testing on instances from the data set suggested the model was learning to imitate the human driver, but no live testing was completed to validate performance. With a similar aim, Codevilla et al. [34] trained a supervised learning algorithm which uses both images and a high-level navigational command for its driving policy. The network was trained through end-to-end supervised learning, conditioned by a high-level command which could be follow road, go straight, turn left, or turn right. The authors tested two network architectures which could take the navigational command into account: one where the command was an additional input to the network, and one where the network branched at the end into multiple sub-modules (feedforward layers), one for each possible command. The authors noted that the latter architecture performed better. The resulting network was initially tested in the CARLA [47] simulator, followed by real-world testing on a 1/5-scale car. The resulting policy successfully learned to turn the correct way at intersections as commanded. The authors noted that data augmentation and noise injection during training were key to learning a robust control policy. This method was further extended in [35] by using an extra module for velocity prediction, which helps the network in some situations, such as when the vehicle is stopped at a traffic light, to predict the expected vehicle velocity from visual cues and prevents it from getting stuck when the vehicle comes to a full stop. Further improvements to the model were a deeper network architecture and a larger training set, which reduced the variance in training. A slightly different approach was explored using reinforcement learning by Paxton et al. [134], where the high-level command is provided by another DNN responsible for decision making. The system consisted of a DDPG network for low-level control and a DQN for a stochastic high-level policy subject to linear temporal logic constraints. The aim of the vehicle was to navigate a busy intersection, where some lanes had stopped vehicles so that the host vehicle had to successfully change lanes as well. The system was tested in 100 simulated intersections with and without stopped cars ahead, for a total of 200 tests. Without stopped cars the agent succeeded every time, whereas with stopped cars ahead, 3 collisions occurred.
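The branched conditional imitation learning architecture described above can be illustrated with a short sketch; the encoder layout, layer sizes, and output dimensionality below are assumptions for illustration rather than the architecture used in [34].

```python
import torch
import torch.nn as nn

# Illustrative sketch of a branched conditional imitation learning policy:
# a shared image encoder feeds one output head per high-level command
# (follow road, straight, left, right); the command selects the active head.
class BranchedPolicy(nn.Module):
    def __init__(self, num_commands=4, num_actions=2):   # e.g. steering, acceleration
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(48, 128), nn.ReLU(),
        )
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_actions))
             for _ in range(num_commands)]
        )

    def forward(self, image, command):                    # command: (batch,) integer index
        features = self.encoder(image)                    # image: (batch, 3, H, W)
        outputs = torch.stack([b(features) for b in self.branches], dim=1)
        # Select the output of the branch corresponding to each sample's command.
        return outputs[torch.arange(image.shape[0]), command]   # (batch, num_actions)
```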

Moving away from end-to-end approaches, researchers at Waymo recently presented ChauffeurNet [12]. ChauffeurNet uses mid-to-mid learning to learn a driving policy, where the input is a pre-processed top-down view of the surrounding environment representing useful features such as the roadmap, traffic lights, a route plan to follow, dynamic objects, and past agent poses. The agent then processes these inputs through an RNN to provide a heading, speed, and waypoint, which are executed by a low-level controller. This has the advantage that the pre-processed inputs can be obtained either from simulation or from real-world data, which makes transferring driving policies from simulation to the real world easier [131, 121]. Furthermore, synthesising perturbations to model recoveries from incorrect lane positions, or even scenarios such as collisions or driving off-road, provides the model with robustness to errors and allows it to learn to avoid such scenarios.

An overview of full vehicle control approaches can be seen in Table III. Unlike in previous sections, a variety of learning strategies have been utilised here; however, supervised learning remains the preferred approach. An important observation on works researching full vehicle control via neural networks is that robust, high-performing models still seem out of reach. For instance, techniques which implement full vehicle control tend to perform worse at steering than techniques which only consider steering. This is explained by the significant increase in the complexity of the task which the neural network is trained to perform. For this reason, several of the works summarised in this section have been trained and evaluated in simplified simulated environments. While full vehicle control should be the end goal of autonomous vehicle control techniques, current approaches have yet to achieve adequate performance in complex and dynamic environments. Therefore, future research is required to further improve the control performance of neural network-driven autonomous vehicles.

Ref. | Learning Strategy | Network | Inputs | Outputs | Pros | Cons | Experiments
[214] | Supervised Reinforcement Learning | Feedforward network with 2 hidden layers | Not mentioned | Steering, acceleration, braking | Fast training | Unstable (can steer off the road) | Simulation
[158] | Reinforcement Learning | Fully connected / Actor-Critic network with feedforward networks | Position in lane, velocity | Steering, gear, brake, and acceleration values (discretised for DQN) | Continuous policy provides smooth steering | Simple simulation environment | Simulation
[221] | Supervised Learning | CNN / Feedforward | Simulated camera image | Steering angle, binary braking decision | Estimates safety of the policy in any given state; DAgger provides robustness to compounding errors | Simple simulation environment, simplified longitudinal output | Simulation
[132] | Supervised Learning | CNN | Camera image | Steering and throttle | High speed driving; learns to drive on low cost cameras; robustness of DAgger to compounding errors | Trained only for elliptical race tracks with no other vehicles; requires iteratively building the dataset with the reference policy | Real world (sub-scale vehicle) & Simulation
[203] | Supervised Learning | CNN | Image | 9 discrete actions for motion | Object-centric policy provides attention to important objects | Highly simplified action space | Simulation
[139] | Reinforcement Learning | VAE-RNN | Semantically segmented image | Steering, acceleration | Improves collision rates over braking-only policies | Only considers imminent collision scenarios | Simulation
[213] | Inverse Reinforcement Learning | CNN | LIDAR point clouds on a grid map | Discrete motions | Robust to noise; avoids handcrafting of cost function | Increased computational burden of IRL; no guarantee of cost function optimality | No live testing
[66] | Supervised Learning | CNN | 360-degree view camera image, route plan | Steering angle, velocity | Takes route plan into account | Lack of live testing | No live testing, tested on data set image examples only
[34] | Supervised Learning | CNN | Camera image, navigational command | Steering angle, acceleration | Takes navigational commands into account; generalises to new environments | Occasionally fails to take correct turn on first attempt | Real world (sub-scale vehicle) & Simulation
[134] | Reinforcement Learning | Feedforward network with 1 hidden layer | Host vehicle states, set of features for each nearby vehicle, vehicle position and priority in intersection | Steering angle rate, acceleration | Considers decision making provided by another DNN | Large number of inputs which would be difficult to extract in reality; not collision free | Simulation
[12] | Supervised Learning | CNN-RNN | Pre-processed top-down image of surroundings | Heading, velocity, waypoint | Ease of transfer from simulation to real world; robust to deviations from trajectory | Can output waypoints which make turns infeasible; can be over-aggressive with other vehicles in new scenarios | Real world & Simulation
TABLE III: A Comparison of Full Vehicle Control Techniques.

IV Challenges

The previous section discussed various examples of deep learning applied to vehicle controller design. While this shows that there is a significant amount of research interest in such systems, they are still far from ready for commercial application. A number of challenges must be overcome before learned autonomous vehicle technology is ready for widespread commercial use. This section is dedicated to discussing the technological challenges for deep learning based control of autonomous vehicles. It is worth remembering that, besides these technological challenges, issues such as user acceptance, cost efficiency, machine ethics for artificial intelligence technologies, and the lack of legislation/regulation for autonomous vehicles must also be addressed. However, the aim of this manuscript is to focus on deep learning based autonomous vehicle control methods and their technical challenges; general and non-technological challenges for autonomous vehicles are therefore out of the scope of this manuscript. For further reading on these topics, see [108, 53, 10, 67, 22, 3].

Iv-a Computation

A major drawback of deep learning methods is the large amount of data and time required for adequate training, especially for reinforcement learning methods. This can lead to long training periods which cause delays and additional cost in the design of an autonomous vehicle. A common solution to reduce training data requirements or training time is to combine reinforcement learning with supervised learning, which helps reduce the training time whilst still providing good adaptability. Nevertheless, for a fully autonomous vehicle, the amount of training data required to build a reliable and robust system can be vast. It is challenging to train a vehicle to drive in all possible scenarios that it could encounter in the real world due to the huge quantity of data that needs to be collected. Several companies are researching autonomous driving using machine learning, and collaborating and sharing data would be the fastest route from experimental systems to commercial ones. However, this is unlikely, as companies researching autonomous vehicles are not willing to share their resources for fear of diluting their competitive advantage [81]. Moreover, while increasing the amount of available data is useful for learning more complex behaviours, using larger data sets brings its own challenges, such as ensuring the diversity of the data. If the amount of data used for training the model is increased without ensuring variety in the data set, the risk of overfitting to the data set increases. For instance, Codevilla et al. [35] compared 4 driving models trained with 2, 10, 50, and 100 hours of data, and showed that the model trained with 10 hours of driving data performed best in most scenarios. This is because many of the instances in the training set are very similar, captured in typical driving conditions. As the data set size increases, rare driving scenarios (where the model is more likely to fail) make up an increasingly small fraction of the training data. Therefore, when generating large data sets, diversity in the data set must be ensured.

Further computational complexity is caused by the continuous states and actions in which the agent has to operate. As stated in the previous section, continuous action values are necessary for a deployable vehicle control system to have adequate performance. However, as the number of dimensions grows, the computational complexity grows exponentially [82]; this is known as the Curse of Dimensionality [15]. In the high-dimensional problems of vehicle control, this has a significant effect on the computational complexity of any solution. Although discretisation of the system can reduce the complexity, as seen in previous examples, this can lead to degradation in system performance. Other solutions include using multiple learners to reduce learning time [63, 13], evolution strategies which are highly parallelisable [157], or removing unnecessary data from the training and system input data [217].

Overall, the high computational burden of DNNs is a challenge to not only the development and training of the networks but also the deployment of such systems in vehicles. The high computational overhead of the deep learning algorithms will require high computing capabilities on-board, driving up the system cost and power requirements, which must be kept in mind during the system design.

IV-B Architectures

Another challenge with deep learning is selecting the architecture of the neural networks. There are no clear guidelines for a 'good' neural network architecture for a given task. For instance, in terms of size and number of layers, it has been shown that too few neurons will lead to a system with poor performance, whereas too many neurons may overfit to the training data and therefore not generalise well. Also, given that additional neurons lead to increased computational complexity, finding an optimal number of neurons would be of great benefit to deep learning methods [95, 119]. Other parameters can also have an effect on the performance, training, and convergence of the system. The fundamental architecture, training method, learning rate, loss function, batch size, etc., all need to be decided upon and defined, and all affect the performance of the agent. However, there are few methods for choosing these parameters, and often trial-and-error and heuristics are the only viable options for optimising each parameter due to the complexity of DNNs [128]. This is generally done by choosing a range of values for the hyperparameters in the neural network and finding the best performing values. However, using such trial-and-error methods for exploring the hyperparameter space can be slow, given the amount of computation required for each training run.

Solutions to this challenge currently being researched include computerised ways of finding optimal values for these parameters, either by trialling across a range or by using model-based methods to converge on the best values. There are several methods for changing the parameters over the chosen range, such as Coordinate Descent [17], Grid Search [17, 97], and Random Search [19]. Coordinate Descent keeps all hyperparameters except one fixed, and finds the best value for one parameter at a time. Grid Search optimises every parameter simultaneously by evaluating the cross-product of all parameter intervals. However, this vastly increases the computational expense by requiring a large number of neural network models to be trained, and is therefore only suitable when the models can be trained quickly. Random Search often finds a good set of parameters faster than Grid Search by sampling the chosen interval randomly [19, 140]. However, this has the disadvantage that the parameter space is often not covered completely, and some sample points can be very close to each other. These disadvantages can be addressed by using quasi-random sequences [167]. Alternatively, one can use model-based hyperparameter optimisation methods, such as Bayesian optimisation or tree-structured Parzen estimators, which tend to yield better results but are more time intensive [167, 173, 142, 18]. Other proposed approaches focus on automated hyperparameter tuning by eliminating undesirable regions of the hyperparameter search space in order to converge to optimal values [89, 65]. Recent research has also explored neural architecture search methods which take hardware efficiency into account by incorporating hardware feedback into the learning signal [26, 181, 212, 161]. This has resulted in neural network architectures which are specialised for specific hardware platforms and demonstrate a hardware efficiency benefit over non-specialised architectures. Such methods could also be extended to find efficient network architectures for vehicle on-board hardware platforms. It should be noted that automated neural architecture search is an active area of research; for further discussion on this topic, we refer the reader to the survey by Elsken et al. [48].
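As a simple illustration of the random search strategy discussed above, the sketch below samples hyperparameter configurations and keeps the best-performing one; the search space, value ranges, and the train_and_validate callback are illustrative assumptions.

```python
import random

# Minimal random-search sketch over a hyperparameter space; train_and_validate
# is an assumed user-supplied function returning a validation score.
def random_search(train_and_validate, n_trials=20, seed=0):
    rng = random.Random(seed)
    space = {
        "learning_rate": lambda: 10 ** rng.uniform(-5, -2),      # log-uniform sampling
        "batch_size":    lambda: rng.choice([32, 64, 128, 256]),
        "dropout":       lambda: rng.uniform(0.0, 0.5),
        "hidden_units":  lambda: rng.choice([64, 128, 256, 512]),
    }
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {name: sample() for name, sample in space.items()}
        score = train_and_validate(**params)      # e.g. validation accuracy of a trained model
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```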

While architecture selection is a general problem for many deep learning applications, a complex task such as autonomous driving also brings its own challenges. Currently, most end-to-end driving systems have been limited to smaller networks. This is due to the relatively small datasets used, which would cause deeper networks to overfit to the training data. However, as noted in [35], when large amounts of data are available, deeper architectures can reduce both bias and variance in training, resulting in more robust control policies. Further thought should be given to architectures specifically designed for autonomous driving, such as the conditional imitation learning model [34], where the network includes a different final network layer for each high-level command used for driving. These challenges translate to mid-to-mid approaches as well, as the high-level features represented in the input to the network must be chosen carefully. Future works investigating specialised network architectures for autonomous driving can therefore be expected.

IV-C Goal Specification

Adequate goal specification is a challenge specific to reinforcement learning methods. One of the advantages of reinforcement learning is that the behaviour of the agent does not need to be specified explicitly as it would be in rule-based systems. Only the reward function, which can often be easier to define than the value function, and the control actions (e.g. steering, acceleration, braking) need to be defined. However, the goal of reinforcement learning is to maximise the long term accumulated reward as defined by the reward function. Therefore, the desired behaviour of the agent must be accurately captured by the reward function, otherwise unexpected and undesired behaviour might occur. For instance, instead of using binary rewards for successful or unsuccessful completion of tasks, intermediate rewards can be used to guide the agent towards desired behaviour; this process is known as reward shaping [124, 93]. For example, Desjardins & Chaib-Draa [41] used the time headway derivative to reward the agent for actions that helped it move towards the ideal time headway state. Furthermore, for a complex task such as driving, a multi-objective reward function needs to consider different objectives which may conflict with each other. For example, for driving, these objectives may include maintaining a safe distance from other vehicles, staying close to the centre of the lane, avoiding pedestrians, not changing lanes too often, maintaining the desired velocity, and avoiding harsh accelerations/braking. Hence, the reward function should not only consider all factors that affect the agent’s behaviour, but also the weight of these factors.
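As an illustration of such a weighted multi-objective reward, including a shaping term for time headway in the spirit of [41], a hedged sketch is given below; the terms, units, and weights are assumptions chosen purely for illustration.

```python
# Illustrative multi-objective reward for a driving agent; the individual terms,
# their units, and the weights below are assumptions for this sketch, not values
# taken from any cited work.
def driving_reward(headway_error_rate, lane_offset_m, speed_error_ms,
                   accel_ms2, collision):
    if collision:
        return -100.0                        # large terminal penalty for collisions
    r_headway = -abs(headway_error_rate)     # shaping term: move towards the ideal time headway
    r_lane    = -abs(lane_offset_m)          # stay close to the lane centre
    r_speed   = -abs(speed_error_ms)         # track the desired velocity
    r_comfort = -abs(accel_ms2)              # penalise harsh acceleration/braking
    weights   = (1.0, 0.5, 0.2, 0.1)         # relative importance of each objective
    return sum(w * r for w, r in zip(weights, (r_headway, r_lane, r_speed, r_comfort)))
```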

A further challenge for agents which control both lateral & longitudinal actions is the difficulty of defining a reward function when the agent must be able to perform multiple actions (steering, braking, and acceleration). In reinforcement learning, the agent uses the feedback from the reward function to improve its own performance. However, when the agent is carrying out multiple actions, it may not be clear which of the actions resulted in the given reward. For example, if the vehicle steers away from the road, the acceleration may not be at fault but a negative reward signal is sent to the agent. One solution to this is a Hybrid Reward Architecture [196], where the system uses a decomposed reward function and learns a separate value function for each component reward function. Alternatively, Shalev-Shwartz et al. [168] proposed a solution in which the reward function is decomposed into a high level decision making system, through which the agent learns to drive safely and make strategic decisions (e.g. which cars to overtake or give way to), and a low level reward function which helps the agent learn an optimal policy for different actions (e.g. overtaking, merging, decelerating etc.).

The developer should also take care that the agent does not exploit the reward function in unexpected ways, resulting in unintended behaviour. This effect is known as reward hacking. Reward hacking occurs when the agent finds an unanticipated way of exploiting the reward function to gain large rewards in a way which goes against the developers’ defined objective(s) for the agent. For example, a robot used in ball paddling with a reward function based on the distance between the ball and the desired highest point may attempt to move the racket up and keep the ball resting on it [82]. Potential solutions to avoid reward hacking were proposed by Amodei et al. [5] in the form of adversarial reward functions, model look-ahead, reward capping, multiple reward functions, and trip wires. Adversarial reward functions utilise a reward function which is its own agent, similar to generative adversarial networks. The reward function agent can then explore the environment, making it more robust to reward hacking. It could, for example, try to find instances where the system claims a high reward from its actions while a human would label it as a low reward. Model look-ahead gives a reward based on anticipated future states instead of the present one. Reward capping is a simple solution to reward hacking, where a maximum value is imposed on the reward function, thereby preventing unexpectedly high reward scenarios. Multiple reward functions can also increase robustness to reward hacking, since multiple rewards can be more difficult to hack than a single one. Finally, trip wires are deliberately placed vulnerabilities in the system, where reward hacking is most likely to occur. These vulnerabilities are then monitored to alert the system if the agent is attempting to exploit its reward function. Another approach to solving these challenges in goal specification is using inverse reinforcement learning to extract a reward function from expert demonstrations of the task [154, 84, 169, 143].

IV-D Adaptability & Generalisation

Another challenge for learned control systems is dealing with different environments with a scalable approach. For example, a driving strategy that is successful in an urban environment may not be optimal on a highway, since these are very different environments with different traffic flow patterns and safety issues. Similar issues arise with changing weather conditions, seasons, climates, etc. A neural network’s ability to use what it has learned from previous experiences to operate in a completely new environment is referred to as generalisation. However, the problem with generalisation is that even if the system demonstrates good generalisation in one new environment, there is no guarantee it will generalise to other possible environments. Moreover, considering the complex operating environment of a vehicle, it is not possible to test the system in all scenarios. Therefore, building a deep learning system capable of generalising to such a vast variety of situations, as well as validating its generalisation capability, poses major challenges. This is a challenge that must be overcome for deep learning-driven autonomous vehicles to be deployable in the real world, as the vehicles must be able to cope with the various environments they will be used in.

Generally, to avoid poor generalisation in DNNs, the training must be stopped before the DNN starts to overfit to the training data. Overfitting refers to creating a model that fits the training data too well, losing its ability to generalise to new data. Overfitting occurs when the network is trained with either insufficient amounts of training data or too many training episodes on the same training data. This results in the neural network memorising the training data, thereby losing generalisation. Unfortunately, there are no known methods of choosing the optimal stopping point in order to avoid overfitting [160]. However, it is possible to get some indication of the network’s generalisation capability by having three different data sets: training, validation, and test sets. The training and validation sets are used during training, but only the training set results are used to update the network weights [150]. The purpose of the validation set is to minimise overfitting by monitoring the error on the validation data set. In this way, it is ensured that changes which reduce the error on the training set also reduce the error on the validation set, thereby avoiding overfitting. If the accuracy on the validation set starts to decrease over the training iterations, then the network is starting to overfit and training should be stopped. In addition to stopping overfitting, a validation set can also be used to compare different network architectures (e.g. comparing two networks with different numbers of hidden layers) to provide a measure of generalisation. Nevertheless, utilising the validation set both to select the network and to terminate training can result in overfitting to the validation set. Therefore, an additional independent set, known as the test set, is required for the evaluation of the network performance [20]. The test set is only used to evaluate the final network to confirm its performance and generalisation capabilities, and must provide an unbiased evaluation of the network’s generalisation [150]. Therefore, it is crucial that the test set is not used to choose between different networks or network architectures.
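A minimal sketch of this early-stopping procedure is given below, assuming a PyTorch-style model and user-supplied train_epoch and evaluate helpers; the patience value and other details are illustrative assumptions.

```python
# Early stopping on a separate validation set, with a held-out test set used
# only once for the final unbiased evaluation (train_epoch/evaluate assumed).
def train_with_early_stopping(model, train_set, val_set, test_set,
                              train_epoch, evaluate, patience=5, max_epochs=200):
    best_val, best_state, epochs_without_improvement = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_epoch(model, train_set)              # weight updates use the training set only
        val_error = evaluate(model, val_set)       # validation set monitors overfitting
        if val_error < best_val:
            best_val = val_error
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                              # validation error stopped improving
    model.load_state_dict(best_state)              # restore the best weights found
    return evaluate(model, test_set)               # final, unbiased performance estimate
```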

There are also techniques available for DNNs, known as regularisation techniques, which aim to reduce the test error, although often at the cost of increased training error [59]. The basis of regularisation techniques is to introduce some constraints on the deep learning model, which either introduce prior knowledge into the model or promote simpler models in order to achieve better generalisation capability. There are a variety of regularisation techniques available to choose from. For instance, L1 and L2 regularisation techniques introduce a constraint on the model by including an additional term in the cost function of the learning model, which makes the network prefer smaller weights. The smaller weights in the network reduce the effect of individual inputs on its behaviour, which means that the effect of local noise is reduced and the network is more likely to learn trends across the whole data set [128, 125]. Similarly, imposing constraints on the network weights through weight clipping has also been shown to improve robustness [110, 36]. Another popular regularisation technique is dropout, which drops some randomly selected neurons from training and only updates the remaining weights for the given training example. At each weight update, a different set of neurons is omitted, thereby preventing complex co-adaptations between neurons. This helps each neuron learn features which are important for the given task and therefore helps reduce overfitting [70, 174].
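The sketch below illustrates how two of these regularisation techniques, dropout and an L2 weight penalty, might be applied in a PyTorch model; the layer sizes and coefficient values are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Dropout layers in the network plus L2 regularisation (weight decay) in the optimiser.
model = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Dropout(p=0.5),            # randomly drops half the activations during training
    nn.Linear(256, 256), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 2),            # e.g. steering and acceleration outputs
)
# weight_decay adds an L2 penalty on the weights, making the network prefer small weights.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

model.train()   # dropout is active during training
# ... training loop ...
model.eval()    # dropout is disabled at evaluation time
```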

IV-E Verification & Validation

Testing needs to be rigorous in order to validate both the performance and the safety of the system. However, real-world testing can be expensive in terms of time, labour, and finances. Indeed, full-scale vehicle studies with multiple vehicles have typically been achieved through the collaboration of government research projects with automotive manufacturers, such as Demo ’97 [146, 141, 185] or Demo 2000 [79]. Alternatively, simulation studies can reduce the amount of field testing required, and can be used as a first step for performance and safety evaluation. Simulation studies are significantly cheaper, faster, and more flexible, and can be used to set up situations not easily achieved in real life (e.g. crashes). Indeed, with the increasing accuracy and speed of simulation tools, simulation has become an increasingly dominant method of study in this field [126].

While simulation has multiple advantages, the model errors must be kept in consideration throughout the verification and validation process. This is especially critical for training, as training an agent in an imprecise model will result in a system that does not transfer to the real world without significant modifications [82, 9]. Complex mechanical interactions, such as contacts and friction, are often difficult to model accurately. Small variations between the simulation model and the real world can have drastic consequences on the system behaviour in the real world. In other words, the problem is the agent overfitting its policies to the simulation environment and not transferring well to a real-world environment. For a system that can be evaluated and used in the real world, training, as well as testing, in both simulation and field tests would be required [68]. The large number of trials required for reinforcement learning algorithms to converge makes them particularly susceptible to this issue when simulation is used for training. However, recent studies in robot manipulation have shown effective transfer of learned policies from simulation to the real world [31, 155, 191, 188].

Validation of the model and simulation environment alone is not enough for autonomous vehicles, as the influence of the training data can be equal to that of the algorithm itself [198]. Therefore, there should also be emphasis on validating the quality of the training set. It is important to ensure that the data set adequately represents the desired operational environment and covers the potential states the system may encounter. For instance, data sets that are biased towards a certain action (e.g. turning left) or scenario (e.g. driving in daytime) can introduce harmful biases into the learning model. Therefore, data sets should be validated to check whether they contain potentially harmful biases or patterns that could lead to undesirable behaviour of the learned control policy [189].
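As a simple example of such a data set check, the sketch below inspects the distribution of steering labels for a bias towards straight driving; the threshold and warning criterion are illustrative assumptions.

```python
import numpy as np

# Simple data set bias check: inspect the distribution of recorded steering
# angles before training (thresholds here are illustrative assumptions).
def check_steering_balance(steering_angles, straight_threshold=0.05):
    angles = np.asarray(steering_angles)
    left     = np.mean(angles < -straight_threshold)
    right    = np.mean(angles >  straight_threshold)
    straight = 1.0 - left - right
    print(f"left: {left:.1%}  straight: {straight:.1%}  right: {right:.1%}")
    if straight > 0.9:
        print("Warning: data set dominated by straight driving; "
              "consider resampling or collecting more turning examples.")
    return left, straight, right
```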

Challenge | Sub-challenges | Potential Solutions
Computation | Computation requirements for deep learning; Large data sets for supervised learning; Curse of dimensionality in high dimensional problems; Simulation requirements for sample inefficient techniques | Scalable model architectures; Sharing data sets; Improving sample efficiency in reinforcement learning; Parallelisable architectures for training
Architectures | Lack of clear rules for network architectures; Reliance on heuristics and trial-and-error | Automated neural architecture search methods; Specialised architectures for autonomous driving
Goal Specification | Well designed reward functions for complex tasks; Multi-objective reward functions; Reward hacking | Reward shaping; Inverse reinforcement learning; Hybrid reward architectures
Adaptability & Generalisation | Wide variety of the operational environment; Overfitting to training data/environment | Representative data sets and/or training environments; Effective use of regularisation techniques
Verification & Validation | Inability to test in all possible scenarios; High cost of field testing; Inaccuracies in simulation; Biases and gaps in data sets | High fidelity simulations; Effective simulation to real world transfer; Validation of data set coverage
Safety | Complexity and opaqueness of DNNs; Safe training in the real world; Adversarial attacks | Research into interpretability of DNNs; Fail safes and virtual safety cages; Human oversight; Improving model robustness to perturbations
TABLE IV: A Summary of Research Challenges.

IV-F Safety

In a safety-critical system, such as vehicle operation, a serious malfunction or failure could result in death or serious harm to people or property. Therefore, the safety of road users must be ensured before such systems are deployed commercially. However, ensuring functional safety in deep learning systems can be challenging. As the neural networks become more complex, the solutions they provide and how they come to those solutions becomes increasingly difficult to interpret [28]. This is known as the black box problem. The opacity of these solutions is an obstacle to their implementation in safety-critical applications; while it is possible to show that these systems provide good performance in our validation environment, it is impossible to test these systems in all the possible environments they would encounter in the real world. Therefore, if we do not understand the way in which the system makes its decisions, ensuring it does not make unsafe decisions in new environments becomes increasingly difficult. It becomes even more challenging in online learning methods, since they change their policies during operation and therefore could potentially shift from safe policies to unsafe policies over time [224, 32, 76, 209, 197].

Any autonomous vehicle system not only needs to drive safely, but it also needs to be capable of reacting in a safe manner to other vehicles or pedestrians acting unpredictably. It can be difficult to guarantee the safety of any vehicle controller if, for example, another driver is acting recklessly or a previously unseen pedestrian runs onto the road. Therefore, it would be useful to include unsafe and aggressive driving behaviours of other vehicles in the training data of the vehicle controller to enable it to learn how to deal with such situations. One option to improve reliability and safety in such situations is utilising a trauma memory [29], in which rare negative events (e.g. collisions) are stored. These are then replayed during training to persistently remind the agent of these events and ensure it maintains safe behaviour.
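A possible, purely illustrative realisation of such a trauma memory is sketched below as a replay buffer that never discards rare negative events and mixes a fixed fraction of them into every training batch; the capacity and mixing ratio are assumptions.

```python
import random
from collections import deque

# Sketch of a 'trauma memory' alongside an ordinary replay buffer: rare negative
# events (e.g. collisions) are kept permanently and mixed into every batch so the
# agent is persistently reminded of them.
class TraumaReplayBuffer:
    def __init__(self, capacity=100_000, trauma_fraction=0.1):
        self.buffer = deque(maxlen=capacity)   # ordinary experience, overwritten over time
        self.trauma = []                       # rare negative events, never discarded
        self.trauma_fraction = trauma_fraction

    def add(self, transition, is_trauma=False):
        (self.trauma if is_trauma else self.buffer).append(transition)

    def sample(self, batch_size):
        # Assumes the ordinary buffer already holds at least one full batch.
        n_trauma = min(int(batch_size * self.trauma_fraction), len(self.trauma))
        batch = random.sample(list(self.buffer), batch_size - n_trauma)
        if n_trauma:
            batch += random.sample(self.trauma, n_trauma)
        return batch
```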

Also, safety must be maintained during any training or testing in the real world. For instance, during early training of a reinforcement learning agent, the agent is more likely to use exploration than exploitation of past experiences, which means the agent is effectively learning through trial and error. Therefore, care must be taken to ensure the exploration happens in a safe manner. This is especially true in any environment including other road users or pedestrians, since inappropriate actions chosen due to exploration could have disastrous results. Exploration poses safety challenges as the agent is encouraged to take random actions, which can lead to catastrophic events if not considered beforehand [164, 11, 39, 115]. Potential solutions include the use of demonstrations, such as in IRL, to provide examples of safe behaviour which can serve as a baseline policy; simulated exploration, where exploration happens in a simulated environment; bounded exploration, which limits exploration in state spaces considered unsafe; and human oversight, although this is limited in scalability and not feasible in some real-time systems. The same holds true for any testing and evaluation of the system; until the system has been deemed to perform adequately and in a safe manner, all necessary precautions must be taken to ensure safety [5].

An approach for ensuring functional safety for deep learning based autonomous vehicles was suggested by Shalev-Shwartz et al. [168]. In the proposed system architecture, the policy function is decomposed into a learnable part and a non-learnable part. The learnable part is responsible for the comfort of driving and for making strategic decisions (e.g. which cars to overtake or give way to). This policy is learned from experience by maximising an expected reward from the reward function. On the other hand, the non-learnable policy is responsible for safety by minimising a cost function with hard constraints (e.g. the vehicle is not allowed within a specified distance of other vehicles’ trajectories) to ensure functional safety. Alternatively, Xiong et al. [215] suggested a control structure which combines reinforcement learning based control with safety based control and path tracking. The aim is to combine a traditional control method with a reinforcement learning method to take advantage of the superior performance of deep learning systems whilst ensuring safety through traditional control theory. The path tracking element is included to ensure the vehicle stays on (or as close as is safe to) the centre of the lane. The reinforcement learning approach is based on the DDPG algorithm, while the safety based controller uses an Artificial Potential Field method [58], which models any obstacles with a repulsive force to steer the vehicle away from them. The final steering policy is then found by the weighted summation of the three models. The system was shown to keep a safe distance in a simulated environment where the vehicle had to drive along a curve with other vehicles nearby.
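The weighted combination of a learned steering command, an artificial-potential-field safety term, and a path tracking term can be illustrated with the hedged sketch below; the gains, the obstacle representation, and the simple repulsive term are assumptions for illustration and not the controller of [215].

```python
import numpy as np

# Illustrative weighted summation of a learned steering command with a simple
# artificial-potential-field (APF) safety term and a path tracking term.
def combined_steering(rl_steer, lane_centre_error, obstacles, gains=(0.6, 0.2, 0.2)):
    """obstacles: list of (lateral_offset_m, distance_m) tuples for nearby obstacles."""
    # Path tracking: steer back towards the lane centre.
    track_steer = -np.clip(lane_centre_error, -1.0, 1.0)
    # APF safety term: each obstacle contributes a repulsive steering component
    # that decays with distance, pushing the vehicle away from it.
    apf_steer = 0.0
    for lateral, distance in obstacles:
        apf_steer += -np.sign(lateral) / max(distance, 1.0) ** 2
    apf_steer = np.clip(apf_steer, -1.0, 1.0)
    w_rl, w_safe, w_track = gains
    return w_rl * rl_steer + w_safe * apf_steer + w_track * track_steer
```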

Furthermore, malicious inputs to deep learning systems have to be considered. It has been shown that visual classification DNN systems are vulnerable to adversarial examples, which are perturbed images that cause the DNNs to misclassify them with high confidence [180, 127, 118, 60], including misclassification of traffic signs [71]. DNNs have been shown to be vulnerable to printed adversarial examples in the real world [90] and even to 3D-printed physical adversarial examples [8], which suggests they are a threat to DNN applications in the real world. Moreover, the image modifications of the adversarial examples have been shown to be subtle enough that the human eye does not notice them, making prevention of such malicious attacks difficult [90]. These types of weaknesses in DNNs could be exploited and pose a security concern for any technology using DNNs. Although defences against these attacks have been proposed [220], state-of-the-art attacks can bypass defences and detection mechanisms.
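As an example of how such adversarial perturbations can be generated, the sketch below implements the widely used fast gradient sign method (FGSM) [60] for a differentiable image classifier; the epsilon value and the assumption of inputs normalised to [0, 1] are illustrative, and model and loss_fn are assumed to be supplied by the user.

```python
import torch

# Fast gradient sign method: perturb each pixel by epsilon in the direction
# that increases the classification loss.
def fgsm_attack(model, loss_fn, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()   # keep pixels in the valid range
```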

V Concluding Remarks

In this manuscript, a survey of autonomous vehicle control approaches utilising deep learning was presented. The approaches were separated into three categories: lateral (steering), longitudinal (acceleration and braking), and simultaneous lateral and longitudinal control methods. The focus of this manuscript has been on vehicle control techniques rather than perception; however, there is some obvious overlap between the two. It was shown that research interest in this field has grown significantly in recent years and is expected to continue to do so. The applications discussed in this paper show great promise for the application of deep learning to autonomous vehicle control. However, current deep learning based controller performance has significant room for improvement. Moreover, much of the current research is limited to simulation. While testing in simulation is useful for feasibility studies and initial performance evaluations, extensive testing and training in the field will be required before these systems are ready for deployment.

The main research challenges to deep learning based vehicle control were also discussed and can be seen summarised in Table IV. Computation was identified as a challenge due to the large amount of data required to train deep learning models. Architectures were also identified as a challenge due to the difficulty of choosing the optimal network architecture for a given task. Goal specification is a challenge for reinforcement learning techniques due to the importance of designing a reward function which promotes the desired behaviour. Adaptability and generalisation is a challenge in the autonomous vehicle domain due to the highly complex nature of the operational environment. Verification and validation is a further challenge due to the high cost and time requirements of field tests and training. While simulation is an obvious solution to reduce the amount of physical field testing required, the use of simulation in training and testing has its own drawbacks. Safety was identified as a crucial challenge due to the safety critical nature of the autonomous vehicle domain. This is made more challenging due to the opaque nature of deep learning methods, making safety validation of these systems problematic.

Therefore, further research into the interpretability of neural networks and functional safety validation methods for neural network-driven vehicles will be required. Before deep learning based controllers can be deployed on the road, safety validation techniques will need to be found to address their opaqueness. Ensuring the safety of these deep neural networks is a major barrier preventing them from being used commercially. Furthermore, as noted by Salay et al. [156] in their analysis of the ISO 26262 [75] standard, more than 40% of the required software techniques in the current version of the standard are incompatible with machine learning techniques, whilst the rest are either directly applicable or applicable if modified slightly. This reveals a further need for these standards to be revised to address machine learning systems for autonomous vehicles [55]. Other safety aspects which warrant further research include defences against adversarial attacks, as they currently present a significant safety problem for the use of DNNs in autonomous vehicles. Also, robustness to erroneous inputs from sensory data or communication failures must be investigated. There is currently a significant gap in the literature regarding fault tolerant systems. Further research into how deep learning control systems deal with issues such as communication failures, erroneous sensory inputs, input noise, or sensor failure would move the industry towards robust and safe solutions. Furthermore, while research into deep neural networks with both lateral and longitudinal vehicle control is still relatively sparse, there is significant on-going research in this area. Full vehicle control with deep neural networks is typically achieved in simple simulation scenarios and/or with discretised outputs. Much work can be done to improve the performance of full vehicle control techniques as well. Techniques in Sections III-A and III-B show promising results for lateral and longitudinal control systems, and future work will be required to bridge these techniques into an autonomous vehicle system with strong performance in the more general case of combined lateral and longitudinal control. This will also include further experiments in the real world to validate the performance of the learned control policies. Other avenues for future research include learning driving manoeuvres which are still typically achieved through classical control techniques, such as overtaking [45, 44] or merging [4, 111]. Further work will also be needed to design autonomous vehicles which can understand the rules of the road and the behaviour of other road users. Some on-going research was discussed where the deep neural network can take into account the intended route or target destination, but more research is needed to ensure these techniques can stop at stop signs and red lights, respect speed limits, and negotiate intersections and roundabouts with other vehicles.

References

  • [1] P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng (2007) An application of reinforcement learning to aerobatic helicopter flight. Education 19, pp. 1. External Links: ISBN 0262195682, ISSN 10495258 Cited by: §III-C.
  • [2] P. Abbeel and A. Y. Ng (2004) Apprenticeship learning via inverse reinforcement learning. Twenty-first international conference on Machine learning - ICML ’04, pp. 1. External Links: Document, 1206.5264, ISBN 1581138285, ISSN 0028-0836 Cited by: §III-C.
  • [3] H. Abraham, B. Reimer, B. Seppelt, C. Fitzgerald, B. Mehler, and J. F. Coughlin (2017) Consumer Interest in Automation: Preliminary Observations Exploring a Year’s Change. External Links: Link Cited by: §IV.
  • [4] K. Amezquita-Semprun, Y. C. Pradeep, P. C. Chen, W. Chen, and Z. Zhao (2019) Experimental evaluation of the stimuli-induced equilibrium point concept for automatic ramp merging systems. IEEE Transactions on Intelligent Transportation Systems. Cited by: §V.
  • [5] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané (2016) Concrete problems in ai safety. arXiv preprint arXiv:1606.06565. Cited by: §IV-C, §IV-F.
  • [6] I. Arel, D. C. Rose, and T. P. Karnowski (2010) Deep machine learning-a new frontier in artificial intelligence research [research frontier]. IEEE computational intelligence magazine 5 (4), pp. 13–18. Cited by: §I.
  • [7] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath (2017) Deep reinforcement learning: a brief survey. IEEE Signal Processing Magazine 34 (6), pp. 26–38. Cited by: §II-B, §II, §III-B.
  • [8] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok (2017) Synthesizing Robust Adversarial Examples. arXiv preprint arXiv:1707.07397. Cited by: §IV-F.
  • [9] C. G. Atkeson (1994) Using Local Trajectory Optimizers To Speed Up Global Optimization In Dynamic Programming. Advances in Neural Information Processing Systems (NIPS),, pp. 663–670. Cited by: §IV-E.
  • [10] S. A. Bagloee, M. Tavana, M. Asadi, and T. Oliver (2016) Autonomous vehicles: challenges, opportunities, and future implications for transportation policies. Journal of Modern Transportation 24 (4), pp. 284–303. External Links: Document, ISBN 2095-087X 2196-0577, ISSN 2095-087X Cited by: §IV.
  • [11] J. A. Bagnell (2004) Learning decisions: robustness, uncertainty, and appoximation. Robotics Institute, pp. 78. Cited by: §IV-F.
  • [12] M. Bansal, A. Krizhevsky, and A. Ogale (2018) Chauffeurnet: learning to drive by imitating the best and synthesizing the worst. arXiv preprint arXiv:1812.03079. Cited by: §III-C, TABLE III.
  • [13] J. T. Barron, D. S. Golland, and N. J. Hay (2009) Parallelizing reinforcement learning. UC Berkeley. Cited by: §IV-A.
  • [14] M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos (2016) Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471–1479. Cited by: §III-B.
  • [15] R. Bellman (1966) Dynamic Programming. Science 153 (3731), pp. 34–37. External Links: Document, arXiv:1011.1669v3, ISBN 0036-8075, ISSN 0036-8075 Cited by: §IV-A.
  • [16] R. Benenson, M. Omran, J. Hosang, and B. Schiele (2014) Ten years of pedestrian detection, what have we learned?. In European Conference on Computer Vision, pp. 613–627. Cited by: §I.
  • [17] Y. Bengio (2012) Practical recommendations for gradient-based training of deep architectures. In Neural networks: Tricks of the trade, pp. 437–478. Cited by: §IV-B.
  • [18] J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl (2011) Algorithms for Hyper-Parameter Optimization. In Advances in Neural Information Processing Systems (NIPS), pp. 2546–2554. External Links: Document, 1206.2944, ISBN 9781618395993, ISSN 10495258 Cited by: §IV-B.
  • [19] J. Bergstra and Y. Bengio (2012) Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research 13, pp. 281–305. External Links: Document, 1504.05070, ISBN 1532-4435, ISSN 1532-4435 Cited by: §IV-B.
  • [20] C. M. Bishop (1995) Neural networks for pattern recognition. Journal of the American Statistical Association 92, pp. 482. External Links: Document, 0-387-31073-8, ISBN 0198538642, ISSN 01621459 Cited by: §IV-D.
  • [21] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba (2016) End to End Learning for Self-Driving Cars. (May). External Links: 1604.07316, Link Cited by: Fig. 1, §III-A, §III-A, TABLE I.
  • [22] L. Bosankic (2017) How consumers’ perception of autonomous cars will influence their adoption. External Links: Link Cited by: §IV.
  • [23] L. Bottou and O. Bousquet (2008) The tradeoffs of large scale learning. In Advances in neural information processing systems, pp. 161–168. Cited by: §II-A.
  • [24] L. Bottou (2010) Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pp. 177–186. Cited by: §III-A.
  • [25] M. Buehler, K. Iagnemma, and S. Singh (2009) The darpa urban challenge: autonomous vehicles in city traffic. Vol. 56, springer. Cited by: §I.
  • [26] H. Cai, C. Gan, and S. Han (2019) Once for all: train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791. Cited by: §IV-B.
  • [27] L. Caltagirone, M. Bellone, L. Svensson, and M. Wahde (2017) LIDAR-based driving path generation using fully convolutional neural networks. In Intelligent Transportation Systems (ITSC), 2017 IEEE 20th International Conference on, pp. 1–6. Cited by: §I.
  • [28] D. Castelvecchi (2016) Can we open the black box of ai?. Nature News 538 (7623), pp. 20. Cited by: §IV-F.
  • [29] H. Chae, C. M. Kang, B. Kim, J. Kim, C. C. Chung, and J. W. Choi (2017) Autonomous braking system via deep reinforcement learning. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1–6. Cited by: §III-B, §III-C, TABLE II, §IV-F.
  • [30] X. Chen, Y. Zhai, C. Lu, J. Gong, and G. Wang (2017) A Learning Model for Personalized Adaptive Cruise Control. In Intelligent Vehicles Symposium (IV), 2017 IEEE, pp. 379–384. External Links: ISBN 9781509048038 Cited by: §III-B, §III-B, TABLE II.
  • [31] P. Christiano, Z. Shah, I. Mordatch, J. Schneider, T. Blackwell, J. Tobin, P. Abbeel, and W. Zaremba (2016) Transfer from simulation to real world through learning deep inverse dynamics model. arXiv preprint arXiv:1610.03518. Cited by: §IV-E.
  • [32] M. Clark, X. Koutsoukos, J. Porter, R. Kumar, G. Pappas, O. Sokolsky, I. Lee, and L. Pike (2013) A study on run time assurance for complex cyber physical systems. Technical report AIR FORCE RESEARCH LAB WRIGHT-PATTERSON AFB OH AEROSPACE SYSTEMS DIR. Cited by: §IV-F.
  • [33] F. Codevilla, A. M. López, V. Koltun, and A. Dosovitskiy (2018) On offline evaluation of vision-based driving models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 236–251. Cited by: §III-A.
  • [34] F. Codevilla, M. Müller, A. López, V. Koltun, and A. Dosovitskiy (2018) End-to-end driving via conditional imitation learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–9. Cited by: §III-C, TABLE III, §IV-B.
  • [35] F. Codevilla, E. Santana, A. M. López, and A. Gaidon (2019) Exploring the limitations of behavior cloning for autonomous driving. arXiv preprint arXiv:1904.08980. Cited by: §III-C, §IV-A, §IV-B.
  • [36] M. Courbariaux, Y. Bengio, and J. David (2015) Binaryconnect: training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pp. 3123–3131. Cited by: §IV-D.
  • [37] X. Dai, C. Li, and A. B. Rad (2005) An approach to tune fuzzy controllers based on reinforcement learning for autonomous vehicle control. IEEE Transactions on Intelligent Transportation Systems 6 (3), pp. 285–293. External Links: Document, ISSN 15249050 Cited by: §III-B, §III-B, TABLE II.
  • [38] P. de Haan, D. Jayaraman, and S. Levine (2019) Causal confusion in imitation learning. arXiv preprint arXiv:1905.11979. Cited by: §II-A.
  • [39] M. Deisenroth and C. E. Rasmussen (2011) PILCO: a model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pp. 465–472. Cited by: §IV-F.
  • [40] Department for Transport (2017) Research on the Impacts of Connected and Autonomous Vehicles (CAVs) on Traffic Flow: Summary Report. External Links: Link Cited by: §I.
  • [41] C. Desjardins and B. Chaib-draa (2011) Cooperative Adaptive Cruise Control: A Reinforcement Learning Approach. IEEE Transactions on Intelligent Transportation Systems 12 (4), pp. 1248–1260. External Links: Document, ISBN 1524-9050, ISSN 1524-9050 Cited by: §I, §III-B, TABLE II, §IV-C.
  • [42] E. D. Dickmanns and A. Zapp (1987) Autonomous high speed road vehicle guidance by computer vision1. IFAC Proceedings Volumes 20 (5), pp. 221–226. Cited by: §I.
  • [43] S. Dixit, S. Fallah, U. Montanaro, M. Dianati, A. Stevens, F. Mccullough, and A. Mouzakitis (2018) Trajectory planning and tracking for autonomous overtaking: state-of-the-art and future prospects. Annual Reviews in Control. Cited by: §I.
  • [44] S. Dixit, U. Montanaro, M. Dianati, D. Oxtoby, T. Mizutani, A. Mouzakitis, and S. Fallah (2019) Trajectory planning for autonomous high-speed overtaking in structured environments using robust mpc. IEEE Transactions on Intelligent Transportation Systems. Cited by: §V.
  • [45] S. Dixit, U. Montanaro, S. Fallah, M. Dianati, D. Oxtoby, T. Mizutani, and A. Mouzakitis (2018) Trajectory planning for autonomous high-speed overtaking using mpc with terminal set constraints. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 1061–1068. Cited by: §V.
  • [46] P. Dollár, C. Wojek, B. Schiele, and P. Perona (2009) Pedestrian detection: a benchmark. In Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on, pp. 304–311. Cited by: §II-C.
  • [47] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun (2017) CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, pp. 1–16. Cited by: §III-C.
  • [48] T. Elsken, J. H. Metzen, and F. Hutter (2019) Neural architecture search: a survey. Journal of Machine Learning Research 20 (55), pp. 1–21. Cited by: §IV-B.
  • [49] H. M. Eraqi, M. N. Moustafa, and J. Honer (2017) End-to-end deep learning for steering autonomous vehicles considering temporal dependencies. arXiv preprint arXiv:1710.03804. Cited by: §III-A, TABLE I.
  • [50] A. Eskandarian (2012) Handbook of intelligent vehicles. Vol. 2, Springer. Cited by: §I.
  • [51] A. Ess, B. Leibe, K. Schindler, and L. van Gool (2008-06) A mobile vision system for robust multi-person tracking. In Computer Vision and Pattern Recognition (CVPR), 2008 IEEE Conference on, Cited by: §II-C.
  • [52] Euro NCAP (2015) European New Car Assessment Programme: Test Protocol - AEB VRU systems. Cited by: §III-B.
  • [53] European Commission (2016) Cooperative Intelligent Transportation Systems - Research Theme Analysis Report. External Links: Link Cited by: §IV.
  • [54] European Commission (2017) 2016 road safety statistics: What is behind the figures?. External Links: Link Cited by: §I.
  • [55] F. Falcini, G. Lami, and A. M. Costanza (2017) Deep learning in automotive software. IEEE Software 34 (3), pp. 56–63. Cited by: §V.
  • [56] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. International Journal of Robotics Research (IJRR). Cited by: §II-C.
  • [57] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, Cited by: §II-C.
  • [58] S. Glaser, B. Vanholme, S. Mammar, D. Gruyer, and L. Nouveliere (2010) Maneuver-based trajectory planning for highly autonomous vehicles on real road with traffic and driver interaction. IEEE Transactions on Intelligent Transportation Systems 11 (3), pp. 589–606. Cited by: §IV-F.
  • [59] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep Learning. MIT Press. External Links: Link Cited by: §II, §IV-D.
  • [60] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §IV-F.
  • [61] G. J. Gordon (1995) Stable function approximation in dynamic programming. In Machine Learning Proceedings 1995, pp. 261–268. Cited by: §II-B.
  • [62] I. Grondman, L. Busoniu, G. A. Lopes, and R. Babuska (2012) A survey of actor-critic reinforcement learning: standard and natural policy gradients. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42 (6), pp. 1291–1307. Cited by: §II-B.
  • [63] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine (2016) Continuous deep q-learning with model-based acceleration. In International Conference on Machine Learning, pp. 2829–2838. Cited by: §IV-A.
  • [64] A. Gupta, A. Murali, D. P. Gandhi, and L. Pinto (2018) Robot learning in homes: improving generalization and reducing dataset bias. In Advances in Neural Information Processing Systems, pp. 9094–9104. Cited by: §II-A.
  • [65] T. B. Hashimoto, S. Yadlowsky, and J. C. Duchi (2018) Derivative free optimization via repeated classification. arXiv preprint arXiv:1804.03761. Cited by: §IV-B.
  • [66] S. Hecker, D. Dai, and L. Van Gool (2018) End-to-end learning of driving models with surround-view cameras and route planners. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 435–453. Cited by: §III-C, TABLE III.
  • [67] HERE Technologies (2017) Consumer Acceptance of Autonomous Vehicles. External Links: Link Cited by: §IV.
  • [68] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, A. Sendonaris, G. Dulac-Arnold, I. Osband, J. Agapiou, J. Z. Leibo, and A. Gruslys (2017) Learning from demonstrations for real world reinforcement learning. arXiv preprint arXiv:1704.03732. Cited by: §IV-E.
  • [69] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Processing Magazine 29 (6), pp. 82–97. Cited by: §I.
  • [70] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov (2012) Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. Cited by: §IV-D.
  • [71] X. Huang, M. Kwiatkowska, S. Wang, and M. Wu (2017) Safety verification of deep neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 10426 LNCS, pp. 3–29. External Links: Document, 1610.06940, ISBN 9783319633862, ISSN 16113349 Cited by: §IV-F.
  • [72] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang (2018) The apolloscape dataset for autonomous driving. arXiv preprint arXiv:1803.06184. Cited by: §II-C.
  • [73] Z. Huang, X. Xu, H. He, J. Tan, and Z. Sun (2017) Parameterized Batch Reinforcement Learning for Longitudinal Control of Autonomous Land Vehicles. IEEE Transactions on Systems, Man, and Cybernetics: Systems, pp. 1–12. External Links: Document, ISSN 21682232 Cited by: §III-B, TABLE II.
  • [74] Intel Corporation (2018) Cyclone V - Overview. External Links: Link Cited by: §II-C.
  • [75] International Organization for Standardization (2011) ISO 26262: road vehicles-functional safety. International Standard ISO/FDIS. Cited by: §V.
  • [76] S. Jacklin, J. Schumann, P. Gupta, M. Richard, K. Guenther, and F. Soares (2005) Development of advanced verification and validation procedures and tools for the certification of learning systems in aerospace applications. In Infotech@ Aerospace, pp. 6912. Cited by: §IV-F.
  • [77] J. Janai, F. Güney, A. Behl, and A. Geiger (2017) Computer vision for autonomous vehicles: problems, datasets and state-of-the-art. arXiv preprint arXiv:1704.05519. Cited by: §I.
  • [78] L. P. Kaelbling, M. L. Littman, and A. W. Moore (1996) Reinforcement learning: a survey. Journal of artificial intelligence research 4, pp. 237–285. Cited by: §III-B.
  • [79] S. Kato, S. Tsugawa, K. Tokuda, T. Matsui, and H. Fujii (2002) Vehicle Control Algorithms for Cooperative Driving with Automated Vehicles and Intervehicle Communications. IEEE Transactions on Intelligent Transportation Systems 3 (3), pp. 155–160. External Links: Document, ISBN 1524-9050, ISSN 15249050 Cited by: §IV-E.
  • [80] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §III-A.
  • [81] W. Knight (2016) An Ambitious Plan to Build a Self-Driving Borg. External Links: Link Cited by: §IV-A.
  • [82] J. Kober, J. A. Bagnell, and J. Peters (2013) Reinforcement Learning in Robotics: A Survey. International Journal of Robotics Research 32 (11), pp. 1238–1274. External Links: Document, ISBN 9783642276446, ISSN 1610742X Cited by: §IV-A, §IV-C, §IV-E.
  • [83] N. Koenig and A. Howard (2004) Design and use paradigms for gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Vol. 3, pp. 2149–2154. Cited by: §III-C.
  • [84] J. Z. Kolter, P. Abbeel, and A. Y. Ng (2008) Hierarchical Apprenticeship Learning, with Application to Quadruped Locomotion. In Advances in Neural Information Processing Systems. External Links: ISBN 160560352X Cited by: §IV-C.
  • [85] K. R. Konda and R. Memisevic (2015) Learning visual odometry with a convolutional network. In VISAPP (1), pp. 486–490. Cited by: §I.
  • [86] V. R. Konda and J. N. Tsitsiklis (2003) On actor-critic algorithms. SIAM journal on Control and Optimization 42 (4), pp. 1143–1166. Cited by: §II-B.
  • [87] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §I.
  • [88] M. Kuderer, S. Gulati, and W. Burgard (2015) Learning driving styles for autonomous vehicles from demonstration. Proceedings - IEEE International Conference on Robotics and Automation 2015-June (June), pp. 2641–2646. External Links: Document, ISBN 9781479969227, ISSN 10504729 Cited by: §I, §III-C.
  • [89] M. Kumar, G. E. Dahl, V. Vasudevan, and M. Norouzi (2018) Parallel architecture and hyperparameter search via successive halving and classification. arXiv preprint arXiv:1805.10255. Cited by: §IV-B.
  • [90] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Cited by: §IV-F.
  • [91] S. Kuutti, S. Fallah, K. Katsaros, M. Dianati, F. Mccullough, and A. Mouzakitis (2018) A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications. IEEE Internet of Things Journal 5 (2), pp. 829–846. Cited by: §I.
  • [92] T. L. Lai and H. Robbins (1985) Asymptotically efficient adaptive allocation rules. Advances in applied mathematics 6 (1), pp. 4–22. Cited by: §III-B.
  • [93] A. D. Laud (2004) Theory and Application of Reward Shaping in Reinforcement Learning. Ph.D. Thesis, University of Illinois. Cited by: §IV-C.
  • [94] T. Le-Anh and M. De Koster (2006) A review of design and control of automated guided vehicle systems. European Journal of Operational Research 171 (1), pp. 1–23. Cited by: §I.
  • [95] Y. LeCun (1989) Generalization and network design strategies. Connectionism in perspective, pp. 143–155. Cited by: §IV-B.
  • [96] Y. Lecun, Y. Bengio, and G. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436–444. External Links: Document, arXiv:1312.6184v5, ISBN 9780521835688, ISSN 14764687 Cited by: §I, §II.
  • [97] Y. LeCun, L. Bottou, G. B. Orr, and K. Müller (1998) Efficient backprop. In Neural networks: Tricks of the trade, pp. 9–50. Cited by: §IV-B.
  • [98] S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016) End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research 17 (1), pp. 1334–1373. Cited by: §I, §III-B.
  • [99] S. Levine and V. Koltun (2012) Continuous Inverse Optimal Control with Locally Optimal Examples. International Conference on Machine Learning (ICML), pp. 41–48. External Links: 1206.4617, ISBN 978-1-4503-1285-1 Cited by: §III-C.
  • [100] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen (2016) Learning hand-eye coordination for robotic grasping with large-scale data collection. In International Symposium on Experimental Robotics, pp. 173–184. Cited by: §I, §III-B.
  • [101] Y. Li (2017) Deep reinforcement learning: an overview. arXiv preprint arXiv:1701.07274. Cited by: §II.
  • [102] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Cited by: §III-C.
  • [103] S. Liu, J. Tang, Z. Zhang, and J. Gaudiot (2017) CAAD: computer architecture for autonomous driving. arXiv preprint arXiv:1702.01894. Cited by: §II-C.
  • [104] S. Lowry, N. Sünderhauf, P. Newman, J. J. Leonard, D. Cox, P. Corke, and M. J. Milford (2016) Visual place recognition: a survey. IEEE Transactions on Robotics 32 (1), pp. 1–19. Cited by: §I.
  • [105] T. Luettel, M. Himmelsbach, and H.-J. Wuensche (2012) Autonomous Ground Vehicles – Concepts and a Path to the Future. Proceedings of the IEEE 100 (Special Centennial Issue), pp. 1831–1839. External Links: Document, ISBN 0018-9219, ISSN 0018-9219 Cited by: §I.
  • [106] T. T. Mac, C. Copot, D. T. Tran, and R. De Keyser (2016) Heuristic approaches in robot path planning: a survey. Robotics and Autonomous Systems 86, pp. 13–28. Cited by: §I.
  • [107] W. Maddern, G. Pascoe, C. Linegar, and P. Newman (2017) 1 Year, 1000km: The Oxford RobotCar Dataset. The International Journal of Robotics Research (IJRR) 36 (1), pp. 3–15. External Links: Document, Link Cited by: §II-C.
  • [108] M. Maurer, J. C. Gerdes, B. Lenz, and H. Winner (2016) Autonomous Driving. Springer, Heidelberg, Berlin. Cited by: §IV.
  • [109] Mechanical Simulation Corporation CarSim. External Links: Link Cited by: §III-A.
  • [110] P. Merolla, R. Appuswamy, J. Arthur, S. K. Esser, and D. Modha (2016) Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981. Cited by: §IV-D.
  • [111] V. Milanés, J. Godoy, J. Villagrá, and J. Pérez (2010) Automated on-ramp merging system for congested traffic situations. IEEE Transactions on Intelligent Transportation Systems 12 (2), pp. 500–508. Cited by: §V.
  • [112] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928–1937. Cited by: §II-B.
  • [113] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529. Cited by: §I.
  • [114] MobilEye (2018) The Evolution of EyeQ. External Links: Link Cited by: §II-C.
  • [115] T. M. Moldovan and P. Abbeel (2012) Safe exploration in markov decision processes. arXiv preprint arXiv:1205.4810. Cited by: §IV-F.
  • [116] U. Montanaro, S. Dixit, S. Fallah, M. Dianati, A. Stevens, D. Oxtoby, and A. Mouzakitis (2018) Towards connected autonomous driving: review of use-cases. Vehicle System Dynamics, pp. 1–36. Cited by: §I.
  • [117] S. Moon, I. Moon, and K. Yi (2009) Design, tuning, and evaluation of a full-range adaptive cruise control system with collision avoidance. Control Engineering Practice 17 (4), pp. 442–455. Cited by: §III-B.
  • [118] S. M. Moosavi Dezfooli, A. Fawzi, and P. Frossard (2016) Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §IV-F.
  • [119] N. Morgan and H. Bourlard (1989) Generalization and Parameter Estimation in Feedforward Nets: Some Experiments. Advances in neural information processing systems, pp. 630–637. External Links: ISBN 1-55860-100-7 Cited by: §IV-B.
  • [120] D. E. Moriarty, S. Handley, and P. Langley (1998) Learning distributed strategies for traffic control. Proc. of the fifth International Conference of the Society for Adaptive Behavior (May 1998), pp. 437–446. Cited by: §III-A.
  • [121] M. Müller, A. Dosovitskiy, B. Ghanem, and V. Koltun (2018) Driving policy transfer via modularity and abstraction. arXiv preprint arXiv:1804.09364. Cited by: §III-C.
  • [122] U. Muller, J. Ben, E. Cosatto, B. Flepp, and Y. L. Cun (2006) Off-road obstacle avoidance through end-to-end learning. In Advances in neural information processing systems, pp. 739–746. Cited by: §III-A, TABLE I.
  • [123] National Highway Traffic Safety Administration (NHTSA) (2017) 2016 Fatal Motor Vehicle Crashes: Overview. External Links: Link Cited by: §I.
  • [124] A. Y. Ng, D. Harada, and S. Russell (1999) Policy invariance under reward transformations : Theory and application to reward shaping. Sixteenth International Conference on Machine Learning 3, pp. 278–287. External Links: Document, arXiv:1011.1669v3, ISBN 1558606122, ISSN 1098-6596 Cited by: §IV-C.
  • [125] A. Y. Ng (2004) Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the twenty-first international conference on Machine learning, pp. 78. Cited by: §IV-D.
  • [126] L. Ng, C. M. Clark, and J. P. Huissoon (2008) Reinforcement learning of adaptive longitudinal vehicle control for dynamic collaborative driving. In IEEE Intelligent Vehicles Symposium, Proceedings, pp. 907–912. External Links: Document, ISBN 9781424425693 Cited by: §IV-E.
  • [127] A. Nguyen, J. Yosinski, and J. Clune (2015) Deep Neural Networks are Easily Fooled. Computer Vision and Pattern Recognition, 2015 IEEE Conference on, pp. 427–436. External Links: Document, 1412.1897, ISBN 9781467369640, ISSN 1875-7855 Cited by: §IV-F.
  • [128] M. Nielsen (2015) Neural Networks and Deep Learning. Determination Press. External Links: Link Cited by: §II, §IV-B, §IV-D.
  • [129] NVIDIA Corporation (2018) Autonomous car development platform from NVIDIA DRIVE PX2. External Links: Link Cited by: §II-C.
  • [130] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli (2016) A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles 1 (1), pp. 33–55. Cited by: §I.
  • [131] X. Pan, Y. You, Z. Wang, and C. Lu (2017) Virtual to real reinforcement learning for autonomous driving. arXiv preprint arXiv:1704.03952. Cited by: §III-C.
  • [132] Y. Pan, C. Cheng, K. Saigol, K. Lee, X. Yan, E. Theodorou, and B. Boots (2018) Agile autonomous driving using end-to-end deep imitation learning. Proceedings of Robotics: Science and Systems. Pittsburgh, Pennsylvania. Cited by: §III-C, TABLE III.
  • [133] M. Pasquier, C. Quek, and M. Toh (2001) Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles. Neural networks 14 (8), pp. 1099–1112. Cited by: §I.
  • [134] C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov (2017) Combining neural networks and tree search for task and motion planning in challenging environments. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6059–6066. Cited by: §III-C, TABLE III.
  • [135] W. Payre, J. Cestac, and P. Delhomme (2014) Intention to use a fully automated car: Attitudes and a priori acceptability. Transportation Research Part F: Traffic Psychology and Behaviour 27 (PB), pp. 252–263. External Links: Document, ISBN 1369-8478, ISSN 13698478 Cited by: §I.
  • [136] M. M. Polycarpou (1996) Stable adaptive neural control scheme for nonlinear systems. IEEE Transactions on Automatic Control 41 (3), pp. 447–451. Cited by: §III-B.
  • [137] D. Pomerleau (1997) Neural network vision for robot driving. Intelligent Unmanned Ground Vehicles, pp. 1–22. External Links: ISBN 0-262-01148-4, ISSN 08933405 Cited by: §III-A, TABLE I.
  • [138] D. A. Pomerleau (1989) Alvinn: An autonomous land vehicle in a neural network. Advances in Neural Information Processing Systems 1, pp. 305–313. External Links: ISBN 1-558-60015-9 Cited by: §III-A, §III-C, TABLE I.
  • [139] H. Porav and P. Newman (2018) Imminent collision mitigation with reinforcement learning and vision. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 958–964. Cited by: §III-C, TABLE III.
  • [140] C. Raffel (2015) Neural Network Hyperparameters. External Links: Link Cited by: §IV-B.
  • [141] R. Rajamani, H. S. Tan, B. K. Law, and W. B. Zhang (2000) Demonstration of integrated longitudinal and lateral control for the operation of automated vehicles in platoons. IEEE Transactions on Control Systems Technology 8 (4), pp. 695–708. External Links: Document, ISBN 1063-6536 VO - 8, ISSN 10636536 Cited by: §IV-E.
  • [142] C. E. Rasmussen (2004) Gaussian processes in machine learning. In Advanced lectures on machine learning, pp. 63–71. Cited by: §IV-B.
  • [143] N. Ratliff, J. A. Bagnell, and S. S. Srinivasa (2008) Imitation learning for locomotion and manipulation. In Proceedings of the 2007 7th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS 2007, pp. 392–397. External Links: Document, ISBN 9781424418626, ISSN 2164-0572 Cited by: §IV-C.
  • [144] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich (2006) Maximum margin planning. In Proceedings of the 23rd international conference on Machine learning - ICML ’06, pp. 729–736. External Links: Document, ISBN 1595933832, ISSN 17458358 Cited by: §III-C.
  • [145] V. Rausch, A. Hansen, E. Solowjow, C. Liu, E. Kreuzer, and J. K. Hedrick (2017) Learning a deep neural net policy for end-to-end control of autonomous vehicles. In 2017 American Control Conference (ACC), pp. 4914–4919. Cited by: §I, §III-A, TABLE I.
  • [146] H. Raza and P. Ioannou (1996) Vehicle following control design for automated highway systems. IEEE Control Systems Magazine 16 (6), pp. 43–60. External Links: Document, ISBN 0-7803-3659-3, ISSN 02721708 Cited by: §IV-E.
  • [147] B. Ren, S. S. Ge, C. Su, and T. H. Lee (2009) Adaptive neural control for a class of uncertain nonlinear systems in pure-feedback form with hysteresis input. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39 (2), pp. 431–443. Cited by: §III-B.
  • [148] M. Riedmiller, J. Peters, and S. Schaal (2007) Evaluation of policy gradient methods and variants on the cart-pole benchmark. In Approximate Dynamic Programming and Reinforcement Learning, 2007. ADPRL 2007. IEEE International Symposium on, pp. 254–261. Cited by: §II-B.
  • [149] M. Riedmiller (2005) Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317–328. Cited by: §III-C.
  • [150] B. D. Ripley (1996) Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge. Cited by: §IV-D.
  • [151] P. Ross (2014) Robot, you can drive my car. IEEE Spectrum 51 (6). External Links: Document, ISSN 00189235 Cited by: §I.
  • [152] S. Ross, G. Gordon, and D. Bagnell (2011) A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627–635. Cited by: §II-A, §III-C.
  • [153] R. Rothe, R. Timofte, and L. Van Gool (2015) Dex: deep expectation of apparent age from a single image. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 10–15. Cited by: §III-A.
  • [154] S. Russell (1998) Learning agents for uncertain environments (extended abstract). Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT), pp. 101–103. External Links: Document, ISBN 1581130570 Cited by: §IV-C.
  • [155] A. A. Rusu, M. Vecerik, T. Rothörl, N. Heess, R. Pascanu, and R. Hadsell (2016) Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286. Cited by: §IV-E.
  • [156] R. Salay, R. Queiroz, and K. Czarnecki (2017) An analysis of iso 26262: using machine learning safely in automotive software. arXiv preprint arXiv:1709.02435. Cited by: §V.
  • [157] T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever (2017) Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864. Cited by: §IV-A.
  • [158] A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani (2016) End-to-end deep reinforcement learning for lane keeping assist. arXiv preprint arXiv:1612.04340. Cited by: Fig. 3, §III-C, TABLE III.
  • [159] R. Sanner and M. Mears (1992) Stable adaptive tracking of uncertain systems using nonlinearly parameterized on-line approximators. IEEE Transactions on Neural Networks 3 (6), pp. 837–863. Cited by: §III-B.
  • [160] R. J. Schalkoff (1997) Artificial Neural Networks. McGraw-Hill, New York. Cited by: §IV-D.
  • [161] F. Scheidegger, L. Benini, C. Bekas, and C. Malossi (2019) Constrained deep neural network architecture search for iot devices accounting hardware calibration. arXiv preprint arXiv:1909.10818. Cited by: §IV-B.
  • [162] J. Schmidhuber (1991) A possibility for implementing curiosity and boredom in model-building neural controllers. In Proc. of the international conference on simulation of adaptive behavior: From animals to animats, pp. 222–227. Cited by: §III-B.
  • [163] J. Schmidhuber (2015) Deep learning in neural networks: an overview. Neural networks 61, pp. 85–117. Cited by: §I, §II.
  • [164] J. G. Schneider (1997) Exploiting model uncertainty estimates for safe dynamic control learning. In Advances in neural information processing systems, pp. 1047–1053. Cited by: §IV-F.
  • [165] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel (2015) High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438. Cited by: §II-B.
  • [166] W. Schwarting, J. Alonso-Mora, and D. Rus (2018) Planning and decision-making for autonomous vehicles. Annual Review of Control, Robotics, and Autonomous Systems. Cited by: §I.
  • [167] Y. Sevchuk (2016) Hyperparameter optimization for Neural Networks. External Links: Link Cited by: §IV-B.
  • [168] S. Shalev-Shwartz, S. Shammah, and A. Shashua (2016) Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295. Cited by: §IV-C, §IV-F.
  • [169] D. Silver, J. A. Bagnell, and A. Stentz (2010) Learning from demonstration for autonomous navigation in complex unstructured terrain. In International Journal of Robotics Research, Vol. 29, pp. 1565–1592. External Links: Document, ISBN 0278364910369, ISSN 02783649 Cited by: §IV-C.
  • [170] D. Silver, J. A. Bagnell, and A. Stentz (2013) Learning Autonomous Driving Styles and Maneuvers from Expert Demonstration. In Experimental Robotics, pp. 371–386. External Links: Document, ISBN 978-3-319-00064-0, ISSN 21530858 Cited by: §I, §III-C.
  • [171] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller (2014) Deterministic policy gradient algorithms. In International Conference on Machine Learning. Cited by: §II-B.
  • [172] S. Singh (2015) Critical reasons for crashes investigated in the National Motor Vehicle Crash Causation Survey. National Highway Traffic Safety Administration (February), pp. 1–2. External Links: ISBN 9767071113 Cited by: §I.
  • [173] J. Snoek, H. Larochelle, and R. P. Adams (2012) Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems 25, pp. 2960–2968. External Links: Document, arXiv:1206.2944v1, ISBN 9781627480031, ISSN 10495258 Cited by: §IV-B.
  • [174] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15, pp. 1929–1958. External Links: Document, 1102.4807, ISBN 1532-4435, ISSN 15337928 Cited by: §IV-D.
  • [175] W. Su, S. Boyd, and E. Candes (2014) A differential equation for modeling nesterov’s accelerated gradient method: theory and insights. In Advances in Neural Information Processing Systems, pp. 2510–2518. Cited by: §III-A.
  • [176] Q. Sun (2016) Cooperative Adaptive Cruise Control Performance Analysis. Ph.D. Thesis, Ecole Centrale de Lille. Cited by: §III-B, §III-B.
  • [177] I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112. Cited by: §I.
  • [178] R. S. Sutton and A. G. Barto (1998) Reinforcement Learning: An Introduction. Vol. 9, MIT Press, Cambridge, MA. External Links: Document, 1603.02199, ISBN 0262193981, ISSN 10459227 Cited by: §II-B, §II, §III-B, §III-B.
  • [179] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour (2000) Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057–1063. Cited by: §II-B.
  • [180] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §IV-F.
  • [181] M. Tan, B. Chen, R. Pang, V. Vasudevan, M. Sandler, A. Howard, and Q. V. Le (2019) Mnasnet: platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820–2828. Cited by: §IV-B.
  • [182] J. Tani, M. Ito, and Y. Sugita (2004) Self-organization of distributedly represented multiple behavior schemata in a mirror system: reviews of robot experiments using rnnpb. Neural Networks 17 (8-9), pp. 1273–1289. Cited by: §I.
  • [183] The open racing car simulator. External Links: Link Cited by: §III-C.
  • [184] C. Thorpe, M. Hebert, T. Kanade, and S. Shafer (1991) Toward autonomous driving: the CMU Navlab. II. Architecture and systems. IEEE Expert 6 (4), pp. 44–52. Cited by: §I.
  • [185] C. Thorpe, T. Jochem, and D. Pomerleau (1997) The 1997 automated highway free agent demonstration. In Intelligent Transportation System, 1997. ITSC’97., IEEE Conference on, pp. 496–501. Cited by: §IV-E.
  • [186] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, et al. (2006) Stanley: the robot that won the darpa grand challenge. Journal of field Robotics 23 (9), pp. 661–692. Cited by: §I.
  • [187] S. Thrun (2010) Toward robotic cars. Communications of the ACM 53 (4), pp. 99. External Links: Document, ISSN 00010782 Cited by: §I.
  • [188] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel (2017) Domain randomization for transferring deep neural networks from simulation to the real world. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pp. 23–30. Cited by: §IV-E.
  • [189] A. Torralba, A. A. Efros, et al. (2011) Unbiased look at dataset bias. In CVPR, Vol. 1, pp. 7. Cited by: §II-A, §IV-E.
  • [190] J. N. Tsitsiklis and B. Van Roy (1996) Feature-based methods for large scale dynamic programming. Machine Learning 22 (1-3), pp. 59–94. Cited by: §II-B.
  • [191] E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T. Darrell (2015) Towards adapting deep visuomotor representations from simulated to real environments. CoRR, abs/1511.07111. Cited by: §IV-E.
  • [192] Udacity Inc. (2018) Udacity Self-driving Car Dataset. External Links: Link Cited by: §II-C.
  • [193] C. Urmson and W. Whittaker (2008) Self-driving cars and the Urban challenge. IEEE Intelligent Systems 23 (2), pp. 66–68. External Links: Document, ISBN 1541-1672, ISSN 15411672 Cited by: §I.
  • [194] A. Vahidi and A. Eskandarian (2003) Research advances in intelligent collision avoidance and adaptive cruise control. IEEE Transactions on Intelligent Transportation Systems 4 (3), pp. 143–153. Cited by: §III-B.
  • [195] J. Van Brummelen, M. O’Brien, D. Gruyer, and H. Najjaran (2018) Autonomous vehicle perception: the technology of today and tomorrow. Transportation research part C: emerging technologies. Cited by: §I.
  • [196] H. Van Seijen, M. Fatemi, J. Romoff, R. Laroche, T. Barnes, and J. Tsang (2017) Hybrid reward architecture for reinforcement learning. In Advances in Neural Information Processing Systems, pp. 5392–5402. Cited by: §IV-C.
  • [197] P. Van Wesel and A. E. Goodloe (2017) Challenges in the verification of reinforcement learning algorithms. Technical report Technical report, NASA. Cited by: §IV-F.
  • [198] K. R. Varshney and H. Alemzadeh (2017) On the safety of machine learning: cyber-physical systems, decision sciences, and data products. Big data 5 (3), pp. 246–255. Cited by: §IV-E.
  • [199] S. M. Veres, L. Molnar, N. K. Lincoln, and C. P. Morice (2011) Autonomous vehicle control systems—a review of decision making. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 225 (2), pp. 155–195. Cited by: §I.
  • [200] B. Wang, D. Zhao, C. Li, and Y. Dai (2015) Design and implementation of an adaptive cruise control system based on supervised actor-critic learning. 2015 5th International Conference on Information Science and Technology (ICIST), pp. 243–248. External Links: Document, ISBN 978-1-4799-7489-4 Cited by: §III-B.
  • [201] D. Wang and J. Huang (2002) Adaptive neural network control for a class of uncertain nonlinear systems in pure-feedback form. Automatica 38 (8), pp. 1365–1372. Cited by: §III-B.
  • [202] D. Wang and J. Huang (2005) Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form. IEEE Transactions on Neural Networks 16 (1), pp. 195–202. Cited by: §III-B.
  • [203] D. Wang, C. Devin, Q. Cai, F. Yu, and T. Darrell (2018) Deep object centric policies for autonomous driving. arXiv preprint arXiv:1811.05432. Cited by: §III-C, TABLE III.
  • [204] J. Wang, X. Xu, D. Liu, Z. Sun, and Q. Chen (2014) Self-learning cruise control using kernel-based least squares policy iteration. IEEE Transactions on Control Systems Technology 22 (3), pp. 1078–1087. External Links: Document, ISBN 1063-6536, ISSN 10636536 Cited by: §III-B.
  • [205] P. Wang, C. Chan, and A. de La Fortelle (2018) A reinforcement learning based approach for automated lane change maneuvers. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1379–1384. Cited by: §III-A, TABLE I.
  • [206] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas (2016) Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224. Cited by: §II-B.
  • [207] C. J. Watkins and P. Dayan (1992) Q-learning. Machine learning 8 (3-4), pp. 279–292. Cited by: §II-B.
  • [208] Waymo (2019) Waymo open dataset: an autonomous driving dataset. External Links: Link Cited by: §II-C.
  • [209] C. Wilkinson, J. Lynch, and R. Bharadwaj (2013) Final report, regulatory considerations for adaptive systems. National Aeronautics and Space Administration, Langley Research Center. Cited by: §IV-F.
  • [210] R. J. Williams (1987) Reinforcement-learning connectionist systems. College of Computer Science, Northeastern University. Cited by: §II-B.
  • [211] World Health Organization (2018) Global status report on road safety 2018. External Links: Link Cited by: §I.
  • [212] B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, and K. Keutzer (2019) Fbnet: hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10734–10742. Cited by: §IV-B.
  • [213] M. Wulfmeier, D. Rao, D. Z. Wang, P. Ondruska, and I. Posner (2017) Large-scale cost function learning for path planning using deep inverse reinforcement learning. International Journal of Robotics Research 36 (10), pp. 1073–1087. External Links: Document, ISSN 17413176 Cited by: §III-C, TABLE III.
  • [214] W. Xia, H. Li, and B. Li (2016) A control strategy of autonomous vehicles based on deep reinforcement learning. In Computational Intelligence and Design (ISCID), 2016 9th International Symposium on, Vol. 2, pp. 198–201. Cited by: §III-C, TABLE III.
  • [215] X. Xiong, J. Wang, F. Zhang, and K. Li (2016) Combining deep reinforcement learning and safety based control for autonomous driving. arXiv preprint arXiv:1612.00147. Cited by: §IV-F.
  • [216] X. Xu, D. Hu, and X. Lu (2007) Kernel-based least squares policy iteration for reinforcement learning. IEEE Transactions on Neural Networks 18 (4), pp. 973–992. Cited by: §III-B.
  • [217] S. Yang, W. Wang, C. Liu, W. Deng, and J. K. Hedrick (2017) Feature analysis and selection for training an end-to-end autonomous vehicle controller using deep learning approach. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 1033–1038. Cited by: §IV-A.
  • [218] H. Yin and C. Berger (2017) When to use what data set for your self-driving car algorithm: an overview of publicly available driving datasets. In Intelligent Transportation Systems (ITSC), 2017 IEEE 20th International Conference on, pp. 1–8. Cited by: §II-C.
  • [219] G. Yu and I. K. Sethi (1995) Road-following with continuous learning. In Proceedings of the Intelligent Vehicles ’95 Symposium, Detroit, MI. Cited by: §III-A, TABLE I.
  • [220] X. Yuan, P. He, Q. Zhu, and X. Li (2019) Adversarial examples: attacks and defenses for deep learning. IEEE transactions on neural networks and learning systems. Cited by: §IV-F.
  • [221] J. Zhang and K. Cho (2016) Query-efficient imitation learning for end-to-end autonomous driving. arXiv preprint arXiv:1605.06450. Cited by: §III-C, TABLE III.
  • [222] S. Zhang, R. Benenson, M. Omran, J. Hosang, and B. Schiele (2016) How far are we from solving pedestrian detection? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1259–1267. Cited by: §I.
  • [223] T. Zhang, S. S. Ge, and C. C. Hang (2000) Adaptive neural network control for strict-feedback nonlinear systems using backstepping design. Automatica 36 (12), pp. 1835–1846. Cited by: §III-B.
  • [224] X. Zhang, M. Clark, K. Rattan, and J. Muse (2015) Controller verification in adaptive learning systems towards trusted autonomy. In Proceedings of the ACM/IEEE Sixth International Conference on Cyber-Physical Systems, pp. 31–40. Cited by: §IV-F.
  • [225] D. Zhao, Z. Hu, Z. Xia, C. Alippi, Y. Zhu, and D. Wang (2014) Full-range adaptive cruise control based on supervised adaptive dynamic programming. Neurocomputing 125 (February), pp. 57–67. External Links: Document, ISBN 09252312, ISSN 09252312 Cited by: §III-B.
  • [226] D. Zhao, B. Wang, and D. Liu (2013) A supervised Actor-Critic approach for adaptive cruise control. Soft Computing 17 (11), pp. 2089–2099. External Links: Document, ISSN 14327643 Cited by: §I, §III-B, TABLE II.
  • [227] D. Zhao, Z. Xia, and Q. Zhang (2017) Model-free optimal control based intelligent cruise control with hardware-in-the-loop demonstration [research frontier]. IEEE Computational Intelligence Magazine 12 (2), pp. 56–69. Cited by: Fig. 2, §III-B, TABLE II.
  • [228] S. Zhifei and E. M. Joo (2012) A review of inverse reinforcement learning theory and recent advances. World Congress on Computational Intelligence, pp. 1–8. External Links: Document, ISBN 9781467315098 Cited by: §III-C, §III-C.
  • [229] H. Zhu, K. Yuen, L. Mihaylova, and H. Leung (2017) Overview of environment perception for intelligent vehicles. IEEE Transactions on Intelligent Transportation Systems 18 (10), pp. 2584–2601. Cited by: §I.
  • [230] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey (2008) Maximum Entropy Inverse Reinforcement Learning. AAAI Conference on Artificial Intelligence, pp. 1433–1438. External Links: arXiv:1507.04888v2, ISBN 9781577353683, ISSN 10450823 Cited by: §III-C.