A Survey of End-to-End Driving: Architectures and Training Methods

03/13/2020 ∙ by Ardi Tampuu, et al.

Autonomous driving is of great interest to industry and academia alike. The use of machine learning approaches for autonomous driving has long been studied, but mostly in the context of perception. In this paper we take a deeper look at the so-called end-to-end approaches for autonomous driving, where the entire driving pipeline is replaced with a single neural network. We review the learning methods, input and output modalities, network architectures and evaluation schemes in the end-to-end driving literature. Interpretability and safety are discussed separately, as they remain challenging for this approach. Beyond providing a comprehensive overview of existing methods, we conclude the review with an architecture that combines the most promising elements of end-to-end autonomous driving systems.




I Introduction

Autonomy in the context of robotics is the ability of a robot to operate without human intervention or control [7]. The same definition can be applied to autonomously driving vehicles. The field of autonomous driving has long been investigated by researchers, but it has seen a surge of interest in the past decade, both from industry and academia. This surge was stimulated by the DARPA Grand and Urban Challenges of 2004, 2005 and 2007. As such, self-driving cars can be seen as the next potential milestone for the field of artificial intelligence.

On a broad level, the research on autonomous driving can be divided into two main approaches: (i) modular and (ii) end-to-end. The modular approach, also known as the mediated perception approach, is widely used by the industry and is nowadays considered the conventional approach. The modular systems stem from architectures that evolved primarily for autonomous mobile robots and that are built of self-contained but inter-connected modules such as perception, localization, planning and control [140]. As a major advantage, such pipelines are interpretable – in case of a malfunction or unexpected system behavior, one can identify the module at fault. Nevertheless, building and maintaining such a pipeline is costly and despite many man-years of work, such approaches are still far from complete autonomy.

Fig. 1: Modular and end-to-end pipelines. The modular pipeline for autonomous driving consists of many interconnected modules, while end-to-end approach treats the entire pipeline as one learnable machine learning task.

End-to-end driving, also known in the literature as the behavior reflex, is an alternative to the aforementioned modular approach and has become a growing trend in autonomous vehicle research [89, 79, 14, 28, 6, 48, 57, 142, 22]. This approach proposes to directly optimize the entire driving pipeline from processing sensory inputs to generating steering and acceleration commands as a single machine learning task. The driving model is either learned in supervised fashion via imitation learning to mimic human drivers [3], or through exploration and improvement of driving policy from scratch via reinforcement learning [116]. Usually, the architecture is simpler than the modular stack with much fewer components (see Figure 1). While conceptually appealing, this simplicity leads to problems in interpretability – with few intermediate outputs, it is difficult or even impossible to figure out why the model misbehaves.

This review focuses on the main training methods and network architectures used for end-to-end driving. Hereafter, we define end-to-end driving as a system where a neural network makes the main driving decisions, without constraining what the inputs or outputs of the network are. We begin by comparing end-to-end and modular approaches in the second section, showing the pros and cons of each. In the third section we review the main learning methods or paradigms in end-to-end driving. The fourth section lists potential input modalities used by end-to-end networks and the fifth section considers the possible outputs. The sixth section highlights common evaluation strategies and compares the metrics used. The seventh section reviews different approaches to achieving interpretability, which is commonly seen as a major weakness of neural networks and end-to-end learning. Safety and comfort, discussed in the eighth section, are prerequisites for real-world use of self-driving technology. We conclude by discussing the trade-offs provided by different training methods and architectures, and propose a candidate architecture that combines the features used in the reviewed papers.

II Comparison of modular and end-to-end approaches

Modular approaches involve a fine-grained pipeline of software modules working together to drive the vehicle. The intricate nature of the inter-dependencies between such modules is a well-established problem in the broader autonomous-robotics literature (as highlighted in works such as [137, 53]) and has led to the development of frameworks such as ROS [129]. A detailed description of one such pipeline implementation is presented in [119], which describes the winner of the 2005 DARPA Grand Challenge: a vehicle named Stanley. The Stanley software consisted of around thirty modules, arranged in six layers, namely sensor interface, perception, control, vehicle interface, user interface, and a global service layer. The same software architecture was later also employed in the 2007 DARPA Urban Challenge [77]. The modular nature of classical pipelines for autonomous driving has also been discussed in [140, 16, 68, 40, 84, 102, 142, 133].

Modularity enables engineering teams to concentrate on well-defined sub-tasks and independently make improvements across the whole stack, keeping the system operational as long as the intermediate outputs remain functional. However, designing the interconnection of these dozens of modules is an intricate task. Inputs and outputs of each module must be carefully chosen by the engineering teams to accommodate the final driving task. Similarly, the eventual driving decisions are the result of an ensemble of engineered subsystems that handle different processes based on deterministic rules [16, 68, 84, 140]. Clearly defined intermediate representations and deterministic rules make autonomous-driving systems based on modular pipelines behave predictably within their established capabilities, given the strict and known inter-dependencies between different sub-systems. Also, as a major advantage, such pipelines are interpretable. In case of a malfunction or unexpected system behavior, one can track down the initial source of error, e.g. a misdetection [12]. More generally, modularity allows one to reliably reason about how the system arrived at specific driving decisions [142, 133, 12].

However, modular systems also bear a number of disadvantages. The predefined inputs and outputs of individual sub-systems might not be optimal for the final driving task in different scenarios [142]. Different road and traffic conditions might require attending to different pieces of information obtained from the environment. As a result, it is very hard to come up with an exhaustive list of useful information that covers all driving situations. For instance, in terms of perception, the modular stack commonly compresses dynamic objects into 3D bounding boxes. Information not contained in this representation is not retrievable by the subsequent modules. As decision making depends on road and traffic context, the vast variety of driving scenarios and environments makes it incredibly hard to cover all the cases with proper contextualized solutions [34, 142]. An example is a ball suddenly rolling onto the road. In a residential area, one might expect a child to come retrieve the ball, hence slowing down is the reasonable decision. However, on highways sudden braking might lead to rear-end collisions, and hitting a ball-type object might be less risky than sharp braking. To deal with both of these situations, first the perception engineers must decide to include the "ball" object type among the detectable objects in the perception module. Secondly, engineers working on control must label "ball" with different costs depending on the context, resulting in distinct behaviors. There is a long tail of similar, rare but relevant situations: road construction, accidents ahead, mistakes by other drivers, etc. [34]. The engineering effort needed to cover all of them with adequate behaviors is immense. As a further disadvantage, some of the sub-problems solved in the modular pipeline may be unnecessarily difficult and wasteful [97, 142].
A perception module trained to detect 3D bounding boxes of all objects in the scene treats all object types and locations equally, just maximizing average precision [142]. However, it is clear that in self-driving settings, nearby moving objects are the most crucial to optimize for. Trying to detect all objects leads to longer computation times and trades off precision on relevant objects for good precision overall. Additionally, the uncertainty of detections is often lost when passing detected object boxes to succeeding modules [142].

In end-to-end driving the entire pipeline of transforming sensory inputs to driving commands is treated as a single learning task. Assuming enough expert driving data for an imitation learning model, or sufficient exploration and training for reinforcement learning, the model should learn optimal intermediate representations for the target task. The model is free to attend to any implicit sources of information, as there are no human-defined information bottlenecks. For example, in darkness the presence of an obscured car could be deduced from its headlights reflecting off other objects. Such indirect reasoning is not possible if the driving scene is reduced to object locations (as in the modular stack), but is possible if the model can learn by itself which visual patterns to attend to.

The ability to learn to extract task-specific features and build task-specific representations has led to the great success of fully neural-network-based (i.e. end-to-end) solutions in many fields. To begin with, end-to-end approaches are used to solve most computer vision tasks, such as object recognition [47], object detection [46] and semantic segmentation [23]. Neural networks have demonstrated the ability to extract abstract and long-term information from written text and solve tasks such as natural language text generation [93], machine translation [125] and question-answering [65]. End-to-end approaches have shown superhuman performance in Atari video games [76] and grandmaster-level results in highly competitive multiplayer games such as StarCraft [126] and Dota 2 [8]. End-to-end neural networks have also been the crucial component in conquering board games such as Go [108] and Chess [109]. Many of these solved tasks are in many aspects more complex than driving a car, a task that a large proportion of people successfully perform even when tired or distracted. Oftentimes a person can later recollect nothing or very little about the route, suggesting the task requires very little conscious attention and might be a simple behavior-reflex task. It is therefore reasonable to believe that in the near future an end-to-end approach will also be capable of autonomously controlling a vehicle.

The use of end-to-end optimization raises an issue of interpretability. With no intermediate outputs, it is much harder to trace the initial cause of an error as well as to explain why the model arrived at specific driving decisions [133, 15, 58, 142]. However, solutions exist to increase the interpretability of end-to-end driving models (discussed later in Section VII).

While powerful and successful, it is well known that neural networks are susceptible to adversarial attacks [118]. By making small, but carefully chosen changes to the inputs, the models can be fooled and tricked into making errors. This leads to serious safety concerns when applying neural networks to high-risk domains such as autonomous driving (addressed in Section VIII).

Despite having received less interest and investment over the years, the end-to-end approach to autonomous driving has recently shown promising results. While interpretability remains a challenge, the ease of training and the simplicity of the end-to-end models are appealing. The success of neural networks in other domains also supports continued interest in the field.

III Learning methods

Below we describe the common learning methods in end-to-end driving.

III-A Imitation learning

Imitation learning (IL), or behavior cloning, is the dominant paradigm used in end-to-end driving. Imitation learning is a supervised learning approach in which a model is trained to mimic expert behavior [3]. In the case of autonomous driving, the expert is a human driver and the mimicked behavior comprises the driving commands, e.g. steering, acceleration and braking. The model is optimized to produce the same driving actions as a human, based on the sensory input recorded while the human was driving. The simplicity of collecting large amounts of human driving data makes the IL approach work quite well for simple tasks such as lane following [89, 14]. However, more complicated and rarely occurring traffic scenarios remain challenging for this approach [29].
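The supervised nature of behavior cloning can be illustrated with a minimal sketch: a policy is fit by minimizing the mean squared error between its outputs and the expert's recorded actions. The linear model and synthetic data below are illustrative stand-ins for a convolutional network and camera frames, not any published implementation.

```python
import numpy as np

# Toy behavior cloning: fit a linear policy mapping a small feature
# vector (a stand-in for an image embedding) to a steering angle by
# minimizing mean squared error against the "expert" actions.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # synthetic "sensor" features
true_w = rng.normal(size=8)
y = X @ true_w                            # expert steering angles

w = np.zeros(8)                           # policy parameters
lr = 0.1
for _ in range(200):                      # plain gradient descent on MSE
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= lr * grad

mse = np.mean((X @ w - y) ** 2)           # imitation error on training data
```

In a real system the loss is the same, but the policy is a deep network and the optimization uses stochastic mini-batches.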

The first use of imitation learning for end-to-end control was the seminal ALVINN model by Dean Pomerleau [89]. In that work, a shallow fully-connected neural network learned to predict the steering wheel angle from camera and radar images. The NAVLAB car steered by ALVINN was able to perform lane following on public roads. Obstacle avoidance was first achieved by Muller et al. [79] with a small robot car, DAVE, navigating in a cluttered backyard. DAVE used two cameras, enabling the CNN architecture to extract distance information. More recently, NVIDIA [14] brought the end-to-end driving paradigm to the forefront by training a large-scale convolutional neural network to steer a commercial vehicle in a range of driving conditions, including both highways and smaller residential roads.

Distribution shift problem

An imitation learning model learns to mimic the expert's response to traffic situations that the expert has caused. In contrast, when the car is driven by the model, the model's own outputs influence the car's observations in the next time step. Hence, the model needs to respond to situations its own driving leads to. If the driving decisions lead to unseen situations, the model might no longer know how to behave.

Self-driving leading the car to states unseen during training is called the distribution shift problem [96, 29]: the actual observations when driving differ from the expert driving presented during training. For example, if the expert always drove near the center of the road, the model has never seen how to recover when deviating towards the side of the road.

Potential solutions to the distribution shift problem are: data augmentation, data diversification and on-policy learning. All these methods diversify the training data in some way - either by collecting or generating additional data. Diversity of training data is crucial for generalization.

III-A1 Data augmentation

Collecting a sufficiently large and diverse dataset can be challenging. Instead, one can generate additional, artificial data via data augmentation [107]. Blurring, cropping, changing image brightness and adding noise to the image are standard methods and have also been applied in the self-driving context [112]. Furthermore, original camera images can be shifted and rotated as if the car had deviated from the center of the lane (see Figure 2) [89, 14, 28]. The artificial images need to be associated with target driving commands that recover from such deviations. Such artificial deviations have been sufficient to avoid the accumulation of errors in lane-keeping. Additionally, one can place two additional cameras pointing forward-left and forward-right and associate their images with commands to turn right and left respectively [14, 78].
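The label-correction step for shifted images can be sketched as follows. The proportional recovery rule, its sign convention and its constants are illustrative assumptions for this sketch, not the exact geometry used in the cited works.

```python
# Sketch of label correction for a viewpoint-shifted training image.
# If an image is synthesized as if the car had drifted `shift_m` metres
# off the lane center (positive = to the right), the recorded steering
# label must be adjusted so the car steers back. Here: return to center
# within `recovery_s` seconds at speed `v_ms`, with a linear mapping
# from required lateral speed to steering; all constants illustrative.
def corrected_steering(orig_steering, shift_m, v_ms, recovery_s=2.0, gain=1.0):
    lateral_speed_needed = shift_m / recovery_s      # m/s back toward center
    correction = gain * lateral_speed_needed / max(v_ms, 1.0)
    return orig_steering - correction                # steer against the drift
```

The same idea associates left/right side-camera images with "steer back toward the lane center" labels.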

Fig. 2: (a) An original and (b) a synthesized image from [14]. The synthesized image looks as if the car has drifted towards the center of the road.

III-A2 Data diversification

In addition to data augmentation, it is possible to diversify the training data during collection [28, 34, 146, 29, 78, 112]. Noise can be added to the expert's driving commands at recording time. The noise forces the car off its trajectory, forcing the expert to react to the disturbance and, by doing so, provide examples of how to deal with deviations. While noise uncorrelated across timesteps might be sufficient [112], often temporally correlated noise (see Figure 3) is added [28, 34, 146, 29, 78]. While useful in simulation, such a diversification technique might be too dangerous to apply in the real world.
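Temporally correlated noise of the kind shown in Figure 3 can be generated, for instance, with an Ornstein-Uhlenbeck process, whose successive samples drift smoothly instead of jumping independently. The parameters below are illustrative choices, not values taken from the cited papers.

```python
import numpy as np

# Temporally correlated steering noise sketched as an
# Ornstein-Uhlenbeck process: x(t+dt) = x(t) - theta*x(t)*dt + sigma*sqrt(dt)*N(0,1).
# The slow mean reversion (theta) makes consecutive samples highly
# correlated, producing a gradual "push off the lane" disturbance.
def ou_noise(n_steps, theta=0.15, sigma=0.2, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

noise = ou_noise(200)
```

During recording, the signal sent to the car would be the driver's steering plus this noise, while only the driver's own signal is kept as the training label.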

Fig. 3: Data diversification in [28]. Noise is injected during data collection. Left: steering control in [rad/s]. The steering signal provided to the car (blue) is the sum of the driver’s (green) control and the noise (red). Right: driver’s point of view at three points in time (the trajectories are added for visualization). (a) the noise starts to produce a drift to the right. (b) This triggers a human reaction, sharp turn to the left. (c) Finally, the car recovers from the disturbance. The driver-provided signal is used for training.

ChauffeurNet [6] predicts future waypoints from top-down semantic images instead of camera images. The system uses synthetic trajectory perturbations (including collisions and going off the road) during training for a more robust policy (see Figure 4). Due to the semantic nature of the inputs, such perturbations are much easier to implement than with camera inputs.

Fig. 4: Trajectory Perturbation as illustrated in [6], where dots connected by a line form a trajectory. (a) Original unperturbed trajectory. The vehicle drives at the center of a lane. (b) A perturbed trajectory, obtained by shifting an agent location (red dot) away from the center of lane and then fitting a line that brings the agent back to the center (first few green dots after the red).

III-A3 On-policy learning

To overcome the model not being able to correct course when drifting away from the center of the lane, DAgger [96] proposes to alternate between the model and the expert while collecting driving data. In principle, the expert provides examples of how to solve situations the model's driving leads to. Such systems learn to recover from errors and handle situations that would never happen during human driving. Recovery annotations can be recorded offline [96] or online, by having an expert override the control when the autonomously controlled car makes mistakes [26].
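The aggregate-and-retrain structure of DAgger can be sketched on a toy 1-D lane-keeping problem, where the "expert" is a simple proportional controller. Everything below is synthetic and only illustrates the loop (learner acts, expert labels the visited states, dataset is aggregated, policy is refit), not any published implementation.

```python
import numpy as np

# Toy DAgger: state = lateral offset from lane center; the expert
# steers back proportionally; the learner is a one-parameter linear
# policy a = k * s, refit by least squares on the aggregated data.
rng = np.random.default_rng(0)
expert = lambda s: -0.5 * s                    # expert action for state s

states, actions = [], []
k = 0.0                                        # learner's initial policy
for it in range(5):                            # DAgger iterations
    s = rng.normal()                           # start of a rollout
    for _ in range(20):
        a = k * s                              # LEARNER acts -> on-policy states
        states.append(s)
        actions.append(expert(s))              # EXPERT labels the visited state
        s = s + a + 0.05 * rng.normal()        # toy dynamics with noise
    S, A = np.array(states), np.array(actions)
    k = float(S @ A / (S @ S))                 # retrain on the aggregated dataset
# After the first refit the learner already recovers the expert's gain,
# because the toy expert is itself linear in the state.
```

The key point is that labels are collected on states the learner itself visits, which is exactly what fixes the distribution shift of plain behavior cloning.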

However, keeping an expert in the loop is expensive, and the number of possible ways to fail is unlimited. SafeDAgger [144] reduced human involvement by adding a module that decides whether the self-driving model needs the expert's help at any given moment. The need for human labels can be further reduced by using automatic methods to recover from bad situations. For example, OIL (observational imitation learning) [69] uses both a set of localization-based PID (proportional-integral-derivative) controllers and a human as experts to train IL policies. Taking a step further, one can completely remove the need for human experts in DAgger by applying a conventional localization and control stack to automatically annotate the encountered states [86].

When using an algorithm as the expert that provides the true labels [86, 22], one can do imitation learning on-policy – the learning agent controls the car during data collection while continuously getting supervision from the expert algorithm. Furthermore, the learned policy can rely on a different (e.g. more cost-effective) set of sensors than the expert algorithm. With these insights, Chen et al. [22] first trained a privileged agent with access to a ground-truth road map to imitate an expert autopilot (off-policy). They then used the trained privileged agent as the expert to train, on-policy, a sensorimotor agent with only visual input. Learning on-policy from the privileged agent instead of learning off-policy from the original IL labels resulted in a drastically improved vision-only driving model. Additionally, for a model with multiple branches corresponding to different navigational commands (more details in Section IV), the target values for each possible command can be queried from the privileged agent, so all branches receive supervision at all times.

Dataset balancing

Biases in the training dataset can reduce machine learning models' test-time performance and generalization ability [121, 11]. Imitation learning for self-driving is particularly susceptible to this problem, because 1) the datasets are dominated by common behaviors such as driving straight at a stable speed [29] and 2) the inputs are high-dimensional and contain plenty of spurious correlations that can lead to causal confusion [32]. At the same time, the self-driving task contains many very rarely occurring and difficult-to-solve situations. When optimizing for average imitation accuracy, the model might trade off performance on rare hard-to-solve cases for imitating precisely the easy common cases. As a result, training on more data can actually lead to a decrease in generalization ability, as experienced in [29].

Balancing the dataset according to steering angle can help remedy the inherent biases. Balancing can be achieved via upsampling the rarely occurring angles, downsampling the common ones or by weighting the samples [20]. Hawke et al. [45] divide the steering angle space into bins and assign the sample weights to be equal to the bin width divided by the number of points in the bin. This leads to the few samples in sparse but wide bins having increased influence compared to samples from a densely populated narrow bin. Furthermore, within each bin they similarly bin and weight samples according to speed. The balancing of training data according to the two output dimensions (speed and steering) proved so effective that the authors claim further data augmentation or synthesis was not needed [45].
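The binning scheme described above can be sketched in a few lines: each sample's weight is its bin's width divided by the bin's population. The bin edges and data here are arbitrary illustrative choices.

```python
import numpy as np

# Steering-angle balancing: weight each sample by (bin width / bin count),
# so samples in sparse but wide bins gain influence relative to samples
# in densely populated narrow bins.
def balancing_weights(angles, edges):
    idx = np.digitize(angles, edges) - 1          # bin index per sample
    widths = np.diff(edges)                       # width of each bin
    counts = np.bincount(idx, minlength=len(widths))
    return widths[idx] / counts[idx]

angles = np.array([0.01, -0.02, 0.03, 0.0, 0.9])  # mostly straight, one sharp turn
edges = np.array([-1.0, -0.1, 0.1, 1.0])          # wide outer bins, narrow center bin
w = balancing_weights(angles, edges)
# The single sharp-turn sample ends up with a much larger weight
# than any of the four near-zero samples.
```

The hierarchical variant in the text would repeat the same computation over speed within each steering bin.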

Instead of balancing according to output dimensions, one can also balance the dataset according to data point difficulty. One such approach is to change the sampling frequency of data points according to the prediction error the model makes on them. This has been applied in imitation learning models [22]. Alternatively, instead of re-sampling by error, one could also use weighting by error (weight the hard-to-predict data points higher).
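Error-proportional resampling can likewise be sketched briefly: samples are drawn with probability proportional to the model's current loss on them. The per-sample losses below are made up for illustration.

```python
import numpy as np

# Difficulty-based resampling: draw training samples with probability
# proportional to the model's prediction error, so hard examples are
# revisited more often during training.
rng = np.random.default_rng(0)
errors = np.array([0.01, 0.02, 0.01, 0.5])     # made-up per-sample losses
p = errors / errors.sum()                      # sampling distribution
batch = rng.choice(len(errors), size=1000, p=p)
counts = np.bincount(batch, minlength=4)
# Sample 3 (the hard one) dominates the resampled batch.
```

Weighting by error instead of resampling would simply use `p` (unnormalized) as per-sample loss weights.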

The need for dataset balancing might also arise in case of incorporating navigational commands such as “turn left”, “turn right” and “go straight” (see Section IV-D), the latter being by far the most common in recorded datasets. Balancing the mini-batches to include the same amount of each command is a proposed solution [34].

Training instability

Another challenge of using neural networks as driving models is that the training process is unstable and is not guaranteed to converge to the same minima each time [81]. In fact, with different network initialization or with different ordering of training samples into batches, the trained self-driving model might exhibit qualitatively different driving behaviors [29]. Minor differences in model outputs are amplified at test time, when the model's actions define its future inputs, resulting in completely different behaviors. Like all other neural networks, end-to-end models are sensitive to dataset biases and may overfit [29].

III-B Reinforcement learning

While imitation learning often suffers from insufficient exposure to diverse driving situations during training, reinforcement learning (RL) is more immune to this problem. The learning happens online, so sufficient exploration during the training phase leads to encountering and learning to deal with the relevant situations. The inputs for an RL driving policy can be the same as for IL models, but there is no need to collect expert driving recordings, i.e. human-labeled data. Instead, in RL the learning signal originates from rewards, which need to be computed and recorded at each time step.

In an early work, Riedmiller et al. [95] applied reinforcement learning to learn steering on a real car based on five variables describing the state of the car. More recently, Koutnik et al. [63] learned to drive based on image inputs in the TORCS game environment [132] via neuroevolution. RL was also used to learn the difficult double-merge scenario with multiple agents [104], each trained by a policy gradient algorithm [128, 117]. Another popular RL approach, deep Q-learning, has also been used to train vision-based driving in simulation [130]. Deep deterministic policy gradients (DDPG, [71]) have also been applied to self-driving [70, 57].

Policies can be first trained with IL and then fine-tuned with RL methods [70]. In other words, one initializes the RL policy with an IL-trained model. This approach reduces the long training time of RL approaches and, as the RL-based fine-tuning happens online, also helps overcome the problem of IL models learning off-policy.

III-B1 Rewards

While in imitation learning the desired behavior is defined by the ground truth actions, in reinforcement learning the model simply aims to maximize the rewards. Therefore the choice of positively and negatively rewarded events and actions influences the eventual learned behavior very directly. The simpler the rewarding scheme, the easier it is to understand why certain (mis)behaviors emerge. However, the more complicated the rewarding scheme, the more explicitly we can define what is desirable and what is not.

The most common approach is to reward the movement speed (towards the goal; along the road) [85, 34, 70, 57]. Another option is to penalize not being near the center of the track [95, 85]. While crashes shorten the distance covered without disengagement and are implicitly avoided when maximizing discounted future reward (e.g. in [57]), one might also explicitly punish crashes by assigning a large negative reward, as was done in [74, 70, 85, 34, 104]. Additionally, punishments for overlap with sidewalks and the opposite lane [70, 34] and for abnormal steering angles [70] have been used. Multiple authors find that combining more than one of these rewards is beneficial [34, 85, 70].
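A combined reward in the spirit of these schemes might look as follows. The weights and the crash penalty are illustrative assumptions, not values taken from any cited work.

```python
# Sketch of a combined per-step reward: reward forward speed along the
# road, penalize distance from the lane center, and return a large
# negative reward on crash (all weights are illustrative).
def driving_reward(speed_ms, dist_from_center_m, crashed,
                   w_speed=1.0, w_center=0.5, crash_penalty=-100.0):
    if crashed:
        return crash_penalty
    return w_speed * speed_ms - w_center * abs(dist_from_center_m)
```

In practice, terms such as sidewalk/opposite-lane overlap or steering-smoothness penalties would be added in the same additive fashion.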

III-B2 Learning in the real world

A crucial challenge in training reinforcement learning policies lies in providing the necessary exploration without incurring damage to the vehicle or other objects. Learning in the real world is still possible by adding a safety measure that takes over control if the RL policy deviates from the road. Riedmiller et al. [95] used an analytically derived steering controller to take control of the car in case the RL policy deviated too much from the center of the track. As another option, a safety driver can be used with real cars [57].

III-B3 Learning in simulation

Alternatively, it is common to train and test the model within a simulated environment. Recently, the CARLA simulator [34] and the GTA V computer game [146] have been used for training IL and RL models. These engines simulate an urban driving environment with cars, pedestrians and other objects. Beyond creating the CARLA simulator, Dosovitskiy et al. [34] also trained and evaluated an A3C deep reinforcement learning driving policy. Despite training the RL model more extensively, the imitation learning models and a classical modular pipeline outperformed the RL-based method at evaluation [34].

III-C Transfer from simulation to real world

To avoid costly crashes and endangering humans by using real cars, one can train a model in simulation and apply it in the real world [74]. However, as the inputs from simulations and from the real world are somewhat different, generalization might suffer if no means are taken to minimize the difference in input distributions or to adapt the model. The problem of adapting a model to new, different data can be approached via supervised [105, 138] or unsupervised learning [39, 122].

Fine-tuning [105, 138] is a common supervised approach for adapting models to new data. A model trained in simulation can be re-trained using some real-world data, thereby adapting itself to the new input distribution. Labeled examples from the real world are needed for this adaptation. This approach, however, has not been commonly used in end-to-end driving.

Using an unsupervised approach, one can instead adapt the incoming data and keep the driving model fixed. This approach requires only unlabeled data from the real world, not labeled data. Using conditional generative adversarial networks (cGANs) [43, 75], real-looking images can be generated based on images from a simulator [85]. An end-to-end model can then be trained on the synthetic made-to-look-real images. The resulting driving model can be deployed in the real world or just evaluated by comparing to real-world recordings [85].

Conversely, real images can be transformed into simulation-like images via a cGAN [145]. Generating real-looking images is challenging, whereas generating simulation-like images can be easier. With this approach, driving models can be trained in simulation and used on real data by adapting the inputs. When evaluated in the real world, models trained on simulated-to-real transformed images do not need an adaptation module, whereas models trained on real-to-simulated transformed images need the domain-adaptation module. This means the former is computationally more efficient on real-world data. However, the latter is more efficient to train in simulation.

Both real and simulated images can be mapped to a common representation that is informative, but sufficiently abstract to remove unnecessary low-level details. When training a driving model using this representation as input, the behavior no longer depends on where the inputs originate from. Müller et al. [78] proposed to extract the semantic segmentation of the scene from both the real and the simulated images and use it as the input for the driving policy. This approach made it possible to transfer a policy learned in the CARLA simulator to a 1/5-scale real-world truck. The processing pipeline is demonstrated in Figure 5.

Fig. 5: In Müller et al. [78] the simulated environment image is turned into segmentation map which is in turn used by the driving policy. Both simulated and real images can be segmented. The driving policy does not care where the segmentations originated from and can drive in both environments.

Instead of specifying an arbitrary intermediate representation (e.g. semantic segmentation), end-to-end optimization can be made to learn the most useful latent representation from the data [9]. To do that, domain transfer modules learn to map images from the simulation and the real world into a shared lower-dimensional latent space (and back) via two variational autoencoders [61, 33, 72], one per domain (see Figure 6). The unsupervised transfer modules do not need corresponding pairs of simulation and real-world images for training. The driving policy is trained with IL in simulation. The same latent-to-steering policy is later applied in the real world by using the real-image-to-latent encoder. The method was tested on a real vehicle and outperformed other domain adaptation methods.

Fig. 6: Bewley et al. [9] use two variational autoencoders to learn a common latent space between simulated and real images. An image from either domain can be translated to the common latent space. The latent representation is used to generate the steering command, but can also be used to generate an image in either of the original domains. This allows learning a common representation via cyclic losses. For example, the cyclic reconstruction loss compares simulated images with images obtained by 1) encoding the simulated image to the latent space, then 2) generating a "real image" from the resulting vector, then 3) mapping the generated image again to the latent space and 4) generating an artificial simulated image from the new latent vector.

Simulations can also be useful for testing and comparing architectures, input combinations, fusion methods, etc. Hopefully, what works well on simulated data also works on real inputs. Kendall et al. [57] used simulations in the Unreal Engine to select the architecture, action space and learning hyper-parameters before training an RL model in the real world.

IV Input modalities

Visual inputs combined with intrinsic knowledge about the world and the desired route are usually sufficient for a human driver to safely navigate from point A to point B. External guidance from route planners is often also visual, though voice commands help keep attention on the road. In self-driving, however, the range of possible input modalities is wider. Different inputs can complement each other and help improve generalization and accuracy [133]. Hence, while vision alone could be sufficient to drive, end-to-end driving models often use multiple input modalities simultaneously. The most commonly used modalities are described below.

IV-A Camera vision

The monocular camera image is the most natural input modality for end-to-end driving and was used already in [89]. Indeed, humans can drive with vision in only one eye [131, 92], hence stereo vision is not a prerequisite of driving. Many models have achieved good performance with monocular vision only [57, 14, 28]. Nevertheless, other authors have found it useful to use stereo cameras, allowing CNNs to learn to implicitly extract depth information [79, 26].

To model temporal aspects, the model needs to consider the combination of multiple past frames [99, 48, 26, 146, 45].

Surround-view cameras are necessary for lane changes and for giving way at intersections; they fulfill the function of the rear-view mirrors used by human drivers [48]. Surround view led to improved imitation accuracy on turns and intersections, but no improvement was seen on highways [48].

While most self-driving datasets are sufficiently large to train end-to-end networks with millions of parameters from scratch, it is also possible to use pre-trained object-detection networks as out-of-the-box feature extractors [22, 48, 134]. Different versions of ResNet [47] trained on the ImageNet dataset are commonly used. The pre-trained layers can be fine-tuned for the task at hand [134, 50].

IV-B Semantic representations

Instead of, or in addition to, RGB images, one can also provide the model with extracted representations of the visual scene, such as semantic segmentation, depth maps, surface normals, optical flow and albedo [146, 78, 112, 45]. In simulation, these images can be obtained with perfect precision. Alternatively (and in the real world), these inputs can be generated from the original images by other, specialized networks. The specialized networks, e.g. for semantic segmentation, can be pre-trained on existing datasets and do not need to be trained simultaneously with the driving model. While the original RGB image contains all the information present in the predicted images, explicitly extracting such pre-defined representations and using them as (additional) inputs has been shown to improve model robustness [146, 112].
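As a concrete example of how a semantic representation becomes a network input, a class-index segmentation map is commonly one-hot encoded into per-class channels before being fed to convolutional layers. The sketch below is illustrative; the class count and shapes are assumptions, not taken from the cited works.

```python
import numpy as np

def segmentation_to_channels(seg, n_classes=4):
    """One-hot encode an (H x W) semantic segmentation map of class indices
    into (H x W x n_classes) input channels for a driving network."""
    return (seg[..., None] == np.arange(n_classes)).astype(np.float32)

# A tiny 2x2 map with four distinct classes (e.g. road, lane, car, other).
seg = np.array([[0, 1],
                [2, 3]])
x = segmentation_to_channels(seg)   # shape (2, 2, 4)
```

The resulting channels can then be concatenated with the RGB image or used on their own as model input.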

IV-C Vehicle state

Multiple authors have additionally provided their models with high-level measurements about the state of the vehicle, such as current speed and acceleration [28, 70, 18, 134].

The current speed is particularly useful when the model does not consider multiple consecutive frames. However, in imitation learning settings, if the model receives the current speed as input and predicts the speed for the next timestep as one of the outputs, this can lead to the inertia problem [29]. As the current and next-timestep speeds are highly correlated in the vast majority of samples, the model learns to base its speed prediction exclusively on the current speed. This makes the model reluctant to change its speed, for example to start moving again after stopping behind another car or at a traffic light [29]. The problem can be remedied by helping the model learn speed-related internal representations from images via an additional speed-prediction output (not used for controlling the car), but it has not been solved completely.
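A minimal sketch of this remedy, assuming an L1 imitation loss and a hypothetical auxiliary speed head; the names `pred_speed_from_image` and the weight `w_aux` are illustrative, not from [29].

```python
import numpy as np

def imitation_loss(pred_controls, true_controls,
                   pred_speed_from_image, measured_speed, w_aux=0.1):
    """L1 control loss plus an auxiliary speed-prediction loss.

    The auxiliary head must estimate the current speed from image features
    alone, pushing the shared layers to encode speed-relevant visual cues
    instead of merely copying the measured-speed input (a partial remedy
    for the inertia problem described above).
    """
    control_loss = np.mean(np.abs(pred_controls - true_controls))
    aux_speed_loss = np.mean(np.abs(pred_speed_from_image - measured_speed))
    return float(control_loss + w_aux * aux_speed_loss)

loss = imitation_loss(np.array([0.1, 5.0]), np.array([0.0, 5.5]),
                      pred_speed_from_image=4.8, measured_speed=5.0)
```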

IV-D Navigational inputs

Being able to follow the lane and avoid obstacles is a crucial capability of any self-driving model. However, a self-driving car becomes truly useful only once we can decide where it drives. This indispensable feature of choosing where the car takes you has also been studied in end-to-end models, and multiple approaches have been applied.

IV-D1 Navigational commands

Navigating from point A to point B can be achieved by providing an additional navigational command assuming values like "go left", "go right", "go straight" and "follow the road" [28, 99, 70, 133, 45, 22, 18]. These commands are generated by a high-level route planner. Notice that similar commands are sufficient for human drivers and are often provided by navigation tools. The prevailing method to incorporate these commands is to have a separate output head (or a branch of more than one layer) for each command and to switch between them depending on the received command. Codevilla et al. [28] demonstrated that such an approach works better than inputting the navigational command to the network as an additional categorical variable. However, this approach does not scale well with an increasing number of commands. Using navigational commands as additional inputs to intermediate layers [45, 26] is an alternative. Similarly to switching branches, this places the conditional input closer to the model output, so it more directly influences decision-making rather than scene understanding.
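The branch-switching scheme can be sketched as follows, with toy random linear heads standing in for the per-command output branches; command names and dimensions are illustrative.

```python
import numpy as np

COMMANDS = ("follow", "left", "right", "straight")
rng = np.random.default_rng(1)
# One toy linear output head (shared features -> [steering, speed]) per
# command; in practice each head is a small stack of learned layers.
heads = {c: rng.normal(size=(2, 16)) for c in COMMANDS}

def branched_policy(features, command):
    """Select the output head matching the current navigational command."""
    return heads[command] @ features

f = rng.normal(size=16)          # features from the shared perception layers
left = branched_policy(f, "left")
right = branched_policy(f, "right")
```

Because only the selected branch produces the output, the same scene features can yield different controls depending on the planner's command.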

As a generalization of navigational commands, driving models can be requested to drive in a certain manner. Similarly to issuing a command to turn left or right, a model can be ordered via additional categorical inputs to keep to the right of the lane, keep to the center of the lane, or follow another vehicle [26]. In a similar manner, switching between slow, fast, aggressive or cautious modes can be envisaged. Learning personalized driving styles was investigated in [64].

IV-D2 Route planner

"Left", "right" and "straight" navigational inputs are momentary and do not allow long-term path planning. A more informative representation of the desired route can be fed to the model in the form of visual route representations. Intuitively, navigation app screen images can be used as input [48], similar to what a human driver sees when using a route planner. Adding a route planner screen as a secondary input to end-to-end models yields higher imitation accuracy compared to models without route input. Using a raw list of desired future GPS coordinates as input was not as effective as using a route planner screen [48].

ChauffeurNet [6] operates on top-down HD maps and also provides the desired route as a binary top-down image (Figure 7), i.e. in the same frame of reference as the other inputs.

Fig. 7: Left: Route input in ChauffeurNet [6] is a binary map. Middle and right: Route input in Hecker et al. [48] is either a screen image from TomTom (middle) or a list of GPS coordinates (right).

IV-D3 Textual commands

Using text as an additional input for driving policies has also been explored [59]. The extra advice can be either goal-oriented, such as "Drive slowly", or descriptive, such as "There is a pedestrian". Textual advice can help the policy to better predict expert trajectories compared to camera-only approaches [59].


IV-E LiDAR

Another notable source of input in self-driving is LiDAR. LiDAR point clouds are insensitive to illumination conditions and can provide good distance estimates. The output of a LiDAR is a sparse cloud of unordered points, which needs to be processed to extract useful features. It is very common to preprocess these points into binary 3D occupancy grids and feed them to CNNs [142, 19, 18]. However, working with a 3D occupancy grid (e.g. via 3D convolutions) can be costly [147], because the number of voxels increases rapidly with spatial precision. There are, however, methods that reduce this processing time, for example by applying convolutions sparsely, only at locations associated with input points [135]. Instead of using 3D occupancy grids, PointNet [90, 91] allows converting the raw point cloud directly into usable features, as done in [25].

A further option is to project the point cloud onto a 2D grid in top-down view, i.e. bird's-eye view (BEV) [24, 110]. Other sources of information can also be mapped to the BEV reference frame [19, 6], which can make fusing raw inputs or extracted CNN feature maps more convenient.
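A minimal sketch of such a binary top-down occupancy projection; the grid extent and cell size are illustrative choices, not values from the cited works.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Project a LiDAR point cloud (N x 3 array; columns x, y, z in meters)
    onto a top-down binary occupancy grid of the given extent."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)  # drop out-of-range points
    grid[ix[keep], iy[keep]] = 1
    return grid

# Two nearby points fall into the same cell; the third is out of range.
pts = np.array([[10.0, 0.0, 0.2], [10.1, 0.1, 1.0], [100.0, 0.0, 0.0]])
bev = lidar_to_bev(pts)
```

Height information can be retained by producing one such slice per z-interval or by storing per-cell statistics (max height, point count) instead of a binary flag.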

A LiDAR point cloud can also be mapped to a 2D image via polar grid mapping (PGM) [112]. PGM outputs images where each column corresponds to a certain direction, with the width covering all 360 degrees, and each row corresponds to a different LiDAR beam. The pixel values reflect the distance of points from the device. Using the PGM representation of LiDAR inputs instead of BEV increased performance in [112].

There is a variety of other LiDAR point cloud processing methods not yet explored in the end-to-end context [66, 136, 147, 135, 106, 127]. More specialized reviews and comparisons of such methods exist, e.g. [38, 143].

IV-F High-definition maps

The modular approach to autonomous driving relies heavily on accurate localization for route planning. High-definition (HD) maps are relatively costly to obtain, but for driving in dense urban environments they are imperative. In contrast, HD maps are generally not required for learning end-to-end policies – a camera input alone can be sufficient.

That said, HD maps can be incorporated into end-to-end pipelines if needed. HD maps provide an immense amount of information about the driving scene and make the task of self-driving much more manageable. The usage of HD maps in end-to-end driving represents a compromise between the two extremes of driving software stacks. Such approaches are sometimes referred to in the literature as mid-to-mid [6].

In order to make use of HD maps in an end-to-end pipeline, it is common to render the traffic information onto one or several top-down images that can be processed by convolutional layers [6, 142, 19, 22]. Top-down HD maps may contain static information about roads, lanes, intersections, crossings, traffic signs and speed limits, as well as dynamically changing information about traffic lights [142, 19, 6]. In addition to information about the road, ChauffeurNet [6] uses an existing perception module to detect other agents in the driving scene and draws them onto a map (see Figure 8).

Fig. 8: ChauffeurNet's [6] inputs are top-down HD maps containing both static information about the road and the locations of dynamic objects.

Hecker et al. [50] state that while simpler approaches (e.g. camera-only) allow studying relevant problems in self-driving, fully autonomous cars require the use of detailed maps. For this reason, they augment the Drive360 dataset from [48] with 15 measures (affordances) extracted from detailed maps (see Table I).

Name and description | Range of values
Road-distance to intersection | [0 m, 250 m]
Road-distance to traffic light | [0 m, 250 m]
Road-distance to pedestrian crossing | [0 m, 250 m]
Road-distance to yield sign | [0 m, 250 m]
Legal speed limit | [0 km/h, 120 km/h]
Average driving speed based on road geometry | [0 km/h, km/h]
Curvature (inverse of radius) | [0 m, m]
Turn number: which way to turn in next intersection | [0, ]
Relative heading of the road after intersection | [-180°, 180°]
Relative heading of all other roads | [-180°, 180°]
Relative headings of map-matched GPS coordinates in {1, 5, 10, 20, 50} meters | [-180°, 180°] × 5
TABLE I: List of measures (affordances) extracted from the HERE Technologies detailed map and used as inputs in [50].

IV-G Multi-modal fusion

In multi-modal approaches, information from multiple input sources must be combined [38]. The modalities might be joined at different stages of the computation:

  • Early fusion: multiple sources are combined before feeding them into the learnable end-to-end system. For example, one can concatenate RGB and depth images channel-wise. Pre-processing of inputs might be necessary before fusing (e.g. converting to the same reference frame, rescaling to match dimensions, etc.).

  • Middle fusion: modalities are combined after some feature extraction is done on some or all of them. The feature extraction itself is most often part of the end-to-end model and is learnable. Further computations are performed on the joined (e.g. concatenated) features to reach the final output.

  • Late fusion: outputs are calculated on each input modality separately. The separate outputs are then combined in some way. A well-known late-fusion approach is ensembling, for example using Kalman filters [52] or mixture of experts [54].
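The difference between the first two stages can be illustrated with plain concatenation, the most common fusion technique; all shapes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
rgb = rng.random((64, 64, 3))    # camera image
depth = rng.random((64, 64, 1))  # depth map in the same reference frame

# Early fusion: stack modalities channel-wise before any learnable layers.
early = np.concatenate([rgb, depth], axis=-1)            # shape (64, 64, 4)

# Middle fusion: extract features per modality first, then concatenate the
# feature vectors before the decision-making layers.
img_features = rng.random(128)    # e.g. CNN features of the image
speed_features = rng.random(8)    # e.g. embedded vehicle-state measurements
middle = np.concatenate([img_features, speed_features])  # shape (136,)
```

Early fusion requires the modalities to share a reference frame and resolution, whereas middle fusion only requires compatible feature vectors, which is why vehicle-state inputs are typically middle-fused.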

In the works covered in this review, we mainly encountered early and middle fusion approaches. Early fusion is often computationally the most efficient [133]. The most common fusion technique is concatenation, which simply stacks the inputs. Concatenating the RGB color and depth channels proved to be the best-performing solution for merging RGB and depth inputs [133]. Zhou et al. [146] early-fused various semantic maps (segmentation, depth and others) with RGB images. When combining LiDAR and visual inputs, both early [142] and middle fusion [19, 25, 112] have been successfully applied. Vehicle-state measurements such as speed and acceleration are usually middle-fused with visual inputs [28, 70] (concatenation in both cases). Hecker et al. [48, 50] middle-fused visual temporal features, obtained with LSTM [51] modules from camera feeds, by concatenating them with features extracted from maps or from GPS coordinates. As an alternative fusion method, element-wise multiplication was used by Kim et al. [59] to middle-fuse textual commands with visual information.

Late fusion has proved more efficient than early fusion for incorporating the desired navigational command (go left, go right, go straight, continue) [99, 28, 133]: specifically, the navigational command is used to switch between output branches. Alternatively, Chowdhuri et al. [26] solved a similar problem (switching between behavior modes) with middle fusion.

IV-H Multiple timesteps

Certain physical characteristics of the driving scene like speed and acceleration of self and other objects are not directly observable from a single camera image. It can therefore be beneficial to consider multiple past inputs via:

  • CNN+RNN. Most commonly, CNN-based image processing layers are followed by a recurrent neural network (RNN, most often an LSTM [51]). The RNN receives the sequence of extracted image features and produces the final outputs [134, 25, 6]. Multiple RNNs can be used in parallel for performing different subtasks [6]. Multiple sources of information (e.g. LiDAR and camera [25]) can also each be processed by CNNs, concatenated and fed to the RNN together. Alternatively, spatio-temporal extraction can be done on each source separately, with the final output calculated based on concatenated RNN outputs [48, 50]. Recurrent modules can also be stacked on top of each other – Kim et al. [59] use a CNN image encoder and an LSTM-based textual encoder followed by an LSTM control module.

  • Fixed-window CNN. Alternatively, a fixed number of previous inputs can be fed into the spatial feature extractor CNN module. The resulting feature maps act as inputs for the succeeding task-specific layers [99]. For LiDAR, Zeng et al. [142] stack the ten most recent LiDAR 3D occupancy grids along the height axis and use 2D convolutions.
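The fixed-window alternative amounts to channel-wise stacking of the most recent frames; a minimal sketch, where the window size and image shape are illustrative.

```python
import numpy as np

def stack_frames(frames, k=4):
    """Concatenate the k most recent frames channel-wise so that a plain
    CNN sees a fixed temporal window (an alternative to CNN+RNN)."""
    window = frames[-k:]
    return np.concatenate(window, axis=-1)

# Ten past RGB frames; stacking the last four yields a 12-channel input.
frames = [np.zeros((32, 32, 3)) for _ in range(10)]
x = stack_frames(frames)   # shape (32, 32, 12)
```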

Driving is a dynamic task where temporal context matters. Using past information might help a very good driving model to remember the presence of objects that are momentarily obscured or to consider other drivers’ behavior and driving styles.

V Output modalities

V-A Steering and speed

The majority of end-to-end models output the steering angle and speed (or acceleration and brake commands) for the next timestep [89, 48, 58, 28, 70, 112]. Usually this is treated as a regression problem, but by binning the steering angles one can transform it into a classification task [134]. The steering wheel angle can be recorded directly from the car's CAN bus and is an easily obtained label for IL approaches. However, the function relating steering wheel angle to the resulting turning radius depends on the car's geometry, making this measure specific to the car type used for recording. In contrast, predicting the inverse of the car's turning radius [14] is independent of the car model's geometry. Conveniently, the inverse turning radius does not go to infinity on a straight road. Notice that to achieve the desired speed and steering angle output by the network, additional PID controllers are needed to convert them into acceleration/brake and steering torque commands.
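Binning a continuous steering angle into classes can be sketched as follows; the bin count and angle range are illustrative choices, not from [134].

```python
import numpy as np

def steering_to_class(angle_rad, n_bins=21, max_angle=0.5):
    """Quantize a continuous steering angle into one of n_bins classes,
    turning the regression problem into classification."""
    clipped = np.clip(angle_rad, -max_angle, max_angle)
    edges = np.linspace(-max_angle, max_angle, n_bins + 1)
    # np.digitize returns 1-based bin indices; shift and clip to [0, n_bins-1].
    return int(np.clip(np.digitize(clipped, edges) - 1, 0, n_bins - 1))
```

With 21 bins, driving straight (angle 0) falls into the center class 10, while angles beyond the range saturate into the outermost classes.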

To enforce smoother driving, Hawke et al. [45] proposed to output not only the speed and steering angle, but also the car's acceleration and the angular acceleration of steering. Beyond smoothness, the authors also report better driving performance compared to predicting only the momentary values. Temporal consistency of output commands is also enforced in Hecker et al. [50] (see Section VIII).

Many authors have recently optimized speed and steering commands using L1 loss (mean absolute error, MAE) instead of L2 loss (mean squared error, MSE) [50, 29, 133, 146, 99, 6]. MAE has been shown to correlate better with actual driving performance [27].

V-B Waypoints

A higher-level output modality is predicting future waypoints or desired trajectories. Such approaches have been investigated, for instance, in [6, 26, 142, 18, 69]. The network’s output waypoints can be transformed into low-level steering and acceleration commands by another trainable network model, as in [69], or via a controller module, for instance as in [6, 22]. Different control algorithms are available for such controller modules, a popular one being PID (proportional-integral-derivative). Such controller modules can reliably generate the low-level steering and acceleration/braking commands to reach the desired points. For smoother driving, one can fit a curve to the noisy predicted waypoints and use this curve as the desired trajectory [22].
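A minimal sketch of the PID idea mentioned above, here applied to a lateral error toward the desired waypoint trajectory; the gains and timestep are illustrative, not tuned values from any cited work.

```python
class PID:
    """Minimal PID controller, e.g. for converting the lateral offset from
    a desired waypoint trajectory into a steering command."""

    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=1.0, ki=0.1, kd=0.2)
steer = pid.step(0.3)   # 0.3 m lateral error toward the next waypoint
```

In practice one controller handles lateral control (steering) and another handles longitudinal control (throttle/brake) toward the desired speed along the trajectory.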

In contrast to predicting momentary steering and acceleration commands, by outputting a sequence of waypoints or a trajectory, the model is forced to plan ahead. A further advantage of using waypoints as output is that they are independent of car geometry. Furthermore, waypoint-based trajectories are easier to interpret and analyze than low-level network outputs such as momentary steering commands.

V-C Cost maps

In many cases, a variety of paths are equally valid and safe. It has been proposed to output cost maps, which contain information about where it is safe to drive [35, 142]. The cost maps are then used to pick a good trajectory, and a controller (e.g. MPC [37]) computes the necessary low-level commands. Zeng et al. [142] produced top-down 2D cost maps for a number of future timesteps based on LiDAR and HD maps. Human expert trajectories were used to train the cost map prediction by minimizing the cost of the human trajectory and maximizing the cost of random trajectories. Based on this cost volume, potential trajectories were evaluated (see Figure 9). Visualizing the cost maps allows humans to better understand the machine's decisions and reasoning.

Fig. 9: Cost volume across time produced by Zeng et al. [142]. The planned trajectory is shown as a red line, the ground-truth (human) trajectory as a blue line. The lowest-cost regions for a number of future timesteps are overlaid in different colors (each color represents the low-cost region of a separate timestep, as indicated by the legend). Detections of other objects in the driving scene and the corresponding motion prediction results are shown in cyan. Figure adapted from [142].
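Evaluating candidate trajectories against such a cost volume can be sketched as follows; the random toy cost volume and the hand-made candidate trajectories are purely illustrative, not the sampling scheme of [142].

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy cost volume: one (rows x cols) top-down cost map per future timestep.
cost_volume = rng.random((5, 20, 20))   # (timesteps, rows, cols)

def trajectory_cost(traj, volume):
    """Sum the cost under each (row, col) waypoint, one per timestep."""
    return float(sum(volume[t, r, c] for t, (r, c) in enumerate(traj)))

# A few candidate trajectories, one grid cell per future timestep.
candidates = [
    [(10, 5 + t) for t in range(5)],   # drift right
    [(10, 5) for t in range(5)],       # stay in place
    [(10 - t, 5) for t in range(5)],   # drive forward
]
best = min(candidates, key=lambda tr: trajectory_cost(tr, cost_volume))
```

The planner then hands the lowest-cost trajectory to the controller, while the cost maps themselves remain available for visualization.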


V-D Direct perception and affordances

Direct perception [21] approaches aim to fall between modular pipelines and end-to-end driving, combining the benefits of both. Instead of parsing all the objects in the driving scene and performing robust localization (as the modular approach does), the system focuses on a small set of crucial indicators, called affordances [21, 99, 1, 103]. For example, the car's position with respect to the lane, its position relative to the edges of the road and the distances to surrounding cars can be predicted from the inputs [21]. The affordance values can then be fed to a planning algorithm to generate low-level commands for safe and smooth driving.

Direct perception via affordances was combined with conditional driving in [99] by providing high-level navigational commands. Furthermore, the predicted affordances in [99] include speed signs, traffic lights and unexpected traffic scene agents (see Figure 10).

Fig. 10: Affordances used by Sauer et al. [99]. Top: Illustration of the affordances (red) and the observation areas used by the model. Traffic lights and speed signs are detected within these areas; if there is an obstacle in the observation area, the hazard stop label is set to True and the agent is expected to stop. The distance to the vehicle ahead is measured in meters. Bottom: List of the affordances: hazard stop, red traffic light, speed sign [km/h] and distance to vehicle [m] are unconditional; relative angle [rad] and distance to centerline [m] are conditional (dependent on directional input). The affordances can be discrete or continuous.

V-E Multitask learning

In multitask learning [124], multiple outputs are predicted and optimized simultaneously. For example, in addition to planning its own trajectory, a driving model can be asked to also detect and predict the motion of other objects in the scene [6, 142]. These additional outputs are computed by separate branches based on an intermediate layer of the network. While these tasks are only indirectly relevant to the driving task (the main task), they provide a rich source of additional information that conditions the internal representations in the shared layers. Results show that simultaneously optimizing such side-tasks is beneficial and results in a more robust model [6, 142, 134].

As an additional benefit, these auxiliary outputs can potentially help comprehend the driving decisions and failures of the end-to-end model (more details in Section VII).

VI Evaluation

It is not always cost-effective to test each model thoroughly in real life: deploying each incremental improvement to measure its effect is costly, and deploying experimental approaches in real traffic is outright dangerous. In the case of the modular approach, it is common to evaluate each module independently against benchmark datasets, such as KITTI [40] for object detection. Nevertheless, performance on sub-tasks does not directly translate into actual driving performance, and this form of evaluation is not applicable to end-to-end models, as there are no intermediate outputs to evaluate. Hence, a different set of metrics has been developed for end-to-end approaches.

The easiest way to test imitation learning models is open-loop evaluation (Figure 11, left). In such an evaluation, the decisions of the autonomous-driving model are compared with the recorded decisions of a human driver. Typically, a dataset is split into training and testing data, and the trained model is evaluated on the test set with some performance metric. The most common metrics are the mean absolute error and the mean squared error of the network outputs (e.g. steering angle, speed). A more thorough list of open-loop metrics is given in Table II. Note that there might be multiple equally correct ways to behave in each situation, and similarity to just one expert's style might not be a fair measure of driving ability. In particular, RL agents may come up with safe driving policies that are not human-like according to mean errors. Hence, open-loop evaluation is restricted to IL and is not used for models trained with RL. Despite its limitations, some end-to-end models are evaluated only in open loop [48, 142, 2, 50, 85, 134, 59].

Fig. 11: Open and closed-loop evaluation. A: Open-loop evaluation uses a part of the original dataset to evaluate the model. Similarity between predicted and ground truth values is measured. No actual driving is done. B: Closed-loop evaluation deploys the model in an environment. The model’s predictions are used as driving actions. The resulting driving behavior is observed and quantified.
Metric name
Squared error
Absolute error
Speed-weighted absolute error
Cumulative speed-weighted absolute error
Quantized classification error
Thresholded relative error

TABLE II: Open-loop evaluation metrics used in [27]. Their definitions involve the continuous ground-truth action, the predicted action, the set of samples used in validation, the Kronecker delta function, the Heaviside step function, and a quantization function.
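Three of the listed metrics can be sketched as follows; the `quantize` function here (signed direction classes) is an illustrative stand-in, as [27] defines its own quantization.

```python
import numpy as np

def open_loop_metrics(y_true, y_pred,
                      quantize=lambda a: np.sign(np.round(a, 1))):
    """Common open-loop metrics comparing predicted actions to the expert's.

    `quantize` maps continuous actions to discrete classes (here roughly
    left / straight / right); this choice is illustrative only.
    """
    err = y_pred - y_true
    return {
        "mse": float(np.mean(err ** 2)),
        "mae": float(np.mean(np.abs(err))),
        # Fraction of samples whose quantized class differs from the expert's.
        "quantized_cls_err": float(np.mean(quantize(y_true) != quantize(y_pred))),
    }

m = open_loop_metrics(np.array([0.0, 0.2, -0.3]),
                      np.array([0.0, 0.1, 0.3]))
```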

In contrast, closed-loop evaluation directly evaluates the performance of a driving model in a realistic (real or simulated) driving scenario by giving the model control over the car (Figure 11, right). Unlike in the real world, closed-loop evaluation in simulation is easy and inexpensive to perform. Furthermore, two benchmarks exist for the CARLA simulator, allowing a fair comparison of models [34, 29]. Despite the inherent danger, real-life closed-loop testing has also been reported for end-to-end models [9, 45, 57, 6, 14, 28]. The closed-loop metrics used in the literature include:

  • percentage of successful trials [34, 29, 45, 6, 70, 78, 99, 133, 146],

  • number of infractions (collisions, missed turns, going off road, etc.) [78, 112, 28],

  • average distance between infractions [34, 9, 29, 57] or disengagements [113],

  • time spent on lane markings or off-road [112],

  • percentage of autonomy (i.e. percentage of time the car is controlled by the model, not the safety driver) [14],

  • fraction of distance travelled towards the goal [27].

Clearly, all these metrics directly measure the model’s ability to drive on its own, unlike open-loop metrics.

Indeed, good open-loop performance does not necessarily lead to good driving ability in closed-loop settings. Codevilla et al. [27] performed extensive experiments (using 45 different models) to measure the correlations between different open-loop and closed-loop metrics. The open-loop metrics used are listed in Table II; the closed-loop metrics were 1) trial success rate, 2) fraction of distance traveled towards the goal, and 3) average distance between infractions. The results showed that even the best offline metrics only loosely predict closed-loop performance. Mean squared error correlates with closed-loop success rate only weakly, so mean absolute error, quantized classification error or thresholded relative error should be used instead. Beyond these three suggested open-loop measures, balanced-MAE was recently reported to correlate better with closed-loop performance than simple MAE [9]. Balanced-MAE is computed by binning the samples according to steering angle and averaging the mean absolute errors of the resulting unequal-size bins. Because most data lies in the region around steering angle 0, weighting the bins equally increases the importance of the rarely occurring larger steering angles.
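A sketch of balanced-MAE under this description; the bin edges and toy data are illustrative, not the setup of [9].

```python
import numpy as np

def balanced_mae(y_true, y_pred, bin_edges):
    """Bin samples by ground-truth steering angle and average the per-bin
    MAEs, so that rare large-angle samples weigh as much as the common
    near-zero ones."""
    errors = np.abs(y_true - y_pred)
    bins = np.digitize(y_true, bin_edges)
    per_bin = [errors[bins == b].mean() for b in np.unique(bins)]
    return float(np.mean(per_bin))

# Mostly straight driving plus one sharp turn that the model misses.
y_true = np.array([0.0, 0.01, -0.02, 0.5])
y_pred = np.array([0.0, 0.0, 0.0, 0.0])
edges = np.array([-0.1, 0.1])     # three bins: left turn, straight, right turn
b = balanced_mae(y_true, y_pred, edges)
```

Here the plain MAE is dominated by the many near-zero samples, while balanced-MAE weighs the missed turn as heavily as the straight-driving bin.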

While open-loop metrics are not sufficient to measure driving ability, when experimenting with various models, tuning hyper-parameters or deciding which loss function to use, it is useful to know that some open-loop metrics correlate better with eventual driving ability than others. For example, Bewley et al. [9] used balanced-MAE to select the best models to test in closed-loop.

Beyond just measuring the ability to drive without infractions, Hecker et al. [50] proposed to measure the human-likeness of the driving behavior using generative adversarial networks; more human-like driving is argued to be more comfortable and safer. For measuring comfort, one could also evaluate models according to longitudinal and lateral jerk, which are major causes of discomfort (as done in the Supplementary Information of [99]).

Undoubtedly, the most relevant measure of the quality of self-driving models is the ability to drive without accidents in real traffic. For comparing the safety of autonomous driving solutions, the State of California requires manufacturers to report each traffic collision involving an autonomous vehicle. These reports are publicly available at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/autonomousveh_ol316. Furthermore, manufacturers testing autonomous vehicles on public roads also submit an annual report summarizing the disengagements (interventions of the safety driver) during testing, available at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing. These reports allow comparing the performance of different self-driving technologies by miles-per-disengagement and miles-per-accident metrics, i.e. real-life performance. The shortcomings of these measures are examined in the Discussion (Section IX).

VII Interpretability

In case of failures, it is crucial to understand why the model drives the way it does, so that similar failures could be avoided in the future. While neural networks perform highly complex hierarchical computations, certain methods allow us to investigate what is happening inside the models.

VII-A Visual saliency

A whole body of research exists on how to interpret the computations that convolutional neural networks perform on visual inputs [141, 111, 5, 30]. Sensitivity analysis aims to determine the parts of an input that the model is most sensitive to. The most common approach computes the gradients of the output with respect to the input and uses their magnitude as the measure of sensitivity. While popular in other domains and not prohibitively expensive computationally, there is no notable application of gradient-based approaches in end-to-end driving.
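The gradient-based idea can be illustrated with a finite-difference approximation on a toy linear "driving model"; everything below is illustrative, and real implementations obtain the same gradients far more cheaply via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.normal(size=(1, 64))        # toy linear model: image (64-d) -> steering
model = lambda x: float(W @ x)

def sensitivity_map(x, eps=1e-4):
    """Finite-difference estimate of |d steering / d pixel| for each pixel.
    Pixels with large values influence the steering output the most."""
    base = model(x)
    grads = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        grads[i] = (model(xp) - base) / eps
    return np.abs(grads)

sal = sensitivity_map(rng.random(64))
```

For this linear toy model the saliency simply recovers the magnitude of each weight; for a CNN the map depends on the input and can be overlaid on the image.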

The VisualBackProp method [13] is a computationally efficient heuristic for determining which input pixels influence the car's driving decision the most [15]. Shifting the salient regions of the images influences the steering prediction linearly and almost as much as shifting the whole image, confirming that these regions are relevant. VisualBackProp can also be applied to other types of inputs, such as segmented images and 2D projections of LiDAR point clouds [112] (see Figure 12).

Fig. 12: Saliency for different input types as illustrated in [112]. Salient parts of inputs are colored green. (a) Original RGB camera image; (b) Attention map overlaid on the Original RGB image; (c) Ground truth semantic segmentation; (d) Attention map overlaid on the semantically segmented image; (e) Attention map overlaid on LiDAR bird view image; (f) Attention map overlaid on LiDAR data with PGM processing

While gradient-based sensitivity analysis and VisualBackProp highlight important image regions of an already trained model, visual attention is a built-in mechanism present already during learning. The model predicts a spatial mask of weights, corresponding to "where to attend", which is then used to scale the inputs. As feature maps preserve spatiality, visual attention can be applied to the feature maps of the convolutional layers [58, 59]. The attention mask for the next timestep is predicted as an additional output in the current step and can be made to depend on additional sources of information (e.g. textual commands [59]). The masks can be upsampled and displayed on the original image (see Figure 13).

Fig. 13: The model in [59] can take into consideration textual commands from the user. These commands influence which regions of the input the model pays more attention to.
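The mask-and-scale mechanism can be sketched as follows; the softmax normalization over spatial positions is one common choice, and all shapes are illustrative.

```python
import numpy as np

def apply_attention(feature_map, attention_logits):
    """Scale CNN feature maps with a predicted spatial attention mask.

    The (h x w) logits are softmax-normalized over spatial positions, so the
    mask sums to 1 and can be upsampled and overlaid on the input image.
    """
    mask = np.exp(attention_logits - attention_logits.max())
    mask = mask / mask.sum()                    # (h, w), sums to 1
    attended = feature_map * mask[..., None]    # broadcast over channels
    return attended, mask

rng = np.random.default_rng(4)
features = rng.random((8, 8, 16))               # conv feature maps (h, w, c)
attended, mask = apply_attention(features, rng.random((8, 8)))
```

In the attention-based models above, the logits are themselves a network output, so the mask is learned jointly with the driving task rather than computed post hoc.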

VII-B Intermediate representations

Interpretability is a definite advantage of the engineered intermediate representations used in modular pipelines. End-to-end models can also benefit from semantic representations as model inputs: in addition to improving generalization [146], they allow separately investigating the errors made in predicting the semantic images (segmentation, depth maps, etc.) and the errors made by the driving model built on top of them.

Direct perception approaches [99, 21, 1] predict human-understandable affordances that can then be used by hard-coded controllers to make detailed driving decisions. Similarly, waypoints are quite intuitively understandable. Outputting waypoints or affordances instead of the low-level driving commands adds interpretability and helps directly pinpoint the reason (e.g. error in a certain affordance) that lead to a questionable driving decision. Similarly, failures of driving scene understanding are clearly visible when visualizing cost maps that are the outputs of the learnable part of the pipeline in [142] and [35]. The cost maps can be transformed into actions via path planning and control methods.

Vii-C Auxiliary outputs

While one can visualize the sequence of commands or predicted routes, how the system reached these outputs is often not clear. Designing the model to simultaneously predict additional outputs (i.e. auxiliary outputs) can help comprehend the driving decisions, a benefit beyond the main goal of helping the model learn more efficient representations.

The main and side-tasks (auxiliary tasks) rely on the same intermediate representations within the network. The tasks can be optimized simultaneously, using multiple loss functions. For example, based on the same extracted visual features that are fed to the decision-making branch (main task), one can also predict ego-speed, the drivable area in the scene, and the positions and speeds of other objects [6, 142, 29]. Such auxiliary tasks can help the model learn better representations, via additional learning signals and via constraining the learned representations [6, 29, 142], but can also help understand the mistakes a model makes. A failure in an auxiliary task (e.g. object detection) might suggest that the necessary information was not present in the intermediate representations (layers) it shared with the main task. Hence, the main task also lacked access to this information and might have failed for the same reason.
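
The joint optimization of the main and auxiliary tasks can be written as a weighted sum of per-task losses over the shared representation. A minimal sketch, with MSE standing in for all task losses and freely chosen task weights:

```python
import numpy as np

def multi_task_loss(preds, targets, weights):
    """Weighted sum of per-task losses over a shared representation.
    preds/targets: dicts task_name -> array; weights: task_name -> float.
    MSE is used for every task purely for illustration."""
    losses = {t: float(np.mean((preds[t] - targets[t]) ** 2)) for t in preds}
    total = sum(weights[t] * losses[t] for t in losses)
    return total, losses
```

During training, gradients of the total loss flow back into the shared layers, so the auxiliary tasks shape the same representation used by the driving branch.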

Viii Safety and comfort

Safety and comfort of passengers are prerequisites for commercial applications of self-driving cars. However, millions of hours of driving are needed to prove that a reasonably sized fleet of cars causes accidents rarely enough [56, 82]. Also, the diversity of test driving data is almost never sufficient to ensure the safety of driving models in all conditions [88]. For example, NVIDIA DAVE-2 [14] can be made to fail simply by changing the light conditions [88]. Similarly, many more recent and more advanced models fail to generalize to combinations of new location and new weather conditions, especially in dense traffic, even in simulation [29].

It is known that neural networks can be fooled with carefully designed inputs, especially if one has access to the model parameters (white-box attacks) [118, 42]. For example, it has been shown that perception models can be fooled by placing stickers on traffic signs [36]. Also, driving assistance systems can be fooled by putting stickers on, or projecting images onto, the road [80]. There is no evident reason why end-to-end models should by default be robust to such attacks; hence, before the technology can be deployed, specific measures need to be taken to avoid this vulnerability.

The real world is expected to be even more diverse and challenging for generalization than simulation. Testing enough to achieve sufficient statistical power is hard and expensive, and also puts other traffic participants in danger [82]. Hence, simulations are still seen as the main safety-testing environment [82, 83]. In simulation, one can make the model play through many short (a few to tens of seconds) periods of critical, dangerous situations [82]. In real life, each play-through of such a situation would require the involvement of many people and cars, and resetting the scene each time.

One can avoid the need to generalize to all cases if the model can reliably detect its inability to deal with the situation and pass the driving on to a person [49] or to another algorithm. This means adding a safety module detecting situations where the self-driving model is likely to fail [49]. It remains to be determined how early the driver can be alerted and how fast the driver can react and take over control.

Ensuring the comfort of passengers is a problem distinct from safety. Comfort includes motion comfort (sickness), apparent safety and level of controllability [64, 50]. Motion comfort can be increased by reducing longitudinal and lateral jerk via an additional loss function enforcing temporal smoothness of steering and speed [50]. Note that for passenger comfort, speed and steering should depend on each other (as in [4, 44]), instead of being two independent outputs. While not mentioning comfort as a desired outcome, ChauffeurNet [6] included an additional loss function (geometry loss) to enforce smooth trajectories.
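
A temporal-smoothness loss of this kind can be approximated by penalizing the discrete second difference of the command sequence, a proxy for jerk. An illustrative sketch (the exact loss formulation in [50] may differ):

```python
import numpy as np

def smoothness_penalty(commands, dt=0.1):
    """Temporal-smoothness loss on a sequence of predicted commands
    (e.g. steering or speed): mean absolute second difference, a
    discrete proxy for jerk. dt is an assumed timestep."""
    second_diff = np.diff(commands, n=2) / dt ** 2
    return float(np.mean(np.abs(second_diff)))
```

Adding such a penalty to the imitation loss discourages abrupt changes in steering and speed, i.e. high jerk.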

Human-like driving increases safety and comfort as people both inside and outside the vehicle can better comprehend the model’s driving behavior. An adversarial network attempting to discriminate between human and machine driving was added in [50]. The adversarial loss increased the comfort as measured by the lateral and longitudinal jerk, and achieved improved accuracy over pure imitation learning.

Ix Discussion

In this section we attempt to summarize the best, most promising practices in end-to-end driving.

Ix-a Architectures

There is a huge diversity of architectures that could be used in end-to-end driving. Here we try to narrow down the space of likely useful models by discussing the most promising choices for inputs, outputs, fusion techniques and auxiliary tasks.

Driving based only on camera inputs makes the eventual deployment of the technology affordable. Measurements such as current speed and acceleration are also easy to obtain. However, using LiDAR and HD maps puts end-to-end models in the same price range as modular approaches, making them unaffordable for many. The initial cost of sensors, the cost of creating and maintaining HD maps, and the cost of repairing and insuring a car with many sensors [31] prevent the wider adoption of LiDAR- and map-based approaches. We believe that pursuing affordable self-driving based on the end-to-end approach is of more interest to car manufacturers. This is opposed to ride-hailing service providers and trucking companies, whose cost models can accommodate the increased price of the self-driving technology.

A 360-degree view around the vehicle is shown to be useful for more complicated driving maneuvers such as lane changing or giving way at intersections. Conversely, stereo vision for depth estimation is not commonly used, probably because it is useful only within close proximity (about 10 m).

Another important input modality is navigational instructions. Providing the route as a map image is the more flexible option, as it defines the intended route more precisely and over a longer time-scale. In contrast, with categorical commands the instruction "turn left" might be confusing or come too late if there are multiple roads to turn into on the left. An average human driver would also find it hard to navigate in a foreign city based only on the voice instructions of a navigation app. We hence conclude that while it is not clear how well a model can extract route information from the route planner screen image, this approach is more flexible and more promising in the longer term.

Models usually have multiple inputs. With several input sources (e.g. multiple cameras, self-speed, navigation input), one needs to merge the information in some way. Early fusion is appealing as all pieces of information can be combined from early on. One could early-fuse maps, visual inputs and LiDAR data for example by concatenating them as different channels, but for this they must be mapped to the same frame of reference and be of equal spatial size. Early-fusion seems hard to apply for inserting speed and other non-spatial measurements into the model. Hence, middle-fusion remains the default strategy that can be applied to all inputs.
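
Middle fusion can be sketched as encoding each input separately and concatenating the resulting feature vectors before the decision layers. The encoders below are placeholder linear maps with fixed random weights, only to show the data flow; in a real model they would be learned CNNs and MLPs:

```python
import numpy as np

def mid_fuse(camera_feat, speed, command_onehot):
    """Middle fusion: each input modality is encoded separately, then
    the flat feature vectors are concatenated before the decision
    layers. Encoder weights here are placeholders."""
    rng = np.random.default_rng(0)
    w_cam = rng.standard_normal((camera_feat.size, 64))  # visual encoder
    w_spd = rng.standard_normal((1, 8))                  # speed encoder
    fused = np.concatenate([
        camera_feat.ravel() @ w_cam,     # visual embedding
        np.atleast_1d(speed) @ w_spd,    # measurement embedding
        command_onehot,                  # navigation command as-is
    ])
    return fused
```

Non-spatial measurements such as speed pose no problem here, which is why middle fusion is the default strategy applicable to all inputs.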

It is beneficial to endow a model with computer-vision capabilities. Zhou et al. [146] propose training a separate set of networks to predict semantic segmentation, optical flow, depth and other human-understandable representations from the camera feed. These images can then be used as additional inputs (e.g. early-fused by stacking as channels). Alternatively, these semantic images can be used as targets for auxiliary tasks. In multi-task learning, the main branch of the model transforms images into driving outputs, while at certain layer(s) the model forks out to produce additional outputs such as segmentation maps and depth maps. In comparison to generating and using additional inputs, the approach of using auxiliary tasks has one major benefit - the specialized networks (and the branches) are not needed at execution time, making the eventual deployed system computationally faster. On the other hand, using those additional branches during evaluation allows one to reason about the mistakes the network made - if the model did not detect an object, that may be the reason it did not avoid it.

The outputs of the model define the level of understanding the model is expected to achieve. When predicting instantaneous low-level commands, we are not explicitly forcing the model to plan a long-term trajectory. The ability to plan ahead might arise in the internal representations, but it is neither guaranteed nor easily measurable. When predicting a series of future desired locations (e.g. waypoints), the model is explicitly asked to plan ahead. Outputting waypoints also increases interpretability, as they can be easily visualized. Model outputs can be noisy, so waypoints should be further smoothed by fitting a curve to them and using this curve as the desired trajectory, as done in [22]. Speed can be deduced from the distances between consecutive waypoints, but can also be an explicit additional output, as in ChauffeurNet [6]. If deemed necessary, additional constraints can be added, such as forcing human-like trajectories (see Hecker et al. [50]).
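
The waypoint post-processing described above (curve fitting plus speed from waypoint spacing) might look roughly as follows; the polynomial fit and the assumed time step between waypoints are our illustrative choices, not taken from a specific paper:

```python
import numpy as np

def trajectory_from_waypoints(waypoints, dt=0.5, degree=2):
    """Fit a polynomial to noisy predicted (x, y) waypoints and derive
    speed from distances between consecutive points. dt is the assumed
    time step between waypoints."""
    x, y = waypoints[:, 0], waypoints[:, 1]
    coeffs = np.polyfit(x, y, degree)        # smoothed path y = f(x)
    y_smooth = np.polyval(coeffs, x)
    steps = np.diff(np.stack([x, y_smooth], axis=1), axis=0)
    speeds = np.linalg.norm(steps, axis=1) / dt
    return np.stack([x, y_smooth], axis=1), speeds
```

A low-level controller can then track the smoothed trajectory at the derived speeds.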

Outputting a series of cost maps is an equally promising approach, allowing the model to plan motion many seconds ahead. The model can be made to estimate not only the instantaneous cost map, but also the cost maps for multiple future timesteps. A planner can then generate candidate paths and select the trajectory that minimizes cost over multiple future time points.
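
Trajectory selection over predicted cost maps can be sketched as scoring each candidate path by the cost it accumulates across future timesteps and keeping the cheapest one (illustrative; real planners generate continuous trajectories under kinematic constraints):

```python
import numpy as np

def pick_min_cost_trajectory(cost_maps, candidates):
    """Select the candidate path with the lowest accumulated cost.
    cost_maps: (T, H, W) predicted cost maps for T future timesteps;
    candidates: list of (T, 2) integer (row, col) arrays, giving one
    grid position per timestep."""
    def path_cost(path):
        t = np.arange(len(path))
        # Index each timestep's cost map at that timestep's position.
        return float(cost_maps[t, path[:, 0], path[:, 1]].sum())
    costs = [path_cost(p) for p in candidates]
    return int(np.argmin(costs)), costs
```

The learnable part of the pipeline produces the cost maps; the selection step itself stays interpretable and hand-designed.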

Ix-B Learning

Imitation learning is the dominant strategy for end-to-end autonomous driving. A list of existing datasets for IL is provided in Appendix B of this survey. Despite multiple large datasets being available, it is common for authors to collect their own data. Even when benchmarks exist, it is hard to disentangle whether differences in results are due to the architecture or the training data.

Online training [22] allows one to avoid the distribution shift problem commonly associated with imitation learning. The case where the supervision signal can be queried in any state of the world and for any possible navigation command is particularly promising. Indeed, such an online-trained vision-only agent performed remarkably well on the CARLA and NoCrash benchmark tasks [22]. Once trained, the vision-only model no longer needs the expert nor the detailed inputs. In the real world, however, creating the necessary expert policy (one that can be queried in any state) is complicated, as only a few companies report the required level of performance from their modular driving stacks. Advances in sim2real may help such online-trained models generalize from simulation to the real world.

Recently, [29] reported that using more training data from CARLA Town1 decreases generalization ability in Town2. This illustrates that more data without more diversity is not useful: non-diverse datapoints contain the same information over and over again and lead to overfitting. As a potential remedy, one could weight rare outputs and rare inputs (rare situations, locations, visual aspects, etc.) higher. The error the model makes on a datapoint might be a reasonable proxy for its novelty. One could sample difficult data more frequently (as in prioritized experience replay used in RL [101]) or weight difficult samples higher. This approach promises to boost learning on the long tails of both the input and output distributions.
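
Loss-proportional sampling, in the spirit of prioritized experience replay [101], can be sketched as follows (the exponent alpha and the small epsilon are conventional choices, not taken from a specific driving paper):

```python
import numpy as np

def priority_sample(per_sample_loss, n, alpha=1.0, rng=None):
    """Sample dataset indices with probability proportional to the
    per-sample loss, a proxy for datapoint novelty. alpha tempers the
    skew; alpha=0 recovers uniform sampling."""
    if rng is None:
        rng = np.random.default_rng()
    p = (np.asarray(per_sample_loss, dtype=float) + 1e-8) ** alpha
    p = p / p.sum()
    return rng.choice(len(p), size=n, p=p)
```

Refreshing the per-sample losses periodically keeps the sampler focused on whatever the model currently finds difficult.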

Augmenting the data also improves generalization ability. Common techniques such as blurring inputs, dropping out pixels, perturbing colors, adding noise and changing light conditions are known to work for standard supervised learning tasks and can therefore be expected to be beneficial for imitation learning as well. Adding temporally correlated noise to the control commands during data collection is a common method for diversifying data that improves the generalization performance of imitation learning.
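
Temporally correlated noise of this kind is often generated with an Ornstein-Uhlenbeck-style process; a sketch with purely illustrative parameters:

```python
import numpy as np

def correlated_noise(n_steps, theta=0.15, sigma=0.1, dt=0.05, rng=None):
    """Ornstein-Uhlenbeck-style temporally correlated noise that can be
    added to expert steering during data collection, so the car visits
    off-trajectory states and recovery behavior is demonstrated."""
    if rng is None:
        rng = np.random.default_rng()
    noise = np.zeros(n_steps)
    for t in range(1, n_steps):
        noise[t] = (noise[t - 1]
                    - theta * noise[t - 1] * dt            # mean reversion
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return noise
```

Because consecutive values are strongly correlated, the perturbation drifts the car gradually rather than jittering it, and the expert's corrective commands are recorded as training data.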

Ix-C Evaluation

Off-policy imitation learning builds models by maximizing open-loop performance, while the model is actually deployed in a closed-loop setting. Some open-loop metrics correlate better with closed-loop performance than others and should therefore be preferred during training. For example, MAE has been shown to be advantageous over MSE in this sense [27]. Furthermore, Bewley et al. [9] reported Balanced-MAE correlating even more strongly with driving ability, which suggests that balancing the training set is also important for closed-loop performance.
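
Balanced-MAE can be understood as binning samples by the ground-truth command, computing MAE within each bin, and averaging the per-bin errors so that rare commands weigh as much as frequent ones. A sketch (the binning scheme is our assumption, not necessarily that of [9]):

```python
import numpy as np

def balanced_mae(pred, target, n_bins=9):
    """MAE averaged over equal-width bins of the ground-truth command
    (e.g. steering angle), so that rare large angles count as much as
    common near-zero ones."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    edges = np.linspace(target.min(), target.max() + 1e-8, n_bins + 1)
    bin_idx = np.digitize(target, edges) - 1
    maes = [np.abs(pred[bin_idx == b] - target[bin_idx == b]).mean()
            for b in range(n_bins) if np.any(bin_idx == b)]
    return float(np.mean(maes))
```

On a dataset dominated by straight driving, plain MAE is dominated by near-zero steering, whereas Balanced-MAE surfaces errors on the rare sharp turns.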

One of the goals of evaluation is to give a fair estimation of the generalization ability of the model, i.e. the expected performance in new situations. Since the CARLA benchmark was released and adopted, multiple works have shown in simulation that model performance does not degrade drastically in a new city and unseen weather conditions [29, 22]. However, the generalization ability drops sharply when increasing traffic density.

While for CARLA-based models two benchmarks exist, models trained and tested in other simulators or on real world data have no clear comparison baselines. So an author can show high performance by choosing to test in simple settings. Therefore, readers should always pay close attention to the testing conditions of models.

The problem of comparing performance metrics obtained at different difficulty levels also applies to the safety measures collected by the State of California (https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing): miles per disengagement or miles per accident do not reveal where and in which conditions these miles were driven. Furthermore, there is no common agreement on what constitutes a disengagement, and different companies might apply different thresholds. There is a need for a universal real-life evaluation procedure that would make a wide variety of different approaches comparable.

To allow comparison between different end-to-end models, future models should be evaluated in closed-loop:

  • In diverse locations. Locations not used during training, if possible.

  • In diverse weather and light conditions. Conditions not used during training, if possible.

  • In diverse traffic situations, with low, regular or dense traffic.

If the model is trained in the CARLA simulator, one should report its performance on the CARLA and NoCrash benchmarks.

Ix-D Candidate architecture

Based on the most promising approaches among the end-to-end models reviewed in this survey, we propose a candidate architecture to visualize what a modern end-to-end model looks like. This architecture has not been implemented and should be taken as an illustration that can help readers grasp the structure of end-to-end models more intuitively. The candidate architecture is given in Figure 14.

Fig. 14: Candidate architecture summarizing the reviewed papers. Top: the network receives input from multiple sensors. Inputs that are costly to use in the real world are marked optional. Navigational conditioning can be achieved via a behavioral command or a route planner screen image. Behavior represents a categorical command, e.g. "turn left", "change lane right" or "drive carefully". The route planner screen image fulfills a similar role, but also contains extra information about the surrounding road network and buildings. Center: the inputs are processed with CNNs or MLPs and fused via concatenation (+). An RNN module extracts temporal information from a sequence of these fused representations, and a fully-connected module calculates the final driving outputs. Bottom right: the final outputs are usually either actuation values, waypoints or cost maps. In the case of conditioning with a behavioral command, the network has multiple sets of output nodes, one set per categorical command. Which set of output nodes is used is determined by switching according to the command. Bottom left: jointly optimizing the main task and the auxiliary tasks shapes the internal representation of the camera CNN to include more semantic information. Additional auxiliary tasks can be added, for example predicting speed based on visual inputs or 3D object detection based on LiDAR.
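
The command-conditioned switching described in the caption can be sketched as one output head per categorical command, with the command selecting which head's output is used (the command set and the linear heads below are illustrative):

```python
import numpy as np

COMMANDS = ["follow", "left", "right", "straight"]  # illustrative set

def branched_output(shared_features, branch_weights, command):
    """Conditional branching: one output head per navigation command;
    the command acts as a switch selecting which head is evaluated.
    branch_weights: dict command -> (F, n_outputs) weight matrix."""
    return shared_features @ branch_weights[command]
```

During training, only the branch matching the recorded command receives gradients; at test time the planner's command picks the branch.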

Ix-E Future challenges

To drive safely in the real world, an autonomous driving model needs to smoothly handle the long tail of rare traffic situations. Commonly, the collected datasets contain large amounts of repeated traffic situations and only a few of those rare events. Naive training on the whole dataset results in weak performance in the rare situations, because the model optimizes for average performance, to which the frequent situations contribute more. To put more emphasis on atypical situations during optimization, it has been proposed to balance the dataset according to the output distribution, i.e. making the rarely occurring driving commands more influential. However, this solves only part of the problem, because situations with atypical inputs but common outputs also exist, e.g. driving straight at a steady speed in rare weather conditions. Furthermore, it is in fact the joint distribution of inputs and outputs that defines the rarity of a data point. It is therefore crucial to develop data sampling techniques that help the model learn to deal with rare data points in a more general fashion.

At the same time the ability to deal with unusual situations is very hard to reliably test in the real world. The huge number of possible rare events means that billions of miles need to be driven before one can statistically claim an autonomous driving model is safe [82]. Such black-box testing, i.e. testing the entire driving pipeline together, is the only evaluation option for end-to-end models, which cannot be divided into separately verifiable sub-modules. Evaluation by deploying the entire model is also unavoidable for advanced modular approaches, where the interconnections between modules are complex and small errors may be amplified and cause unexpected behavior. Hence, the problem of black-box testing is universally relevant for all autonomous driving models and has been discussed intensively over the years [56, 62, 120, 83, 82]. There is a need for testing methods that would have safety guarantees for the real world.

Testing in simulation can be seen as one solution to the high costs of real-world testing. However, the ability to pass simulated scenarios does not directly translate into safe driving in the real world. Further advances in making simulations more realistic, or the development of better domain adaptation techniques, are needed. One particularly challenging area is modeling the behavior of other traffic participants, which is needed for life-like simulations.

Appendix A Recent contributions in end-to-end driving

Two simulation benchmarks recur below. The CARLA benchmark tests the model on routes of varying difficulty, with and without dynamic objects, measuring the percentage of successful routes and kilometers per infraction; there are two towns and 6 weather conditions, of which one town and two weather conditions are novel, i.e. excluded from the training set. The NoCrash benchmark tests the model on routes in 3 traffic densities, 6 weather conditions and two towns, counting the percentage of successful episodes; again, one town and two weather conditions are novel.

  • Bewley et al. 2019 [9]. Setting: simulated-to-real transfer (custom simulator). Inputs: single frontal camera. Outputs: steering. Losses: 1) image reconstruction, 2) cyclic reconstruction, 3) control, 4) cyclic control, 5) adversarial, 6) perceptual, 7) latent reconstruction. Evaluation: OPEN: MAE, Balanced-MAE; CLOSED: distance to intervention. Data: 60K frames; town/rural; clear/rain/overcast.

  • Codevilla et al. 2019 [29]. Setting: CARLA. Inputs: single frontal camera, ego-speed, navigation command. Outputs: steering, acceleration. Loss: MAE; AUXILIARY: speed from vision. Evaluation: CLOSED: CARLA and NoCrash benchmarks. Data: train 100 h, test 80 h; two towns, diverse weather and traffic density.

  • Chen et al. 2019 [22]. Setting: CARLA. Inputs: single frontal camera. Outputs: waypoints (in the camera reference frame). Loss: MAE on trajectories (comparing with a privileged agent). Evaluation: CLOSED: CARLA and NoCrash benchmarks. Data: train 154K frames = 4 h; two towns, diverse weather.

  • Hawke et al. 2019 [45]. Setting: real. Inputs: 1 or 3 cameras, 2 timesteps (only for flow), navigation command. Outputs: steering (value and slope), speed (value and slope). Loss: future-discounted MSE on predicted values and slopes vs. observed future values. Evaluation: OPEN: Balanced-MAE for model selection; CLOSED: success % of turns and of stopping behind a pace car, collision rate, traffic violation rate, meters per intervention (overall, in lane following, in pace-car following). Data: train 30 h, test 26 routes; one city, 6 months, different times of day.

  • Hecker et al. 2019 [50]. Setting: real. Inputs: single frontal camera, TomTom screen, features from HD maps. Outputs: steering, speed. Losses: MAE on speed and steering; MAE on the second derivative of speed and steering over time; adversarial (log loss) on humanness of the command sequence. Evaluation: OPEN: MAE. Data: train 60 h, test 10 h; city + countryside.

  • Kendall et al. 2019 [57]. Setting: real. Inputs: single frontal camera. Outputs: steering, speed. Reward: distance travelled without the driver taking over. Evaluation: CLOSED: meters per disengagement. Data: 250 m rural road.

  • Xiao et al. 2019 [133]. Setting: CARLA. Inputs: single frontal camera, depth image (true or estimated). Outputs: steering, throttle, brake. Loss: MAE. Evaluation: CLOSED: CARLA benchmark. Data: train 25 h; two towns, diverse weather.

  • Zhou et al. 2019 [146]. Setting: GTA V. Inputs: single or multiple cameras + true or predicted semantic & instance segmentation, monocular depth, surface normals, optical flow, albedo. Outputs: steering. Losses: MAE for steering; if predicting: MAE for depth, normals, flow and albedo, CE for segmentation and boundary prediction. Evaluation: CLOSED: % of successful episodes, % success weighted by track length. Data: train 100K frames = 3.5 h; urban + off-road trail.

  • Zeng et al. 2019 [142]. Setting: real. Inputs: single frontal camera, LiDAR, ego-speed. Outputs: space-time cost volume; AUXILIARY: ego-speed, 3D object locations and future trajectories. Losses: planning loss (cost maps), perception loss (object detection). Evaluation: OPEN: MAE and MSE loss of trajectory at different time horizons, collision & traffic violation rate. Data: 1.4M frames, 6500 scenarios; multiple cities.

  • Amini et al. 2018 [2]. Setting: real. Inputs: 3 cameras, unrouted map, (optional) routed map. Outputs: 1) unrouted: weight, mean and variance of 3 Gaussian models (GMs); 2) routed: deterministic steering. Losses: unrouted: negative log-likelihood of human steering according to the Gaussian mixture model, L1 penalty on the norm of the mixture weights vector, quadratic penalty on the log of the variance of the GMs; routed: steering MSE. Evaluation: OPEN: z-score of human steering. Data: train 25 km, test: separate 1 km; suburban with turns, intersections, roundabouts, dynamic obstacles.

  • Bansal et al. 2018 [6]. Setting: real (test: real + simulation). Inputs: 7 top-down semantic maps (roadmap, traffic lights, speed limit, route, current agent box, dynamic boxes, past agent poses). Outputs: waypoints, headings, speeds, self position; AUXILIARY: road mask, perception boxes. Losses: 1) waypoint (CE), 2) agent box (CE), 3) direction (MAE), 4) p_subpixel (MAE), 5) speed (MAE), 6) collision, 7) on-road, 8) geometry; AUXILIARY: objects loss & road loss. Evaluation: OPEN: MSE on waypoints; CLOSED: success % at stop signs, traffic lights, lane following, navigating around a parked car, recovering from perturbations, slowing down behind a slow car. Data: 26M examples = 60 days; no information on diversity.

  • Codevilla et al. 2018 [28]. Setting: CARLA + real toy truck. Inputs: single frontal camera, ego-speed, navigation command. Outputs: steering, acceleration. Loss: MSE. Evaluation: CLOSED (CARLA): CARLA benchmark; CLOSED (real): % missed turns, # interventions, time spent. Data: simulation: train 2 h, two towns, diverse weather; real: train 2 h, not diverse.

  • Hecker et al. 2018. Setting: real. Inputs: 4 cameras, route (GPS coordinates or TomTom map). Outputs: steering, speed. Loss: MSE. Evaluation: OPEN: MSE. Data: 60 h; multiple cities, diverse conditions.

  • Liang et al. 2018. Setting: CARLA. Inputs: single frontal camera, navigation command. Outputs: steering, acceleration, brake. Losses: trained in two phases; IL phase: MSE; RL phase: rewards for speed (+), penalties for abnormal steering angle, collisions and overlap with the sidewalk or the other lane (-). Evaluation: CLOSED: CARLA benchmark. Data: IL 14 h + RL 12 h; two towns, diverse weather.

  • Müller et al. 2018 [78]. Setting: simulated-to-real transfer (simulation: CARLA; real: toy trucks). Inputs: single frontal camera, navigation command. Outputs: two waypoints (fixed distance, predicted angle). Loss: MSE. Evaluation: CLOSED: % of successful episodes; in real: time spent, missed turns, infractions. Data: train 28 h (in clear daytime weather); test 2x25 trials in cloudy-after-rain weather, two towns; real: diverse situations, weather diversity unclear.

  • Sauer et al. 2018 [99]. Setting: CARLA. Inputs: single frontal camera. Outputs: 6 affordances: hazard stop (boolean), red traffic light (boolean), speed sign (categorical), relative angle (rad), distances to vehicle (m) and centerline (m). Losses: 3 x CE, 3 x MAE. Evaluation: CLOSED: CARLA benchmark, mean distance (km) between various types of infractions; in SI: distance to centerline, jerk. Data: no information on amount; two towns, diverse weather.

  • Sobh et al. 2018 [112]. Setting: CARLA. Inputs: single frontal camera as RGB or as segmentation, LiDAR in BEV or PGM, navigation command. Outputs: steering, throttle. Loss: MSE. Evaluation: CLOSED: time spent off-road, time spent on lane markings, number of crashes. Data: train 136K samples, test 20 min; weather not diverse (not specified).

  • Dosovitskiy et al. 2017 [34]. Setting: CARLA. Inputs: single frontal camera, navigation command. Outputs: for the IL model, specified as "action"; for the RL model: no information. Loss: not specified. Evaluation: CLOSED: CARLA benchmark, distance between infractions. Data: IL model: 14 h of driving data; RL model: 12 days of driving.

  • Bojarski et al. 2016 [14]. Setting: real. Inputs: single frontal camera. Outputs: steering. Loss: MSE. Evaluation: CLOSED: autonomy (% of driving time when the car was controlled by the model, not the safety driver). Data: train: length not specified; day, night, multiple towns, diverse conditions; test: 3 h = 100 km.

TABLE III: Recent contributions in end-to-end driving. The included articles are mainly selected to either test the model in real life or perform closed-loop testing in simulation.

Appendix B Datasets

Dataset | Cameras | Size | License
Udacity [123] | 3 | 5 h | MIT
CARLA [34] | 1 | 12 h | MIT
Drive360 [48] | 8 | 55 h | Academic
Comma.ai 2016 [98] | 1 | 7 h 15 min | CC BY-NC-SA 3.0
Comma.ai 2019 [100] | 1 | 30 h | MIT
DeepDrive [139] | 1 | 1100 h | Berkeley
DeepDrive-X [60] | 1 | 77 h | Berkeley
Oxford RobotCar [73] | 4 | 214 h | CC BY-NC-SA 4.0
HDD [94] | 3 | 104 h | Academic
Brain4Cars [55] | 1 | 1180 miles | Academic
Li-Vi [25, 67] | 1 | 10 h | Academic
DDD17 [10] | 1 (event camera) | 12 h | CC BY-NC-SA 4.0
A2D2 [41] | 6 | 390k frames | CC BY-ND 4.0
nuScenes [17] | 6 | 5.5 h | Non-commercial
Waymo [115] | 5 | 5.5 h | Non-commercial
H3D [87] | 3 | N/A | Academic
HAD [59] | 3 | 30 h | Academic
TABLE IV: List of useful datasets for training end-to-end self-driving models.


The authors would like to thank Hannes Liik for fruitful discussions.

Ardi Tampuu, Maksym Semikin, Dmytro Fishman and Tambet Matiisen were funded by European Social Fund via Smart Specialization project with Bolt. Naveed Muhammad has been funded by European Social Fund via IT Academy programme.


  • [1] M. Al-Qizwini, I. Barjasteh, H. Al-Qassab, and H. Radha (2017) Deep learning algorithm for autonomous driving using googlenet. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 89–96. Cited by: §V-D, §VII-B.
  • [2] A. Amini, G. Rosman, S. Karaman, and D. Rus (2019) Variational end-to-end navigation and localization. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8958–8964. Cited by: TABLE III, §VI.
  • [3] B. D. Argall, S. Chernova, M. Veloso, and B. Browning (2009) A survey of robot learning from demonstration. Robotics and autonomous systems 57 (5), pp. 469–483. Cited by: §I, §III-A.
  • [4] R. Attia, R. Orjuela, and M. Basset (2014) Combined longitudinal and lateral control for automated vehicle guidance. Vehicle System Dynamics 52 (2), pp. 261–279. Cited by: §VIII.
  • [5] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, and W. Samek (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10 (7), pp. e0130140. Cited by: §VII-A.
  • [6] M. Bansal, A. Krizhevsky, and A. Ogale (2018) Chauffeurnet: learning to drive by imitating the best and synthesizing the worst. arXiv preprint arXiv:1812.03079. Cited by: TABLE III, §I, Fig. 4, §III-A2, Fig. 7, Fig. 8, 1st item, §IV-D2, §IV-E, §IV-F, §IV-F, §V-A, §V-B, §V-E, 1st item, §VI, §VII-C, §VIII, §IX-A.
  • [7] G. A. Bekey (2005) Autonomous robots: from biological inspiration to implementation and control (intelligent robotics and autonomous agents). The MIT Press. External Links: ISBN 0262025787 Cited by: §I.
  • [8] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Debiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, et al. (2019) Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680. Cited by: §II.
  • [9] A. Bewley, J. Rigley, Y. Liu, J. Hawke, R. Shen, V. Lam, and A. Kendall (2019) Learning to drive from simulation without real world labels. In 2019 International Conference on Robotics and Automation (ICRA), pp. 4818–4824. Cited by: TABLE III, Fig. 6, §III-C, 3rd item, §VI, §VI, §VI, §IX-C.
  • [10] J. Binas, D. Neil, S. Liu, and T. Delbruck (2017) DDD17: end-to-end davis driving dataset. arXiv preprint arXiv:1711.01458. Cited by: TABLE IV.
  • [11] R. Binns (2017) Fairness in machine learning: lessons from political philosophy. arXiv preprint arXiv:1712.03586. Cited by: §III-A.
  • [12] National Transportation Safety Board (2018) Preliminary report highway: HWY18MH010. Cited by: §II.
  • [13] M. Bojarski, A. Choromanska, K. Choromanski, B. Firner, L. J. Ackel, U. Muller, P. Yeres, and K. Zieba (2018) Visualbackprop: efficient visualization of cnns for autonomous driving. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §VII-A.
  • [14] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. (2016) End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316. Cited by: TABLE III, §I, Fig. 2, §III-A1, §III-A, §III-A, §IV-A, §V-A, 5th item, §VI, §VIII.
  • [15] M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller (2017) Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv preprint arXiv:1704.07911. Cited by: §II, §VII-A.
  • [16] M. Buehler, K. Iagnemma, and S. Singh (2009) The darpa urban challenge: autonomous vehicles in city traffic. Vol. 56, Springer. Cited by: §II, §II.
  • [17] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom (2019) Nuscenes: a multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027. Cited by: TABLE IV.
  • [18] L. Caltagirone, M. Bellone, L. Svensson, and M. Wahde (2017) LIDAR-based driving path generation using fully convolutional neural networks. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1–6. Cited by: §IV-C, §IV-D1, §IV-E, §V-B.
  • [19] S. Casas, W. Luo, and R. Urtasun (2018) Intentnet: learning to predict intention from raw sensor data. In Conference on Robot Learning, pp. 947–956. Cited by: §IV-E, §IV-E, §IV-F, §IV-G.
  • [20] N. V. Chawla (2009) Data mining for imbalanced datasets: an overview. In Data mining and knowledge discovery handbook, pp. 875–886. Cited by: §III-A.
  • [21] C. Chen, A. Seff, A. Kornhauser, and J. Xiao (2015) Deepdriving: learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2722–2730. Cited by: §V-D, §VII-B.
  • [22] D. Chen, B. Zhou, and V. Koltun (2019) Learning by cheating. Technical report. Cited by: TABLE III, §I, §III-A3, §III-A, §IV-A, §IV-D1, §IV-F, §V-B, §IX-A, §IX-B, §IX-C.
  • [23] L. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pp. 801–818. Cited by: §II.
  • [24] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia (2017) Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1907–1915. Cited by: §IV-E.
  • [25] Y. Chen, J. Wang, J. Li, C. Lu, Z. Luo, H. Xue, and C. Wang (2018) Lidar-video driving dataset: learning driving policies effectively. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5870–5878. Cited by: TABLE IV, 1st item, §IV-E, §IV-G.
  • [26] S. Chowdhuri, T. Pankaj, and K. Zipser (2019) MultiNet: multi-modal multi-task learning for autonomous driving. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1496–1504. Cited by: §III-A3, §IV-A, §IV-A, §IV-D1, §IV-D1, §IV-G, §V-B.
  • [27] F. Codevilla, A. M. López, V. Koltun, and A. Dosovitskiy (2018) On offline evaluation of vision-based driving models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 236–251. Cited by: §V-A, 6th item, TABLE II, §VI, §IX-C.
  • [28] F. Codevilla, M. Miiller, A. López, V. Koltun, and A. Dosovitskiy (2018) End-to-end driving via conditional imitation learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–9. Cited by: TABLE III, §I, Fig. 3, §III-A1, §III-A2, §IV-A, §IV-C, §IV-D1, §IV-G, §IV-G, §V-A, 2nd item, §VI.
  • [29] F. Codevilla, E. Santana, A. M. López, and A. Gaidon (2019) Exploring the limitations of behavior cloning for autonomous driving. arXiv preprint arXiv:1904.08980. Cited by: TABLE III, §III-A2, §III-A, §III-A, §III-A, §III-A, §IV-C, §V-A, 1st item, 3rd item, §VI, §VII-C, §VIII, §IX-B, §IX-C.
  • [30] P. Dabkowski and Y. Gal (2017) Real time image saliency for black box classifiers. In Advances in Neural Information Processing Systems, pp. 6967–6976. Cited by: §VII-A.
  • [31] A. Davies (2020) New safety gizmos are making car insurance more expensive. Note: https://www.wired.com/story/safety-gizmos-making-car-insurance-more-expensive/ Cited by: §IX-A.
  • [32] P. de Haan, D. Jayaraman, and S. Levine (2019) Causal confusion in imitation learning. In Advances in Neural Information Processing Systems, pp. 11693–11704. Cited by: §III-A.
  • [33] D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. In Proceedings of the International Conference on Learning Representations (ICLR). Cited by: §III-C.
  • [34] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun (2017) CARLA: an open urban driving simulator. arXiv preprint arXiv:1711.03938. Cited by: TABLE III, TABLE IV, §II, §III-A2, §III-A, §III-B1, §III-B3, 1st item, 3rd item, §VI.
  • [35] P. Drews, G. Williams, B. Goldfain, E. A. Theodorou, and J. M. Rehg (2017) Aggressive deep driving: model predictive control with a cnn cost model. arXiv preprint arXiv:1707.05303. Cited by: §V-C, §VII-B.
  • [36] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song (2018) Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625–1634. Cited by: §VIII.
  • [37] P. Falcone, F. Borrelli, J. Asgari, H. E. Tseng, and D. Hrovat (2007) Predictive active steering control for autonomous vehicle systems. IEEE Transactions on Control Systems Technology 15 (3), pp. 566–580. Cited by: §V-C.
  • [38] D. Feng, C. Haase-Schuetz, L. Rosenbaum, H. Hertlein, F. Duffhauss, C. Glaeser, W. Wiesbeck, and K. Dietmayer (2019) Deep multi-modal object detection and semantic segmentation for autonomous driving: datasets, methods, and challenges. arXiv preprint arXiv:1902.07830. Cited by: §IV-E, §IV-G.
  • [39] Y. Ganin and V. Lempitsky (2014) Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495. Cited by: §III-C.
  • [40] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: §II, §VI.
  • [41] J. Geyer, Y. Kassahun, M. Mahmudi, X. Ricou, R. Durgesh, A. S. Chung, L. Hauswald, V. H. Pham, M. Mühlegg, S. Dorn, et al. (2019) A2D2: aev autonomous driving dataset. Note: https://www.audi-electronics-venture.com/aev/web/en/driving-dataset.html Cited by: TABLE IV.
  • [42] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §VIII.
  • [43] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §III-C.
  • [44] L. Han, H. Yashiro, H. T. N. Nejad, Q. H. Do, and S. Mita (2010) Bezier curve based path planning for autonomous vehicle in urban environment. In 2010 IEEE Intelligent Vehicles Symposium, pp. 1036–1042. Cited by: §VIII.
  • [45] J. Hawke, R. Shen, C. Gurau, S. Sharma, D. Reda, N. Nikolov, P. Mazur, S. Micklethwaite, N. Griffiths, A. Shah, et al. (2019) Urban driving with conditional imitation learning. arXiv preprint arXiv:1912.00177. Cited by: TABLE III, §III-A, §IV-A, §IV-B, §IV-D1, §V-A, 1st item, §VI.
  • [46] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969. Cited by: §II.
  • [47] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §II, §IV-A.
  • [48] S. Hecker, D. Dai, and L. Van Gool (2018) End-to-end learning of driving models with surround-view cameras and route planners. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 435–453. Cited by: TABLE IV, §I, Fig. 7, 1st item, §IV-A, §IV-A, §IV-A, §IV-D2, §IV-F, §IV-G, §V-A, §VI.
  • [49] S. Hecker, D. Dai, and L. Van Gool (2018) Failure prediction for autonomous driving. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1792–1799. Cited by: §VIII.
  • [50] S. Hecker, D. Dai, and L. Van Gool (2019) Learning accurate, comfortable and human-like driving. arXiv preprint arXiv:1903.10995. Cited by: TABLE III, 1st item, §IV-A, §IV-F, §IV-G, TABLE I, §V-A, §V-A, §VI, §VI, §VIII, §VIII, §IX-A.
  • [51] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: 1st item, §IV-G.
  • [52] P. L. Houtekamer and H. L. Mitchell (1998) Data assimilation using an ensemble kalman filter technique. Monthly Weather Review 126 (3), pp. 796–811. Cited by: 3rd item.
  • [53] K. J., V. S., H. J., and S. P. (2010) High level software architecture for autonomous mobile robot. Recent Advances in Mechatronics, pp. 185–190. Cited by: §II.
  • [54] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, G. E. Hinton, et al. (1991) Adaptive mixtures of local experts.. Neural computation 3 (1), pp. 79–87. Cited by: 3rd item.
  • [55] A. Jain, H. S. Koppula, B. Raghavan, S. Soh, and A. Saxena (2015) Car that knows before you do: anticipating maneuvers via learning temporal driving models. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3182–3190. Cited by: TABLE IV.
  • [56] N. Kalra and S. M. Paddock (2016) Driving to safety: how many miles of driving would it take to demonstrate autonomous vehicle reliability?. Transportation Research Part A: Policy and Practice 94, pp. 182–193. Cited by: §VIII, §IX-E.
  • [57] A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J. Allen, V. Lam, A. Bewley, and A. Shah (2019) Learning to drive in a day. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8248–8254. Cited by: TABLE III, §I, §III-B1, §III-B2, §III-B, §III-C, §IV-A, 3rd item, §VI.
  • [58] J. Kim and J. Canny (2017) Interpretable learning for self-driving cars by visualizing causal attention. In Proceedings of the IEEE international conference on computer vision, pp. 2942–2950. Cited by: §II, §V-A, §VII-A.
  • [59] J. Kim, T. Misu, Y. Chen, A. Tawari, and J. Canny (2019) Grounding human-to-vehicle advice for self-driving vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10591–10599. Cited by: TABLE IV, 1st item, §IV-D3, §IV-G, §VI, Fig. 13, §VII-A.
  • [60] J. Kim, A. Rohrbach, T. Darrell, J. Canny, and Z. Akata (2018) Textual explanations for self-driving vehicles. In Proceedings of the European conference on computer vision (ECCV), pp. 563–578. Cited by: TABLE IV.
  • [61] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §III-C.
  • [62] P. Koopman and M. Wagner (2016) Challenges in autonomous vehicle testing and validation. SAE International Journal of Transportation Safety 4 (1), pp. 15–24. Cited by: §IX-E.
  • [63] J. Koutník, J. Schmidhuber, and F. Gomez (2014) Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 541–548. Cited by: §III-B.
  • [64] M. Kuderer, S. Gulati, and W. Burgard (2015) Learning driving styles for autonomous vehicles from demonstration. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 2641–2646. Cited by: §IV-D1, §VIII.
  • [65] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut (2019) Albert: a lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Cited by: §II.
  • [66] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12697–12705. Cited by: §IV-E.
  • [67] Large-scale driving behavior dataset (Accessed: 2020-02-25). Note: http://www.dbehavior.net/index.html Cited by: TABLE IV.
  • [68] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt, et al. (2011) Towards fully autonomous driving: systems and algorithms. In 2011 IEEE Intelligent Vehicles Symposium (IV), pp. 163–168. Cited by: §II, §II.
  • [69] G. Li, M. Mueller, V. Casser, N. Smith, D. L. Michels, and B. Ghanem (2018) OIL: observational imitation learning. arXiv preprint arXiv:1803.01129. Cited by: §III-A3, §V-B.
  • [70] X. Liang, T. Wang, L. Yang, and E. Xing (2018) Cirl: controllable imitative reinforcement learning for vision-based self-driving. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 584–599. Cited by: §III-B1, §III-B, §III-B, §IV-C, §IV-D1, §IV-G, §V-A, 1st item.
  • [71] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Cited by: §III-B.
  • [72] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In Advances in neural information processing systems, pp. 700–708. Cited by: §III-C.
  • [73] W. Maddern, G. Pascoe, C. Linegar, and P. Newman (2017) 1 year, 1000 km: the oxford robotcar dataset. The International Journal of Robotics Research 36 (1), pp. 3–15. Cited by: TABLE IV.
  • [74] J. Michels, A. Saxena, and A. Y. Ng (2005) High speed obstacle avoidance using monocular vision and reinforcement learning. In Proceedings of the 22nd international conference on Machine learning, pp. 593–600. Cited by: §III-B1, §III-C.
  • [75] M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §III-C.
  • [76] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529. Cited by: §II.
  • [77] M. Montemerlo, J. Becker, S. Bhat, H. Dahlkamp, D. Dolgov, S. Ettinger, D. Haehnel, T. Hilden, G. Hoffmann, B. Huhnke, D. Johnston, S. Klumpp, D. Langer, A. Levandowski, J. Levinson, J. Marcil, D. Orenstein, J. Paefgen, I. Penny, A. Petrovskaya, M. Pflueger, G. Stanek, D. Stavens, A. Vogt, and S. Thrun (2008-09) Junior: the stanford entry in the urban challenge. Journal of Field Robotics 25 (9), pp. 569–597. External Links: ISSN 1556-4959, Link, Document Cited by: §II.
  • [78] M. Müller, A. Dosovitskiy, B. Ghanem, and V. Koltun (2018) Driving policy transfer via modularity and abstraction. arXiv preprint arXiv:1804.09364. Cited by: TABLE III, Fig. 5, §III-A1, §III-A2, §III-C, §IV-B, 1st item, 2nd item.
  • [79] U. Muller, J. Ben, E. Cosatto, B. Flepp, and Y. L. Cun (2006) Off-road obstacle avoidance through end-to-end learning. In Advances in neural information processing systems, pp. 739–746. Cited by: §I, §III-A, §IV-A.
  • [80] B. Nassi, D. Nassi, R. Ben-Netanel, Y. Mirsky, O. Drokin, and Y. Elovici (2020) Phantom of the adas: phantom attacks on driver-assistance systems. Note: https://eprint.iacr.org/2020/085.pdf Cited by: §VIII.
  • [81] B. Neal, S. Mittal, A. Baratin, V. Tantia, M. Scicluna, S. Lacoste-Julien, and I. Mitliagkas (2018) A modern take on the bias-variance tradeoff in neural networks. arXiv preprint arXiv:1810.08591. Cited by: §III-A.
  • [82] J. Norden, M. O’Kelly, and A. Sinha (2019) Efficient black-box assessment of autonomous vehicle safety. arXiv preprint arXiv:1912.03618. Cited by: §VIII, §VIII, §IX-E.
  • [83] M. O’Kelly, A. Sinha, H. Namkoong, R. Tedrake, and J. C. Duchi (2018) Scalable end-to-end autonomous vehicle testing via rare-event simulation. In Advances in Neural Information Processing Systems, pp. 9827–9838. Cited by: §VIII, §IX-E.
  • [84] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli (2016) A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on intelligent vehicles 1 (1), pp. 33–55. Cited by: §II, §II.
  • [85] X. Pan, Y. You, Z. Wang, and C. Lu (2017) Virtual to real reinforcement learning for autonomous driving. arXiv preprint arXiv:1704.03952. Cited by: §III-B1, §III-C, §VI.
  • [86] Y. Pan, C. Cheng, K. Saigol, K. Lee, X. Yan, E. Theodorou, and B. Boots (2018) Agile autonomous driving using end-to-end deep imitation learning. In Robotics: science and systems, Cited by: §III-A3, §III-A3.
  • [87] A. Patil, S. Malla, H. Gang, and Y. Chen (2019) The h3d dataset for full-surround 3d multi-object detection and tracking in crowded urban scenes. In 2019 International Conference on Robotics and Automation (ICRA), pp. 9552–9557. Cited by: TABLE IV.
  • [88] K. Pei, Y. Cao, J. Yang, and S. Jana (2017) Deepxplore: automated whitebox testing of deep learning systems. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 1–18. Cited by: §VIII.
  • [89] D. A. Pomerleau (1989) Alvinn: an autonomous land vehicle in a neural network. In Advances in neural information processing systems, pp. 305–313. Cited by: §I, §III-A1, §III-A, §III-A, §IV-A, §V-A.
  • [90] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660. Cited by: §IV-E.
  • [91] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) Pointnet++: deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pp. 5099–5108. Cited by: §IV-E.
  • [92] L. Racette and E. J. Casson (2005) The impact of visual field loss on driving performance: evidence from on-road driving assessments. Optometry and vision science 82 (8), pp. 668–674. Cited by: §IV-A.
  • [93] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI Blog 1 (8), pp. 9. Cited by: §II.
  • [94] V. Ramanishka, Y. Chen, T. Misu, and K. Saenko (2018) Toward driving scene understanding: a dataset for learning driver behavior and causal reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7699–7707. Cited by: TABLE IV.
  • [95] M. Riedmiller, M. Montemerlo, and H. Dahlkamp (2007) Learning to drive a real car in 20 minutes. In 2007 Frontiers in the Convergence of Bioscience and Information Technologies, pp. 645–650. Cited by: §III-B1, §III-B2, §III-B.
  • [96] S. Ross, G. Gordon, and D. Bagnell (2011) A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627–635. Cited by: §III-A3, §III-A.
  • [97] A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani (2017) Deep reinforcement learning framework for autonomous driving. Electronic Imaging 2017 (19), pp. 70–76. Cited by: §II.
  • [98] E. Santana and G. Hotz (2016) Learning a driving simulator. arXiv preprint arXiv:1608.01230. Cited by: TABLE IV.
  • [99] A. Sauer, N. Savinov, and A. Geiger (2018) Conditional affordance learning for driving in urban environments. arXiv preprint arXiv:1806.06498. Cited by: TABLE III, 2nd item, §IV-A, §IV-D1, §IV-G, Fig. 10, §V-A, §V-D, §V-D, 1st item, §VI, §VII-B.
  • [100] H. Schafer, E. Santana, A. Haden, and R. Biasini (2018) A commute in data: the comma2k19 dataset. arXiv preprint arXiv:1812.05752. Cited by: TABLE IV.
  • [101] T. Schaul, J. Quan, I. Antonoglou, and D. Silver (2015) Prioritized experience replay. arXiv preprint arXiv:1511.05952. Cited by: §IX-B.
  • [102] W. Schwarting, J. Alonso-Mora, and D. Rus (2018) Planning and decision-making for autonomous vehicles. Annual Review of Control, Robotics, and Autonomous Systems. Cited by: §II.
  • [103] A. Seff and J. Xiao (2016) Learning from maps: visual common sense for autonomous driving. arXiv preprint arXiv:1611.08583. Cited by: §V-D.
  • [104] S. Shalev-Shwartz, S. Shammah, and A. Shashua (2016) Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295. Cited by: §III-B1, §III-B.
  • [105] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson (2014) CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 806–813. Cited by: §III-C, §III-C.
  • [106] S. Shi, X. Wang, and H. Li (2019) Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–779. Cited by: §IV-E.
  • [107] C. Shorten and T. M. Khoshgoftaar (2019) A survey on image data augmentation for deep learning. Journal of Big Data 6 (1), pp. 60. Cited by: §III-A1.
  • [108] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of go with deep neural networks and tree search. Nature 529 (7587), pp. 484. Cited by: §II.
  • [109] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. (2017) Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815. Cited by: §II.
  • [110] M. Simon, S. Milz, K. Amende, and H. Gross (2018) Complex-yolo: an euler-region-proposal for real-time 3d object detection on point clouds. In European Conference on Computer Vision, pp. 197–209. Cited by: §IV-E.
  • [111] K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Cited by: §VII-A.
  • [112] I. Sobh, L. Amin, S. Abdelkarim, K. Elmadawy, M. Saeed, O. Abdeltawab, M. Gamal, and A. El Sallab (2018) End-to-end multi-modal sensors fusion system for urban automated driving. Cited by: TABLE III, §III-A1, §III-A2, §IV-B, §IV-E, §IV-G, §V-A, 2nd item, 4th item, Fig. 12, §VII-A.
  • [113] State of California Department of Motor Vehicles (2019) 2019 autonomous vehicle disengagement reports. Note: https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disengagement_report_20192020 Cited by: 3rd item.
  • [114] T. M. Strat (1992) Natural object recognition. In Natural Object Recognition, pp. 47–48. Cited by: §IX-A.
  • [115] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, et al. (2019) Scalability in perception for autonomous driving: waymo open dataset. arXiv preprint arXiv:1912.04838. Cited by: TABLE IV.
  • [116] R. S. Sutton, A. G. Barto, et al. (1998) Introduction to reinforcement learning. Vol. 2, MIT Press, Cambridge. Cited by: §I.
  • [117] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour (2000) Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057–1063. Cited by: §III-B.
  • [118] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §II, §VIII.
  • [119] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney (2006-09) Stanley: the robot that won the darpa grand challenge: research articles. Journal of Field Robotics 23 (9), pp. 661–692. External Links: ISSN 0741-2223, Link, Document Cited by: §II.
  • [120] Y. Tian, K. Pei, S. Jana, and B. Ray (2018) Deeptest: automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th international conference on software engineering, pp. 303–314. Cited by: §IX-E.
  • [121] A. Torralba, A. A. Efros, et al. (2011) Unbiased look at dataset bias.. In CVPR, Vol. 1, pp. 7. Cited by: §III-A.
  • [122] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176. Cited by: §III-C.
  • [123] (2017) Udacity: public driving dataset. Note: https://github.com/udacity/self-driving-car/tree/master/datasets Cited by: TABLE IV.
  • [124] V. Vapnik and A. Vashist (2009) A new learning paradigm: learning using privileged information. Neural networks 22 (5-6), pp. 544–557. Cited by: §V-E.
  • [125] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §II.
  • [126] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, et al. (2019) Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature 575 (7782), pp. 350–354. Cited by: §II.
  • [127] W. Wang, R. Yu, Q. Huang, and U. Neumann (2018) Sgpn: similarity group proposal network for 3d point cloud instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2569–2578. Cited by: §IV-E.
  • [128] R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8 (3-4), pp. 229–256. Cited by: §III-B.
  • [129] Robot Operating System. External Links: Link Cited by: §II.
  • [130] P. Wolf, C. Hubschneider, M. Weber, A. Bauer, J. Härtl, F. Dürr, and J. M. Zöllner (2017) Learning how to drive in a real world simulation with deep q-networks. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 244–250. Cited by: §III-B.
  • [131] J. M. Wood (2002) Aging, driving and vision. Clinical and experimental optometry 85 (4), pp. 214–220. Cited by: §IV-A.
  • [132] B. Wymann, E. Espié, C. Guionneau, C. Dimitrakakis, R. Coulom, and A. Sumner (2000) TORCS, the open racing car simulator. Software available at http://torcs.sourceforge.net 4 (6). Cited by: §III-B.
  • [133] Y. Xiao, F. Codevilla, A. Gurram, O. Urfalioglu, and A. M. López (2019) Multimodal end-to-end autonomous driving. arXiv preprint arXiv:1906.03199. Cited by: TABLE III, §II, §II, §II, §IV-D1, §IV-G, §IV-G, §IV, §V-A, 1st item.
  • [134] H. Xu, Y. Gao, F. Yu, and T. Darrell (2017) End-to-end learning of driving models from large-scale video datasets. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2174–2182. Cited by: 1st item, §IV-A, §IV-C, §V-A, §V-E, §VI.
  • [135] Y. Yan, Y. Mao, and B. Li (2018) Second: sparsely embedded convolutional detection. Sensors 18 (10), pp. 3337. Cited by: §IV-E, §IV-E.
  • [136] B. Yang, W. Luo, and R. Urtasun (2018) Pixor: real-time 3d object detection from point clouds. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 7652–7660. Cited by: §IV-E.
  • [137] S. Yang, X. Mao, S. Yang, Z. Liu, G. Chen, S. Wang, J. Xue, and Z. Xu (2017) Towards a robust software architecture for autonomous robot software. In International Workshop on Computer Science and Engineering, pp. 1197–1207. Cited by: §II.
  • [138] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks?. In Advances in neural information processing systems, pp. 3320–3328. Cited by: §III-C, §III-C.
  • [139] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell (2018) Bdd100k: a diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687. Cited by: TABLE IV.
  • [140] E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda (2019) A survey of autonomous driving: common practices and emerging technologies. arXiv preprint arXiv:1906.05113. Cited by: §I, §II, §II.
  • [141] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818–833. Cited by: §VII-A.
  • [142] W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, and R. Urtasun (2019) End-to-end interpretable neural motion planner. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8660–8669. Cited by: TABLE III, §I, §II, §II, §II, §II, 2nd item, §IV-E, §IV-F, §IV-G, Fig. 9, §V-B, §V-C, §V-E, §VI, §VII-B, §VII-C.
  • [143] F. Zhang, C. Guan, J. Fang, S. Bai, R. Yang, P. Torr, and V. Prisacariu (2020) Instance segmentation of lidar point clouds. Note: http://www.feihuzhang.com/ICRA2020.pdf Cited by: §IV-E.
  • [144] J. Zhang and K. Cho (2016) Query-efficient imitation learning for end-to-end autonomous driving. arXiv preprint arXiv:1605.06450. Cited by: §III-A3.
  • [145] J. Zhang, L. Tai, P. Yun, Y. Xiong, M. Liu, J. Boedecker, and W. Burgard (2019) Vr-goggles for robots: real-to-sim domain adaptation for visual control. IEEE Robotics and Automation Letters 4 (2), pp. 1148–1155. Cited by: §III-C.
  • [146] B. Zhou, P. Krähenbühl, and V. Koltun (2019) Does computer vision matter for action?. arXiv preprint arXiv:1905.12887. Cited by: TABLE III, §III-A2, §III-B3, §IV-A, §IV-B, §IV-G, §V-A, 1st item, §VII-B, §IX-A.
  • [147] Y. Zhou and O. Tuzel (2018) Voxelnet: end-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4490–4499. Cited by: §IV-E, §IV-E.