
Is it Safe to Drive? An Overview of Factors, Challenges, and Datasets for Driveability Assessment in Autonomous Driving

by Junyao Guo, et al.

With recent advances in learning algorithms and hardware development, autonomous cars have shown promise when operating in structured environments under good driving conditions. However, in complex, cluttered and unseen environments with high uncertainty, autonomous driving systems still frequently demonstrate erroneous or unexpected behaviors that could lead to catastrophic outcomes. Autonomous vehicles should ideally adapt to driving conditions; while this can be achieved through multiple routes, it would be beneficial, as a first step, to be able to characterize Driveability in some quantified form. To this end, this paper aims to create a framework for investigating the different factors that can impact driveability. Moreover, one of the main mechanisms for adapting autonomous driving systems to any driving condition is the ability to learn and generalize from representative scenarios. The machine learning algorithms that currently do so learn predominantly in a supervised manner and consequently need sufficient data for robust and efficient learning. Therefore, we also perform a comparative overview of 45 public driving datasets that enable such learning, and publish this dataset index online. Specifically, we categorize the datasets according to use cases, and highlight the datasets that capture complicated and hazardous driving conditions, which can be better used for training robust driving models. Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need more investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that enhance the robustness of autonomous cars in adverse environments.





I Introduction

Despite being tested in highly controlled settings, autonomous cars still occasionally fail to make correct decisions, often with catastrophic results (this applies to autonomous vehicles in general). Several recent accidents have been reported [1] due to failures of the autonomous capability of these cars. According to the accident records, failures are most likely to happen in complex or unseen driving environments. The fact remains that while autonomous cars can operate well in controlled or structured environments such as highways, they are still far from reliable when operating in cluttered, unstructured or unseen environments [2].

To adapt autonomous driving systems to all types of driving conditions, it would be beneficial to first characterize the Driveability of a scene (we adopt the definition of a scene proposed in [3], i.e., “A scene describes a snapshot of the environment including the scenery and dynamic elements, as well as all actors’ and observers’ self-representations, and the relationships among those entities.”). This can then lead to addressing two core issues of autonomous driving and advanced driver-assistance systems (ADAS): 1) policy learning for driver control hand-off; and 2) incorporation of driveability into the autonomous vehicle’s decision making, planning and testing. These two application fields also suggest that driveability could be quantified in different forms, either as a single metric or as a composition of metrics. For example, with ADAS and current Level 2 or 3 autonomy, a scene can simply be defined as driveable if the car can operate safely in autonomous mode. When a non-driveable scene is detected, the autonomous car can hand over control to the human driver in a timely manner [4]. However, in the long term, where Level 4 or 5 autonomy is targeted, it is not possible to hand over control to the driver. This restriction means that a richer representation of driveability is needed: one that is informative enough for the car to take proactive measures to prevent catastrophic failure. For instance, an autonomous car can selectively use sensors in scenes with high driveability, while requesting further support from cloud computing for detailed analysis of less-driveable scenes [5]. In scenes with particularly low driveability, contingency plans might need to be executed, such as decelerating or even coming to a full stop. The driveability information could also be useful when processed offline.
For example, it can be used to build a safety map that not only guides road users to plan alternate routes that are safer, but also identifies unsafe areas that need repair [6]. Such information can also be used by insurance companies to quantify operating risk [7, 8].
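The hand-off and contingency logic discussed above can be sketched as a simple decision rule. The threshold values, the `Action` names, and the assumption of a driveability score in [0, 1] are all illustrative, not quantities from the cited works.

```python
from enum import Enum, auto

class Action(Enum):
    CONTINUE_AUTONOMOUS = auto()
    REQUEST_HANDOVER = auto()   # Level 2/3: return control to the human driver
    DECELERATE = auto()         # Level 4/5: degrade gracefully
    FULL_STOP = auto()

def plan_response(driveability: float, driver_available: bool,
                  handover_threshold: float = 0.6,
                  contingency_threshold: float = 0.3) -> Action:
    """Map a driveability score in [0, 1] to a high-level response.

    Thresholds are illustrative placeholders, not values from the paper.
    """
    if driveability >= handover_threshold:
        return Action.CONTINUE_AUTONOMOUS
    if driver_available:
        return Action.REQUEST_HANDOVER
    if driveability >= contingency_threshold:
        return Action.DECELERATE
    return Action.FULL_STOP
```

With no driver available (the Level 4/5 case), the same low score that would trigger a hand-off instead triggers a contingency maneuver.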

The concept of driveability is not new. It has been proposed for measuring road conditions [9] and modeling human driver performance [10]. However, there exists no unified concept of driveability in the domain of autonomous driving. Related usages of driveability include “driveability map” which is a map divided into cells categorized as driveable or non-driveable for motion planning [11, 12, 13, 14], and “object driveability” which determines if an object can be driven over without causing damage to the vehicle [15]. The concept that is most similar to that considered in this paper is “scene driveability” proposed in two recent studies [4, 16]. Scene driveability measures how easy a scene is for an autonomous car to make full decisions on steering angles and accelerations or to perform a specific task such as lane changing. However, none of these studies reason about what makes a scene driveable or non-driveable at a high level in the context of autonomous driving.

While it may not be difficult for human drivers to tell whether an environment is safe, it is far from obvious for an autonomous car, because driveability is affected by a variety of factors. Therefore, we first provide a taxonomy of these factors, including both environmental conditions and road user behaviors, and review representative approaches that handle the risks originating from each factor. Then, established performance metrics for driveability assessment are summarized. The development of both robust methods and novel metrics depends, in part, on having access to large-scale, naturalistic and diverse driving datasets. This will not only aid the understanding of driveability conditions and the development of novel quantification metrics, but will also assist subsequent validation and verification, as well as the incorporation of driveability aspects into autonomous driving policy. Therefore, as a major contribution of this paper, an exhaustive and comparative study of publicly available datasets for autonomous driving research is presented. To provide practical guidance on when to use which dataset, we categorize the datasets according to autonomous driving tasks. More importantly, we highlight the datasets that capture low-driveability scenes, on which models can be trained to better handle traffic hazards and risks.

By reviewing existing literature and datasets from a driveability perspective, this paper accomplishes the following:

  • Identifies both environmental and behavioral factors that contribute to driveability of the scene and sheds light on the limitations of existing approaches in handling low-driveability scenarios.

  • Reviews established metrics for driveability assessment, identifies their limitations and motivates the need for principled driveability metrics to enhance the reliability of autonomous driving systems.

  • Provides a practical reference for the autonomous driving research community of 45 publicly available driving datasets, highlighting large-scale datasets that are suitable for driveability assessment of challenging scenarios.

The rest of the paper is organized as follows: Section II presents the factors contributing to driveability, along with related studies and challenges. Section III introduces existing metrics used for driveability assessment and discusses their limitations. Section IV carries out the study of existing public driving datasets. Sections V and VI propose approaches that enable learning when data is scarce. Finally, Section VII concludes the paper.

II Driveability Factors

The driveability of a scene is greatly affected by environmental conditions such as weather, traffic flow, road condition and obstacles, which are explicit factors that can be directly perceived from the environment. However, environmental factors alone are not sufficient for driveability assessment. At times, potential risks can only be identified if the intent and interactions of road users are understood [17, 18]; this is implicit information that needs to be inferred from observation. Therefore, in this section, we present both the explicit and implicit factors that contribute to driveability and summarize the most relevant elements associated with each factor, as illustrated in Fig. 1. Note that Fig. 1 only includes elements that are likely to lower driveability, which are generally more difficult to handle than the good or controlled driving conditions assumed in most autonomous driving studies (e.g., heavy rain, snow and fog are included in the “weather” factor, as opposed to sunny conditions). Furthermore, these elements are less studied and not well understood by the research community, and therefore require more investigation.

Handling each factor presents its own research challenge and existing approaches are still far from delivering robust solutions that incorporate all factors. To maintain focus over a large research landscape, we will point to surveys on generic studies of these factors and only detail representative works that focus on scenarios with high complexity, dynamics and uncertainty.

Fig. 1: Illustration of factors contributing to driveability assessment and their associated hazardous scenarios. Exemplary figures are adapted from [19, 20, 21, 22, 23, 16, 24, 25, 26, 27, 28, 29, 30, 31].

II-A Explicit Factors

II-A1 Weather/Visibility

Extreme weather such as fog, heavy rain and snow can significantly impair road visibility and sensor performance. Deep neural network (DNN) models are known to behave erroneously under adverse weather conditions [32]. Some studies [33, 34] propose methods that are robust to all weather conditions. However, it is not well understood how various weather conditions affect the reliability of autonomous cars.

II-A2 Illumination

Variations in illumination caused by time of day (dusk/dawn/night), landscape (shade) and light sources pose different challenges for environment perception. [35, 36, 37] propose various illumination-invariant transforms to improve the robustness of visual perception of autonomous cars under different directions and intensities of the light sources. Nighttime is usually handled separately, where most studies focus on nighttime vehicle detection [38, 39] and rely on vehicle lights as detectable features. However, nighttime pedestrian detection is much harder, and additional thermal or far-infrared data is often needed for accurate detection [40, 41].
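As a toy illustration of the idea behind illumination-invariant transforms, the sketch below maps an RGB pixel to log-chromaticity coordinates: a uniform brightness change scales all three channels by the same factor, which cancels in the channel ratios. This is a generic member of the transform family surveyed in [35, 36, 37], not the specific method of any cited paper.

```python
import math

def log_chromaticity(r: float, g: float, b: float, eps: float = 1e-6):
    """Map one RGB pixel to 2-D log-chromaticity coordinates.

    Multiplying all channels by the same factor k (a uniform brightness
    change) cancels in the ratios r/g and b/g, so the output is invariant
    to such changes. A generic sketch, not the method of [35], [36] or [37].
    """
    r, g, b = r + eps, g + eps, b + eps  # avoid log(0) on dark pixels
    return (math.log(r / g), math.log(b / g))
```

Doubling or halving the exposure of a pixel leaves its log-chromaticity coordinates (almost exactly) unchanged.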

II-A3 Road Geometry

Intersections and roundabouts are more difficult to drive through than straight highway roads. As pointed out in [42], a significant number of accidents involve intersections. Consequently, intersections are the most studied road geometry; [43] provides an overview of recent studies on road user behaviors at intersections. Another notable work is [44], which uses different combinations of visual cues such as scene flow, semantic labels and vanishing points for intersection scene understanding, including object detection, driveable-area estimation, street orientation and lane detection. Compared to intersections, roundabouts are even more challenging given the stringent time constraints on yielding and merging maneuvers. [30] reviews related studies and proposes an action planning method to enable an autonomous car to merge into a roundabout. While there are many studies focusing on intersections and roundabouts, almost all consider urban scenarios. In contrast, for rural and unmapped areas, the road geometry and its impact on autonomous driving are not well understood.

II-A4 Road Condition

Potentially hazardous road conditions include road damage, uneven surfaces and rough surfaces. [25] introduces a large-scale road damage dataset and implements a convolutional neural network (CNN)-based method to detect eight categories of road damage. Terrain roughness is also important for vehicle control because rough terrain induces shock, especially at high speed. Survey [45] reviews methods that estimate terrain traversability for unmanned ground vehicles, which could potentially be extended to autonomous cars.

II-A5 Road Construction

Road construction can alter multiple aspects of the road, including its geometry and traffic signs, as well as the driving conditions. Workers are usually present around a construction site and also need particular attention. Therefore, the boundary of a construction zone needs to be accurately identified, and potential hazards within it need to be specifically handled. To this end, [20] focuses on identifying work-zone boundaries and driving-condition changes on highways.

II-A6 Lane Marking

Lane markings provide an important reference for driving; broken or missing lane markings and irregular lane shapes cause difficulty in lane following or lane changing. While lane and road detection is an active research area, with a good survey [22] reviewing recent progress, very few studies focus on the robustness of these methods. A CNN-based method is proposed in [46] to recognize damaged arrow-road markings that is robust to perspective distortion and partial occlusion, whereas [47] and [48] focus on unpaved roads with no markings and on stochastic lane shapes, respectively.

II-A7 Traffic Condition

In general, highway and urban scenarios require specialized learning models due to their different traffic characteristics. Particular attention needs to be paid to situations with extremely high speed limits, heavy flows and potential accidents. However, it is not well understood whether or how an autonomous car’s behavior is affected by traffic conditions, or when an accident is likely to happen. To this end, [27] presents a near-miss accident dataset and uses a quasi-recurrent neural network (RNN) to predict accidents.

II-A8 Static and Dynamic Objects

Object detection and tracking is the most studied topic, with surveys [49, 50, 51, 52, 28] reviewing methods and benchmarks for the detection of static obstacles, vehicles, pedestrians and cyclists. However, existing methods still demonstrate high error rates when encountering unseen or hard-to-identify objects, especially those with small size, heavy occlusion or large truncation [53]. Towards the detection of uncommon and unexpected obstacles, [26] constructs a dataset with small obstacles such as lost cargo and proposes a Bayesian fusion framework that combines a CNN with a stereo system for semantic label prediction. Another work [54] focuses on obstacles with thin structures such as cables and tree branches, and proposes an edge-based visual odometry technique for detection. While these studies cover several types of uncommon obstacles, many more need investigation, such as animals, airborne hindrances and sudden obstructions. Furthermore, many of these obstacles (like fallen trees) can cause sudden and unexpected changes to the a-priori map required for vehicle localization. These changes can potentially impair localization accuracy and consequently hinder effective planning of autonomous cars.

II-B Implicit Factors

Implicit factors consist of the behaviors and intent of road users interacting with the autonomous car. An interesting survey [55] exhaustively reviews studies on understanding, modeling and predicting human agents in three domains, namely inside the vehicle cabin, around the vehicle, and inside surrounding vehicles. [56] also provides a literature review on the interaction between autonomous cars and other road users. We will not repeat the findings presented in these surveys, but only highlight risky behaviors of vehicles, pedestrians, drivers, motorcyclists and bicyclists.

II-B1 Vehicle Behaviors

Potentially hazardous vehicle behaviors include overtaking, lane changing, rear-ending, speeding and failure to obey traffic laws, of which the first three are studied the most, with related works summarized in [50, 57, 58]. Vehicle behavior prediction can be hard in high-speed or cluttered urban scenarios. Under these considerations, [59] reviews studies on trajectory planning and tracking for high-speed autonomous overtaking, whereas [60, 61] propose scenario-adaptive and intention-aware vehicle trajectory estimation approaches for challenging urban scenarios such as unsignalized intersections. Note that vehicle behaviors are tightly connected to the driver’s condition: hazardous vehicle behaviors can also occur when drivers are inexperienced, intoxicated, or physically limited in their ability to pay careful attention to everyone on the road.

II-B2 Pedestrian Behaviors

Pedestrians are the most vulnerable road users. Most accidents happen when a pedestrian is crossing, and many seem to result from pedestrians’ inattentiveness or failure to comply with the law. [62] gives a comprehensive overview of the factors, methods and challenges in pedestrian behavior studies, both in the absence and in the presence of autonomous vehicles. It is argued that pedestrians’ behavior and their perceived risk of autonomous vehicles may vary depending on numerous factors, including demographics such as age and gender, dynamic factors such as vehicle speed and distance, as well as social norms and culture. It remains an open question how these factors are interrelated and how influential they are in understanding pedestrians’ intent.

II-B3 Driver Behaviors

For partially or highly automated vehicles, the driver’s availability is still required at times when the autonomous car is incapable of making reliable decisions and requests to hand over control. Driver behavior analysis is the most mature field, with a myriad of studies on the driver’s activity, intent, alertness, skill and style [55]. Among these aspects, driver distraction and drowsiness are two main causes of traffic accidents. Surveys [63, 64, 29] review methods for driver distraction and drowsiness detection using both visual features, such as facial expression and eye movement, and non-visual features, such as physiological signals and car dynamics; they show that hybrid measures generate fewer false alarms and higher recognition rates. More investigation is needed on handling emergency situations, such as sudden driver impairment under medical conditions.

II-B4 Motorcyclist/Bicyclist Behaviors

Compared to other groups of road users, the models and methods for bicyclist/motorcyclist behavior analysis are far more limited due to the lack of datasets. Report [65] shows that the majority of accidents happen when a bicycle appears in front of a vehicle and the driver fails to brake in time, due to either an obstructed view or the bicyclist traveling in the wrong direction. Substantial efforts are needed in trajectory estimation and intent prediction for motorcyclists and bicyclists.

II-C Interrelationship of Driveability Factors

In most cases, the driveability factors are interrelated: a change in one factor can significantly affect the others. For example, how road users behave depends greatly on explicit factors such as weather, road condition, road geometry and traffic condition. While there are many studies on behavior analysis under different driving contexts, those contexts are generally limited to road geometry (such as intersections) and traffic condition (such as highway versus urban traffic) [50, 58]. The interrelationships among the other factors are under-explored.

A more challenging issue arises from the interaction of different groups of road users. The biggest limitation of existing research on behavior analysis is that most studies only focus on one type of road user (vehicles or pedestrians only). Only a few works investigate the joint attention of multiple road users including autonomous cars appearing in the same scene [23, 66]. Joint attention in autonomous driving is a complicated problem which involves issues from biological, social and algorithmic perspectives and requires methods for multiple tasks including object detection and tracking, pose estimation and intent prediction [66].

As pointed out in [67], autonomous cars should ideally behave in a way that is comprehensible to humans. They should communicate effectively with pedestrians and cyclists, and react safely to any unpredicted human behavior. Beyond interaction with pedestrians, it is also important for autonomous cars to plan in a way that accounts for their effects on human drivers, because they are expected to share the road for the coming decades. In fact, the cause of many accidents involving autonomous cars could be attributed to human drivers expecting autonomous cars to behave differently [68]. In this direction, one pioneering work [69] uses an inverse reinforcement learning approach to optimize planning for autonomous vehicles that takes into account the effects on human drivers, making the autonomous vehicle more efficient and communicative. However, this study was only carried out on simple simulated scenarios, and more investigation is needed into its applicability in real-world situations.

III Driveability Metrics

Currently, there is no unified performance metric for assessing the driveability of a scene, because driveability assessment involves many tasks in perception and behavior analysis, and the performance metric for each task is tightly related to the underlying model used. In this section, we introduce the most relevant metrics established in existing research for scene driveability evaluation [4] and risk assessment [70], and present the design methodology underlying these metrics, with the purpose of encouraging the proposal of novel metrics for driveability assessment. While risk assessment metrics are well established and accepted in studies on ADAS, a metric for scene driveability has only been proposed recently and it is aimed at end-to-end driving policy learning. For risk assessment, we follow the discussions made in [70] and categorize the metrics according to collision-based risks and behavior-based risks, and we refer interested readers to [70] for details on the works that utilize these metrics.

III-A Scene Driveability

In [4], scene driveability is defined by how easy a scene is for an autonomous car to navigate, and a scene driveability score is used to measure how likely the car is to fail. An end-to-end approach is used to calculate this score. Specifically, an end-to-end driving policy learning model using a CNN and long short-term memory (LSTM) is first trained to predict driving maneuvers, including velocity and steering angle. The scene driveability score is then calculated from the discrepancies between the predictions made by the trained driving model and the ground-truth maneuvers. If the score is lower than some manually chosen threshold, the scene is considered “Hazardous”; otherwise, it is considered “Safe”. Once all scenes are labeled safe or hazardous, another CNN model is trained to predict whether a new scene is safe or hazardous. A similar approach is used in [16], which uses a bi-directional RNN to classify scenes as safe or unsafe for performing a lane change. However, the limitation of such metrics is that they are defined purely from the model prediction outcome, which is highly model-dependent. Furthermore, the end-to-end approach provides little insight into what makes a scene hazardous.
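The scoring-and-thresholding step of this pipeline can be sketched as follows. The squared-error discrepancy, the maneuver representation and the threshold value are illustrative stand-ins, not the exact formulation of [4].

```python
import math

def driveability_score(pred, truth):
    """Negative prediction discrepancy: higher means more driveable.

    `pred` and `truth` are (steering_angle, velocity) pairs. The
    root-squared-error form is an illustrative choice, not the
    exact discrepancy measure used in the cited work.
    """
    err = sum((p - t) ** 2 for p, t in zip(pred, truth))
    return -math.sqrt(err)

def label_scene(pred, truth, threshold=-0.5):
    """Threshold the score into the binary labels ('Safe'/'Hazardous')
    used to train the second-stage scene classifier."""
    return "Safe" if driveability_score(pred, truth) >= threshold else "Hazardous"
```

A scene where the trained policy closely reproduces the ground-truth maneuver is labeled "Safe"; a large discrepancy yields "Hazardous".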

III-B Collision-Based Risk

There are two basic metrics for collision risk computation, namely the binary risk indicator and the probabilistic risk indicator. The former only predicts whether or not a collision will happen in the near future, while the latter represents risk as a probability calculated from current states, events, choice of hypothesis, future states and damages [71, 72]. Conventional methods that calculate collision-based risk first predict the potential future trajectories of moving entities and then detect collisions between each pair of trajectories [70]. Newer approaches use deep predictive models to predict whether a collision will occur directly from videos and other sensor data [73].
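A minimal version of the conventional pairwise-trajectory check might look like the sketch below. The agent ids, the time-aligned trajectory format and the safety radius are assumptions made for illustration.

```python
import math
from itertools import combinations

def min_pairwise_distance(traj_a, traj_b):
    """Smallest distance between two time-aligned predicted trajectories."""
    return min(math.dist(p, q) for p, q in zip(traj_a, traj_b))

def collision_risk(trajectories, safety_radius=2.0):
    """Binary collision-risk indicator over predicted trajectories.

    `trajectories` maps an agent id to its predicted (x, y) positions at
    common timesteps; the radius is an illustrative safety margin. Returns
    the agent pairs whose predicted paths come closer than `safety_radius`.
    """
    return [(a, b)
            for (a, ta), (b, tb) in combinations(trajectories.items(), 2)
            if min_pairwise_distance(ta, tb) < safety_radius]
```

An empty result corresponds to the binary indicator reporting "no collision predicted"; a probabilistic indicator would instead integrate over many sampled trajectory hypotheses.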

Another widely used indicator is Time-To-X (TTX), where X refers to a relevant event in the course towards a collision. The most standard TTX indicator is Time-To-Collision (TTC), which measures the time remaining before the collision occurs and provides clues on whether the car should send a warning to the driver or directly perform an action. A recent study [74] also proposed a worst-time-to-collision (WTTC) metric for selecting the most critical objects and situations out of a typical test drive, in order to reduce the amount of data saved. An object or situation is considered less critical if its associated WTTC is large. However, the calculation of TTC in most studies relies on simple assumptions about vehicle states and trajectories, which may be difficult to adapt to real-world driving scenarios.
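Under the constant-velocity assumption criticized above, TTC for a car-following scenario reduces to the gap divided by the closing speed. This minimal sketch uses our own function name and SI units; real implementations must account for acceleration and trajectory uncertainty.

```python
def time_to_collision(gap_m: float, ego_speed_mps: float,
                      lead_speed_mps: float) -> float:
    """Constant-velocity TTC for a car-following scenario, in seconds.

    Returns float('inf') when the gap is constant or opening. The
    constant-velocity assumption is exactly the simplification the
    surrounding text notes as hard to adapt to real-world driving.
    """
    closing = ego_speed_mps - lead_speed_mps
    return gap_m / closing if closing > 0 else float("inf")
```

For example, a 30 m gap closed at 10 m/s gives a TTC of 3 s, which a warning system could compare against a reaction-time threshold.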

III-C Behavior-Based Risk

Behavior-based risk estimation is usually cast as a binary classification problem, where “nominal behaviors” are learned from data and “dangerous behaviors” are then detected. Nominal behaviors are defined based on acceptable speeds, traffic rules, location semantics, weather conditions, and/or the level of fatigue of the driver. For situations involving more than one vehicle, pairs of maneuvers can be labeled as “conflicting” or “not conflicting” [70]. However, behavior-based risk estimation mostly focuses on driver behaviors and barely covers the behaviors of other traffic participants.
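A hand-written stand-in for such a nominal-behavior envelope is sketched below; real systems would learn the envelope from data, and every threshold here is an illustrative assumption.

```python
def is_dangerous(speed_mps: float, speed_limit_mps: float,
                 lateral_accel: float, weather: str = "clear",
                 accel_limit: float = 4.0, wet_factor: float = 0.8) -> bool:
    """Flag behavior that falls outside a hand-written 'nominal' envelope.

    The envelope (posted speed limit plus a lateral-acceleration bound that
    tightens in adverse weather) stands in for the learned nominal models
    discussed in the text; all numeric limits are illustrative.
    """
    limit = accel_limit * (wet_factor if weather in ("rain", "snow") else 1.0)
    return speed_mps > speed_limit_mps or abs(lateral_accel) > limit
```

Note how the same maneuver can be nominal in clear weather but dangerous in rain, reflecting the context dependence of behavior-based risk.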

III-D Challenges in Metric Design

Existing metrics have several limitations. First, in most cases, if not all, the “metrics” are not strict mathematical metrics and do not satisfy all metric properties. As such, it is difficult to understand and analyse them across systems and scenarios in a commensurable fashion. Second, even though risk is typically measured on a continuous scale, for usability these measurements are thresholded (e.g., Safe vs. Hazardous). This results in information loss and also renders the categorization subjective. Third, while existing metrics can evaluate some types of risk, none of them provides a high-level explanation covering all driveability factors.

Even though not covered in this paper, it is also worth considering whether non-safety factors, such as the psychological and emotional states of both the driver and the passengers, should be accounted for by a driveability metric. Ideally, autonomous driving should enable transportation that is not only safe but also enjoyable. To this end, a recent work [75] investigated ride comfort measures in autonomous cars, which are shown to be affected by factors such as motion sickness, naturality and apparent safety. Some metrics for evaluating the driver’s emotional state, such as a road frustration index, have also been developed in industrial applications. However, it remains a challenging question how to harmonize the concept of driveability for both autonomous cars and the humans inside the vehicle.

As shown above, the actual scope of a driveability metric depends greatly on the driving context and use case, which makes it challenging to design a single comprehensive and interpretable metric for driveability. Therefore, composite metrics should be considered that include both basic measures generalizable across systems and additional measures tailored to the specific requirements of the target application. Furthermore, such composite metrics could contain heterogeneous measures, including but not limited to continuous risk scores, the states and trajectories of all traffic participants [13], and even semantic descriptors such as “road construction” or “low lane marking visibility”, which are shown to better communicate the intent of the autonomous car to its driver [76]. There is also a dependency on the availability of up-to-date a-priori maps of the roads, as well as on cultural issues specific to a region. For example, driving in the US is probably easier than in some less developed countries, where roads are less organized and people disobey traffic rules more often.
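The composite-metric idea could be made concrete with a small record type that pairs a generalizable continuous score with application-specific semantic descriptors. The field names and the summary format are our own illustration, not a structure proposed in the cited works.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DriveabilityReport:
    """Illustrative composite driveability record.

    Pairs a continuous risk score (comparable across systems) with
    semantic descriptors tailored to the target application, e.g.
    "road construction" or "low lane marking visibility".
    """
    risk_score: float                                     # continuous, generalizable
    descriptors: List[str] = field(default_factory=list)  # application-specific tags

    def summary(self) -> str:
        """Human-readable summary, e.g. for communicating intent to a driver."""
        tags = ", ".join(self.descriptors) if self.descriptors else "none"
        return f"risk={self.risk_score:.2f}; factors: {tags}"
```

Downstream consumers can then rank scenes by the comparable score while still surfacing the semantic tags that explain it.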

IV Datasets

Abbreviations used: VI-Video, IM-Image, Li-LiDAR, VD-Vehicle Data, CO-Codes; BB-Bounding Box, SL-Semantic Label, LM-Lane Marking, BL-Behavioral Label, O-Others; UR-Urban, RU-Rural, HI-Highway; WE-Weather, SE-Season, LO-Location, NI-Night, IL-Illumination; s-stereo images; uz-unzipped. Unit: K-Kilo, M-Million.
Dataset Provider Time & Venue Data provided Annotation Traffic Diversity Volume
Apollo Open Platform()[77] Baidu Inc 2018; multiple cities in China multiple datasets, volumes vary
ApolloScape[19] Baidu Inc 2018; multiple cities in China 140K IM
Belgium Traffic Sign[78] ETH Zürich 2011; Belgium 9K IM, 50GB
Berkeley DeepDrive[24] UC Berkeley 2017; San Francisco Bay Area, New York, US 100K IM, 1.8TB
Bosch Small Traffic Lights[79] Bosch North America Research 2017; San Francisco Bay Area, US 13K IM
Brain4Cars[21] Cornell Univ. 2016; two states in US 700 VI
Caltech Pedestrian[80] California Inst. of Tech. 2009; Los Angeles, US 250K IM, 11GB
CamVid[81] Univ. of Cambridge 2009; Cambridge, UK 700 IM, 8GB
CCSAD[82] Centro de Investigacin en Matemticas 2014; Guanajuato, Mexico 96K IM(s), 500GB
CityScapes[83] Daimler AG,MPI-IS,TU Darmstadt 2016; 50 cities in Germany, Switzerland & France 25K IM(s), 63GB
CMU[84] Carnegie Mellon Univ. 2011; Pittsburgh, US 16 VI, 275GB
[85] 2016; San Francisco, US 80GB
CULane[86] Chinese Univ. of Hong Kong 2018; Beijing, China 133K IM
Daimler Pedestrian()[87] Daimler AG R&D, Univ. of Amsterdam 2006-2016; Beijing, China, others unknown 8 datasets, 2.5MB - 45GB each
DAVIS[88] Univ. of Zürich, ETH Zürich 2017; Switzerland, Germany 41 VI, 450GB
DBNet[89] Shanghai Jiao Tong Univ., Xiamen Univ. 2018; several cities in China 10K IM, 1TB(uz)
DIPLECS()[90] Univ. of Surrey 2015; Surrey, UK, Stockholm, Sweden 4.3GB & 1.1GB
Dr(eye)ve[91] Univ. of Modena and Reggio Emilia 2016; Modena, Italy 555K
EISATS()[92] Univ. of Auckland 2010; multiple locations in Germany and New Zealand multiple datasets, volumes vary
Elektra()[93] Autonomous Univ. of Barcelona, Polytechnic Univ. of Catalonia 2016; Barcelona, Spain VI & Infrared 9 datasets, 0.5GB-12GB each
ETH Pedestrian[94] ETH Zürich 2009; Zürich, Switzerland 4.8K IM(s), 660MB
Ford[95] Univ. of Michigan 2009; Michigan, US 80GB & 120GB
German Traffic Sign[96] Ruhr Univ. Bochum 2012; Germany 5K IM, 1.6GB
HCI Challenging Stereo[97] HCI(Heidelberg), Bosch Corporation Research 2012; Hildesheim, Germany 11 VI(s)
HD1K[98] HCI(Heidelberg), Robert Bosch GmbH 2018; Heidelberg, Germany optical flow 1K
Highway Workzones[20] Carnegie Mellon Univ. 2015; US 6 VI, 1.2GB
JAAD[23] York University 2016; mostly in Ukraine and Canada 347 VI, 170GB
KAIST Multi-Spectral[99] KAIST 2015; South Korea VI, Li, GPS, CO, Thermal 10 VI
KAIST Urban[100] KAIST 2018; multiple cities in South Korea 19 VI, 1GB-22GB each
KITTI()[101] Karlsruhe Inst of Technology, Toyota Technological Inst 2011; Karlsruhe, Germany multiple datasets, volumes vary
LISA Traffic Sign[102] Univ. of California, San Diego 2012; US 6.6K IM, 8GB
LostAndFound[26] Daimler AG 2016, Germany 2K IM(s), 40GB
Málaga[103] Univ. of Málaga 2014; Málaga, Spain 15 VI(s), 70GB
Mapillary Vistas[104] Mapillary AB 2017; around the globe 25K IM
NEXET[105] Nexar 2017; around the globe 55K IM, 10GB
nuScenes[106] nuTonomy Inc, Aptiv 2018; Boston, US & Singapore 1K VI, 40K IM
Oxford RobotCar[107] Oxford Univ. 2015; Central Oxford, UK 130 VI(s), 23TB
Road Damage[25] Univ. of Tokyo 2018; multiple cities in Japan 9K IM
Stanford Track[108] Stanford Univ. 2010; Stanford Univ., US stixel 14K tracks, 5.7GB
Stixel[109] Daimler AG 2013, Germany stixel 2.5K IM, 3GB
TME Motorway[110] Czech Technical Univ. 2011; Northern Italy 28 VI
TUD-Brussels Pedestrian[111] Max Planck Inst for Informatics 2009; Belgium 1.6K IM
TuSimple()[112] TuSimple 2017; venue unknown 7K IM & 5K IM
UAH[113] University of Alcalá 2016; Madrid, Spain 35 VI
Udacity()[114] Udacity 2016; Mountain View, US 300GB(uz)
TABLE I: Overview of publicly open datasets for autonomous driving.

The development of robust autonomous driving models depends on access to large-scale training datasets, especially as more learning-based approaches are incorporated. Over the past decade, dozens of datasets for autonomous driving have been collected and made public by institutes around the world. These datasets are a valuable resource for the research community to develop benchmarks and consolidate research efforts. However, because autonomous driving encompasses numerous tasks in perception, localization, and behavior analysis, and the datasets vary greatly in application focus, it is not trivial to determine which dataset to use for which task. Therefore, in this section, we provide an up-to-date list of 45 publicly available datasets for autonomous driving and categorize them according to the tasks they are suited for. In particular, since we identified the factors that contribute to driveability and their associated challenges in Section II, we highlight the datasets that can be used to address those challenges. We also publish this dataset archive online, which allows interactive exploration of the datasets and will continue to be maintained after publication.

Prior to this work, a comprehensive survey of publicly available datasets was published in [115], covering 27 datasets collected on public roads before late 2016. However, some of the largest and most diversified datasets have been released in the past two years and are not included in [115]. We enrich the list in [115] with 23 more datasets, most of which were released after 2016, and exclude 5 datasets from [115] that have broken web links, impose a charge, or have been integrated into newer datasets. Besides our selected datasets, there are other reported dataset acquisition efforts, such as the “TorontoCity” dataset collected by researchers at the University of Toronto [116]. Unfortunately, this dataset no longer supports open access and is therefore not included in this paper.

Unlike [115], which compares the datasets’ metadata such as venue, volume, traffic condition, sensor setup, and file type, we compare the datasets from an application perspective: which tasks each dataset is suitable for, its diversity and modality, its level of annotation, and whether training/testing splits and benchmarks are provided. Furthermore, we summarize the trends emerging in recently published datasets and propose directions for future dataset collection.

IV-A Overview of Datasets

We follow the same inclusion criteria as [115]: the dataset must be collected by the on-board sensors of a vehicle running on public roads, contain camera or LiDAR data, and allow free open access. Through an extensive search and snowballing across dataset websites, publications, and competitions on autonomous driving, 45 datasets were selected for this survey; their metadata are presented in Table I. The symbol () denotes that the dataset contains multiple subsets collected with different sensors and for different purposes. In the following, we elaborate on the aspects considered when presenting these datasets. While license information is not included in Table I, we suggest that readers check and observe each dataset’s license before use.

Time & Venue denotes when the dataset was published and where the data was collected. In terms of time, half of the datasets have been released since 2016, which shows increasing interest and effort in public dataset collection to boost progress in autonomous driving research. In terms of venue, while most datasets published before 2016 were collected in Europe and the United States, many of the more recent datasets were collected in Asian countries (e.g., ApolloScape, DBNet, KAIST, Road Damage), and even around the globe (Mapillary Vistas and NEXET) through collaborative efforts from drivers worldwide who uploaded images to the database. Such diversity of locations is a leap forward towards enabling autonomous driving on road networks all over the world.

Data provided lists the data modalities and development code provided with the dataset. A dataset is considered to contain videos if it provides either videos or image sequences that capture temporal information; it is considered to contain only images if standalone images are provided without preceding or trailing video clips. Datasets that contain raw video clips with annotations on selected (but not all) frames are marked as containing both video and image data. In addition to videos and images, around one-fourth of the presented datasets provide LiDAR scans, GPS/IMU (inertial measurement unit) data, and/or vehicle status data such as steering angle and velocity. Two datasets also provide infrared and thermal scans. Data collected from various types of sensors is especially valuable for deep multimodal learning, which is believed to enhance the inference performance of deep neural networks [117]. Most of the datasets provide code in Python, Matlab, or C++ for data preprocessing and visualization.

Annotation presents the type of labels provided by the dataset. The most common annotation types are bounding boxes and semantic labels, where the former are used for object detection and tracking and the latter for semantic segmentation. Some datasets also provide lane markings for lane detection. In addition to these graphical labels, a few datasets provide behavioral labels or data for higher-level scene reasoning. Such labels include synchronized videos of both the driver’s face and the road (Brain4Cars), the driver’s gaze (Dr(eye)ve, Elektra), and the behaviors of both drivers and pedestrians present in the same scene (JAAD). Besides these annotations, optical flow information is provided by the HD1K dataset, and a stixel label that represents an object with multiple cuboids is provided in the Stanford Track and Stixel datasets. Figure 2 shows one example of each annotation type. Data labeling is generally time-consuming and cumbersome, so choosing a dataset that is already annotated can ease the labeling burden in model training.
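Because each dataset ships its own annotation schema, a first practical step is usually a small parser. The snippet below is a minimal, hypothetical sketch; the JSON layout and field names are invented for illustration and do not match any particular dataset.

```python
import json

def load_bounding_boxes(annotation_json):
    """Parse a hypothetical annotation file into (class, box) pairs.

    Assumes a simple JSON layout: {"objects": [{"label": ...,
    "bbox": [x_min, y_min, x_max, y_max]}, ...]}. Real datasets
    (KITTI, Berkeley DeepDrive, ...) each define their own schema.
    """
    data = json.loads(annotation_json)
    boxes = []
    for obj in data.get("objects", []):
        x_min, y_min, x_max, y_max = obj["bbox"]
        if x_max <= x_min or y_max <= y_min:
            continue  # skip degenerate boxes
        boxes.append((obj["label"], (x_min, y_min, x_max, y_max)))
    return boxes

sample = '{"objects": [{"label": "pedestrian", "bbox": [10, 20, 50, 120]}]}'
print(load_bounding_boxes(sample))
```

In practice, such a loader is where format differences between datasets (pixel vs. normalized coordinates, class name conventions) are reconciled before training.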

Traffic records the traffic conditions; the majority of datasets focus on urban traffic. Diversity shows the diversity of environmental conditions under which the dataset was collected, covering weather, season, night driving, and illumination. Two-thirds of the presented datasets provide diversified data in at least one of these categories. However, only two datasets, namely Mapillary Vistas and NEXET, demonstrate diversity across all four. Seasonal variation is rarely captured because most datasets are collected over short durations.

Volume shows the total dataset size, in terms of zipped files unless specified otherwise, which provides a reference for disk usage when one considers storing and utilizing the dataset.

(a) bounding box
(b) semantic label
(c) lane marking
(d) driveable area
(e) driver’s gaze
(f) stixel label
(g) depth image
(h) optical flow
Fig. 2: Sample images with various annotation types. (a)(c)(d) are from Berkeley DeepDrive, (b) is from CityScapes, (e) is from Dr(eye)ve, (f) is from Stixel, (g) is from ApolloScape, and (h) is from HD1K. All sample images are extracted from the webpages hosting the datasets.
✓ denotes that separate training and test sets are provided; an additional symbol denotes that benchmark results are provided.
Task Datasets
Stereo / 3D vision CamVid, CCSAD, CMU, EISATS, Elektra, HCI Challenging Stereo, KAIST Urban, KITTI(✓), Málaga, Oxford Robotcar, Stixel
Optical flow HCI Challenging Stereo, HD1K, KITTI(✓)
Object detection Multi-class: Apollo Open Platform, Berkeley DeepDrive (✓), CamVid, CityScapes(✓), Elektra, JAAD, KAIST Multi-Spectral, KAIST Urban, KITTI(✓), NEXET(✓), nuScenes, Stanford Track, TME Motorway, Udacity(✓)
Traffic sign (TS): Belgium TS(✓), Bosch Small Traffic Lights(✓), Highway Workzones, German TS(✓), LISA TS
Pedestrian (Ped): Caltech Ped(✓), Daimler Ped(✓), ETH Ped, TUD-Brussels Ped(✓)
Obstacle: LostAndFound(✓), Road Damage(✓)
Object tracking Apollo Open Platform, Berkeley DeepDrive(✓), Caltech Ped(✓), Daimler Ped, Elektra, ETH Ped, JAAD, KAIST Multi-Spectral, KITTI(✓), nuScenes, Stanford Track, TME Motorway, TUD-Brussels Ped(✓), TuSimple(✓), Udacity(✓)
Lane/Road detection ApolloScape(✓), Berkeley DeepDrive(✓), CULane(✓), KAIST Multi-Spectral(), KITTI(✓), TuSimple(✓)
Semantic segmentation ApolloScape(✓), Berkeley DeepDrive(✓), CamVid, CityScapes(✓), Daimler Ped(✓), Elektra, KITTI(✓), Mapillary Vistas(✓)
Localization / SLAM CMU, Ford, KAIST Multi-Spectral, KAIST Urban, KITTI(✓), Málaga, Oxford Robotcar, Udacity(✓)
End-to-end learning Apollo Open Platform, DAVIS, DBNet(), DIPLECS, Udacity(✓)
Behavior analysis Brain4Cars(), DIPLECS, Dr(eye)ve, EISATS, Elektra, JAAD, UAH
TABLE II: Dataset categorization by autonomous driving tasks.

IV-B Dataset Categorization

As Table I shows, hardly any two datasets are alike in what they offer. The datasets are usually collected and annotated for different purposes and are therefore suitable for different autonomous driving tasks. To provide a practical view of how to use these datasets, we categorize them by task. In Table II, we list common autonomous driving tasks based on the autonomous driving frameworks released by research institutes [118, 119] and industrial companies [120, 121], and on the tasks included in the KITTI benchmarks [122]. We refer interested readers to [49] for detailed descriptions of these tasks. The role of related tasks in the learning pipeline for driveability assessment is illustrated in Fig. 3. Even though, for a specific task, we only include datasets that were originally collected for that task, other datasets may also be applicable with additional labeling or data processing. We also show whether a dataset includes separate training and test sets and whether the providers offer benchmark results online or in publications, which is very useful for comparing methods developed on the dataset.
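The task-based categorization in Table II lends itself naturally to a small lookup index. Below is a minimal sketch of such an index in Python, populated with a partial excerpt of the table; the coverage here is intentionally incomplete.

```python
# Task-to-datasets mapping: a small excerpt of Table II.
TASK_INDEX = {
    "optical_flow": ["HCI Challenging Stereo", "HD1K", "KITTI"],
    "semantic_segmentation": ["ApolloScape", "Berkeley DeepDrive",
                              "CamVid", "CityScapes", "KITTI",
                              "Mapillary Vistas"],
    "behavior_analysis": ["Brain4Cars", "DIPLECS", "Dr(eye)ve",
                          "EISATS", "Elektra", "JAAD", "UAH"],
}

def datasets_for(task):
    """Return datasets suitable for a task, or an empty list if unknown."""
    return TASK_INDEX.get(task, [])

def tasks_for(dataset):
    """Reverse lookup: which tasks does a dataset appear under?"""
    return [t for t, ds in TASK_INDEX.items() if dataset in ds]

print(tasks_for("KITTI"))
```

The reverse lookup is useful in practice: a multi-purpose dataset such as KITTI appears under several tasks at once.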

Fig. 3: Overview of the learning pipeline for driveability assessment.

From Table II, it can be observed that most of the datasets were collected for object detection, one of the earliest research fields in autonomous driving. There is also a good collection of datasets for object tracking and semantic segmentation. However, only a few datasets can be used for reasoning about a scene at a higher level through behavior analysis, a research topic that remains insufficiently studied. Furthermore, as the end-to-end approach has shown great potential for driving policy learning [123], several datasets were published recently to encourage research in this direction.

IV-C Dataset Highlights for Driveability Assessment

While classical datasets such as KITTI provide canonical benchmarks for testing and comparing baseline algorithms, datasets that capture more adverse scenes or provide different levels of annotation are more interesting for improving the robustness of autonomous cars. Collecting challenging scenes helps identify the limitations of state-of-the-art approaches and inspires novel, more robust algorithms. Fortunately, towards this goal, three trends have been observed in recently released datasets: high complexity and diversity, capture of potentially hazardous events, and provision of behavioral and contextual data for prediction and inference. In the following, we highlight datasets along these three characteristics and take a deeper dive into what each offers.

IV-C1 High Diversity and Complexity

To capture realistic driving scenes, the collected data should be as natural and diversified as possible in terms of weather, traffic condition, illumination, etc. The most diverse datasets are Mapillary Vistas and NEXET, which consist of images uploaded by drivers all over the globe, covering six continents. NEXET was collected using Nexar’s dashcams, and almost half of the images were taken at night [105]. Bounding boxes for five classes of vehicles are provided. Compared to NEXET, Mapillary Vistas is also diverse in terms of image sources, as the images were taken with different devices, including both cameras and mobile phones, by photographers of varying experience. However, only images with at least 1920×1080 resolution were selected [104]. Semantic labels for 66 object classes are provided in the research edition of Mapillary Vistas, whereas the commercial edition provides labels for 100 classes.

Two other large datasets for semantic segmentation are ApolloScape and CityScapes. CityScapes has gained significant popularity since its release. It provides 5K stereo images with pixel-level semantic labels and 20K stereo images with instance-level semantic labels [83]. Each annotated image is taken from a 1.8-second video clip, and the raw video clips are provided. 30 object classes are annotated, and additional information such as outside temperature and precomputed disparity depth maps is also provided. However, a limitation of CityScapes is that most images were collected in daytime under good to medium weather conditions. ApolloScape is by far the largest dataset with semantic labels, covering 25 classes. Additionally, it provides 28 lane marking classes and depth images. Compared to CityScapes, ApolloScape captures more scenes with bad weather, reflections on vehicles, and extreme lighting conditions [19]. The largest dataset in terms of sensor measurements is nuScenes, which contains 1000 20-second videos with LiDAR, radar, camera, IMU, and GPS data. It also provides 3D bounding boxes over 25 object classes annotated at 2 Hz.

Berkeley DeepDrive provides the most comprehensive annotations, with bounding boxes for 10 classes and lane markings for 8 classes on 100K images, and semantic labels for 40 classes on 10K images. The 40-second raw videos from which the annotated images were extracted are also provided. Additionally, a unique annotation of driveable areas and a tag recording the weather, time of day, and scene context (residential, highway, tunnel, etc.) of each image are provided [24].

While most datasets were collected within a short time span, the CMU and Oxford RobotCar datasets were collected over months, traversing the same routes multiple times and thereby capturing long-term changes in the environment. In particular, Oxford RobotCar was collected on the same route twice a week over a year and provides various data types including stereo and monocular images, 2D and 3D LiDAR scans, GPS, and inertial sensor data, making it well suited for research on long-term simultaneous localization and mapping (SLAM) in dynamic urban environments [107].

Dataset Total (K) Average per image
Person Vehicle Person Vehicle
ApolloScape 543 1989 1.1/6.2/16.9 12.7/24.0/38.1
CityScapes 24.4 41 7.0 11.8
KITTI 6.1 30.3 0.8 4.1
Berkeley DeepDrive 86 - 1.2 -
Caltech 192 - 1.5 -
TABLE III: Number of instances in training and validation sets.

Scenes recorded in cluttered urban environments with more traffic participants are also valuable for making learning models robust to complex scenarios. Table III compares the number of labeled instances in several large datasets that make this information available. ApolloScape is divided into three complexity levels with different numbers of instances per image, listed in that order [19]. ApolloScape and CityScapes contain images with more people, as most of their data was collected in urban areas, whereas KITTI and Berkeley DeepDrive contain many highway scenes with fewer people.
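The “average per image” column in Table III is simply the total instance count divided by the number of annotated images. A small sketch of this arithmetic follows; the image count below is not taken from any dataset release but is chosen only to be consistent with the CityScapes person row (roughly 24.4K instances at about 7.0 per image).

```python
def instance_density(total_instances, num_images):
    """Average number of labeled instances per annotated image."""
    if num_images <= 0:
        raise ValueError("num_images must be positive")
    return total_instances / num_images

# Illustrative numbers consistent with the CityScapes person row:
# ~24,400 person instances over ~3,475 annotated images.
print(round(instance_density(24_400, 3_475), 1))
```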

IV-C2 Hazardous Events

As identified in Section II, hazardous factors include road damage, construction, adverse weather, obstacles that are hard to identify due to size or occlusion, and so on. Several datasets have been collected to address these challenges. In terms of road condition, Road Damage is the only dataset that annotates road surface damage. The images were taken with a vehicle-mounted smartphone, and 8 types of road damage were identified according to the Road Maintenance and Repair Guidebook in Japan, including liner cracks, alligator cracks, bumps, line blurs, etc. [25]. CCSAD was also collected in locations with hazardous road conditions, including irregular speed humps, abundant potholes, and peculiar flows of pedestrians [82]. However, CCSAD provides only raw data. For road construction, Highway Workzones annotates signs specifically related to work zones, though it is limited to highway traffic.

Among the many datasets that provide multi-class labels for object detection, LostAndFound is unique in annotating small obstacles caused by lost cargo, down to a height of 5 centimeters [26]. In total, 42 classes of small obstacles are labeled, such as crates, cardboard boxes, plastic bags, and balls. In a similar vein, Bosch Small Traffic Lights provides videos containing traffic lights as small as 2 pixels wide, which are difficult even for human eyes to distinguish [79]. In total, 13 types of traffic lights, including their shapes and colors, are annotated.

Regarding weather conditions, many datasets contain scenes with rain, overcast skies, or even snow. In particular, HCI Challenging Stereo provides 11 challenging sequences taken under extreme weather conditions, including flying snow, rain flares at night, snow at night, rain blur, etc. [97].

IV-C3 Behavioral and Contextual Data

For driver attention analysis, the driver’s gaze fixation is usually collected. While DIPLECS and Elektra each contain a moderately sized subset with gaze information, the only large-scale dataset with accurately measured gaze fixation is Dr(eye)ve. In Dr(eye)ve, the gaze data is projected onto the video captured by a front camera, which can be used to predict driver intent, improve road safety, and plan better driving strategies [91]. Instead of gaze data, Brain4Cars recorded videos of drivers inside the car. Together with videos of the road and labels for six classes of vehicle maneuvers such as stopping, turning, and lane changes, Brain4Cars can be used to anticipate maneuvers several seconds in advance [21]. Another dataset focused on driver behavior is UAH, which simulates normal, drowsy, and aggressive driving and records 7 maneuvers such as lane drifting, speeding, and car following.

While the above datasets focus only on driver behavior, JAAD labels the behaviors of both drivers and pedestrians occurring in the same scene. Driver behaviors include stopping, moving slowly, accelerating, etc., whereas pedestrian behaviors include actions such as crossing, looking, and slowing down, as well as their moving directions and whether they are at an intersection [23]. The JAAD dataset is very useful for studying the joint attention of drivers and pedestrians, which remains a challenging problem awaiting effective solutions [66], as mentioned in Section II-C.

V Filling the Data Gap

While there is increased emphasis on dataset acquisition, the available data is still insufficient for learning an autonomous driving model that can operate robustly anywhere at any time. In this section, we discuss different ways to fill this data gap, addressing three questions: 1) in which areas do we need to collect more naturalistic driving datasets? 2) how can synthetic data complement real data? and 3) how can driving simulators be used to develop and test learning algorithms? We briefly discuss current research efforts, open issues, and possible future directions for each of these topics.

V-A Targeted Data Acquisition

The robustness of autonomous driving models depends on continuously resolving real-world hazardous corner cases, which in turn requires collecting datasets that expose the true diversity of the global driving environment and include rich scene representations. In the following, we list several directions for new dataset collection that complement existing datasets.

  • For joint attention studies and intent prediction, datasets are needed that simultaneously record the behaviors of pedestrians, cyclists, and the drivers of both the ego car and neighboring vehicles. Behaviors include, but are not limited to, facial expressions, gestures, and gaze. Ideally, such data would be collected during interactive events between multiple road users, e.g., a pedestrian establishing eye contact with the driver or the car before crossing the street, because many accidents can be avoided through eye contact or simple gestures [66].

  • For robust obstacle detection, more datasets that capture uncommon hazards are needed, such as small pieces of cargo or debris falling from the vehicle ahead of the ego car, accident scenes, road construction, etc. Recently, a Near-Miss Incident DataBase was introduced in [18], but it is currently under reconstruction and charges a fee for access.

  • For deep scene understanding, datasets with multi-level annotations are helpful, including low-level object annotations, mid-level trajectory annotations, and high-level behavior and relationship annotations [58].

  • For driveability evaluation across various scene types, contextual scene descriptions are needed. Ideally, the collected data would be tagged by driving context, such as intersection/roundabout/non-intersection, signaled/unsignaled pedestrian crossing, and the number of marked/unmarked lanes, which would facilitate learning driving policies that adapt to different driving contexts.

Though not discussed here, the collected data may depend on the hardware and sensor calibration used during acquisition. Many datasets provide calibration files so that users can recover the exact sensor positions and make better use of the measurements. Nonetheless, the goal is to learn an autonomous driving model that is invariant to hardware changes and works across different platforms.
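To illustrate why calibration files matter, the snippet below sketches the standard pinhole projection of LiDAR points into an image using generic extrinsic and intrinsic matrices. It follows the usual convention but is not tied to any specific dataset’s calibration format.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    T_cam_lidar: 4x4 rigid transform from the LiDAR to the camera frame.
    K: 3x3 camera intrinsic matrix. Points behind the camera are dropped.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # into camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                # keep points in front
    pix = (K @ pts_cam.T).T
    return pix[:, :2] / pix[:, 2:3]                     # perspective divide

# Identity extrinsics and simple intrinsics: a point 10 m ahead on the
# optical axis lands at the principal point (cx, cy) = (320, 240).
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
T = np.eye(4)
print(project_lidar_to_image(np.array([[0., 0., 10.]]), T, K))
```

A model trained with one platform’s calibration baked in implicitly learns that geometry, which is precisely why hardware invariance is hard.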

V-B Synthesizing Data

It can be costly and cumbersome to collect data in the physical world and annotate it at a large scale. Therefore, researchers also resort to synthetic datasets that contain visually realistic images and automatically generated annotations for autonomous driving studies. Generating synthetic data is a major data augmentation approach for learning-based algorithms [124]. Synthesized data can provide additional samples capturing conditions not covered by real-world datasets and can augment both training and test sets. Well-established synthetic driving datasets include Virtual KITTI [125], Synthia [126], and VIPER [127], which are generally produced with video game engines.

The biggest concern with synthetic data is that it is dependent on the data generation model and may therefore exhibit bias and fail to generalize to the physical world. Fortunately, thanks to the continuous development of game engines and learning algorithms such as generative adversarial networks (GANs) [128], the realism of synthetic images keeps increasing, allowing models trained on synthetic data to generalize better to real-world scenarios. For example, it has been shown in [129] that a CNN-based hand pose estimator trained only on GAN-generated synthetic data can even outperform a CNN trained on real images when applied to a realistic test dataset. Similar approaches are worth pursuing for training autonomous driving models.

Besides training, synthetic data can also be used to test a trained model and identify its limitations by simulating conditions not yet encountered. To this end, two recent studies [1, 32] focus on DNN testing by generating realistic images covering scenes with extreme weather conditions to detect erroneous behaviors of trained deep end-to-end learning models; [1] uses affine image transformations and [32] uses a GAN to generate the test images. The underlying assumption is that, at the same location, the predicted car maneuver should be similar under different weather or illumination conditions. Both approaches have been reported to detect thousands of erroneous behaviors in top-ranked DNN models from the Udacity challenge [114]. While these findings are enlightening, the coverage of the generated test images in both studies is still limited to rain and snow. Extended synthetic datasets are needed to test DNNs under additional conditions and in more complex scenarios, along with more robust mechanisms to assess and verify generalizability to the real world.
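The metamorphic-testing idea underlying [1, 32] can be sketched in a few lines; the model and transformation below are toy stand-ins for illustration, not the actual pipelines of those studies.

```python
def flags_erroneous_behavior(model, image, transform, tolerance_deg=5.0):
    """Metamorphic test: the steering prediction for a transformed image
    (e.g., simulated rain or a brightness change) should stay close to
    the prediction for the original image; a large deviation flags a
    potentially erroneous behavior.
    """
    original = model(image)
    perturbed = model(transform(image))
    return abs(original - perturbed) > tolerance_deg

# Stand-in model and transform for illustration: a "model" that steers
# proportionally to mean pixel intensity is brittle under a brightness
# shift, so the test flags it.
brittle_model = lambda img: 0.1 * sum(img) / len(img)  # steering in degrees
brighten = lambda img: [min(p + 80, 255) for p in img]
image = [100] * 64                                     # toy 'image'
print(flags_erroneous_behavior(brittle_model, image, brighten))
```

The strength of this style of test is that it needs no ground-truth labels for the transformed images: consistency with the original prediction serves as the oracle.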

V-C Driving Simulators

As pointed out in [130], proving that an autonomous driving system is safe enough to be adopted would require test driving for hundreds of millions of miles without an accident. This is generally impractical and still cannot cover all conceivable scenarios. Therefore, simulators play an important role in reproducing different scenarios and performing exhaustive tests for both ADAS and autonomous cars. Many simulators have been developed in recent years, such as CARLA [131], Microsoft’s AirSim [132], Baidu’s Apollo [133], and NVIDIA’s DRIVE Constellation [134], to name a few. These simulators generate driving scenarios from user-defined models of roads, traffic, and vehicles, enabling the testing of autonomous driving models and the discovery of interrelationships among driveability factors. However, even though high-fidelity environments are simulated, the conditions are generally fully controlled and can differ from real-world situations, which involve higher uncertainty and dynamics. One should therefore be aware that conclusions drawn from these simulators may not translate easily to real-world scenarios.

With various sources of both real and simulated data, a collective approach is needed to integrate them and reap the most benefit from all available sources. To this end, the PEGASUS research project [135] developed a data processing pipeline that stores data from different sources, such as field tests, driving simulator studies, and traffic simulations, in a single database [136]. The main objective of this database is to collect relevant traffic scenarios and establish a common evaluation basis for autonomous vehicle testing and validation.

VI Further Discussion

While this paper focuses on supervised learning that requires sufficient data, we would like to point out that additional advancements are needed in artificial intelligence (AI) to learn from limited data and generalize to unknown environments. Ideally, the AI engine that powers autonomous driving should have the capability of transferring models trained on source data to any target domain.

For example, if a driving model is trained under good lighting conditions at midday, will it perform similarly at other times of day, at different geographic locations, and under all weather conditions? Building a robust model that performs well in all those conditions may require extension or augmentation through unsupervised learning and reinforcement learning techniques that can transfer knowledge and driving policies to unseen scenarios.

Many reinforcement learning based approaches have been reported for driving policy adaptation. For example, an RNN-based approach proposed in [137] first learns an optimal initial policy architecture from expert demonstration and then adapts this policy to a new driving domain using the rewards obtained there. To better understand a new driving context, it is also important to find a scene representation that minimizes both the difference between source and target scenes and the error made by learning models [138]. To address this, [31] adds semantic cues from the environment, such as a pedestrian’s distance from the curbside and the traffic light status, to make pedestrian motion prediction more robust and flexible in new environments. Overall, the questions of what to transfer, when to transfer, and how to transfer [138] need to be better understood in the context of autonomous driving.

A final remark: robust AI is only one component contributing to safe autonomous driving. Achieving a safe autonomous vehicle requires solving interdisciplinary problems in the domains of computing hardware, robotics, security, social acceptance, and many others [67], which are beyond the scope of this paper.

VII Concluding Remarks

In this paper, we have reviewed recent research efforts from a driveability perspective for autonomous driving. We presented both explicit and implicit factors that contribute to scene driveability, identified the potential risks posed by these factors, and investigated existing methods and metrics used for driveability assessment. With these investigations, we have shown the necessity of pursuing principled metrics for driveability, which can be represented either by a set of novel, sophisticated metrics or by a composition of metrics. It should also be well understood how a driveability metric interacts with other metrics used in system-level verification and validation, which has implications for optimizing multiple metrics concurrently and for the trade-offs therein.

Furthermore, we have conducted an exhaustive overview of 45 open datasets collected on public roads and categorized these datasets according to use cases. More importantly, we highlighted datasets that are well suited for training robust driving models and identified the scenarios that need more data acquisition. We have also proposed ways to fill the data gap, including targeted dataset acquisition, using synthetic data for training and testing, exploring driving simulators, and transferring knowledge to unseen scenarios. We hope this paper serves both as guidance on dataset selection and construction and as an invitation to pursue novel approaches that enable autonomous cars to navigate all environments safely and reliably.


Acknowledgment

The authors would like to thank Prof. Maxim Likhachev from Carnegie Mellon University for his invaluable comments that improved the manuscript.


References

  • [1] Y. Tian, K. Pei, S. Jana, and B. Ray, “Deeptest: Automated testing of deep-neural-network-driven autonomous cars,” arXiv preprint arXiv:1708.08559, 2017.
  • [2] D. Muoio. 6 scenarios self-driving cars still can’t handle. [Online]. Available:
  • [3] S. Ulbrich, T. Menzel, A. Reschka, F. Schuldt, and M. Maurer, “Defining and substantiating the terms scene, situation, and scenario for automated driving,” in IEEE 18th International Conference on Intelligent Transportation Systems (ITSC), 2015.
  • [4] S. Hecker, D. Dai, and L. Van Gool, “Failure prediction for autonomous driving,” arXiv preprint arXiv:1805.01811, 2018.
  • [5] S. Kumar, S. Gollakota, and D. Katabi, “A cloud-assisted design for autonomous driving,” in Proceedings of the first edition of the MCC workshop on Mobile cloud computing.   ACM, 2012, pp. 41–46.
  • [6] S. Cafiso, G. Cava, and A. Montella, “Safety inspections as supporting tool for safety management of low-volume roads,” Transportation Research Record: Journal of the Transportation Research Board, no. 2203, pp. 116–125, 2011.
  • [7] G. E. Fuchs, “Risk based automotive insurance rating system,” Jun. 30 2016, US Patent App. 14/460,868.
  • [8] C. Hsu-Hoffman, R. Madigan, and T. McKenna, “Subjective route risk mapping and mitigation,” Aug. 3 2017, US Patent App. 15/013,523.
  • [9] Y. Tsukada, T. Okutani, S. Itsubo, and J. Tanabe, “Evaluation of roads network in japan from viewpoint of drivability,” in The 7th International Conference of Eastern Asia Society for Transportation Studies, 2007.
  • [10] E. Bekiaris, A. Amditis, and M. Panou, “Drivability: a new concept for modelling driving performance,” Cognition, Technology & Work, vol. 5, no. 2, pp. 152–161, 2003.
  • [11] J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman et al., “A perception-driven autonomous urban vehicle,” Journal of Field Robotics, vol. 25, no. 10, pp. 727–774, 2008.
  • [12] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann et al., “Stanley: The robot that won the DARPA grand challenge,” Journal of field Robotics, vol. 23, no. 9, pp. 661–692, 2006.
  • [13] S. Sivaraman and M. M. Trivedi, “Dynamic probabilistic drivability maps for lane change and merge driver assistance,” IEEE Trans. Intelligent Transportation Systems, vol. 15, no. 5, pp. 2063–2073, 2014.
  • [14] K. Kim, B. Kim, K. Lee, B. Ko, and K. Yi, “Design of integrated risk management-based dynamic driving control of automated vehicles,” IEEE Intelligent Transportation Systems Magazine, vol. 9, no. 1, pp. 57–73, 2017.
  • [15] D. I. F. Ferguson, A. Wendel, Z. Xu, D. H. Silver, and B. D. Luders, “Determining drivability of objects for autonomous vehicles,” Feb. 1 2018, US Patent App. 15/292,818.
  • [16] O. Scheel, L. Schwarz, N. Navab, and F. Tombari, “Situation assessment for planning lane changes: Combining recurrent models and prediction,” arXiv preprint arXiv:1805.06776, 2018.
  • [17] I. Chowdhury et al., “A user-centered approach to road design: blending distributed situation awareness with self-explaining roads,” Ph.D. dissertation, Heriot-Watt University, 2014.
  • [18] R. Takahashi, N. Inoue, Y. Kuriya, S. Kobayashi, and K. Inui, “Explaining potential risks in traffic scenes by combining logical inference and physical simulation,” International Journal of Machine Learning and Computing, vol. 6, no. 5, p. 248, 2016.
  • [19] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang, “The apolloscape dataset for autonomous driving,” arXiv preprint arXiv:1803.06184, 2018.
  • [20] Y.-W. Seo, J. Lee, W. Zhang, and D. Wettergreen, “Recognition of highway workzones for reliable autonomous driving,” IEEE Trans. Intelligent Transportation Systems, vol. 16, no. 2, pp. 708–718, 2015.
  • [21] A. Jain, H. S. Koppula, B. Raghavan, S. Soh, and A. Saxena, “Car that knows before you do: Anticipating maneuvers via learning temporal driving models,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 3182–3190.
  • [22] A. B. Hillel, R. Lerner, D. Levi, and G. Raz, “Recent progress in road and lane detection: a survey,” Machine Vision and Applications, vol. 25, no. 3, pp. 727–745, 2014.
  • [23] I. Kotseruba, A. Rasouli, and J. K. Tsotsos, “Joint attention in autonomous driving (JAAD),” arXiv preprint arXiv:1609.04741, 2016.
  • [24] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell, “BDD100K: A diverse driving video database with scalable annotation tooling,” arXiv preprint arXiv:1805.04687, 2018.
  • [25] H. Maeda, Y. Sekimoto, T. Seto, T. Kashiyama, and H. Omata, “Road damage detection using deep neural networks with images captured through a smartphone,” arXiv preprint arXiv:1801.09454, 2018.
  • [26] P. Pinggera, S. Ramos, S. Gehrig, U. Franke, C. Rother, and R. Mester, “Lost and found: detecting small road hazards for self-driving vehicles,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 1099–1106.
  • [27] T. Suzuki, H. Kataoka, Y. Aoki, and Y. Satoh, “Anticipating traffic accidents with adaptive loss and large-scale incident DB,” arXiv preprint arXiv:1804.02675, 2018.
  • [28] X. Li, F. Flohr, Y. Yang, H. Xiong, M. Braun, S. Pan, K. Li, and D. M. Gavrila, “A new benchmark for vision-based cyclist detection,” in IEEE Intelligent Vehicles Symposium, 2016, pp. 1028–1033.
  • [29] H.-B. Kang, “Various approaches for driver and driving behavior monitoring: A review,” IEEE International Conference on Computer Vision Workshops, pp. 616–623, 2013.
  • [30] B. Okumura, M. R. James, Y. Kanzawa, M. Derry, K. Sakai, T. Nishi, and D. Prokhorov, “Challenges in perception and decision making for intelligent automotive vehicles: A case study,” IEEE Trans. Intelligent Vehicles, vol. 1, no. 1, pp. 20–32, 2016.
  • [31] N. Japuria, G. Habibi, and J. P. How, “CASNSC: A context-based approach for accurate pedestrian motion prediction at intersections,” 31st Conference on Neural Information Processing Systems (NIPS), 2017.
  • [32] M. Zhang, Y. Zhang, L. Zhang, C. Liu, and S. Khurshid, “DeepRoad: GAN-based metamorphic autonomous driving system testing,” arXiv preprint arXiv:1802.02295, 2018.
  • [33] A. Gern, R. Moebus, and U. Franke, “Vision-based lane recognition under adverse weather conditions using optical flow,” in IEEE Intelligent Vehicle Symposium, vol. 2, 2002, pp. 652–657.
  • [34] Z. Wang, G. Cheng, and J. Y. Zheng, “All weather road edge identification based on driving video mining,” in IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017.
  • [35] J. M. Alvarez, A. Lopez, and R. Baldrich, “Illuminant-invariant model-based road segmentation,” in IEEE Intelligent Vehicles Symposium, 2008, pp. 1175–1180.
  • [36] P. Corke, R. Paul, W. Churchill, and P. Newman, “Dealing with shadows: Capturing intrinsic scene appearance for image-based outdoor localisation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 2085–2092.
  • [37] W. Maddern, A. Stewart, C. McManus, B. Upcroft, W. Churchill, and P. Newman, “Illumination invariant imaging: Applications in robust vision-based localisation, mapping and classification for autonomous vehicles,” in IEEE International Conference on Robotics and Automation (ICRA), 2014.
  • [38] Y.-L. Chen, Y.-H. Chen, C.-J. Chen, and B.-F. Wu, “Nighttime vehicle detection for driver assistance and autonomous vehicles,” in IEEE 18th International Conference on Pattern Recognition (ICPR), vol. 1, 2006, pp. 687–690.
  • [39] H. Kuang, L. Chen, F. Gu, J. Chen, L. Chan, and H. Yan, “Combining region-of-interest extraction and image enhancement for nighttime vehicle detection,” IEEE Intelligent Systems, vol. 31, no. 3, pp. 57–65, 2016.
  • [40] J. Wagner, V. Fischer, M. Herman, and S. Behnke, “Multispectral pedestrian detection using deep fusion convolutional neural networks,” in 24th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2016, pp. 509–514.
  • [41] A. González, Z. Fang, Y. Socarras, J. Serrat, D. Vázquez, J. Xu, and A. M. López, “Pedestrian detection at day/night time with visible and FIR cameras: A comparison,” Sensors, vol. 16, no. 6, p. 820, 2016.
  • [42] G. R. de Campos, P. Falcone, R. Hult, H. Wymeersch, and J. Sjöberg, “Traffic coordination at road intersections: Autonomous decision-making algorithms using model-based heuristics,” IEEE Intelligent Transportation Systems Magazine, vol. 9, no. 1, pp. 8–21, 2017.
  • [43] M. S. Shirazi and B. T. Morris, “Looking at intersections: a survey of intersection monitoring, behavior and safety analysis of recent studies,” IEEE Trans. Intelligent Transportation Systems, vol. 18, no. 1, pp. 4–24, 2017.
  • [44] A. Geiger, M. Lauer, C. Wojek, C. Stiller, and R. Urtasun, “3D traffic scene understanding from movable platforms,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 1012–1025, 2014.
  • [45] P. Papadakis, “Terrain traversability analysis methods for unmanned ground vehicles: A survey,” Engineering Applications of Artificial Intelligence, vol. 26, no. 4, pp. 1373–1385, 2013.
  • [46] H. Vokhidov, H. G. Hong, J. K. Kang, T. M. Hoang, and K. R. Park, “Recognition of damaged arrow-road markings by visible light camera sensor based on convolutional neural network,” Sensors, vol. 16, no. 12, p. 2160, 2016.
  • [47] J. Bao, Y. Zhang, X. Su, and R. Zheng, “Unpaved road detection based on spatial fuzzy clustering algorithm,” EURASIP Journal on Image and Video Processing, vol. 2018, no. 1, p. 26.
  • [48] X. S. Zhi Huang, Baozheng Fan, “Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions,” Journal of Electronic Imaging, vol. 27, 2018.
  • [49] J. Janai, F. Güney, A. Behl, and A. Geiger, “Computer vision for autonomous vehicles: Problems, datasets and state-of-the-art,” arXiv preprint arXiv:1704.05519, 2017.
  • [50] S. Sivaraman and M. M. Trivedi, “Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis,” IEEE Trans. Intelligent Transportation Systems, vol. 14, no. 4, pp. 1773–1795, 2013.
  • [51] A. Mukhtar, L. Xia, and T. B. Tang, “Vehicle detection techniques for collision avoidance systems: A review.” IEEE Trans. Intelligent Transportation Systems, vol. 16, no. 5, pp. 2318–2338, 2015.
  • [52] D. Geronimo, A. M. Lopez, A. D. Sappa, and T. Graf, “Survey of pedestrian detection for advanced driver assistance systems,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 7, pp. 1239–1258, 2010.
  • [53] E. Ohn-Bar and M. M. Trivedi, “Are all objects equal? deep spatio-temporal importance prediction in driving videos,” Pattern Recognition, vol. 64, pp. 425–436, 2017.
  • [54] C. Zhou, J. Yang, C. Zhao, and G. Hua, “Fast, accurate thin-structure obstacle detection for autonomous mobile robots,” arXiv preprint arXiv:1708.04006, 2017.
  • [55] E. Ohn-Bar and M. M. Trivedi, “Looking at humans in the age of self-driving and highly automated vehicles,” IEEE Trans. Intelligent Vehicles, vol. 1, no. 1, pp. 90–104, 2016.
  • [56] J. Parkin, B. Clark, W. Clayton, M. Ricci, and G. Parkhurst, “Understanding interactions between autonomous vehicles and other road users: A literature review,” 2016.
  • [57] D. Bevly, X. Cao, M. Gordon, G. Ozbilgin, D. Kari, B. Nelson, J. Woodruff, M. Barth, C. Murray, A. Kurt et al., “Lane change and merge maneuvers for connected and automated vehicles: A survey,” IEEE Trans. Intelligent Vehicles, vol. 1, no. 1, pp. 105–120, 2016.
  • [58] J.-R. Xue, J.-W. Fang, and P. Zhang, “A survey of scene understanding by event reasoning in autonomous driving,” International Journal of Automation and Computing, vol. 15, no. 3, pp. 249–266, 2018.
  • [59] S. Dixit, S. Fallah, U. Montanaro, M. Dianati, A. Stevens, F. Mccullough, and A. Mouzakitis, “Trajectory planning and tracking for autonomous overtaking: State-of-the-art and future prospects,” Annual Reviews in Control, 2018.
  • [60] X. Geng, H. Liang, B. Yu, P. Zhao, L. He, and R. Huang, “A scenario-adaptive driving behavior prediction approach to urban autonomous driving,” Applied Sciences, vol. 7, no. 4, p. 426, 2017.
  • [61] J. Schulz, C. Hubmann, J. Löchner, and D. Burschka, “Interaction-aware probabilistic behavior prediction in urban environments,” arXiv preprint arXiv:1804.10467, 2018.
  • [62] A. Rasouli and J. K. Tsotsos, “Autonomous vehicles that interact with pedestrians: A survey of theory and practice,” arXiv preprint arXiv:1805.11773, 2018.
  • [63] Y. Dong, Z. Hu, K. Uchimura, and N. Murayama, “Driver inattention monitoring system for intelligent vehicles: A review,” IEEE Trans. Intelligent Transportation Systems, vol. 12, pp. 596–614, 2009.
  • [64] S. Kaplan, M. A. Güvensan, A. G. Yavuz, and Y. Karalurt, “Driver behavior analysis for safe driving: A survey,” IEEE Trans. Intelligent Transportation Systems, vol. 16, pp. 3017–3032, 2015.
  • [65] M. Kuehn, T. Hummel, and A. Lang, “Cyclist-car accidents–their consequences for cyclists and typical accident scenarios,” in Proceedings of the 24th International Conference on the Enhanced Safety of Vehicles, 2015.
  • [66] A. Rasouli and J. K. Tsotsos, “Joint attention in driver-pedestrian interaction: from theory to practice,” arXiv preprint arXiv:1802.02522, 2018.
  • [67] P. Koopman and M. Wagner, “Autonomous vehicle safety: An interdisciplinary challenge,” IEEE Intelligent Transportation Systems Magazine, vol. 9, no. 1, pp. 90–96, 2017.
  • [68] V. V. Dixit, S. Chand, and D. J. Nair, “Autonomous vehicles: disengagements, accidents and reaction times,” PLoS One, vol. 11, no. 12, 2016.
  • [69] D. Sadigh, S. Sastry, S. A. Seshia, and A. D. Dragan, “Planning for autonomous cars that leverage effects on human actions.” in Robotics: Science and Systems, 2016.
  • [70] S. Lefèvre, D. Vasquez, and C. Laugier, “A survey on motion prediction and risk assessment for intelligent vehicles,” Robomech Journal, vol. 1, no. 1, p. 1, 2014.
  • [71] J. Eggert, “Predictive risk estimation for intelligent adas functions,” in IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), 2014, pp. 711–718.
  • [72] C. Laugier, I. E. Paromtchik, M. Perrollaz, M. Yong, J. Yoder, C. Tay, K. Mekhnacha, and A. Nègre, “Probabilistic analysis of dynamic scenes and collision risks assessment to improve driving safety,” IEEE Intelligent Transportation Systems Magazine, vol. 3, no. 4, pp. 4–19, 2011.
  • [73] M. Strickland, G. Fainekos, and H. B. Amor, “Deep predictive models for collision risk assessment in autonomous driving,” arXiv preprint arXiv:1711.10453, 2017.
  • [74] W. Wachenfeld, P. Junietz, R. Wenzel, and H. Winner, “The worst-time-to-collision metric for situation identification,” in IEEE Intelligent Vehicles Symposium, 2016, pp. 729–734.
  • [75] M. Elbanhawi, M. Simic, and R. Jazar, “In the passenger seat: Investigating ride comfort measures in autonomous cars,” IEEE Intelligent Transportation Systems Magazine, vol. 7, no. 3, pp. 4–17, 2015.
  • [76] J. Koo, J. Kwac, W. Ju, M. Steinert, L. Leifer, and C. Nass, “Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance,” International Journal on Interactive Design and Manufacturing (IJIDeM), vol. 9, no. 4, pp. 269–275, 2015.
  • [77] “Apollo data open platform,” 2018.
  • [78] R. Timofte, K. Zimmermann, and L. Van Gool, “Multi-view traffic sign detection, recognition, and 3D localisation,” Machine Vision and Applications, vol. 25, no. 3, pp. 633–647, 2014.
  • [79] K. Behrendt, L. Novak, and R. Botros, “A deep learning approach to traffic lights: Detection, tracking, and classification,” in IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 1370–1377.
  • [80] P. Dollár, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: A benchmark,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 304–311.
  • [81] G. J. Brostow, J. Shotton, J. Fauqueur, and R. Cipolla, “Segmentation and recognition using structure from motion point clouds,” in European Conference on Computer Vision.   Springer, 2008, pp. 44–57.
  • [82] R. Guzmán, J.-B. Hayet, and R. Klette, “Towards ubiquitous autonomous driving: The CCSAD dataset,” in International Conference on Computer Analysis of Images and Patterns.   Springer, 2015, pp. 582–593.
  • [83] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3213–3223.
  • [84] H. Badino, D. Huber, and T. Kanade, “Real-time topometric localization,” in IEEE International Conference on Robotics and Automation (ICRA), 2012, pp. 1635–1642.
  • [85] E. Santana and G. Hotz, “Learning a driving simulator,” arXiv preprint arXiv:1608.01230, 2016.
  • [86] X. Pan, J. Shi, P. Luo, X. Wang, and X. Tang, “Spatial as deep: Spatial CNN for traffic scene understanding,” in AAAI Conference on Artificial Intelligence, 2018.
  • [87] M. Enzweiler and D. M. Gavrila, “Monocular pedestrian detection: Survey and experiments,” IEEE Trans. Pattern Analysis & Machine Intelligence, no. 12, pp. 2179–2195, 2008.
  • [88] J. Binas, D. Neil, S.-C. Liu, and T. Delbruck, “DDD17: End-to-end DAVIS driving dataset,” arXiv preprint arXiv:1711.01458, 2017.
  • [89] Y. Chen, J. Wang, J. Li, C. Lu, Z. Luo, H. Xue, and C. Wang, “Lidar-video driving dataset: Learning driving policies effectively,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5870–5878.
  • [90] N. Pugeault and R. Bowden, “How much of driving is preattentive?” IEEE Trans. Vehicular Technology, vol. 64, no. 12, pp. 5424–5438, 2015.
  • [91] S. Alletto, A. Palazzi, F. Solera, S. Calderara, and R. Cucchiara, “Dr(eye)ve: a dataset for attention-based tasks with applications to autonomous and assisted driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016, pp. 54–60.
  • [92] “EISATS,” 2010.
  • [93] “Elektra datasets,” 2016.
  • [94] A. Ess, B. Leibe, K. Schindler, and L. van Gool, “A mobile vision system for robust multi-person tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
  • [95] G. Pandey, J. R. McBride, and R. M. Eustice, “Ford campus vision and lidar data set,” The International Journal of Robotics Research, vol. 30, no. 13, pp. 1543–1552, 2011.
  • [96] S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, and C. Igel, “Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark,” in International Joint Conference on Neural Networks, no. 1288, 2013.
  • [97] S. Meister, B. Jähne, and D. Kondermann, “Outdoor stereo camera system for the generation of real-world benchmark data sets,” Optical Engineering, vol. 51, no. 2, p. 021107, 2012.
  • [98] “HD1K benchmark suite,” 2018.
  • [99] Y. Choi, N. Kim, S. Hwang, K. Park, J. S. Yoon, K. An, and I. S. Kweon, “KAIST multi-spectral day/night data set for autonomous and assisted driving,” IEEE Trans. Intelligent Transportation Systems, vol. 19, no. 3, pp. 934–948, 2018.
  • [100] J. Jeong, Y. Cho, Y.-S. Shin, H. Roh, and A. Kim, “Complex urban LiDAR data set,” in IEEE International Conference on Robotics and Automation (ICRA), 2018.
  • [101] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The KITTI dataset,” International Journal of Robotics Research (IJRR), 2013.
  • [102] A. Møgelmose, M. M. Trivedi, and T. B. Moeslund, “Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey,” IEEE Trans. Intelligent Transportation Systems, vol. 13, no. 4, pp. 1484–1497, 2012.
  • [103] J.-L. Blanco-Claraco, F.-Á. Moreno-Dueñas, and J. González-Jiménez, “The málaga urban dataset: High-rate stereo and LiDAR in a realistic urban scenario,” The International Journal of Robotics Research, vol. 33, no. 2, pp. 207–214, 2014.
  • [104] G. Neuhold, T. Ollmann, S. R. Bulò, and P. Kontschieder, “The mapillary vistas dataset for semantic understanding of street scenes,” in ICCV, 2017, pp. 5000–5009.
  • [105] “The Nexar challenge and Nexet dataset,” 2017.
  • [106] “nuScenes dataset.” [Online]. Available:
  • [107] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, “1 year, 1000 km: The Oxford RobotCar dataset,” The International Journal of Robotics Research, vol. 36, no. 1, pp. 3–15, 2017.
  • [108] A. Teichman, J. Levinson, and S. Thrun, “Towards 3D object recognition via classification of arbitrary object tracks,” in IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 4034–4041.
  • [109] D. Pfeiffer, S. Gehrig, and N. Schneider, “Exploiting the power of stereo confidences,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 297–304.
  • [110] C. Caraffi, T. Vojir, J. Trefny, J. Sochman, and J. Matas, “A system for real-time detection and tracking of vehicles from a single car-mounted camera,” in ITS Conference, 2012, pp. 975–982.
  • [111] C. Wojek, S. Walk, and B. Schiele, “Multi-cue onboard pedestrian detection,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
  • [112] “TuSimple benchmark,” 2018.
  • [113] E. Romera, L. M. Bergasa, and R. Arroyo, “Need data for driver behaviour analysis? presenting the public uah-driveset,” in IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), 2016, pp. 387–392.
  • [114] “Udacity self-driving car,” 2016.
  • [115] H. Yin and C. Berger, “When to use what data set for your self-driving car algorithm: An overview of publicly available driving datasets,” in IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017, pp. 1–8.
  • [116] S. Wang, M. Bai, G. Mattyus, H. Chu, W. Luo, B. Yang, J. Liang, J. Cheverie, S. Fidler, and R. Urtasun, “Torontocity: Seeing the world with a million eyes,” in IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3028–3036.
  • [117] D. Ramachandram and G. W. Taylor, “Deep multimodal learning: A survey on recent advances and trends,” IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 96–108, 2017.
  • [118] S. Ulbrich, A. Reschka, J. Rieken, S. Ernst, G. Bagschik, F. Dierkes, M. Nolte, and M. Maurer, “Towards a functional system architecture for automated vehicles,” arXiv preprint arXiv:1703.08557, 2017.
  • [119] S.-C. Lin, Y. Zhang, C.-H. Hsu, M. Skach, M. E. Haque, L. Tang, and J. Mars, “The architectural implications of autonomous driving: Constraints and acceleration,” in ACM 23rd International Conference on Architectural Support for Programming Languages and Operating Systems, 2018, pp. 751–766.
  • [120] Mobileye, “Enabling autonomous,” 2018.
  • [121] Baidu Apollo, “Open platform details,” 2018.
  • [122] “The KITTI vision benchmark suite,” 2018.
  • [123] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al., “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316, 2016.
  • [124] S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell, “Understanding data augmentation for classification: when to warp?” arXiv preprint arXiv:1609.08764, 2016.
  • [125] A. Gaidon, Q. Wang, Y. Cabon, and E. Vig, “Virtual worlds as proxy for multi-object tracking analysis,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4340–4349.
  • [126] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez, “The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3234–3243.
  • [127] S. R. Richter, Z. Hayder, and V. Koltun, “Playing for benchmarks,” in International Conference on Computer Vision (ICCV), vol. 2, 2017.
  • [128] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
  • [129] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, “Learning from simulated and unsupervised images through adversarial training.” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, no. 4, 2017, p. 5.
  • [130] H.-P. Schöner, “Simulation in development and testing of autonomous vehicles,” in Internationales Stuttgarter Symposium.   Springer, 2018, pp. 1083–1095.
  • [131] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “Carla: An open urban driving simulator,” in Conference on Robot Learning, 2017, pp. 1–16.
  • [132] S. Shah, D. Dey, C. Lovett, and A. Kapoor, “Airsim: High-fidelity visual and physical simulation for autonomous vehicles,” in Field and service robotics.   Springer, 2018, pp. 621–635.
  • [133] Apollo simulation. [Online]. Available:
  • [134] Nvidia drive constellation-virtual reality autonomous vehicle simulator. [Online]. Available:
  • [135] Pegasus research project. [Online]. Available:
  • [136] A. Pütz, A. Zlocki, J. Bock, and L. Eckstein, “System validation of highly automated vehicles with a database of relevant traffic scenarios,” 12th ITS European Congress, vol. 1, 2017.
  • [137] S. Ebrahimi, A. Rohrbach, and T. Darrell, “Gradient-free policy architecture search and adaptation,” arXiv preprint arXiv:1710.05958, 2017.
  • [138] S. J. Pan, Q. Yang et al., “A survey on transfer learning,” IEEE Trans. Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.