We present the largest self-driving dataset for motion prediction to date, with over 1,000 hours of data. This was collected by a fleet of 20 autonomous vehicles along a fixed route in Palo Alto, California over a four-month period. It consists of 170,000 scenes, where each scene is 25 seconds long and captures the perception output of the self-driving system, which encodes the precise positions and motions of nearby vehicles, cyclists, and pedestrians over time. On top of this, the dataset contains a high-definition semantic map with 15,242 labelled elements and a high-definition aerial view over the area. Together with the provided software kit, this collection forms the largest, most complete and detailed dataset to date for the development of self-driving, machine learning tasks such as motion forecasting, planning and simulation. The full dataset is available at http://level5.lyft.com/.
The availability of large-scale datasets has been a major contributor to AI progress in the recent decade. In the field of self-driving vehicles (SDV), several datasets, such as [9, 6, 13], enabled great progress in the development of perception systems [14, 16, 24, 20]. These datasets let the self-driving vehicle interpret its LiDAR and camera data to determine the positions of other traffic participants, including cars, pedestrians and cyclists, around the vehicle.
Perception, however, is only the first step in the modern self-driving pipeline. Much work remains to be done on data-driven motion prediction of traffic participants, trajectory planning, and simulation before self-driving vehicles can become a reality. Datasets for developing these methods differ from those used for perception in that they require large amounts of behavioural observations and interactions. These are obtained by combining the output of perception systems with an understanding of the environment in the form of a semantic map that contains priors over expected behaviour. Datasets for these downstream tasks are far less broadly available, existing mostly as in-house data collected by large-scale industrial efforts. This limits the ability of the computer vision and robotics communities to advance modern machine learning systems for these important tasks.
In this work, we share the largest and most detailed dataset to date for training motion prediction solutions. We are motivated by the following scenario: a self-driving fleet serving a single high-demand route, rather than serving a broad area. We consider this to be a more feasible deployment strategy for rideshare, since self-driving vehicles can be allocated to particular routes while human drivers serve the remaining traffic. This focus allows us to bound the system performance requirements and accident likelihood, both key factors for real-world self-driving deployment. In summary, the released dataset consists of:
The largest dataset to date for motion prediction, comprising 1,000 hours of traffic scenes capturing the motions of traffic participants around 20 self-driving vehicles, driving over 26,000 km along a suburban route.
The most detailed high-definition (HD) semantic map of the area, counting over 15,000 human annotations including 8,500 lane segments.
A high-resolution aerial image of the area, spanning 74 km² at a resolution of 6 cm per pixel, providing further spatial context about the environment.
L5Kit - a Python software library for accessing the dataset, together with a baseline machine learning solution for the motion prediction task.
| Dataset | Size | # Scenes | Map | Annotations | Task |
|---|---|---|---|---|---|
| KITTI | 6 h | 50 | None | 3D bounding boxes | Perception |
| Oxford RobotCar | 1,000 km | 100+ | None | – | Perception |
| Waymo Open Dataset | 10 h | 1,000 | None | 3D bounding boxes | Perception |
| ApolloScape Scene Parsing | 2 h | – | None | 3D bounding boxes | Perception |
| Argoverse 3D Tracking v1.1 | 1 h | 113 | Lane center lines, lane connectivity | 3D bounding boxes | Perception |
| Lyft Perception Dataset | 2.5 h | 366 | Rasterised road geometry | 3D bounding boxes | Perception |
| nuScenes | 6 h | 1,000 | Rasterised road geometry | 3D bounding boxes | Perception |
| ApolloScape Trajectory | 2 h | 103 | None | Trajectories | Prediction |
| Argoverse Forecasting v1.1 | 320 h | 324k | Lane center lines, lane connectivity | Trajectories | Prediction |
| Lyft Prediction Dataset | 1,118 h | 170k | Road geometry, aerial map, crosswalks, traffic signs, … | Trajectories | Prediction |
In this section we review existing datasets for training autonomous vehicle (AV) systems from the viewpoint of a classical state-of-the-art self-driving stack, summarised in Figure 2. In this stack, the raw sensor input is first processed by a perception system to estimate the positions of nearby vehicles, pedestrians, cyclists and other traffic participants. Next, the future motion and intent of these actors is estimated and used for planning vehicle manoeuvres. In Table 1 we summarise the current leading datasets for training machine learning solutions for different components of this stack, focusing mainly on the perception and prediction components.
The perception task is usually framed as the supervised task of estimating the 3D positions of objects around the AV. Deep learning approaches are now state of the art for most problems relevant to autonomous driving, such as 3D object detection and semantic segmentation [20, 24, 16, 14].
Among the datasets for training these systems, the KITTI dataset is the best-known benchmark for many computer vision and autonomous driving tasks. It contains around 6 hours of driving data, recorded from front-facing stereo cameras, LiDAR and GPS/IMU sensors. 3D bounding box annotations are available, including classification into different classes such as cars, trucks and pedestrians. The Waymo Open Dataset and nuScenes are of similar size and structure, providing 3D bounding box labels based on fused sensory inputs. The Oxford RobotCar dataset also supports visual tasks, but its focus lies on localisation and mapping rather than object detection.
Our dataset’s main target is not to train perception systems. Instead, it is the product of an already-trained perception system used to process large quantities of new data for motion prediction. For the perception task we refer to the Lyft Level 5 AV Dataset, which was collected along the same geographical route and which was used to train the included perception system.
Prediction datasets The prediction task builds on top of perception: it aims to predict the output of a well-established perception system a few seconds into the future. As such, it differs in the information needed for both training and evaluation. Labelled bounding boxes alone are not sufficient to achieve good results: one needs significantly more detailed information about the environment, including, for example, semantic maps that encode priors over expected driving behaviour, to reason about future behaviours.
Deep learning methods consuming rasterised scene representations [2, 7] or graph neural networks [4, 8] have established themselves as the leading solutions for this task. Representative large-scale datasets for training these systems are, however, rare. The above-mentioned solutions were developed almost exclusively by industrial labs leveraging internal proprietary datasets.
The most relevant existing open dataset is the Argoverse Forecasting dataset, providing 320 hours of perception data and a lightweight HD semantic map encoding lane center positions. Our dataset differs in three substantial ways: 1) Instead of covering a wide city area, we provide 1,000 hours of data along a single route. This is motivated by the assumption that, particularly in ride-hailing applications, the first deployments of self-driving fleets are likely to occur along a few high-demand routes, which makes it possible to bound system requirements and quantify accident risk. 2) We contribute higher-quality scene data by providing the full perception output, including bounding boxes and class probabilities rather than centroids alone. In addition, our semantic map is more complete: it counts more than 15,000 human annotations instead of only lane centers. 3) We also provide a high-resolution aerial image of the area. This is motivated by the fact that much of the information encoded in the semantic map is implicitly present in aerial form, so providing this map can unlock the development of semantic-map-free solutions.
Here we outline the details of the released dataset, including the process that was used to construct it. An overview of different dataset statistics can be found in Table 2.
The dataset has three components:
170k scenes, each 25 seconds long, capturing the movement of the self-driving vehicle and traffic participants around it.
A high-definition semantic map capturing road rules, lane geometry and other traffic elements.
A high-resolution aerial picture of the area that can be used to further aid the prediction.
| Statistic | Value |
|---|---|
| # self-driving vehicles used | 20 |
| Total dataset size | 1,118 hours / 26,344 km / 162k scenes |
| Training set size | 928 hours / 21,849 km / 134k scenes |
| Validation set size | 78 hours / 1,840 km / 11k scenes |
| Test set size | 112 hours / 2,656 km / 16k scenes |
| Scene length | 25 seconds |
| Total # of traffic participant observations | 3,187,838,149 |
| Average # of detections per frame | 79 |
| Labels | Car: 92.47% / Pedestrian: 5.91% / Cyclist: 1.62% |
| Semantic map | 15,242 annotations / 8,505 lane segments |
| Aerial map | 74 km² at 6 cm per pixel |
The dataset consists of 170,000 scenes, each 25 seconds long, totalling over 1,118 hours of logs. Example scenes are shown in Figure 4. All logs were collected by a fleet of self-driving vehicles driving along a fixed route. The sensors for perception include 7 cameras, 3 LiDARs, and 5 radars (see Figure 3). The sensors are positioned as follows: one LiDAR is on the roof of the vehicle, and two LiDARs on the front bumper. The roof LiDAR has 64 channels and spins at 10 Hz, while the bumper LiDARs have 40 channels. All seven cameras are mounted on the roof and together have a 360 degree horizontal field of view. Four radars are also mounted on the roof, and one radar is placed on the forward-facing front bumper.
The dataset was collected between October 2019 and March 2020. It was captured during daytime, between 8 AM and 4 PM. For each scene we detected the visible traffic participants, including vehicles, pedestrians, and cyclists. Each traffic participant is internally represented by a 2.5D cuboid, velocity, acceleration, yaw, yaw rate, and a class label. These traffic participants are detected using our in-house perception system, which fuses data across multiple modalities to produce a 360 degree view of the world surrounding the SDV. Table 2 outlines some more statistics for the dataset.
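The per-agent perception output described above can be sketched as a structured array. The field names and types below are illustrative assumptions, not the actual l5kit schema:

```python
import numpy as np

# Illustrative sketch of the per-agent state described above (2.5D cuboid,
# velocity, acceleration, yaw, yaw rate, class label). Field names and types
# are assumptions, not the actual l5kit schema.
agent_dtype = np.dtype([
    ("centroid", np.float64, (2,)),      # (x, y) position in world frame, metres
    ("extent", np.float32, (3,)),        # cuboid length, width, height, metres
    ("yaw", np.float32),                 # heading, radians
    ("yaw_rate", np.float32),            # radians per second
    ("velocity", np.float32, (2,)),      # metres per second
    ("acceleration", np.float32, (2,)),  # metres per second squared
    ("label", np.int8),                  # e.g. 0 = car, 1 = pedestrian, 2 = cyclist
])

# Three agents observed in one frame; the first is filled in as an example.
agents = np.zeros(3, dtype=agent_dtype)
agents[0] = ((10.0, 5.0), (4.5, 1.8, 1.6), 0.1, 0.0, (3.0, 0.2), (0.0, 0.0), 0)
```

A structured layout like this keeps one frame's detections contiguous in memory, which suits the chunked storage described below.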
We split the dataset into train, validation and test sets in an 83/7/10% ratio, where a particular SDV contributes to only a single set. We encode the dataset in the form of n-dimensional compressed zarr arrays. The zarr format (https://zarr.readthedocs.io/) was chosen to represent individual scenes: it allows fast random access to different portions of the dataset while minimising the memory footprint, which enables efficient distributed training on the cloud.
| Element | Attributes |
|---|---|
| Lane boundaries | sequence of coordinates |
| Lane connectivity | possible lane transitions |
| Driving directions | one way, two way |
| Road class | primary, secondary, tertiary, … |
| Road paintings | solid, dashed, colour |
| Lane restrictions | bus only, bike only, turn only, … |
| Traffic lights | position, lane association |
| Traffic signs | stop, turn, yield, parking, … |
| Restrictions | keep clear zones, no parking, … |
The HD semantic map that we provide encodes information about the road itself, and various traffic elements along the route totalling 15,242 labelled elements including 8,505 lane segments. This map was created by human curators who annotated the underlying localisation map, which in turn was created using a simultaneous localisation and mapping (SLAM) system. Given the use of SLAM, the position of the SDV is always known with centimetre-grade accuracy. Thus, the information in the semantic map can be used both for planning driving behaviour and for anticipating the future movements of other traffic participants.
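As a minimal sketch of how the lane elements listed above fit together, the snippet below models lane segments and their connectivity as a small graph. The structure and field names are illustrative assumptions and do not mirror l5kit's actual map format:

```python
# Minimal sketch of a semantic-map lane graph: boundaries as coordinate
# sequences, connectivity as successor links, plus optional restrictions.
# Illustrative only; not l5kit's actual map representation.
lanes = {
    "lane_001": {
        "left_boundary": [(0.0, 0.0), (10.0, 0.0)],    # sequence of (x, y) metres
        "right_boundary": [(0.0, -3.5), (10.0, -3.5)],
        "successors": ["lane_002"],                     # possible lane transitions
        "restriction": None,                            # e.g. "bus only"
    },
    "lane_002": {
        "left_boundary": [(10.0, 0.0), (20.0, 0.0)],
        "right_boundary": [(10.0, -3.5), (20.0, -3.5)],
        "successors": [],
        "restriction": "turn only",
    },
}

def reachable(lane_id, lanes):
    """Return all lane ids reachable from lane_id via successor links."""
    seen, stack = set(), [lane_id]
    while stack:
        for nxt in lanes[stack.pop()]["successors"]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(reachable("lane_001", lanes))  # -> {'lane_002'}
```

Connectivity queries like this are what give a prediction model priors over where an agent can plausibly drive next.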
The aerial map captures the area of Palo Alto surrounding the route at a resolution of 6 cm per pixel (the aerial imagery is provided by Nearmap, https://www.nearmap.com/). It enables the use of spatial information to aid motion prediction. Figure 5 shows the map coverage and the level of detail. The covered area of 74 km² is provided as 181 GeoTIFF tiles.
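At the stated 6 cm per pixel, converting between world metres and aerial-map pixels is a simple scaling; the snippet below sketches this, with an arbitrary tile origin assumed for illustration:

```python
# Sketch: map world coordinates (metres) to aerial-map pixel indices at the
# stated 6 cm per pixel. The origin parameters are illustrative assumptions.
RESOLUTION_M_PER_PX = 0.06

def world_to_pixel(x_m, y_m, origin_x_m=0.0, origin_y_m=0.0):
    """Return integer pixel indices for a world-frame point in a tile."""
    px = int(round((x_m - origin_x_m) / RESOLUTION_M_PER_PX))
    py = int(round((y_m - origin_y_m) / RESOLUTION_M_PER_PX))
    return px, py

print(world_to_pixel(3.0, 1.5))  # 3 m -> 50 px, 1.5 m -> 25 px
```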
Examples of different birds-eye-view scene rasterisations that can be made using the associated software development kit. These can be used, for example, as an input to a convolution neural network architecture.
Together with the dataset, we are releasing a Python toolkit named L5Kit. It provides ease-of-use functionality and a full sample motion prediction pipeline that can be viewed as a baseline. It is available at https://github.com/lyft/l5kit/ and contains the following components:
Multi-threaded data loading and sampling. We provide wrappers around the raw data files that can sample scenes and load the data efficiently. Scenes can be sampled from multiple points of view: for prediction of the ego vehicle motion path, we can center the scene around our SDV. For predicting the motions of other traffic participants, we provide the functionality to recenter the scene around those traffic participants. This process is optimised for scalability and multi-threading to make it suitable for distributed machine learning.
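The recentering described above amounts to a rigid transform: translate world-frame points to the chosen agent and rotate by its negative yaw. The helper below sketches the idea in plain NumPy; it mirrors the concept, not l5kit's actual API:

```python
import numpy as np

# Sketch of recentering a scene around an agent: express world-frame points in
# an agent-centred frame whose x-axis points along the agent's heading.
# Conceptual only; not l5kit's actual API.
def to_agent_frame(points_xy, agent_xy, agent_yaw):
    """Transform (N, 2) world-frame points into the agent's local frame."""
    c, s = np.cos(-agent_yaw), np.sin(-agent_yaw)
    rot = np.array([[c, -s], [s, c]])  # rotation by -yaw
    return (np.asarray(points_xy) - np.asarray(agent_xy)) @ rot.T

# A point 1 m east of an agent at (1, 1) facing east lies 1 m ahead of it.
pts = to_agent_frame([[2.0, 1.0]], agent_xy=[1.0, 1.0], agent_yaw=0.0)
```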
Customisable scene visualisation and rasterisation.
We provide several functions to visualise and rasterise a sampled scene. Our visualisation package can draw additional information, such as the future trajectory, onto an RGB image and save files as images, GIFs or full scene videos.
We support several different rasterisation modes for creating a meaningful representation for the underlying image. Figure 6 shows example images generated by the different modes and created from either the semantic map (upper right image) or the aerial map (lower right), or a combination of both (lower left). Such images can then be used as input to a conventional machine learning pipeline akin to [2, 7].
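To make the idea of rasterisation concrete, the toy function below paints axis-aligned agent footprints onto a BEV occupancy grid. Real L5Kit rasterisers are far richer (rotated boxes, semantic and aerial layers, colour channels); the grid size and resolution here are illustrative assumptions:

```python
import numpy as np

# Toy sketch of a "box" rasterisation mode: paint axis-aligned agent
# footprints onto an ego-centred BEV occupancy grid. Illustrative only.
def rasterise_boxes(agents, grid_size=64, m_per_px=0.5):
    """agents: list of (cx, cy, length, width) in metres, ego-centred."""
    raster = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size // 2  # ego vehicle sits at the grid centre
    for cx, cy, length, width in agents:
        x0 = half + int((cx - length / 2) / m_per_px)
        x1 = half + int((cx + length / 2) / m_per_px)
        y0 = half + int((cy - width / 2) / m_per_px)
        y1 = half + int((cy + width / 2) / m_per_px)
        raster[max(y0, 0):min(y1, grid_size), max(x0, 0):min(x1, grid_size)] = 1.0
    return raster

# A 4 m x 2 m vehicle at the origin covers an 8 x 4 pixel patch at 0.5 m/px.
bev = rasterise_boxes([(0.0, 0.0, 4.0, 2.0)])
```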
| Map raster type | History | @1s | @2s | @3s | @4s | @5s | ADE |
Baseline motion prediction solution.
In addition to the functionality described above, we provide a complete training pipeline for the motion prediction task, including a baseline experiment in PyTorch. It is designed to show how the data can be used for the task at hand, and includes several helper functions, such as data loader classes and evaluation functions, to ease training with our data.
The presented end-to-end motion prediction pipeline should serve as a baseline and was inspired by the works of [2, 23]. Concretely, the task is to predict the expected future (x, y) positions over a 5-second horizon for the traffic participants in a scene, given their current positions. Implementation-wise, we use a ResNet-50 backbone with a regression loss, trained on BEV rasters centred around several different vehicles of interest. To improve accuracy, we can also provide the history of vehicle movements over the past few seconds by simply stacking BEV rasters together; this allows the network to implicitly infer each agent's current velocity and heading. Figure 7 displays typical predictions after training this architecture for 38k iterations with a batch size of 64.
Table 4 summarises the displacement error at various prediction horizons for different configurations (the L2 norm between the predicted point and the true position at each horizon), as well as the displacement error averaged over all timesteps (ADE).
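These metrics follow directly from their definitions and can be computed in a few lines:

```python
import numpy as np

# Displacement error at each horizon: the L2 distance between predicted and
# ground-truth positions. ADE averages this over all timesteps.
def displacement_errors(pred, gt):
    """pred, gt: arrays of shape (T, 2) holding (x, y) positions per timestep."""
    return np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=-1)

def ade(pred, gt):
    return float(displacement_errors(pred, gt).mean())

pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(displacement_errors(pred, gt))  # [0. 1. 2.]
print(ade(pred, gt))                  # 1.0
```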
The dataset introduced in this paper is the largest and most detailed dataset available for training prediction solutions. It is three times larger and significantly more descriptive than the current best alternative. Although it was tailored towards the motion prediction task, we believe the rich observational data describing the motion of all other traffic participants can also be used to develop new machine learning solutions for the downstream tasks of planning and simulation, akin to the recently proposed works of [2, 23]. We believe that publicising this dataset marks an important step towards democratising the development of self-driving applications, which in turn can speed progress towards a fully autonomous future. For additional questions about the dataset, feel free to reach out to email@example.com.
This work was done thanks to many members of the Lyft Level 5 team. Specifically, we would like to thank Emil Praun, Christy Robertson, Oliver Scheel, Stefanie Speichert, Liam Kelly, Chih Hu, Usman Muhammad, Lei Zhang, Dmytro Korduban, Jason Zhao and Hugo Grimmett.