
One Thousand and One Hours: Self-driving Motion Prediction Dataset

by John Houston, et al.

We present the largest self-driving dataset for motion prediction to date, with over 1,000 hours of data. It was collected by a fleet of 20 autonomous vehicles along a fixed route in Palo Alto, California over a four-month period. It consists of 170,000 scenes, each 25 seconds long, capturing the perception output of the self-driving system, which encodes the precise positions and motions of nearby vehicles, cyclists, and pedestrians over time. On top of this, the dataset contains a high-definition semantic map with 15,242 labelled elements and a high-definition aerial view of the area. Together with the provided software kit, this collection forms the largest, most complete, and most detailed dataset to date for the development of machine learning solutions for self-driving tasks such as motion forecasting, planning and simulation. The full dataset is available at





1 Introduction

Figure 2: An example of a state-of-the-art self-driving pipeline. First, the raw LiDAR and camera data are processed to detect the positions of nearby objects around the vehicle. Then, their motion is predicted to allow the SDV to plan a safe collision-free trajectory. The released dataset enables the modelling of a motion prediction component.

The availability of large-scale datasets has been a major contributor to progress in AI over the past decade. In the field of self-driving vehicles (SDV), several datasets, such as [9, 6, 13], enabled great progress in the development of perception systems [14, 16, 24, 20]. These systems let the self-driving vehicle process its LiDAR and camera data to understand the positions of other traffic participants, including cars, pedestrians and cyclists, around the vehicle.

Perception, however, is only the first step in the modern self-driving pipeline. Much work remains around data-driven motion prediction of traffic participants, trajectory planning, and simulation before self-driving vehicles can become a reality. Datasets for developing these methods differ from those used for perception in that they require large amounts of behavioural observations and interactions. These are obtained by combining the output of perception systems with an understanding of the environment in the form of a semantic map that contains priors over expected behaviour. Datasets for these downstream tasks are far less broadly available, existing mostly as in-house data collected by large-scale industrial efforts. This limits the ability of the computer vision and robotics communities to advance modern machine learning systems for these important tasks.

In this work, we share the largest and most detailed dataset to date for training motion prediction solutions. We are motivated by the following scenario: a self-driving fleet serving a single high-demand route, rather than serving a broad area. We consider this to be a more feasible deployment strategy for rideshare, since self-driving vehicles can be allocated to particular routes while human drivers serve the remaining traffic. This focus allows us to bound the system performance requirements and accident likelihood, both key factors for real-world self-driving deployment. In summary, the released dataset consists of:

  • The largest dataset to date for motion prediction, comprising 1,000 hours of traffic scenes capturing the motions of traffic participants around 20 self-driving vehicles, driving over 26,000 km along a suburban route.

  • The most detailed high-definition (HD) semantic map of the area, counting over 15,000 human annotations including 8,500 lane segments.

  • A high-resolution aerial image of the area, spanning 74 km² at a resolution of 6 cm per pixel, providing further spatial context about the environment.

  • L5Kit - a Python software library for accessing the dataset, together with a baseline machine learning solution for the motion prediction task.

2 Related Work

| Name | Size | Scenes | Map | Annotations | Task |
|---|---|---|---|---|---|
| KITTI [9] | 6 h | 50 | None | 3D bounding boxes | Perception |
| Oxford RobotCar [18] | 1,000 km | 100+ | None | – | Perception |
| Waymo Open Dataset [21] | 10 h | 1,000 | None | 3D bounding boxes | Perception |
| ApolloScape Scene Parsing [22] | 2 h | – | None | 3D bounding boxes | Perception |
| Argoverse 3D Tracking v1.1 [6] | 1 h | 113 | Lane center lines, lane connectivity | 3D bounding boxes | Perception |
| Lyft Perception Dataset [13] | 2.5 h | 366 | Rasterised road geometry | 3D bounding boxes | Perception |
| nuScenes [3] | 6 h | 1,000 | Rasterised road geometry | 3D bounding boxes, trajectories | Perception, Prediction |
| ApolloScape Trajectory [17] | 2 h | 103 | None | Trajectories | Prediction |
| Argoverse Forecasting v1.1 [6] | 320 h | 324k | Lane center lines, lane connectivity | Trajectories | Prediction |
| Lyft Prediction Dataset (ours) | 1,118 h | 170k | Road geometry, aerial map, crosswalks, traffic signs, … | Trajectories | Prediction |

Table 1: A comparison of self-driving datasets available to date. Our dataset surpasses all others in size, as well as in the level of detail of its semantic map (see Section 3).

In this section we review existing datasets for training autonomous vehicle (AV) systems from the viewpoint of a classical state-of-the-art self-driving stack, summarised in Figure 2. In this stack, first, the raw sensor input is processed by a perception system to estimate the positions of nearby vehicles, pedestrians, cyclists and other traffic participants. Next, the future motion and intent of these actors is estimated and used for planning vehicle maneuvers. In Table 1 we summarise the current leading datasets for training machine learning solutions for the different components of this stack, focusing mainly on the perception and prediction components.

Perception datasets

The perception task is usually framed as the supervised task of estimating the 3D positions of objects near the AV. Deep learning approaches are now state of the art for most problems relevant to autonomous driving, such as 3D object detection and semantic segmentation [20, 24, 16, 14].

Among the datasets for training these systems, the KITTI dataset [9] is the best-known benchmark for many computer vision and autonomous driving tasks. It contains around 6 hours of driving data, recorded from front-facing stereo cameras, LiDAR and GPS/IMU sensors. 3D bounding box annotations are available, including classification into different classes such as cars, trucks and pedestrians. The Waymo Open Dataset [21] and nuScenes [3] are of similar size and structure, providing 3D bounding box labels based on fused sensory inputs. The Oxford RobotCar dataset [18] also supports visual tasks, but its focus is localisation and mapping rather than object detection.

Our dataset's main target is not training perception systems. Instead, it is the product of an already-trained perception system used to process large quantities of new data for motion prediction. For this we refer to the Lyft Level 5 AV Dataset [13], which was collected along the same geographical route and was used to train the included perception system.

Prediction datasets The prediction task builds on top of perception: it aims to predict the output of the perception system a few seconds into the future. As such, it requires different information for both training and evaluation. Labelled bounding boxes alone are not sufficient: significantly more detailed information about the environment is needed, for example semantic maps that encode priors over expected driving behaviour, in order to reason about future behaviour.

Deep learning solutions leveraging the birds-eye-view (BEV) representation of the scene [1, 10, 15, 7, 5, 12] or graph neural networks [4, 8] have established themselves as the leading solutions for this task. Representative large-scale datasets for training these systems are, however, rare. The above-mentioned solutions were developed almost exclusively by industrial labs leveraging internal proprietary datasets.

The most relevant existing open dataset is the Argoverse Forecasting dataset [6], providing 320 hours of perception data and a lightweight HD semantic map encoding lane centre positions. Our dataset differs in three substantial ways: 1) Instead of covering a wide city area, we provide over 1,000 hours of data along a single route. This is motivated by the assumption that, particularly in ride-hailing applications, the first deployments of self-driving fleets are likely to occur along a few high-demand routes, which makes it possible to bound system requirements and quantify accident risk. 2) We contribute higher-quality scene data by providing the full perception output, including bounding boxes and class probabilities instead of centroids alone. In addition, our semantic map is more complete: it counts more than 15,000 human annotations rather than only lane centres. 3) We also provide a high-resolution aerial image of the area, motivated by the fact that much of the information encoded in the semantic map is implicitly accessible in aerial form; providing this map can therefore unlock the development of semantic-map-free solutions.

Figure 3: The self-driving vehicle configuration used to collect the data. Raw data from LiDARs and cameras were processed by the perception system to generate the dataset, capturing the poses and motions of nearby vehicles.
Figure 4: Examples of scenes from the dataset, projected over a birds-eye-view of the rasterised semantic map. The self-driving vehicle is shown in red, other traffic participants in yellow, and lane colour denotes driving direction. The dataset contains 170k such sequences, each 25 seconds long with sensor data at 10 Hz.

3 Dataset

Here we outline the details of the released dataset, including the process that was used to construct it. An overview of different dataset statistics can be found in Table 2.

The dataset has three components:

  1. 170k scenes, each 25 seconds long, capturing the movement of the self-driving vehicle and traffic participants around it.

  2. A high-definition semantic map capturing road rules, lane geometry, and other traffic elements.

  3. A high-resolution aerial picture of the area that can be used to further aid the prediction.

| Statistic | Value |
|---|---|
| # self-driving vehicles used | 20 |
| Total dataset size | 1,118 hours / 26,344 km / 162k scenes |
| Training set size | 928 hours / 21,849 km / 134k scenes |
| Validation set size | 78 hours / 1,840 km / 11k scenes |
| Test set size | 112 hours / 2,656 km / 16k scenes |
| Scene length | 25 seconds |
| Total # of traffic participant observations | 3,187,838,149 |
| Average # of detections per frame | 79 |
| Labels | Car: 92.47% / Pedestrian: 5.91% / Cyclist: 1.62% |
| Semantic map | 15,242 annotations / 8,505 lane segments |
| Aerial map | 74 km² at 6 cm per pixel |

Table 2: Statistics of the released dataset.

3.1 Scenes

The dataset consists of 170,000 scenes, each 25 seconds long, totalling over 1,118 hours of logs. Example scenes are shown in Figure 4. All logs were collected by a fleet of self-driving vehicles driving along a fixed route. The sensors for perception include 7 cameras, 3 LiDARs, and 5 radars (see Figure 3). The sensors are positioned as follows: one LiDAR is on the roof of the vehicle, and two LiDARs on the front bumper. The roof LiDAR has 64 channels and spins at 10 Hz, while the bumper LiDARs have 40 channels. All seven cameras are mounted on the roof and together have a 360 degree horizontal field of view. Four radars are also mounted on the roof, and one radar is placed on the forward-facing front bumper.

The dataset was collected between October 2019 and March 2020. It was captured during daytime, between 8 AM and 4 PM. For each scene we detected the visible traffic participants, including vehicles, pedestrians, and cyclists. Each traffic participant is internally represented by a 2.5D cuboid, velocity, acceleration, yaw, yaw rate, and a class label. These traffic participants are detected using our in-house perception system, which fuses data across multiple modalities to produce a 360 degree view of the world surrounding the SDV. Table 2 outlines some more statistics for the dataset.

We split the dataset into train, validation and test sets in a 83–7–10% ratio, where a particular SDV contributes to only a single set. We encode the dataset in the form of n-dimensional compressed zarr arrays; the zarr format was chosen to represent individual scenes because it allows fast random access to different portions of the dataset while minimising the memory footprint, enabling efficient distributed training in the cloud.

3.2 High-definition semantic map

| Property | Values |
|---|---|
| Lane boundaries | sequence of coordinates |
| Lane connectivity | possible lane transitions |
| Driving directions | one way, two way |
| Road class | primary, secondary, tertiary, … |
| Road paintings | solid, dashed, colour |
| Speed limits | mph |
| Lane restrictions | bus only, bike only, turn only, … |
| Crosswalks | position |
| Traffic lights | position, lane association |
| Traffic signs | stop, turn, yield, parking, … |
| Restrictions | keep clear zones, no parking, … |
| Speed bumps | position |

Table 3: Elements of the provided HD semantic map. We provide 15,242 human annotations including 8,505 individual lane segments.

The HD semantic map that we provide encodes information about the road itself, and various traffic elements along the route totalling 15,242 labelled elements including 8,505 lane segments. This map was created by human curators who annotated the underlying localisation map, which in turn was created using a simultaneous localisation and mapping (SLAM) system. Given the use of SLAM, the position of the SDV is always known with centimetre-grade accuracy. Thus, the information in the semantic map can be used both for planning driving behaviour and for anticipating the future movements of other traffic participants.

Figure 5: An overview of the aerial map of Palo Alto, California included in the dataset, surrounding the driving route. The map covers 74 km² at a resolution of 6 cm per pixel.

The semantic map is given in the form of a protocol buffer. We provide precise road geometry through the encoding of lane segments, their connectivity, and other properties (as summarised in Table 3).
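To illustrate how the map's lane connectivity can be consumed, here is a minimal in-memory stand-in using plain Python dicts; the identifiers and structure are hypothetical and do not reflect the actual protocol-buffer schema:

```python
# Hypothetical stand-in for the semantic map's lane graph; the real map is a
# protocol buffer with far richer attributes (see Table 3).
lanes = {
    "lane_a": {"successors": ["lane_b", "lane_c"], "speed_limit_mph": 25},
    "lane_b": {"successors": [], "speed_limit_mph": 25},
    "lane_c": {"successors": [], "speed_limit_mph": 15},
}

def candidate_next_lanes(lane_id):
    """Lanes a vehicle currently in `lane_id` may transition into."""
    return lanes[lane_id]["successors"]

assert candidate_next_lanes("lane_a") == ["lane_b", "lane_c"]
```

Connectivity queries like this are what give a prediction model its prior over plausible future paths.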

3.3 Aerial map

The aerial map captures the area of Palo Alto surrounding the route at a resolution of 6 cm per pixel (the aerial map is provided by NearMap). It enables the use of spatial information to aid motion prediction. Figure 5 shows the map coverage and the level of detail. The covered area of 74 km² is provided as 181 GeoTIFF tiles.

4 Development tools

Figure 6: Examples of different birds-eye-view scene rasterisations that can be made using the associated software development kit. These can be used, for example, as input to a convolutional neural network architecture.

Figure 7: Example output of the motion prediction solution supplied as part of the software development kit. A convolutional neural network takes rasterised scenes around nearby vehicles as input, and predicts their future motion.

Together with the dataset, we are releasing a Python toolkit named L5Kit. It provides ease-of-use functionality and a full sample motion prediction pipeline that can serve as a baseline. It is available at and contains the following components:

Multi-threaded data loading and sampling. We provide wrappers around the raw data files that can sample scenes and load the data efficiently. Scenes can be sampled from multiple points of view: for prediction of the ego vehicle motion path, we can center the scene around our SDV. For predicting the motions of other traffic participants, we provide the functionality to recenter the scene around those traffic participants. This process is optimised for scalability and multi-threading to make it suitable for distributed machine learning.
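The recentering described above boils down to a 2D rigid transform into the chosen agent's frame. The helper below is an illustrative sketch, not L5Kit's actual API:

```python
import numpy as np

# Sketch: transform world-frame (x, y) points into the frame of a chosen
# agent by translating by its centroid and rotating by its yaw, so the
# agent sits at the origin facing +x. Names here are illustrative only.
def recenter(points, center, yaw):
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s], [s, c]])
    return (points - center) @ rot.T

# A point 1 m ahead of an agent at the origin heading +y (yaw = pi/2)
# ends up to the agent's right in its local frame.
out = recenter(np.array([[1.0, 0.0]]), center=np.zeros(2), yaw=np.pi / 2)
assert np.allclose(out, [[0.0, -1.0]])
```

Applying the same transform to every scene element before rasterisation is what lets one trained model predict for the ego vehicle and for arbitrary other agents alike.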

Customisable scene visualisation and rasterisation.

We provide several functions to visualise and rasterise a sampled scene. Our visualisation package can draw additional information, such as the future trajectory, onto an RGB image and save files as images, GIFs or full scene videos.

We support several different rasterisation modes for creating a meaningful representation of the underlying scene. Figure 6 shows example images generated by the different modes, created from either the semantic map (upper right image), the aerial map (lower right), or a combination of both (lower left). Such images can then be used as input to a conventional machine learning pipeline akin to [2, 7].

| Map raster type | History length | @0.5s | @1s | @2s | @3s | @4s | @5s | ADE |
|---|---|---|---|---|---|---|---|---|
| Semantic | 0 sec | 0.61 | 1.15 | 2.25 | 3.38 | 4.58 | 5.87 | 2.81 |
| Semantic | 1 sec | 0.42 | 0.84 | 1.76 | 2.71 | 3.78 | 5.00 | 2.28 |

Table 4: Performance of the baseline motion prediction solution on the dataset. Displacement errors (in metres) are listed for different prediction horizons.

Baseline motion prediction solution.

In addition to the above-described functionality, we also provide a complete training pipeline for the motion prediction task, including a baseline experiment in PyTorch [19]. It was designed to show how the data can be used for the task at hand, and includes several helpful functions to ease training with our data, such as data loader classes and evaluation functions.

The presented end-to-end motion prediction pipeline should serve as a baseline and was inspired by the works of [2, 23]. Concretely, the task is to predict the expected future (x, y) positions of different traffic participants in the scene over a 5-second horizon, given their current positions. Implementation-wise, we use a ResNet-50 backbone [11] with an ℓ2 loss, trained on BEV rasters centred around several different vehicles of interest. To improve accuracy, we can also provide a history of vehicle movements over the past few seconds by simply stacking BEV rasters together; this allows the network to implicitly infer each agent's current velocity and heading. Figure 7 displays typical predictions after training this architecture for 38k iterations with a batch size of 64.
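The history-stacking idea can be sketched in a few lines; the raster size and channel count below are illustrative, not the pipeline's actual configuration:

```python
import numpy as np

# Sketch: each past timestep contributes one BEV raster channel, so a CNN
# can infer velocity and heading from frame-to-frame displacement.
raster_size = 224
num_history = 10  # e.g. 1 s of history at 10 Hz (illustrative)
frames = [np.zeros((raster_size, raster_size), dtype=np.float32)
          for _ in range(num_history + 1)]  # history frames + current frame
model_input = np.stack(frames, axis=0)  # (channels, H, W) tensor for the CNN
assert model_input.shape == (11, 224, 224)
```

In practice the semantic or aerial raster channels are concatenated alongside these history channels before being fed to the backbone.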

Table 4 summarises the displacement error at various prediction horizons for the different configurations (the ℓ2-norm between the predicted point and the true position at each horizon t), as well as the displacement error averaged over all timesteps (ADE).
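The metric reported in Table 4 can be computed as follows; trajectory shapes are assumptions matching the 5-second horizon at 10 Hz described above:

```python
import numpy as np

# Displacement error at horizon t: L2 norm between predicted and true (x, y);
# ADE averages it over all timesteps. Trajectories are (T, 2) arrays,
# here T = 50 (5 s at 10 Hz).
def displacement_errors(pred, gt):
    return np.linalg.norm(pred - gt, axis=-1)  # shape (T,)

def ade(pred, gt):
    return float(displacement_errors(pred, gt).mean())

pred = np.zeros((50, 2))
gt = np.full((50, 2), [0.1, 0.0])  # constant 0.1 m offset in x
assert abs(ade(pred, gt) - 0.1) < 1e-9
```

Reading a single row of the table at a fixed horizon corresponds to `displacement_errors(...)[t]` for that timestep rather than the mean.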

5 Conclusion

The dataset introduced in this paper is the largest and most detailed dataset available for training motion prediction solutions. It is three times larger and significantly more descriptive than the current best alternative [6]. Although it was tailored towards the motion prediction task, we believe the rich observational data describing the motion of all other traffic participants can also be used to develop new machine learning solutions for downstream tasks in planning and simulation, akin to the recently proposed works of [2, 23]. We believe that releasing this dataset marks an important next step towards democratising the development of self-driving applications. This, in turn, can lead to faster progress towards a fully autonomous future. For additional questions about the dataset feel free to reach out to

6 Acknowledgements

This work was done thanks to many members of the Lyft Level 5 team. Specifically, we would like to thank Emil Praun, Christy Robertson, Oliver Scheel, Stefanie Speichert, Liam Kelly, Chih Hu, Usman Muhammad, Lei Zhang, Dmytro Korduban, Jason Zhao and Hugo Grimmett.


  • [1] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese (2016) Social LSTM: human trajectory prediction in crowded spaces. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §2.
  • [2] M. Bansal, A. Krizhevsky, and A. Ogale (2019) ChauffeurNet: learning to drive by imitating the best and synthesizing the worst. Robotics: Science and Systems (RSS). Cited by: §4, §4, §5.
  • [3] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom (2019) NuScenes: a multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027. Cited by: Table 1, §2.
  • [4] S. Casas, C. Gulino, R. Liao, and R. Urtasun (2020) Spatially-aware graph neural networks for relational behavior forecasting from sensor data. Int. Conf. on Robotics and Automation (ICRA). Cited by: §2.
  • [5] Y. Chai, B. Sapp, M. Bansal, and D. Anguelov (2019) MultiPath: multiple probabilistic anchor trajectory hypotheses for behavior prediction. Cited by: §2.
  • [6] M. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, and J. Hays (2019) Argoverse: 3d tracking and forecasting with rich maps. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §1, Table 1, §2, §5.
  • [7] H. Cui, V. Radosavljevic, F. Chou, T. Lin, T. Nguyen, T. Huang, J. Schneider, and N. Djuric (2019) Multimodal trajectory predictions for autonomous driving using deep convolutional networks. Int. Conf. on Robotics and Automation (ICRA). Cited by: §2, §4.
  • [8] J. Gao, C. Sun, H. Zhao, Y. Shen, D. Anguelov, C. Li, and C. Schmid (2020) VectorNet: encoding HD maps and agent dynamics from vectorized representation. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §2.
  • [9] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. Int. Journal of Robotics Research (IJRR). Cited by: §1, Table 1, §2.
  • [10] A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi (2018) Social gan: socially acceptable trajectories with generative adversarial networks. In Int. Conf. on Computer Vision and Pattern Recognition, Cited by: §2.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §4.
  • [12] J. Hong, B. Sapp, and J. Philbin (2019) Rules of the road: predicting driving behavior with a convolutional model of semantic interactions. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §2.
  • [13] R. Kesten, M. Usman, J. Houston, T. Pandya, K. Nadhamuni, A. Ferreira, M. Yuan, B. Low, A. Jain, P. Ondruska, S. Omari, S. Shah, A. Kulkarni, A. Kazakova, C. Tao, L. Platinsky, W. Jiang, and V. Shet (2019) Lyft Level 5 AV dataset 2019. Cited by: §1, Table 1, §2.
  • [14] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2018) PointPillars: fast encoders for object detection from point clouds. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §1, §2.
  • [15] N. Lee, W. Choi, P. Vernaza, C. B. Choy, P. H. S. Torr, and M. K. Chandraker (2017) DESIRE: distant future prediction in dynamic scenes with interacting agents. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §2.
  • [16] M. Liang, B. Yang, Y. Chen, R. Hu, and R. Urtasun (2019) Multi-task multi-sensor fusion for 3d object detection. Int. Conf. on Computer Vision and Pattern Recognition. Cited by: §1, §2.
  • [17] Y. Ma, X. Zhu, S. Zhang, R. Yang, W. Wang, and D. Manocha (2019) TrafficPredict: trajectory prediction for heterogeneous traffic-agents. AAAI Conference on Artificial Intelligence. Cited by: Table 1.
  • [18] W. Maddern, G. Pascoe, C. Linegar, and P. Newman (2017) 1 year, 1000km: the oxford robotcar dataset. Int. Journal of Robotics Research (IJRR). Cited by: Table 1.
  • [19] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems (NeurIPS). Cited by: §4.
  • [20] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas (2018) Frustum pointnets for 3d object detection from RGB-D data. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §1, §2.
  • [21] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, V. Vasudevan, W. Han, J. Ngiam, H. Zhao, A. Timofeev, S. Ettinger, M. Krivokon, A. Gao, A. Joshi, Y. Zhang, J. Shlens, Z. Chen, and D. Anguelov (2019) Scalability in perception for autonomous driving: waymo open dataset. Cited by: Table 1, §2.
  • [22] P. Wang, X. Huang, X. Cheng, D. Zhou, Q. Geng, and R. Yang (2019) The apolloscape open dataset for autonomous driving and its application.. Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: Table 1.
  • [23] W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, and R. Urtasun (2019) End-to-end interpretable neural motion planner. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §4, §5.
  • [24] Y. Zhou and O. Tuzel (2018) VoxelNet: end-to-end learning for point cloud based 3d object detection. Int. Conf. on Computer Vision and Pattern Recognition (CVPR). Cited by: §1, §2.