
What data do we need for training an AV motion planner?

by Long Chen et al.

We investigate what grade of sensor data is required for training an imitation-learning-based AV planner on human expert demonstrations. Machine-learned planners are very data-hungry, and training data are usually collected with vehicles equipped with the same sensors used for autonomous operation, which is costly and does not scale. If cheaper sensors could be used for collection instead, data availability would increase, which is crucial in a field where data volume requirements are large and availability is small. In experiments using up to 1000 hours of expert demonstrations, we find that training with 10x lower-quality data outperforms training with 1x AV-grade data in terms of planner performance. The important implication is that cheaper sensors can indeed be used, which improves data access and democratizes the field of imitation-based motion planning. Alongside this, we perform a sensitivity analysis of planner performance as a function of perception range, field of view, accuracy, and data volume, and examine why lower-quality data still yield good planning results.
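The quantity-versus-quality trade-off described above can be sketched with a toy behavioural-cloning experiment. Everything here is an illustrative assumption, not the paper's setup: a hypothetical linear expert policy, a scalar control target, and arbitrarily chosen noise levels standing in for "AV-grade" versus "commodity" sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expert": maps a 3-dimensional perception feature
# vector to a scalar control target via a fixed linear policy.
true_w = np.array([0.8, -0.5, 0.3])

def collect(n_samples, sensor_noise):
    """Simulate expert demonstrations logged with a given sensor grade."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(scale=sensor_noise, size=n_samples)
    return X, y

def clone(X, y):
    """Behavioural cloning reduced to least squares for illustration."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# 1x "AV-grade" data: small volume, low sensor noise.
X_hq, y_hq = collect(100, sensor_noise=0.01)
# 10x "commodity" data: 10x the volume, 2x the noise (an assumed
# quality penalty, chosen so the volume effect is visible).
X_lq, y_lq = collect(1000, sensor_noise=0.02)

err_hq = np.linalg.norm(clone(X_hq, y_hq) - true_w)
err_lq = np.linalg.norm(clone(X_lq, y_lq) - true_w)
print(f"AV-grade policy error:  {err_hq:.4f}")
print(f"commodity policy error: {err_lq:.4f}")
```

In this linear setting the expected parameter error scales roughly as noise/sqrt(n), so a 10x increase in data volume can compensate for a modest increase in sensor noise; the toy model mirrors, in miniature, the trade-off the paper measures empirically.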

