IntentNet: Learning to Predict Intention from Raw Sensor Data

01/20/2021
by   Sergio Casas, et al.

In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high-level behaviors and continuous trajectories describing future motion. In this paper, we develop a one-stage detector and forecaster that exploits both the 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment. Our multi-task model achieves higher accuracy than the corresponding separate modules while saving computation, which is critical for reducing reaction time in self-driving applications.
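
The abstract only describes the architecture at a high level. As a rough illustration of the idea, the sketch below (not the authors' implementation) shows how a shared backbone over a bird's-eye-view rasterization of the LiDAR sweep and map could feed joint detection, discrete-intention classification, and trajectory-regression heads in a single forward pass. The class name MultiTaskIntentHead, all layer sizes, the number of intention classes, and the prediction horizon are illustrative assumptions.

import torch
import torch.nn as nn

class MultiTaskIntentHead(nn.Module):
    """Illustrative multi-task head: detection + intention + trajectory from one shared feature map."""

    def __init__(self, in_channels=128, num_anchors=2, num_intentions=8, horizon=10):
        super().__init__()
        # Shared backbone over the fused LiDAR + map BEV raster (channel counts are illustrative).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Detection head: per-anchor objectness score plus box parameters (x, y, w, l, sin, cos).
        self.det_head = nn.Conv2d(256, num_anchors * (1 + 6), 1)
        # Intention head: per-anchor logits over discrete high-level behaviors
        # (e.g. keep lane, turn left, turn right, stop).
        self.intent_head = nn.Conv2d(256, num_anchors * num_intentions, 1)
        # Trajectory head: per-anchor future (x, y) waypoints over the prediction horizon.
        self.traj_head = nn.Conv2d(256, num_anchors * horizon * 2, 1)

    def forward(self, bev_features):
        shared = self.backbone(bev_features)
        return {
            "detection": self.det_head(shared),
            "intention_logits": self.intent_head(shared),
            "trajectory": self.traj_head(shared),
        }

# Example: one forward pass over a 128-channel, 200x200 BEV feature map.
model = MultiTaskIntentHead()
out = model(torch.randn(1, 128, 200, 200))
print({k: v.shape for k, v in out.items()})

Computing all three outputs from one shared backbone is what lets a multi-task model of this kind save computation relative to running separate detection and forecasting modules.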

Related research

06/22/2020  High-Precision Digital Traffic Recording with Multi-LiDAR Infrastructure Sensor Setups
Large driving datasets are a key component in the current development an...

01/16/2021  LookOut: Diverse Multi-Future Prediction and Planning for Self-Driving
Self-driving vehicles need to anticipate a diverse set of future traffic...

07/10/2020  VRUNet: Multi-Task Learning Model for Intent Prediction of Vulnerable Road Users
Advanced perception and path planning are at the core for any self-drivi...

06/03/2020  MultiNet: Multiclass Multistage Multimodal Motion Prediction
One of the critical pieces of the self-driving puzzle is understanding t...

01/18/2021  MP3: A Unified Model to Map, Perceive, Predict and Plan
High-definition maps (HD maps) are a key component of most modern self-d...

06/07/2021  Extending counterfactual accounts of intent to include oblique intent
One approach to defining Intention is to use the counterfactual tools de...

08/03/2023  UniSim: A Neural Closed-Loop Sensor Simulator
Rigorously testing autonomy systems is essential for making safe self-dr...
