
LEADER: Learning Attention over Driving Behaviors for Planning under Uncertainty

by   Mohamad H. Danesh, et al.

Uncertainty in human behavior poses a significant challenge to autonomous driving in crowded urban environments. Partially observable Markov decision processes (POMDPs) offer a principled framework for planning under uncertainty, often leveraging Monte Carlo sampling to achieve online performance for complex tasks. However, sampling also raises safety concerns by potentially missing critical events. To address this, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), that learns to attend to critical human behaviors during planning. LEADER learns a neural network generator to provide attention over human behaviors in real-time situations. It integrates the attention into a belief-space planner, using importance sampling to bias reasoning towards critical events. To train the algorithm, we let the attention generator and the planner form a min-max game. By solving the min-max game, LEADER learns to perform risk-aware planning without human labeling.
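The importance-sampling idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation): behaviors are drawn from an attention-biased proposal distribution rather than the prior, and each sampled rollout is reweighted by the prior-to-proposal ratio so the value estimate stays unbiased. All names here (`prior`, `attention`, `rollout`) are hypothetical placeholders.

```python
import random

def importance_sampled_value(behaviors, prior, attention, rollout, n=1000):
    """Estimate the expected rollout value under the prior over behaviors,
    while sampling from an attention-biased proposal distribution."""
    # Proposal: reweight the prior by the attention scores, then normalize.
    proposal = [p * a for p, a in zip(prior, attention)]
    z = sum(proposal)
    proposal = [q / z for q in proposal]

    total = 0.0
    for _ in range(n):
        # Sample a behavior index from the biased proposal.
        i = random.choices(range(len(behaviors)), weights=proposal)[0]
        # The importance weight prior/proposal corrects the sampling bias,
        # so rare-but-critical behaviors are sampled often yet counted fairly.
        w = prior[i] / proposal[i]
        total += w * rollout(behaviors[i])
    return total / n
```

The proposal concentrates samples on behaviors the attention deems critical (e.g. a pedestrian suddenly crossing), while the importance weights keep the planner's value estimate consistent with the true behavior prior.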
