
LEADER: Learning Attention over Driving Behaviors for Planning under Uncertainty

09/23/2022
by Mohamad H. Danesh, et al.

Uncertainty in human behavior poses a significant challenge to autonomous driving in crowded urban environments. Partially observable Markov decision processes (POMDPs) offer a principled framework for planning under uncertainty, often leveraging Monte Carlo sampling to achieve online performance on complex tasks. However, sampling also raises safety concerns, because rare but critical events may be missed. To address this, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), which learns to attend to critical human behaviors during planning. LEADER learns a neural network generator that provides attention over human behaviors in real-time situations. It integrates this attention into a belief-space planner, using importance sampling to bias reasoning towards critical events. To train the algorithm, we let the attention generator and the planner form a min-max game. By solving this min-max game, LEADER learns to perform risk-aware planning without human labeling.
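The abstract describes two core mechanisms: a learned attention distribution over candidate human behaviors, and importance sampling that corrects for drawing behaviors from that biased distribution inside the belief-space planner. The snippet below is a minimal sketch of the second idea only, not the authors' implementation; the attention scores, the behavior-type belief, and the rollout_return callable are all assumed placeholders.

```python
# Minimal sketch (not the LEADER code) of attention-biased importance sampling
# when evaluating a plan against an uncertain human behavior type.
import numpy as np

def importance_sampled_value(belief, attention_logits, rollout_return, n_samples=100):
    """Estimate a plan's value by sampling human behavior types.

    belief           -- p(b): probability of each candidate human behavior type
    attention_logits -- scores from a learned attention generator, one per type
    rollout_return   -- callable: behavior index -> simulated return of the plan
    """
    belief = np.asarray(belief, dtype=float)
    belief /= belief.sum()

    # Proposal q(b) biases sampling toward behaviors the generator marks as critical.
    attention = np.exp(attention_logits - np.max(attention_logits))
    proposal = belief * attention
    proposal /= proposal.sum()

    total = 0.0
    for _ in range(n_samples):
        b = np.random.choice(len(belief), p=proposal)
        # Importance weight corrects for sampling from q instead of the belief p,
        # so the estimate stays unbiased while rare critical behaviors are sampled often.
        w = belief[b] / proposal[b]
        total += w * rollout_return(b)
    return total / n_samples

# Toy usage: a rare aggressive pedestrian type that causes a collision if ignored.
if __name__ == "__main__":
    belief = [0.9, 0.1]                      # the aggressive type is unlikely ...
    attention_logits = np.array([0.0, 2.0])  # ... but the generator flags it as critical
    returns = {0: 1.0, 1: -10.0}
    value = importance_sampled_value(belief, attention_logits, lambda b: returns[b])
    print(f"estimated value under the belief: {value:.2f}")
```

In the full algorithm described above, these attention scores come from a generator trained adversarially against the planner in a min-max game, so no human labels of which behaviors are critical are required.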

Related research:

03/05/2020 · Autonomous Driving at Intersections: A Critical-Turning-Point Approach for Left Turns
Left-turn planning is one of the formidable challenges for autonomous ve...

09/26/2021 · Anytime Game-Theoretic Planning with Active Reasoning About Humans' Latent States for Human-Centered Robots
A human-centered robot needs to reason about the cognitive limitation an...

09/03/2017 · Uncertainty-Aware Learning from Demonstration using Mixture Density Networks with Sampling-Free Variance Modeling
In this paper, we propose an uncertainty-aware learning from demonstrati...

01/11/2021 · Closing the Planning-Learning Loop with Application to Autonomous Driving in a Crowd
Imagine an autonomous robot vehicle driving in dense, possibly unregulat...

05/28/2020 · Improving Automated Driving through Planning with Human Internal States
This work examines the hypothesis that partially observable Markov decis...

09/12/2016 · DESPOT: Online POMDP Planning with Regularization
The partially observable Markov decision process (POMDP) provides a prin...