Improving Automated Driving through Planning with Human Internal States

by Zachary Sunberg et al.

This work examines the hypothesis that partially observable Markov decision process (POMDP) planning that models human drivers' internal states can significantly improve both safety and efficiency in autonomous freeway driving. We evaluate this hypothesis in a simulated scenario where an autonomous car must safely perform three lane changes in rapid succession. Approximate POMDP solutions are obtained with the partially observable Monte Carlo planning with observation widening (POMCPOW) algorithm. This approach outperforms over-confident and conservative MDP baselines and matches or outperforms QMDP. Relative to the MDP baselines, POMCPOW typically cuts the rate of unsafe situations in half or increases the success rate by 50%.
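To make the QMDP baseline concrete, here is a minimal sketch of its core idea: the human driver's internal state (say, "aggressive" vs. "timid") is latent, so the planner keeps a belief over it and picks the action that maximizes the belief-weighted average of fully observable Q-values. The states, actions, and Q-values below are illustrative assumptions, not values from the paper.

```python
# QMDP approximation sketch: Q(b, a) = sum_s b(s) * Q_MDP(s, a).
# The internal-state labels and all numbers are hypothetical.

STATES = ["aggressive", "timid"]   # latent driver internal states (assumed)
ACTIONS = ["merge", "wait"]

# Hypothetical fully observable Q-values Q_MDP(s, a): merging in front of an
# aggressive driver is risky; waiting near a timid driver only wastes time.
Q_MDP = {
    ("aggressive", "merge"): -10.0,
    ("aggressive", "wait"):   -1.0,
    ("timid",      "merge"):   5.0,
    ("timid",      "wait"):   -1.0,
}

def qmdp_action(belief):
    """Return argmax_a over sum_s belief(s) * Q_MDP(s, a)."""
    def q(a):
        return sum(belief[s] * Q_MDP[(s, a)] for s in STATES)
    return max(ACTIONS, key=q)

# With high confidence the neighbor is timid, QMDP merges; with substantial
# probability of aggression, it waits.
print(qmdp_action({"aggressive": 0.1, "timid": 0.9}))  # merge
print(qmdp_action({"aggressive": 0.7, "timid": 0.3}))  # wait
```

Because QMDP assumes the state becomes fully observable after one step, it never takes actions purely to gather information (e.g., nudging toward a gap to probe the other driver's reaction), which is one reason a full POMDP planner like POMCPOW can match or beat it.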


The Value of Inferring the Internal State of Traffic Participants for Autonomous Freeway Driving

Safe interaction with human drivers is one of the primary challenges for...

RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains

Despite the intractability of generic optimal partially observable Marko...

Anytime Game-Theoretic Planning with Active Reasoning About Humans' Latent States for Human-Centered Robots

A human-centered robot needs to reason about the cognitive limitation an...

Adaptive Informative Path Planning with Multimodal Sensing

Adaptive Informative Path Planning (AIPP) problems model an agent tasked...

Autonomous Driving at Intersections: A Critical-Turning-Point Approach for Left Turns

Left-turn planning is one of the formidable challenges for autonomous ve...

Planning and Acting under Uncertainty: A New Model for Spoken Dialogue Systems

Uncertainty plays a central role in spoken dialogue systems. Some stocha...

AI based Safety System for Employees of Manufacturing Industries in Developing Countries

In this paper, the authors present a Markov Decision Process (MD...