Probabilistic Prediction of Interactive Driving Behavior via Hierarchical Inverse Reinforcement Learning

09/09/2018
by Liting Sun, et al.

Autonomous vehicles (AVs) are on the road. To safely and efficiently interact with other road participants, AVs have to accurately predict the behavior of surrounding vehicles and plan accordingly. Such prediction should be probabilistic, to address the uncertainties in human behavior. Such prediction should also be interactive, since the distribution over all possible trajectories of the predicted vehicle depends not only on historical information, but also on the future plans of other vehicles that interact with it. To achieve such interaction-aware predictions, we propose a probabilistic prediction approach based on hierarchical inverse reinforcement learning (IRL). First, we explicitly consider the hierarchical trajectory-generation process of human drivers, which involves both discrete and continuous driving decisions. Based on this, the distribution over all future trajectories of the predicted vehicle is formulated as a mixture of distributions partitioned by the discrete decisions. Then we apply IRL hierarchically to learn the distributions from real human demonstrations. A case study for the ramp-merging driving scenario is provided. The quantitative results show that the proposed approach can accurately predict both the discrete driving decisions, such as yield or pass, and the continuous trajectories.
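As a sketch of the hierarchical formulation described above, the predicted vehicle's trajectory distribution can be written as a decision-conditioned mixture under maximum-entropy IRL. The exact conditioning variables and the cost parameterization below are illustrative assumptions rather than the paper's precise notation:

P(\xi \mid \xi_{\mathrm{hist}}, \xi_{\mathrm{others}}) = \sum_{d \in \mathcal{D}} P(d \mid \xi_{\mathrm{hist}}, \xi_{\mathrm{others}})\, P(\xi \mid d, \xi_{\mathrm{hist}}, \xi_{\mathrm{others}}),

P(\xi \mid d, \xi_{\mathrm{hist}}, \xi_{\mathrm{others}}) \propto \exp\!\bigl(-\beta\, C_{\theta_d}(\xi;\, \xi_{\mathrm{hist}}, \xi_{\mathrm{others}})\bigr),

where d ranges over the discrete decisions (e.g., yield or pass), \xi_{\mathrm{hist}} is the historical information, \xi_{\mathrm{others}} denotes the future plans of the interacting vehicles, and C_{\theta_d} is a decision-specific cost function whose parameters are learned from human demonstrations via IRL.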
