Efficient Sampling-Based Maximum Entropy Inverse Reinforcement Learning with Application to Autonomous Driving

by Zheng Wu, et al.

In the past decades, we have witnessed significant progress in the domain of autonomous driving. Advanced techniques based on optimization and reinforcement learning (RL) have become increasingly powerful at solving the forward problem: given designed reward/cost functions, how should we optimize them to obtain driving policies that interact with the environment safely and efficiently? Such progress raises another, equally important question: what should we optimize? Instead of manually specifying reward functions, it is desirable to extract what human drivers try to optimize from real traffic data and assign that to autonomous vehicles, enabling more naturalistic and transparent interaction between humans and intelligent agents. To address this issue, we present an efficient sampling-based maximum-entropy inverse reinforcement learning (IRL) algorithm in this paper. Unlike existing IRL algorithms, the proposed algorithm introduces an efficient continuous-domain trajectory sampler, allowing it to learn reward functions directly in the continuous domain while accounting for the uncertainties in trajectories demonstrated by human drivers. We evaluate the proposed algorithm on real driving data, covering both non-interactive and interactive scenarios. The experimental results show that the proposed algorithm achieves more accurate prediction performance, faster convergence, and better generalization than baseline IRL algorithms.
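To make the idea concrete, here is a minimal sketch of one gradient step of sampling-based maximum-entropy IRL. It is not the authors' implementation; it assumes a linear reward r(ξ) = θᵀf(ξ) over trajectory features and a user-supplied sampler that draws candidate trajectories in the continuous domain (only their feature vectors appear below).

```python
import numpy as np

def maxent_irl_step(theta, demo_features, sample_features, lr=0.1):
    """One gradient-ascent step on the MaxEnt IRL log-likelihood.

    theta:           (D,) current reward weights
    demo_features:   (D,) mean feature vector of the demonstrated trajectories
    sample_features: (N, D) features of N sampled candidate trajectories
    """
    # Under the maximum-entropy model, each sampled trajectory is weighted
    # proportionally to exp(reward).
    rewards = sample_features @ theta
    w = np.exp(rewards - rewards.max())  # subtract max for numerical stability
    w /= w.sum()
    # Gradient: demonstrated features minus expected features under the model.
    expected_features = w @ sample_features
    grad = demo_features - expected_features
    return theta + lr * grad
```

Repeating this step pushes θ so that the model's expected feature counts match those of the demonstrations, which is the defining condition of maximum-entropy IRL; the quality of the result hinges on how well the trajectory sampler covers the continuous space of plausible driving behaviors.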


