Context-Hierarchy Inverse Reinforcement Learning

02/25/2022
by Wei Gao, et al.

An inverse reinforcement learning (IRL) agent learns to act intelligently by observing expert demonstrations and inferring the expert's underlying reward function. Although learning reward functions from demonstrations has achieved great success in various tasks, several challenges remain largely ignored. First, existing IRL methods learn the reward function from scratch, without relying on any prior knowledge. Second, traditional IRL methods assume the reward function is homogeneous across all demonstrations. Some methods extend to heterogeneous demonstrations, but they still assume a single hidden variable affecting behavior and must learn that hidden variable together with the reward from the demonstrations. To address these issues, we present Context Hierarchy IRL (CHIRL), a new IRL algorithm that exploits context to scale up IRL and learn reward functions of complex behaviors. CHIRL models the context hierarchically as a directed acyclic graph (DAG); it represents the reward function as a corresponding modular deep neural network that associates each network module with a node of the context hierarchy. The context hierarchy and the modular reward representation enable data sharing across multiple contexts and state abstraction, significantly improving learning performance. CHIRL also has a natural connection to hierarchical task planning when the context hierarchy represents a subtask decomposition: it can incorporate prior knowledge of causal dependencies among subtasks and solve a large, complex task by decomposing it into subtasks and conquering each one in turn. Experiments on benchmark tasks, including a large-scale autonomous driving task in the CARLA simulator, show promising results in scaling up IRL for tasks with complex reward functions.
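
To make the modular reward representation concrete, the sketch below shows one plausible reading of the idea, assuming a PyTorch-style implementation. Everything here is an illustrative assumption rather than the paper's actual architecture: the hierarchy encoding, the module shapes, and the names (CHIRLReward, modules_by_ctx, the "drive"/"follow_lane"/"turn_left" contexts) are hypothetical. What it demonstrates is that modules attached to shared ancestor contexts are reused by every context beneath them, which is the data-sharing mechanism the abstract describes.

```python
# Minimal sketch (not the authors' code) of a context-hierarchy-conditioned
# modular reward network. Each node in the context hierarchy owns a small
# network module, and the reward for a state-action pair under a given
# context is computed by composing the modules on the path from the root
# to that context. All names below are hypothetical.

import torch
import torch.nn as nn

class CHIRLReward(nn.Module):
    def __init__(self, hierarchy, input_dim, hidden_dim=64):
        """hierarchy: dict mapping each context name to its list of parents
        (empty list for the root), e.g.
        {"drive": [], "follow_lane": ["drive"], "turn_left": ["drive"]}"""
        super().__init__()
        self.hierarchy = hierarchy
        # Shared encoder for raw state-action features.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One module per context node; a module on a shared ancestor is
        # trained by data from every context beneath it.
        self.modules_by_ctx = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
            for name in hierarchy
        })
        self.head = nn.Linear(hidden_dim, 1)  # scalar reward

    def _path_to_root(self, ctx):
        # Follow (the first) parent link up to the root; a true DAG with
        # multiple parents would need a merge rule instead of this chain.
        path = [ctx]
        while self.hierarchy[path[-1]]:
            path.append(self.hierarchy[path[-1]][0])
        return reversed(path)  # root first

    def forward(self, state_action, ctx):
        h = self.encoder(state_action)
        for name in self._path_to_root(ctx):
            h = self.modules_by_ctx[name](h)
        return self.head(h).squeeze(-1)

# Usage: rewards under two sibling contexts that share the "drive" module.
hierarchy = {"drive": [], "follow_lane": ["drive"], "turn_left": ["drive"]}
reward_net = CHIRLReward(hierarchy, input_dim=10)
x = torch.randn(4, 10)
r_lane = reward_net(x, "follow_lane")  # composes drive -> follow_lane
r_turn = reward_net(x, "turn_left")    # reuses the shared drive module
```

Because sibling contexts route through the same ancestor modules, demonstrations collected in one context update parameters that also serve the others; under this assumed design, that is what lets the hierarchy share data across contexts.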

