Autonomous Extraction of a Hierarchical Structure of Tasks in Reinforcement Learning, A Sequential Associate Rule Mining Approach

11/17/2018
by   Behzad Ghazanfari, et al.

Reinforcement learning (RL) techniques, while often powerful, can suffer from slow learning speeds, particularly in high-dimensional spaces. Decomposing a task into a hierarchical structure holds the potential to significantly speed up learning, generalization, and transfer learning. However, current task decomposition techniques often rely on high-level knowledge provided by an expert (e.g., dynamic Bayesian networks) to extract a hierarchical task structure, and such knowledge is not necessarily available in autonomous systems. In this paper, we propose a novel method based on Sequential Association Rule Mining that can extract a Hierarchical Structure of Tasks in Reinforcement Learning (SARM-HSTRL) in an autonomous manner for both Markov decision processes (MDPs) and factored MDPs. The proposed method leverages association rule mining to discover causal and temporal relationships among states across different trajectories, and extracts a task hierarchy that captures these relationships among sub-goals as termination conditions of the corresponding sub-tasks. We prove that the extracted hierarchical policy is hierarchically optimal in MDPs and factored MDPs. Notably, SARM-HSTRL extracts this hierarchically optimal policy without requiring dynamic Bayesian networks, in scenarios with trajectories from a single task as well as from multiple tasks. Furthermore, we show theoretically and empirically that the extracted hierarchical task structure is consistent with the trajectories and, under appropriate assumptions, is the most efficient, reliable, and compact structure. The numerical results compare the performance of the proposed SARM-HSTRL method with conventional HRL algorithms in terms of accuracy in detecting sub-goals, validity of the extracted hierarchies, and learning speed in several testbeds.
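The core idea of mining sub-goals from trajectories can be illustrated with a much simpler stand-in: states that recur in (nearly) every successful trajectory are candidate sub-goals, and their average position within the trajectories gives a crude temporal ordering. The sketch below is only an illustration of that intuition under those assumptions; it is not the SARM-HSTRL algorithm, and the function name, state labels, and `min_support` parameter are hypothetical.

```python
from collections import defaultdict

def candidate_subgoals(trajectories, min_support=1.0):
    """Illustrative sketch (not SARM-HSTRL): return states whose support
    across trajectories is at least min_support, ordered by their average
    normalized position, as a crude proxy for temporal precedence."""
    counts = defaultdict(int)      # in how many trajectories each state appears
    positions = defaultdict(list)  # normalized positions of each occurrence
    for traj in trajectories:
        seen = set()
        for i, state in enumerate(traj):
            if state not in seen:
                counts[state] += 1
                seen.add(state)
            positions[state].append(i / max(len(traj) - 1, 1))
    n = len(trajectories)
    frequent = [s for s, c in counts.items() if c / n >= min_support]
    # Earlier average position => earlier in the extracted sub-goal sequence.
    return sorted(frequent, key=lambda s: sum(positions[s]) / len(positions[s]))

# Hypothetical trajectories from a key-door gridworld task:
trajs = [
    ["start", "key", "door", "goal"],
    ["start", "detour", "key", "door", "goal"],
    ["start", "key", "wall", "door", "goal"],
]
print(candidate_subgoals(trajs))  # → ['start', 'key', 'door', 'goal']
```

States visited only occasionally ("detour", "wall") are filtered out, while "key" and "door" survive as sub-goals in temporal order; the paper's actual method mines sequential association rules to capture such relationships and proves optimality properties this toy filter does not have.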

Related research

- 09/14/2017: Autonomous Extracting a Hierarchical Structure of Tasks in Reinforcement Learning and Multi-task Reinforcement Learning
- 11/09/2019: Hierarchical Reinforcement Learning Method for Autonomous Vehicle Behavior Planning
- 12/05/2012: Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning
- 10/05/2018: Compositional planning in Markov decision processes: Temporal abstraction meets generalized logic composition
- 08/29/2022: Categorical semantics of compositional reinforcement learning
- 04/03/2023: Action Pick-up in Dynamic Action Space Reinforcement Learning
- 05/04/2019: Hierarchical Policy Learning is Sensitive to Goal Space Design
