
Interpretable Reinforcement Learning with Multilevel Subgoal Discovery

by Alexander Demin, et al.

We propose a novel Reinforcement Learning model for discrete environments that is inherently interpretable and supports the discovery of deep subgoal hierarchies. In the model, an agent learns information about the environment in the form of probabilistic rules, while policies for (sub)goals are learned as combinations of these rules. No reward function is required for learning; the agent only needs to be given a primary goal to achieve. Subgoals of a goal G in the hierarchy are computed as descriptions of states which, if achieved beforehand, increase the total efficiency of the available policies for G. These state descriptions are introduced as new sensor predicates into the agent's rule language, which allows the agent to sense important intermediate states and to update its environment rules and policies accordingly.
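The subgoal criterion described above, a state description that, if achieved first, raises the efficiency of the available policies for a goal, can be illustrated with a toy sketch. All names, data structures, and the single-step efficiency measure below are assumptions chosen for illustration, not the paper's actual formalism:

```python
from dataclasses import dataclass

# Hypothetical toy model: a "rule" says that taking `action` in a state
# satisfying `precond` reaches a state satisfying `effect` with
# probability `prob`. Predicates are plain strings.
@dataclass(frozen=True)
class Rule:
    precond: frozenset  # sensor predicates that must hold
    action: str
    effect: frozenset   # predicates that hold afterwards
    prob: float

def policy_efficiency(rules, state, goal):
    """Best single-rule success probability from `state` toward `goal`."""
    best = 0.0
    for r in rules:
        if r.precond <= state and goal <= r.effect:
            best = max(best, r.prob)
    return best

def discover_subgoal(rules, start, goal, candidates):
    """Pick the candidate state description whose prior achievement most
    increases efficiency for `goal` (two steps: start -> candidate -> goal)."""
    base = policy_efficiency(rules, start, goal)
    best_cand, best_gain = None, 0.0
    for c in candidates:
        two_step = (policy_efficiency(rules, start, c)
                    * policy_efficiency(rules, c, goal))
        if two_step - base > best_gain:
            best_cand, best_gain = c, two_step - base
    return best_cand, best_gain

# Toy door-and-key environment (invented for illustration): forcing the
# door rarely works, but fetching the key first makes opening it likely.
rules = [
    Rule(frozenset({"at_start"}), "pick_key",   frozenset({"has_key"}), 0.9),
    Rule(frozenset({"has_key"}),  "open_door",  frozenset({"at_goal"}), 0.8),
    Rule(frozenset({"at_start"}), "force_door", frozenset({"at_goal"}), 0.1),
]
subgoal, gain = discover_subgoal(
    rules,
    start=frozenset({"at_start"}),
    goal=frozenset({"at_goal"}),
    candidates=[frozenset({"has_key"})],
)
# `subgoal` comes out as {"has_key"}: 0.9 * 0.8 beats the direct 0.1.
```

In the paper's terms, the discovered description ({"has_key"} here) would then be added to the agent's rule language as a new sensor predicate, so later rules and policies can condition on it directly.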

