Symbolic Relational Deep Reinforcement Learning based on Graph Neural Networks

by Jaromír Janisch, et al.

We present a novel deep reinforcement learning framework for solving relational problems. The method operates on a symbolic representation of objects, their relations, and multi-parameter actions in which the objects serve as the parameters. Our framework, based on graph neural networks, is completely domain-independent and can be applied to any relational problem with an existing symbolic-relational representation. We show how to represent relational states with arbitrary goals, multi-parameter actions, and concurrent actions. We evaluate the method on three domains: BlockWorld, Sokoban, and SysAdmin. The method displays impressive generalization across problem sizes (e.g., in BlockWorld, the method trained exclusively with 5 blocks still solves 78



Related research:

- Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
- Modeling Content and Context with Deep Relational Learning
- Learning Relational Rules from Rewards
- Relational Reinforcement Learning in Infinite Mario
- Multitask Learning on Graph Neural Networks - Learning Multiple Graph Centrality Measures with a Unified Network
- Learning with Molecules beyond Graph Neural Networks
- Solving Relational MDPs with Exogenous Events and Additive Rewards