Symbolic Relational Deep Reinforcement Learning based on Graph Neural Networks

09/25/2020
by Jaromír Janisch, et al.

We present a novel deep reinforcement learning framework for solving relational problems. The method operates with a symbolic representation of objects, their relations, and multi-parameter actions in which the objects are the parameters. Our framework, based on graph neural networks, is completely domain-independent and can be applied to any relational problem with an existing symbolic-relational representation. We show how to represent relational states with arbitrary goals, multi-parameter actions, and concurrent actions. We evaluate the method on three domains: BlockWorld, Sokoban, and SysAdmin. The method displays impressive generalization over different problem sizes (e.g., in BlockWorld, the method trained exclusively with 5 blocks still solves 78% of larger instances).
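To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the encoding the abstract describes: objects become graph nodes, relations become directed edges, and a round of message passing produces per-object embeddings from which object-parameterized actions can be scored. The feature layout, relations, and the mean-aggregation step below are illustrative assumptions.

```python
# Hypothetical sketch: a symbolic BlockWorld state as a graph, with one
# round of message passing over relations and per-object action scores.
# All names, features, and the aggregation rule are illustrative.

def message_pass(features, edges):
    """One round of mean aggregation along directed relation edges."""
    dim = len(next(iter(features.values())))
    agg = {o: [0.0] * dim for o in features}
    count = {o: 0 for o in features}
    for src, dst in edges:              # each edge sends src's features to dst
        for i, v in enumerate(features[src]):
            agg[dst][i] += v
        count[dst] += 1
    out = {}
    for o, feat in features.items():    # residual update: own features + mean message
        n = max(count[o], 1)
        out[o] = [f + a / n for f, a in zip(feat, agg[o])]
    return out

def score_actions(embeddings):
    """Score each object as a candidate action parameter (toy linear readout)."""
    return {o: sum(e) for o, e in embeddings.items()}

# State: on(a, b), on(b, table); node feature = [is_clear, is_table]
state_edges = [("a", "b"), ("b", "table")]
node_features = {"a": [1.0, 0.0], "b": [0.0, 0.0], "table": [0.0, 1.0]}

h = message_pass(node_features, state_edges)
scores = score_actions(h)
```

Because the network operates on the graph rather than on a fixed-size state vector, the same weights apply unchanged to states with any number of blocks, which is what enables the size generalization reported in the abstract.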


