Related research:
- Transfer of Deep Reactive Policies for MDP Planning
  Domain-independent probabilistic planners input an MDP description in a ...
- Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes
  We study an approach to policy selection for large relational Markov Dec...
- Size Independent Neural Transfer for RDDL Planning
  Neural planners for RDDL MDPs produce deep reactive policies in an offli...
- Action Schema Networks: Generalised Policies with Deep Learning
  In this paper, we introduce the Action Schema Network (ASNet): a neural ...
- Features, Projections, and Representation Change for Generalized Planning
  Generalized planning is concerned with the characterization and computat...
- Inductive Policy Selection for First-Order MDPs
  We select policies for large Markov Decision Processes (MDPs) with compa...
- ASNets: Deep Learning for Generalised Planning
  In this paper, we discuss the learning of generalised policies for proba...
Generalized Neural Policies for Relational MDPs
A Relational Markov Decision Process (RMDP) is a first-order representation that expresses all instances of a single probabilistic planning domain, with a possibly unbounded number of objects. Early work on RMDPs output generalized (instance-independent) first-order policies or value functions as a means to solve all instances of a domain at once. Unfortunately, this line of work met with limited success due to inherent limitations of the representation space used in such policies or value functions. Can neural models provide the missing link by easily representing more complex generalized policies, thus making them effective on all instances of a given domain? We present the first neural approach for solving RMDPs expressed in the probabilistic planning language RDDL. Our solution first converts an RDDL instance into a ground dynamic Bayesian network (DBN) and then extracts a graph structure from the DBN. We train a relational neural model that computes an embedding for each node in the graph and scores each ground action as a function of the embeddings of the first-order action variable and of the objects to which the action is applied. In essence, this represents a neural generalized policy for the whole domain. Given a new test problem from the same domain, we compute all node embeddings using the trained parameters and score each ground action in a single forward pass, without any retraining, to choose the best action. Our experiments on nine RDDL domains from IPPC demonstrate that neural generalized policies are significantly better than random and sometimes even more effective than training a state-of-the-art deep reactive policy from scratch.
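The abstract only sketches the architecture, so the following is a minimal, hypothetical numpy illustration of the kind of forward pass it describes: message passing over a DBN-derived graph to get per-node embeddings, followed by scoring each ground action from an action-template embedding plus the embeddings of the objects it is applied to. All names (GeneralizedPolicy, EMB, ROUNDS), the mean-pooling choices, and the weight shapes are assumptions for illustration, not the paper's actual model.

import numpy as np

# Hypothetical sketch, not the authors' architecture.
# Nodes come from the ground DBN extracted from an RDDL instance;
# edges connect variables that interact in the DBN.

rng = np.random.default_rng(0)
EMB = 32      # embedding width (assumed)
ROUNDS = 3    # message-passing rounds (assumed)

def relu(x):
    return np.maximum(x, 0.0)

class GeneralizedPolicy:
    """Scores ground actions of any instance of one RDDL domain."""

    def __init__(self, n_node_types, n_action_templates):
        # Parameters are shared across instances, so the same weights
        # can be applied to larger test problems without retraining.
        self.type_emb = rng.normal(0, 0.1, (n_node_types, EMB))
        self.W_msg = rng.normal(0, 0.1, (EMB, EMB))
        self.W_upd = rng.normal(0, 0.1, (2 * EMB, EMB))
        self.action_emb = rng.normal(0, 0.1, (n_action_templates, EMB))
        self.w_score = rng.normal(0, 0.1, (2 * EMB,))

    def node_embeddings(self, node_types, adjacency):
        """adjacency: one list of neighbor indices per graph node."""
        h = self.type_emb[node_types]                  # initial embeddings
        for _ in range(ROUNDS):
            msgs = np.zeros_like(h)
            for i, nbrs in enumerate(adjacency):
                if nbrs:                               # mean-aggregate neighbors
                    msgs[i] = relu(h[nbrs] @ self.W_msg).mean(axis=0)
            h = relu(np.concatenate([h, msgs], axis=1) @ self.W_upd)
        return h

    def score_ground_action(self, h, template_id, object_nodes):
        """Combine the action-template embedding with the (mean-pooled)
        embeddings of the object nodes the action is applied to."""
        obj = h[object_nodes].mean(axis=0) if object_nodes else np.zeros(EMB)
        feats = np.concatenate([self.action_emb[template_id], obj])
        return float(feats @ self.w_score)

# Usage on a toy "test instance": same weights, new graph, one forward pass.
policy = GeneralizedPolicy(n_node_types=4, n_action_templates=2)
node_types = np.array([0, 1, 1, 2, 3])                 # per-node type ids
adjacency = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]   # DBN-derived edges
h = policy.node_embeddings(node_types, adjacency)

ground_actions = [(0, [1]), (0, [2]), (1, [3, 4])]      # (template, objects)
scores = [policy.score_ground_action(h, t, objs) for t, objs in ground_actions]
print("best ground action:", ground_actions[int(np.argmax(scores))])

In this sketch, as in the abstract, only the parameters are instance-independent: the graph, the node embeddings, and the set of ground actions are rebuilt per instance, which is what lets a single trained policy be reused on unseen problems of the same domain.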