Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes

09/09/2011
by A. Fern, et al.

We study an approach to policy selection for large relational Markov Decision Processes (MDPs). We consider a variant of approximate policy iteration (API) that replaces the usual value-function learning step with a learning step in policy space. This is advantageous in domains where good policies are easier to represent and learn than the corresponding value functions, which is often the case for the relational MDPs we are interested in. In order to apply API to such problems, we introduce a relational policy language and corresponding learner. In addition, we introduce a new bootstrapping routine for goal-based planning domains, based on random walks. Such bootstrapping is necessary for many large relational MDPs, where reward is extremely sparse, as API is ineffective in such domains when initialized with an uninformed policy. Our experiments show that the resulting system is able to find good policies for a number of classical planning domains and their stochastic variants by solving them as extremely large relational MDPs. The experiments also point to some limitations of our approach, suggesting future work.
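
What follows is a minimal, hypothetical Python sketch of the policy-space variant of approximate policy iteration described above: each iteration labels sampled states with actions preferred by rollouts of the current policy, then fits a new policy to those labels, in place of the usual value-function approximation step. The names simulate, sample_states, legal_actions, and learner are assumed stand-ins for a domain simulator and the relational policy learner; they are illustrative and do not come from the paper.

def rollout_value(state, action, policy, simulate,
                  horizon=50, trajectories=10, gamma=0.95):
    """Monte-Carlo estimate of Q(state, action) under the current policy."""
    total = 0.0
    for _ in range(trajectories):
        s, reward = simulate(state, action)
        value, discount = reward, gamma
        for _ in range(horizon):
            s, reward = simulate(s, policy(s))
            value += discount * reward
            discount *= gamma
        total += value
    return total / trajectories


def approximate_policy_iteration(initial_policy, sample_states, simulate,
                                 legal_actions, learner, iterations=10):
    """Each iteration labels sampled states with rollout-improved actions
    and fits a new policy to those labels (learning in policy space)."""
    policy = initial_policy
    for _ in range(iterations):
        examples = []
        for s in sample_states():
            # Approximate policy improvement: pick the action that looks
            # best under rollouts of the current policy.
            best = max(legal_actions(s),
                       key=lambda a: rollout_value(s, a, policy, simulate))
            examples.append((s, best))
        # Learning step in policy space, replacing the usual
        # value-function approximation step of API.
        policy = learner.fit(examples)
    return policy

One way to realize the random-walk bootstrapping mentioned above (an assumption for this sketch, not a prescription from the abstract) is to have sample_states draw goal-based problems whose goals are reached by short random walks from the initial state, lengthening the walks as the learned policy improves, so that early iterations see non-trivial reward despite the sparsity of the original goal.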
