Representation Policy Iteration

07/04/2012
by Sridhar Mahadevan, et al.

This paper addresses a fundamental issue central to approximation methods for solving large Markov decision processes (MDPs): how to automatically learn the underlying representation used for value function approximation. A novel, theoretically rigorous framework is proposed that automatically generates geometrically customized orthonormal sets of basis functions, which can be used with any approximate MDP solver, such as least-squares policy iteration (LSPI). The key innovation is a coordinate-free representation of value functions based on the theory of smooth functions on a Riemannian manifold. Hodge theory yields a constructive method for generating basis functions to approximate value functions, derived from the eigenfunctions of the self-adjoint (Laplace-Beltrami) operator on manifolds. In effect, this approach performs a global Fourier analysis on the state-space graph to approximate value functions, where the basis functions reflect the large-scale topology of the underlying state space. A new class of algorithms, called Representation Policy Iteration (RPI), is presented that automatically learns both basis functions and approximately optimal policies. Illustrative experiments compare the performance of RPI with that of LSPI using two hand-coded basis function families (radial basis function (RBF) and polynomial state encodings).
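The graph-based construction described in the abstract can be illustrated with a minimal sketch: build the adjacency matrix of a state-space graph, form the graph Laplacian, and take its smoothest eigenvectors as basis functions for value function approximation. This is a simplified illustration under assumed details (a 4-connected grid world, the combinatorial Laplacian L = D - A, and the helper names `grid_adjacency` and `laplacian_basis` are all choices made here for exposition), not the authors' implementation.

```python
import numpy as np

def grid_adjacency(rows, cols):
    """Adjacency matrix of a rows x cols grid-world state graph (4-connected)."""
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if r + 1 < rows:          # edge to the state below
                j = (r + 1) * cols + c
                A[i, j] = A[j, i] = 1
            if c + 1 < cols:          # edge to the state to the right
                j = r * cols + (c + 1)
                A[i, j] = A[j, i] = 1
    return A

def laplacian_basis(A, k):
    """Return the k smoothest eigenvectors of the combinatorial graph
    Laplacian L = D - A.  Each column is a basis function over states,
    ordered from globally smooth to increasingly oscillatory."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, :k]

# Example: 5x5 grid world, 10 basis functions for a linear value function
A = grid_adjacency(5, 5)
Phi = laplacian_basis(A, 10)
print(Phi.shape)  # (25, 10)
```

A solver such as LSPI would then represent the value function as a linear combination of the columns of `Phi`, fitting only the 10 weights rather than 25 independent state values; the smoothest eigenvector (constant over a connected graph) plays the role of a bias term.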

Related research

08/28/2015: Learning Efficient Representations for Reinforcement Learning
Markov decision processes (MDPs) are a well studied framework for solvin...

09/09/2011: Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes
We study an approach to policy selection for large relational Markov Dec...

06/09/2011: Efficient Solution Algorithms for Factored MDPs
This paper addresses the problem of planning under uncertainty in large ...

04/22/2022: Adaptive Online Value Function Approximation with Wavelets
Using function approximation to represent a value function is necessary ...

01/31/2012: Learning RoboCup-Keepaway with Kernels
We apply kernel-based methods to solve the difficult reinforcement learn...

01/09/2020: Self-guided Approximate Linear Programs
Approximate linear programs (ALPs) are well-known models based on value ...

07/22/2018: Optimal Continuous State POMDP Planning with Semantic Observations: A Variational Approach
This work develops novel strategies for optimal planning with semantic o...
