PEGASUS: A Policy Search Method for Large MDPs and POMDPs

01/16/2013
by Andrew Y. Ng, et al.

We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a model. Our approach is based on the following observation: Any (PO)MDP can be transformed into an "equivalent" POMDP in which all state transitions (given the current state and action) are deterministic. This reduces the general problem of policy search to one in which we need only consider POMDPs with deterministic transitions. We give a natural way of estimating the value of all policies in these transformed POMDPs. Policy search is then simply performed by searching for a policy with high estimated value. We also establish conditions under which our value estimates will be good, recovering theoretical results similar to those of Kearns, Mansour and Ng (1999), but with "sample complexity" bounds that have only a polynomial rather than exponential dependence on the horizon time. Our method applies to arbitrary POMDPs, including ones with infinite state and action spaces. We also present empirical results for our approach on a small discrete problem, and on a complex continuous state/continuous action problem involving learning to ride a bicycle.
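The core trick described above — moving all randomness out of the transition function and into a fixed set of pre-drawn random numbers, so that every policy is evaluated on the same "scenarios" — can be sketched in a few lines. The following is a minimal illustration on a hypothetical 1-D continuous MDP, not the paper's code: `step`, `reward`, and the proportional-control policy are invented for the example, but the estimator structure (fixed scenarios, deterministic rollouts, averaged discounted return) follows the PEGASUS idea.

```python
import random

# PEGASUS-style value estimation (illustrative sketch).
# Randomness is drawn once, up front; the simulator is then a
# deterministic function of (state, action, random number), so the
# value estimate is a deterministic function of the policy and can
# be handed to an ordinary optimizer for policy search.

GAMMA = 0.95          # discount factor
HORIZON = 50          # truncated horizon
NUM_SCENARIOS = 100   # number of Monte Carlo scenarios

def step(s, a, u):
    """Deterministic simulator: next state from state s, action a, and a
    pre-drawn uniform random number u (replacing the stochastic transition)."""
    noise = 0.2 * (u - 0.5)   # transition noise derived from the fixed u
    return s + a + noise

def reward(s):
    return -abs(s)            # hypothetical reward: stay near the origin

rng = random.Random(0)
# Each scenario: an initial state plus one random number per time step.
scenarios = [
    (rng.uniform(-1.0, 1.0), [rng.random() for _ in range(HORIZON)])
    for _ in range(NUM_SCENARIOS)
]

def estimated_value(policy):
    """Average discounted return of `policy` over the fixed scenarios."""
    total = 0.0
    for s0, randoms in scenarios:
        s, ret = s0, 0.0
        for t, u in enumerate(randoms):
            ret += (GAMMA ** t) * reward(s)
            s = step(s, policy(s), u)
        total += ret
    return total / NUM_SCENARIOS

# Simple parameterized policy class: proportional control toward 0.
def make_policy(k):
    return lambda s: -k * s

# Because the scenarios are fixed, comparisons between policies are
# deterministic, so a plain search over parameters is well defined.
best_k = max([0.0, 0.25, 0.5, 0.75, 1.0],
             key=lambda k: estimated_value(make_policy(k)))
```

Note that re-evaluating the same policy always returns the identical estimate — the variance that would normally make naive policy comparison unreliable has been frozen into the scenarios themselves.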


