ChronosPerseus: Randomized Point-based Value Iteration with Importance Sampling for POSMDPs

07/16/2022
by   Richard Kohar, et al.
In reinforcement learning, agents have successfully used environments modeled with Markov decision processes (MDPs). However, in many problem domains, an agent may suffer from noisy observations or random times until its subsequent decision. While partially observable Markov decision processes (POMDPs) handle noisy observations, they have yet to deal with the unknown-time aspect. Of course, one could discretize time, but this leads to Bellman's curse of dimensionality. To incorporate continuous sojourn-time distributions into the agent's decision making, we propose that partially observable semi-Markov decision processes (POSMDPs) can be helpful in this regard. We extend the randomized point-based value iteration (PBVI) algorithm Perseus <cit.>, used for POMDPs, to POSMDPs by incorporating continuous sojourn-time distributions and using importance sampling to reduce the solver's complexity. We call this new PBVI algorithm with importance sampling for POSMDPs ChronosPerseus. This further allows complex POMDPs that require temporal state information to be compressed by moving this information into the state sojourn times of a POSMDP. The second insight is that a single set of sampled times, each weighted by its likelihood, can be used in every backup; this helps further reduce the algorithm's complexity. The solver works on both episodic and non-episodic problems. We conclude our paper with two examples: an episodic bus problem and a non-episodic maintenance problem.
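To illustrate the importance-sampling idea from the abstract, the sketch below estimates the expected discount factor over a continuous sojourn-time distribution from a single reusable set of sampled times. This is a minimal illustration, not the paper's algorithm: the gamma sojourn density, the exponential proposal, and all parameters are assumptions chosen for the example.

```python
import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(0)

# Hypothetical true sojourn-time density p(t): a gamma distribution.
def p_pdf(t, shape=2.0, scale=1.5):
    return t ** (shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale ** shape)

# Proposal density q(t) that the times are actually drawn from: an exponential.
def q_pdf(t, rate=0.5):
    return rate * np.exp(-rate * t)

discount = 0.95  # per-unit-time discount base

# Draw sojourn times once from the proposal; these samples can be
# reused across backups instead of re-sampling each time.
times = rng.exponential(scale=1 / 0.5, size=10_000)

# Weight each sampled time by its likelihood ratio p(t)/q(t),
# then self-normalize the weights.
weights = p_pdf(times) / q_pdf(times)
weights /= weights.sum()

# Monte Carlo estimate of E_p[discount**t] without discretizing time.
expected_discount = np.sum(weights * discount ** times)
```

The weighted average approximates the continuous-time expectation directly, which is what lets a POSMDP solver avoid discretizing the sojourn time into extra state dimensions.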
