Fast Online Exact Solutions for Deterministic MDPs with Sparse Rewards

05/08/2018
by Joshua R. Bertram et al.

Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision making under uncertainty. The classical approaches for solving MDPs are well known and widely studied, and some rely on approximation techniques to handle MDPs with large state and/or action spaces. However, most of these classical solution approaches and their approximations still require substantial computation time to converge, and they usually must be re-computed whenever the reward function changes. This paper introduces a novel alternative approach for exactly and efficiently solving deterministic, continuous MDPs with sparse reward sources. When the environment allows the "distance" between states to be determined in constant time, e.g., a grid world, our algorithm runs in O(|R|^2 × |A|^2 × |S|) time, where |R| is the number of reward sources, |A| is the number of actions, and |S| is the number of states. The memory complexity of the algorithm is O(|S| + |R| × |A|). This new approach opens avenues for boosting computational performance in certain classes of MDPs and is particularly valuable for applications such as robotics and unmanned systems, where rewards may change at runtime. This paper describes the algorithm, presents numerical experiments demonstrating its computational performance, and provides a rigorous mathematical description of the approach.
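To make the constant-time-distance intuition concrete, below is a minimal Python sketch. It is not the authors' algorithm, and every name in it (GAMMA, GRID, REWARDS, value_estimate) is an illustrative assumption: it scores each grid state by the best geometrically discounted return obtainable from a single reward source, using Manhattan distance as the constant-time "distance" the abstract refers to. The published algorithm is more involved (its stated complexity grows with |R|^2 and |A|^2); this sketch conveys only the core shortcut of pricing states by their distance to sparse reward sources instead of iterating Bellman backups to convergence.

# Illustrative sketch only, not the paper's algorithm. It assumes a grid
# world where the "distance" between states is the Manhattan distance,
# computable in constant time, as the abstract requires. GAMMA, GRID, and
# REWARDS are hypothetical example values.
from itertools import product

GAMMA = 0.9                    # discount factor (assumed)
GRID = (10, 10)                # grid dimensions (assumed)
REWARDS = {(2, 3): 5.0,        # sparse reward sources: state -> reward
           (7, 8): 10.0}

def manhattan(a, b):
    # Constant-time "distance" between two grid states.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def value_estimate(state):
    # Best discounted return from heading straight to one reward source
    # and collecting its reward every step thereafter:
    #   gamma^d * r + gamma^(d+1) * r + ... = gamma^d * r / (1 - gamma).
    # Cost is O(|R|) per state, so O(|R| * |S|) overall for this
    # single-source simplification; no iteration to convergence is needed.
    return max(((GAMMA ** manhattan(state, src)) * r / (1.0 - GAMMA)
                for src, r in REWARDS.items()), default=0.0)

# Price every state once against the sparse reward sources.
V = {s: value_estimate(s)
     for s in product(range(GRID[0]), range(GRID[1]))}
print(V[(0, 0)], V[(7, 8)])    # value at a far state vs. at a source

Because the computation depends on the reward function only through REWARDS, a change to the rewards simply re-prices states with no value-iteration-style convergence loop, which matches the "online" motivation in the title.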


