An Adaptive State Aggregation Algorithm for Markov Decision Processes

07/23/2021
by   Guanting Chen, et al.

Value iteration is a well-known method for solving Markov Decision Processes (MDPs) that is simple to implement and enjoys strong theoretical convergence guarantees. However, its computational cost quickly becomes infeasible as the size of the state space grows. Various methods have been proposed to make value iteration tractable for MDPs with large state and action spaces, often, however, at the price of generalizability and algorithmic simplicity. In this paper, we propose an intuitive algorithm for solving MDPs that reduces the cost of value iteration updates by dynamically grouping together states with similar cost-to-go values. We also prove that our algorithm converges almost surely to within 2ε / (1 - γ) of the true optimal value in the ℓ^∞ norm, where γ is the discount factor and aggregated states differ by at most ε. Numerical experiments on a variety of simulated environments confirm the robustness of our algorithm and its ability to solve MDPs with much cheaper updates, especially as the scale of the problem increases.
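The idea described in the abstract can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the authors' implementation: it alternates occasional full Bellman backups (which refresh per-state values and re-form the groups) with cheap aggregated backups that perform one update per group of states whose current values fall in the same ε-wide bucket. The `refresh` schedule, the ε-bucket grouping rule, and the choice of a group representative are all assumptions made for illustration.

```python
import numpy as np

def adaptive_aggregation_vi(P, R, gamma, eps, n_iters=200, refresh=10):
    """Value iteration with adaptive state aggregation (illustrative sketch).

    P: transition tensor of shape (A, S, S); R: rewards of shape (A, S);
    gamma: discount factor; eps: aggregation tolerance.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    groups = [np.arange(S)]
    for t in range(n_iters):
        if t % refresh == 0:
            # Full Bellman backup: refresh every state's value, then regroup
            # states whose values land in the same eps-wide bucket.
            V = (R + gamma * np.einsum('asr,r->as', P, V)).max(axis=0)
            labels = np.floor(V / eps).astype(np.int64)
            groups = [np.flatnonzero(labels == g) for g in np.unique(labels)]
        else:
            # Cheap aggregated backup: one Bellman update per group, computed
            # at a representative state and shared by all group members.
            V_new = np.empty(S)
            for members in groups:
                rep = members[0]
                q = R[:, rep] + gamma * P[:, rep, :] @ V
                V_new[members] = q.max()
            V = V_new
    return V
```

When ε is small every state ends up in its own group and the sketch reduces to standard value iteration; larger ε trades accuracy (up to the 2ε / (1 - γ) bound) for fewer distinct updates per sweep.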


Related research

06/12/2022  Geometric Policy Iteration for Markov Decision Processes
  Recently discovered polyhedral structures of the value function for fini...

06/20/2019  Max-Plus Matching Pursuit for Deterministic Markov Decision Processes
  We consider deterministic Markov decision processes (MDPs) and apply max...

01/16/2014  Topological Value Iteration Algorithms
  Value iteration is a powerful yet inefficient algorithm for Markov decis...

01/16/2015  Value Iteration with Options and State Aggregation
  This paper presents a way of solving Markov Decision Processes that comb...

12/12/2012  Polynomial Value Iteration Algorithms for Deterministic MDPs
  Value iteration is a commonly used and empirically competitive method in...

09/14/2020  First-Order Methods for Wasserstein Distributionally Robust MDP
  Markov Decision Processes (MDPs) are known to be sensitive to parameter ...

01/03/2023  Faster Approximate Dynamic Programming by Freezing Slow States
  We consider infinite horizon Markov decision processes (MDPs) with fast-...
