
Topological Value Iteration Algorithms

by Xin Yu, et al.

Value iteration is a powerful yet inefficient algorithm for Markov decision processes (MDPs) because it spends most of its effort backing up the entire state space, which is unnecessary in many cases. Many approaches have been proposed to overcome this problem; among them, ILAO* and variants of RTDP represent the state of the art. These methods use reachability analysis and heuristic search to avoid some unnecessary backups. However, none of them builds the graphical structure of the state transitions in a pre-processing step, or uses that structural information to systematically decompose the problem and thereby generate an intelligent backup sequence over the state space. In this paper, we present two optimal MDP algorithms. The first, topological value iteration (TVI), detects the structure of an MDP and backs up states in topological order: it (1) divides the MDP into strongly connected components (SCCs), and (2) solves these components sequentially. TVI vastly outperforms VI and other state-of-the-art algorithms when an MDP has multiple, close-to-equal-sized SCCs. The second algorithm, focused topological value iteration (FTVI), extends TVI by restricting its attention to connected components that are relevant for solving the MDP. Specifically, it uses a small amount of heuristic search to eliminate provably sub-optimal actions; this pruning lets FTVI find smaller connected components and thus run faster. We demonstrate that FTVI outperforms TVI by an order of magnitude, averaged across several domains. Surprisingly, FTVI also significantly outperforms popular heuristically-informed MDP algorithms such as ILAO*, LRTDP, BRTDP and Bayesian-RTDP in many domains, sometimes by as much as two orders of magnitude. Finally, we characterize the type of domains in which FTVI excels, suggesting a way to make an informed choice of solver.
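The core TVI idea described above — find the strongly connected components of the transition graph, then run value iteration on one component at a time in topological order, so each component is solved exactly once — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the MDP encoding (per-outcome costs, a goal state with no actions) and the example MDP are our own assumptions.

```python
import sys

def tarjan_sccs(states, successors):
    """Tarjan's algorithm; emits SCCs in reverse topological order (sinks first)."""
    index, low, on_stack, stack, sccs, counter = {}, {}, set(), [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in successors(v):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for s in states:
        if s not in index:
            strongconnect(s)
    return sccs

def topological_value_iteration(transitions, eps=1e-8):
    """Cost-minimizing TVI sketch. transitions[s][a] is a list of
    (probability, next_state, cost) triples; actionless states are absorbing."""
    states = list(transitions)
    succ = lambda s: {sp for outs in transitions[s].values() for _, sp, _ in outs}
    V = {s: 0.0 for s in states}
    # Sinks first: every component's successors are already solved when we reach it.
    for scc in tarjan_sccs(states, succ):
        while True:
            residual = 0.0
            for s in scc:
                if not transitions[s]:
                    continue                      # absorbing state, value stays 0
                best = min(sum(p * (c + V[sp]) for p, sp, c in outs)
                           for outs in transitions[s].values())
                residual = max(residual, abs(best - V[s]))
                V[s] = best
            if residual < eps:
                break
    return V

# A tiny illustrative MDP (not from the paper): states 0 and 1 form one SCC,
# states 2 and 3 are singleton SCCs; state 3 is the goal.
example = {
    0: {"a": [(0.5, 1, 1.0), (0.5, 2, 1.0)]},
    1: {"a": [(1.0, 0, 1.0)], "b": [(1.0, 3, 2.0)]},
    2: {"a": [(1.0, 3, 1.0)]},
    3: {},
}
```

The payoff over plain VI is that backups never revisit a solved component: once the {3} and {2} components are converged, the sweep over the {0, 1} component treats their values as constants.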


