On the Complexity of Iterative Tropical Computation with Applications to Markov Decision Processes

07/13/2018
by Nikhil Balaji, et al.

We study the complexity of evaluating powered functions implemented by straight-line programs (SLPs) over the tropical semiring (i.e., with max and + operations). In this problem, a given (max,+)-SLP with the same number of input and output wires is composed with H copies of itself, where H is given in binary. The problem of evaluating powered SLPs is intimately connected with iterative arithmetic computations that arise in algorithmic decision making and operations research. Specifically, it is essentially equivalent to finding optimal strategies in finite-horizon Markov Decision Processes (MDPs). We show that evaluating powered SLPs and finding optimal strategies in finite-horizon MDPs are both EXPTIME-complete problems. This resolves an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis.
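The core object in the abstract, a powered (max,+) computation with the exponent H given in binary, can be illustrated with tropical matrix powering by repeated squaring: entries of A hold one-step rewards (with -inf for missing transitions), and the (i,j) entry of the tropical power A^H is the best total reward over H-step paths from i to j, exactly the finite-horizon value-iteration quantity for a deterministic MDP. The following is a minimal sketch, not the paper's construction; the matrix `A` and function names are illustrative.

```python
import math

NEG_INF = -math.inf  # tropical "zero": absorbing for +, neutral for max

def trop_matmul(A, B):
    """(max,+) matrix product: C[i][j] = max_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m))
             for j in range(p)] for i in range(n)]

def trop_matpow(A, H):
    """Tropical H-th power by repeated squaring.

    Since H is given in binary, only O(log H) tropical products
    are performed, mirroring the H-fold composition of an SLP
    with itself without unrolling all H copies.
    """
    n = len(A)
    # Tropical identity: 0 on the diagonal, -inf elsewhere.
    result = [[0.0 if i == j else NEG_INF for j in range(n)]
              for i in range(n)]
    while H > 0:
        if H & 1:
            result = trop_matmul(result, A)
        A = trop_matmul(A, A)
        H >>= 1
    return result

# Toy deterministic MDP with two states:
#   from state 0: stay (reward 1) or move to state 1 (reward 5)
#   from state 1: move to state 0 (reward 2)
A = [[1.0, 5.0],
     [2.0, NEG_INF]]
P2 = trop_matpow(A, 2)
# P2[0][0] is the best 2-step reward from state 0 back to state 0:
# max(1+1, 5+2) = 7
```

Note that while repeated squaring keeps the number of products logarithmic in H, the intermediate values themselves stay small here; the paper's EXPTIME-hardness shows that for general (max,+)-SLPs no such shortcut evaluation is possible unless EXPTIME collapses.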

