Linear programming-based solution methods for constrained POMDPs
Constrained partially observable Markov decision processes (CPOMDPs) have been used to model various real-world phenomena. However, they are notoriously difficult to solve to optimality, and only a few approximation methods exist for obtaining high-quality solutions. In this study, we use grid-based approximations in combination with linear programming (LP) models to generate approximate policies for CPOMDPs. We consider five CPOMDP problem instances and conduct a detailed numerical study of both their finite and infinite horizon formulations. We first establish the quality of the approximate unconstrained POMDP policies through a comparative analysis with exact solution methods. We then evaluate the performance of the LP-based CPOMDP solution approaches for varying budget levels (i.e., cost limits) on different problem instances. Finally, we demonstrate the flexibility of LP-based approaches by imposing deterministic policy constraints, and investigate the impact that these constraints have on collected rewards and CPU run time. Our analysis demonstrates that LP models can effectively generate approximate policies for both finite and infinite horizon problems, while providing the flexibility to incorporate various additional constraints into the underlying model.
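To illustrate the core idea, the sketch below solves the occupancy-measure LP for a small constrained MDP of the kind that a belief-grid discretization of a CPOMDP produces (each grid point plays the role of a state). All instance data here are random placeholders rather than the paper's benchmark problems, and SciPy's `linprog` is assumed as the LP solver; this is a minimal illustration of the LP formulation, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy constrained MDP standing in for a grid-based CPOMDP approximation:
# after discretizing the belief simplex, each grid point acts as a state.
# All numbers below are illustrative assumptions, not the paper's instances.
rng = np.random.default_rng(0)
S, A, gamma, budget = 4, 2, 0.9, 2.0

P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']: transition probs
r = rng.uniform(0.0, 1.0, size=(S, A))       # per-step rewards
c = rng.uniform(0.0, 1.0, size=(S, A))       # per-step costs
c[:, 0] = 0.0                                # action 0 is cost-free, so the
                                             # budget constraint is feasible
mu0 = np.full(S, 1.0 / S)                    # initial distribution over grid points

# Decision variable: discounted occupancy measure x[s, a], flattened row-wise.
# Flow constraints: sum_a x[s',a] - gamma * sum_{s,a} P[s,a,s'] x[s,a] = mu0[s'].
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]

# Budget constraint: expected discounted cost must not exceed the cost limit.
A_ub = c.reshape(1, -1)
res = linprog(-r.reshape(-1), A_ub=A_ub, b_ub=[budget],
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))

x = res.x.reshape(S, A)
policy = x / x.sum(axis=1, keepdims=True)    # generally a randomized policy
print("expected discounted reward:", -res.fun)
print("expected discounted cost:", float(c.reshape(-1) @ res.x))
```

The occupancy-measure view makes the constrained extension trivial (one extra inequality row per cost limit), which is also what makes it easy to bolt on further constraints such as determinism restrictions, at the price of the optimal constrained policy generally being randomized.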