Safe Posterior Sampling for Constrained MDPs with Bounded Constraint Violation
Constrained Markov decision processes (CMDPs) model sequential decision-making with multiple objectives and are increasingly important in many applications. However, the model is often unknown and must be learned online while still ensuring the constraint is met, or at least that the violation remains bounded over time. Some recent papers have made progress on this very challenging problem but either require unsatisfactory assumptions, such as knowledge of a safe policy, or incur high cumulative regret. We propose the Safe PSRL (posterior sampling-based RL) algorithm, which does not need such assumptions and yet performs very well, both in terms of theoretical regret bounds and empirically. The algorithm achieves an efficient tradeoff between exploration and exploitation through the posterior sampling principle, and provably suffers only bounded constraint violation by leveraging the idea of pessimism. Our approach is based on a primal-dual formulation. We establish a sub-linear $\tilde{\mathcal{O}}(H^{2.5}\sqrt{|\mathcal{S}|^2 |\mathcal{A}| K})$ upper bound on the Bayesian reward objective regret along with a bounded, i.e., $\tilde{\mathcal{O}}(1)$, constraint violation regret over $K$ episodes for an $|\mathcal{S}|$-state, $|\mathcal{A}|$-action, horizon-$H$ CMDP.
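To illustrate the ingredients named in the abstract (posterior sampling over the unknown model, a pessimistic cost estimate, and a primal-dual Lagrangian update), here is a minimal Python sketch on a small synthetic tabular CMDP. The environment, the Dirichlet posterior, the particular pessimism bonus, the dual step size, and the `plan` helper are all illustrative assumptions, not the paper's exact Safe PSRL construction.

```python
"""Sketch of a posterior-sampling primal-dual loop for a tabular CMDP (illustrative only)."""
import numpy as np

rng = np.random.default_rng(0)

# --- A tiny synthetic CMDP (assumed for illustration) -----------------------
S, A, H, K = 4, 2, 5, 200                          # states, actions, horizon, episodes
true_P = rng.dirichlet(np.ones(S), size=(S, A))    # true transition kernel
true_r = rng.uniform(size=(S, A))                  # true rewards in [0, 1]
true_c = rng.uniform(size=(S, A))                  # true constraint costs in [0, 1]
budget = 0.5 * H                                   # per-episode cost budget

def plan(P, r_lagr, c, H):
    """Backward induction on the Lagrangian reward; returns a greedy
    (H, S)-indexed policy and its planned cumulative cost from state 0."""
    V, C = np.zeros(S), np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r_lagr + P @ V                         # (S, A) Lagrangian action values
        policy[h] = Q.argmax(axis=1)
        V = Q[np.arange(S), policy[h]]
        Qc = c + P @ C                             # cost-to-go under the same policy
        C = Qc[np.arange(S), policy[h]]
    return policy, C[0]

# --- Posterior statistics and dual variable ---------------------------------
counts = np.ones((S, A, S))                        # Dirichlet prior on transitions
r_sum, c_sum, n = np.zeros((S, A)), np.zeros((S, A)), np.ones((S, A))
lam, eta, pess = 0.0, 0.05, 0.1                    # dual variable, step size, pessimism scale

for k in range(K):
    # Posterior sampling: draw a transition model from the Dirichlet posterior.
    P_k = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                    for s in range(S)])
    r_hat = r_sum / n
    # Pessimism: inflate the estimated cost so the plan over-satisfies the budget.
    c_pess = np.clip(c_sum / n + pess / np.sqrt(n), 0.0, 1.0)
    # Primal step: plan against the Lagrangian reward r - lambda * c.
    policy, planned_cost = plan(P_k, r_hat - lam * c_pess, c_pess, H)
    # Dual step: raise lambda when the planned cost exceeds the budget.
    lam = max(0.0, lam + eta * (planned_cost - budget))
    # Roll out the policy in the (here simulated) true CMDP and update posteriors.
    s = 0
    for h in range(H):
        a = policy[h, s]
        s_next = rng.choice(S, p=true_P[s, a])
        r_sum[s, a] += true_r[s, a]; c_sum[s, a] += true_c[s, a]; n[s, a] += 1
        counts[s, a, s_next] += 1
        s = s_next

print("final dual variable:", lam)
```

In this sketch the pessimistic cost bonus shrinks as visitation counts grow, so the planned policies become less conservative over time, which is the intuition behind the bounded constraint-violation claim; the precise bonus and dual update used in the paper may differ.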