Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization

03/12/2023
by Esther Derman, et al.

Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics. Solving them typically requires robust optimization methods, which significantly increase computational complexity and limit scalability in both learning and planning. Regularized MDPs, on the other hand, offer more stable policy learning without worsening time complexity, but they generally do not account for uncertainty in the model dynamics. In this work, we learn robust MDPs through regularization. We first show that regularized MDPs are a particular instance of robust MDPs with uncertain reward, and establish that policy iteration on reward-robust MDPs can have the same time complexity as on regularized MDPs. We then extend this relationship to MDPs with uncertain transitions, which yields a regularization term with an additional dependence on the value function. This leads us to generalize regularized MDPs to twice regularized MDPs (R^2 MDPs), i.e., MDPs with both value and policy regularization. The corresponding Bellman operators enable us to derive planning and learning schemes with convergence and generalization guarantees, thus reducing robustness to regularization. We numerically demonstrate this two-fold advantage on tabular and physical domains, showing that R^2 retains its efficacy in continuous environments.
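As a hedged illustration of the regularization side of this equivalence, the sketch below runs entropy-regularized (soft) value iteration on a toy two-state MDP. The dynamics, rewards, and temperature `tau` are invented for illustration and are not taken from the paper; the paper's reward-robust view interprets the regularizer as the support function of a reward-uncertainty set.

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative values, not from the paper).
nS, nA = 2, 2
gamma = 0.9      # discount factor
tau = 0.1        # regularization temperature (assumed for this sketch)

# P[s, a, s'] : transition probabilities; R[s, a] : rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])

def soft_bellman(v):
    """Entropy-regularized Bellman optimality operator.

    With negative-entropy policy regularization, the max over actions
    becomes a log-sum-exp (softmax): tau * log sum_a exp(Q(s,a)/tau).
    """
    q = R + gamma * P @ v          # Q[s, a]
    return tau * np.log(np.exp(q / tau).sum(axis=1))

# The operator is a gamma-contraction, so plain fixed-point iteration
# converges to the regularized optimal value -- the same cheap scheme
# the robustness/regularization equivalence makes available for
# reward-robust planning.
v = np.zeros(nS)
for _ in range(500):
    v = soft_bellman(v)
```

The fixed point dominates the unregularized optimal value by at most `tau * log(nA) / (1 - gamma)`, the standard gap for entropy regularization.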

