First-Order Methods for Wasserstein Distributionally Robust MDP

09/14/2020
by Julien Grand-Clément, et al.

Markov Decision Processes (MDPs) are known to be sensitive to the specification of their parameters. Distributionally robust MDPs alleviate this issue through ambiguity sets, which specify a set of possible distributions over the uncertain parameters; the goal is to find a policy that is optimal with respect to the worst-case parameter distribution. We propose a first-order methods framework for solving distributionally robust MDPs and instantiate it for several types of Wasserstein ambiguity sets. By developing efficient proximal updates, our algorithms achieve a convergence rate of O(N A^2.5 S^3.5 log(S) log(ϵ^-1) ϵ^-1.5), where N is the number of kernels in the support of the nominal distribution, S the number of states, and A the number of actions (the exact rate varies slightly with the Wasserstein setup). Our dependence on N, A, and S is significantly better than that of existing methods; compared to Value Iteration, it improves by a factor of O(N^2.5 A S). Numerical experiments on random instances and on instances inspired by a machine replacement problem show that our algorithm is significantly more scalable than state-of-the-art approaches.
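To make the setting concrete, the sketch below illustrates the Value Iteration baseline mentioned in the abstract: a distributionally robust Bellman update in which, at each state-action pair, an adversary picks the worst-case distribution over the N nominal kernels within a Wasserstein ball, solved here as a small linear program. This is a minimal illustrative simplification, not the paper's first-order/proximal algorithm; the instance sizes, the L1 ground cost between kernels, and the radius theta are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's method): distributionally robust
# value iteration where the inner worst-case mixture over N nominal kernels, within
# a Wasserstein ball around the nominal weights, is solved as an LP with scipy.
import numpy as np
from scipy.optimize import linprog

def worst_case_value(q, nu_hat, D, theta):
    """Adversary chooses a distribution mu over the N kernels, within Wasserstein
    radius theta of the nominal nu_hat (ground cost D), to minimize mu @ q.
    Decision variables are the transport plan pi[i, j] >= 0, with mu_i = sum_j pi[i, j]."""
    N = len(nu_hat)
    c = np.repeat(q, N)                       # objective: sum_ij pi[i, j] * q[i]
    A_eq = np.zeros((N, N * N))               # column j of pi must sum to nu_hat[j]
    for j in range(N):
        A_eq[j, j::N] = 1.0                   # picks pi[0, j], pi[1, j], ..., pi[N-1, j]
    A_ub = D.reshape(1, -1)                   # transport cost sum_ij D[i, j] * pi[i, j] <= theta
    res = linprog(c, A_ub=A_ub, b_ub=[theta], A_eq=A_eq, b_eq=nu_hat, method="highs")
    return res.fun

def dr_value_iteration(P, r, nu_hat, theta, gamma=0.9, tol=1e-6):
    """P: (N, S, A, S) transition kernels, r: (S, A) rewards, nu_hat: nominal weights."""
    N, S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        V_new = np.empty(S)
        for s in range(S):
            best = -np.inf
            for a in range(A):
                q = r[s, a] + gamma * P[:, s, a, :] @ V        # value under each kernel i
                # ground cost between kernels: L1 distance of their rows at (s, a)
                D = np.abs(P[:, s, a, None, :] - P[None, :, s, a, :]).sum(-1)
                best = max(best, worst_case_value(q, nu_hat, D, theta))
            V_new[s] = best
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Illustrative random instance (all sizes are assumptions).
rng = np.random.default_rng(0)
N, S, A = 3, 5, 2
P = rng.dirichlet(np.ones(S), size=(N, S, A))   # N kernels, each a (S, A, S) transition tensor
r = rng.random((S, A))
nu_hat = np.ones(N) / N                         # uniform nominal distribution over kernels
print(np.round(dr_value_iteration(P, r, nu_hat, theta=0.1), 3))
```

Each such update solves one LP per state-action pair per iteration, which is what makes this baseline scale poorly in N, S, and A; the proximal updates proposed in the paper are aimed at avoiding exactly this per-update cost.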

Related research

05/11/2020 · Scalable First-Order Methods for Robust MDPs
Markov Decision Processes (MDP) are a widely used model for dynamic deci...

01/03/2023 · Risk-Averse MDPs under Reward Ambiguity
We propose a distributionally robust return-risk model for Markov decisi...

07/23/2021 · An Adaptive State Aggregation Algorithm for Markov Decision Processes
Value iteration is a well-known method of solving Markov Decision Proces...

06/16/2020 · Partial Policy Iteration for L1-Robust Markov Decision Processes
Robust Markov decision processes (MDPs) allow to compute reliable soluti...

05/27/2022 · Robust Phi-Divergence MDPs
In recent years, robust Markov decision processes (MDPs) have emerged as...

12/04/2019 · Optimizing Norm-Bounded Weighted Ambiguity Sets for Robust MDPs
Optimal policies in Markov decision processes (MDPs) are very sensitive ...

10/23/2019 · High-Confidence Policy Optimization: Reshaping Ambiguity Sets in Robust MDPs
Robust MDPs are a promising framework for computing robust policies in r...