
Scalable First-Order Methods for Robust MDPs

by Julien Grand-Clément, et al.

Markov Decision Processes (MDPs) are a widely used model for dynamic decision-making problems. However, MDPs require a precise specification of model parameters, and the cost of a policy can be highly sensitive to the estimated parameters. Robust MDPs ameliorate this issue by allowing one to specify uncertainty sets around the parameters, which leads to a non-convex optimization problem. This non-convex problem can be solved via the Value Iteration (VI) algorithm, but VI requires repeatedly solving convex programs that become prohibitively expensive as the MDP grows larger. We propose an algorithmic framework based on first-order methods (FOMs), in which we interleave approximate value-iteration updates with a first-order computation of the robust Bellman update. Our algorithm relies on having a proximal setup for the uncertainty sets, which we instantiate for s-rectangular ellipsoidal uncertainty sets and Kullback-Leibler (KL) uncertainty sets. By carefully controlling the warm starts of our FOM and the increasing approximation rate at each VI iteration, our algorithm achieves a convergence rate of O(A^2 S^3 log(S) log(ϵ^-1) ϵ^-1) for the best choice of parameters, where S and A are the numbers of states and actions. This dependence on the numbers of states and actions is significantly better than that of VI algorithms. In numerical experiments on ellipsoidal uncertainty sets, we show that our algorithm is significantly more scalable than state-of-the-art approaches. Among algorithms for s-rectangular robust MDPs, to the best of our knowledge, ours is the only one that addresses KL uncertainty sets, and the only one that solves ellipsoidal uncertainty sets to optimality when the state and action spaces are on the order of several hundred.
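The framework described above — value iteration whose robust Bellman updates are computed approximately by a first-order method — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it uses a simple (s,a)-rectangular, quadratic-penalty stand-in for a hard ellipsoidal uncertainty set (the adversary minimizes p·v plus a penalty for deviating from the nominal kernel), solved by projected gradient descent over the probability simplex. All function names, the toy MDP data, and the penalty/step-size parameters are illustrative assumptions.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of a vector y onto the probability simplex."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(y) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(y - theta, 0.0)

def worst_case_value(p_hat, v, lam=0.5, lr=0.1, steps=50):
    """Inner adversarial problem, solved by a first-order method:
    minimize p @ v + (1/(2*lam)) * ||p - p_hat||^2 over the simplex,
    a penalty-based stand-in for a hard ellipsoidal constraint."""
    p = p_hat.copy()
    for _ in range(steps):
        grad = v + (p - p_hat) / lam      # gradient of the penalized objective
        p = project_simplex(p - lr * grad)
    return p @ v

def robust_vi(P_hat, R, gamma=0.9, iters=200):
    """Value iteration with FOM-approximated robust Bellman updates."""
    S, A = R.shape
    v = np.zeros(S)
    for _ in range(iters):
        q = np.array([[R[s, a] + gamma * worst_case_value(P_hat[s, a], v)
                       for a in range(A)] for s in range(S)])
        v = q.max(axis=1)
    return v

def nominal_vi(P, R, gamma=0.9, iters=200):
    """Standard value iteration on the nominal kernel, for comparison."""
    v = np.zeros(R.shape[0])
    for _ in range(iters):
        v = (R + gamma * np.einsum('saj,j->sa', P, v)).max(axis=1)
    return v

# Toy 2-state, 2-action MDP with nominal kernel P_hat (illustrative data).
P_hat = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.7, 0.3]]])
R = np.array([[1.0, 0.5],
              [0.0, 0.8]])
v_robust = robust_vi(P_hat, R)
v_nominal = nominal_vi(P_hat, R)
```

Because the adversary can only lower the value relative to the nominal model, `v_robust` is componentwise at most `v_nominal`. The paper's actual method differs in the essentials: it handles hard s-rectangular ellipsoidal and KL sets via a proximal setup, and controls the warm starts and inner accuracy across VI iterations to obtain the stated convergence rate.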




First-Order Methods for Wasserstein Distributionally Robust MDP

Markov Decision Processes (MDPs) are known to be sensitive to parameter ...

On the convex formulations of robust Markov decision processes

Robust Markov decision processes (MDPs) are used for applications of dyn...

An Adaptive State Aggregation Algorithm for Markov Decision Processes

Value iteration is a well-known method of solving Markov Decision Proces...

Partial Policy Iteration for L1-Robust Markov Decision Processes

Robust Markov decision processes (MDPs) allow one to compute reliable soluti...

Risk-Sensitive and Robust Decision-Making: a CVaR Optimization Approach

In this paper we address the problem of decision making within a Markov ...

Scaling Up Robust MDPs by Reinforcement Learning

We consider large-scale Markov decision processes (MDPs) with parameter ...

Robust Combination of Local Controllers

Planning problems are hard; motion planning, for example, is PSPACE-hard....