Scalable First-Order Methods for Robust MDPs

05/11/2020
by Julien Grand-Clément, et al.

Markov Decision Processes (MDPs) are a widely used model for dynamic decision-making problems. However, MDPs require a precise specification of the model parameters, and the cost of a policy is often highly sensitive to the estimated parameters. Robust MDPs ameliorate this issue by allowing one to specify uncertainty sets around the parameters, which leads to a non-convex optimization problem. This non-convex problem can be solved via the Value Iteration (VI) algorithm, but VI requires repeatedly solving convex programs that become prohibitively expensive as MDPs grow larger. We propose a first-order method (FOM) algorithmic framework in which we interleave approximate value iteration updates with a first-order computation of the robust Bellman update. Our algorithm relies on having a proximal setup for the uncertainty sets, and we instantiate this proximal setup for s-rectangular ellipsoidal uncertainty sets and Kullback-Leibler (KL) uncertainty sets. By carefully controlling the warm starts of our FOM and the increasing accuracy of the Bellman updates across VI iterations, our algorithm achieves a convergence rate of O(A^2 S^3 log(S) log(ϵ^-1) ϵ^-1) for the best choice of parameters, where S and A are the numbers of states and actions. This dependence on the numbers of states and actions is significantly better than that of VI algorithms. In numerical experiments on ellipsoidal uncertainty sets, we show that our algorithm is significantly more scalable than state-of-the-art approaches. Among algorithms for s-rectangular robust MDPs, ours is, to the best of our knowledge, the only one that addresses KL uncertainty sets, and the only one that solves ellipsoidal uncertainty sets to optimality when the state and action spaces grow to the order of several hundred.
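
To make the interleaving of approximate value iteration with a first-order inner solver concrete, here is a minimal NumPy sketch, not the paper's implementation. It simplifies in two loudly flagged ways: it uses (s,a)-rectangular Euclidean-ball uncertainty sets rather than the paper's s-rectangular ellipsoids, and it approximates the projection onto the ball-simplex intersection with a few alternating projections, a heuristic stand-in for the paper's exact proximal setup. All names (robust_vi, approx_worst_case, the radius parameter, step sizes) are illustrative assumptions, not from the paper.

```python
import numpy as np

def project_simplex(p):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(p)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(p) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(p - theta, 0.0)

def project_ball(p, center, radius):
    """Euclidean projection onto the ball of given radius around center."""
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= radius else center + radius * d / n

def approx_worst_case(v, p_hat, radius, p0, iters=50, lr=0.01):
    """Adversary's inner problem: approximately minimize p @ v over
    distributions p with ||p - p_hat||_2 <= radius, via projected
    gradient descent. The projection onto the ball/simplex intersection
    is approximated by alternating projections (a heuristic stand-in
    for the paper's exact proximal setup)."""
    p = p0.copy()
    for _ in range(iters):
        p = p - lr * v                    # gradient of p @ v is v
        for _ in range(10):               # approximate projection
            p = project_ball(p, p_hat, radius)
            p = project_simplex(p)
    return p

def robust_vi(r, P_hat, radius, gamma=0.9, outer=200, tol=1e-6):
    """Inexact robust value iteration with warm-started inner FOM solves.
    r: (S, A) rewards; P_hat: (S, A, S) nominal transition kernels."""
    S, A = r.shape
    v = np.zeros(S)
    P_adv = P_hat.copy()                  # warm starts for the adversary
    for _ in range(outer):
        q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                P_adv[s, a] = approx_worst_case(v, P_hat[s, a], radius, P_adv[s, a])
                q[s, a] = r[s, a] + gamma * P_adv[s, a] @ v
        v_new = q.max(axis=1)             # agent best-responds
        if np.abs(v_new - v).max() < tol:
            break
        v = v_new
    return v

# Toy usage: a small random MDP with a 0.05-radius ball around each kernel.
rng = np.random.default_rng(0)
S, A = 10, 4
r = rng.random((S, A))
P_hat = rng.random((S, A, S))
P_hat /= P_hat.sum(axis=-1, keepdims=True)
print(robust_vi(r, P_hat, radius=0.05))
```

Warm-starting P_adv from the previous outer iteration mirrors the abstract's emphasis on controlled warm starts: nearby value functions yield nearby worst-case kernels, so each inner solve needs only a few gradient steps.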
