Set-based value operators for non-stationary Markovian environments

07/15/2022
by Sarah H. Q. Li, et al.

This paper analyzes finite-state Markov decision processes (MDPs) whose uncertain parameters lie in compact sets, and re-examines results from robust MDPs via set-based fixed point theory. We generalize the Bellman and policy evaluation operators to operators that contract on the space of value functions, which we call value operators. We further generalize these value operators to act on the space of value function sets, which we call set-based value operators, and prove that the set-based value operators are contractions in the space of compact value function sets. Leveraging insights from set theory, we generalize the rectangularity condition for the Bellman operator from the classic robust MDP literature to a containment condition for a generic value operator; the containment condition is weaker and applies to a larger class of parameter-uncertain MDPs and of contractive operators in dynamic programming and reinforcement learning. We prove that both the rectangularity condition and the containment condition suffice to ensure that the set-based value operator's fixed point set contains its own supremum and infimum elements. For convex and compact sets of uncertain MDP parameters, we show that the classic robust value function equals the supremum of the fixed point set of the set-based Bellman operator. Under MDP parameters that change dynamically within compact sets, we prove a set-convergence result for value iteration, which otherwise may not converge to a single value function.
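To make the robust value function concrete, the following is a minimal sketch of robust value iteration for a tiny finite MDP. It is not the paper's set-based construction: as a simplifying assumption, a finite set of candidate transition kernels stands in for the compact uncertainty set, and all names (`robust_bellman`, `robust_value_iteration`, the reward and kernel values) are illustrative. Each application of the operator takes, per state, the best action under the worst-case kernel.

```python
# Robust value iteration sketch for a 2-state, 2-action MDP.
# Assumption: a finite set of transition kernels stands in for the
# compact uncertainty set of the paper; all numbers are illustrative.

GAMMA = 0.9
STATES = [0, 1]
ACTIONS = [0, 1]

# Reward R[s][a].
R = [[1.0, 0.0],
     [0.0, 2.0]]

# Candidate transition kernels: P[(s, a)] -> next-state distribution.
KERNELS = [
    {  # kernel 0
        (0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
        (1, 0): [0.5, 0.5], (1, 1): [0.1, 0.9],
    },
    {  # kernel 1
        (0, 0): [0.7, 0.3], (0, 1): [0.4, 0.6],
        (1, 0): [0.6, 0.4], (1, 1): [0.3, 0.7],
    },
]

def robust_bellman(V):
    """One application of the robust Bellman operator:
    maximize over actions the worst-case (over kernels) backup value."""
    new_V = []
    for s in STATES:
        best = float("-inf")
        for a in ACTIONS:
            worst = min(
                R[s][a] + GAMMA * sum(P[(s, a)][t] * V[t] for t in STATES)
                for P in KERNELS
            )
            best = max(best, worst)
        new_V.append(best)
    return new_V

def robust_value_iteration(tol=1e-10):
    """Iterate the robust Bellman operator to its fixed point."""
    V = [0.0, 0.0]
    while True:
        new_V = robust_bellman(V)
        if max(abs(a - b) for a, b in zip(V, new_V)) < tol:
            return new_V
        V = new_V
```

Because the operator is a `GAMMA`-contraction in the sup norm, the iteration converges to the unique robust value function; under the paper's conditions (convex, compact parameter sets and the containment condition), that robust value function coincides with the supremum element of the set-based Bellman operator's fixed point set.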

