An Approximate Solution Method for Large Risk-Averse Markov Decision Processes

10/16/2012
by Marek Petrik et al.

Stochastic domains often involve risk-averse decision makers. While recent work has focused on modeling risk in Markov decision processes using risk measures, it has not addressed the problem of solving large risk-averse formulations. In this paper, we propose and analyze a new method for solving large risk-averse MDPs with hybrid continuous-discrete state spaces and continuous action spaces. The proposed method iteratively improves a bound on the value function by exploiting the linear structure of the MDP. We demonstrate the utility and properties of the method on a portfolio optimization problem.
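The abstract does not spell out the paper's bound-improvement algorithm, so as a minimal illustration of the general idea of risk-averse dynamic programming, here is a sketch of value iteration on a small discrete MDP in which the expectation in the Bellman operator is replaced by a nested CVaR risk measure. The function names, the choice of CVaR, and the tabular setting are assumptions for illustration, not the paper's method.

```python
import numpy as np

def cvar(values, probs, alpha):
    """CVaR_alpha of a discrete distribution: the mean of the worst
    alpha-fraction of outcomes (lowest values, for a maximizing agent)."""
    order = np.argsort(values)            # ascending: worst outcomes first
    v, p = values[order], probs[order]
    cum = np.cumsum(p)
    prev = np.concatenate(([0.0], cum[:-1]))
    # per-atom weight: full mass up to alpha, partial mass on the crossing atom
    w = np.clip(np.minimum(cum, alpha) - prev, 0.0, None)
    return float(np.dot(w, v) / alpha)

def risk_averse_value_iteration(P, R, gamma, alpha, iters=500):
    """Nested-CVaR value iteration on a tabular MDP.

    P: transition probabilities, shape (n_states, n_actions, n_states)
    R: immediate rewards, shape (n_states, n_actions)
    """
    n_s, n_a = R.shape
    V = np.zeros(n_s)
    for _ in range(iters):
        Q = np.empty((n_s, n_a))
        for s in range(n_s):
            for a in range(n_a):
                # risk-averse Bellman backup: CVaR over next-state values
                Q[s, a] = R[s, a] + gamma * cvar(V, P[s, a], alpha)
        V = Q.max(axis=1)
    return V
```

With `alpha = 1` the CVaR reduces to the ordinary expectation and this is standard value iteration; smaller `alpha` weights the worst-case tail more heavily, which is the qualitative effect a risk-averse formulation aims for. The paper's contribution, by contrast, is handling hybrid continuous-discrete state spaces and continuous actions, where such a tabular sweep is infeasible and value-function bounds are tightened iteratively instead.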


Related research

- Metrics for Markov Decision Processes with Infinite State Spaces (07/04/2012)
  We present metrics for measuring state similarity in Markov decision pro...
- Constrained Risk-Averse Markov Decision Processes (12/04/2020)
  We consider the problem of designing policies for Markov decision proces...
- Stochastic Approximation for Risk-aware Markov Decision Processes (05/11/2018)
  In this paper, we develop a stochastic approximation type algorithm to s...
- Solving Factored MDPs with Continuous and Discrete Variables (07/11/2012)
  Although many real-world stochastic planning problems are more naturally...
- Interval Markov Decision Processes with Continuous Action-Spaces (11/02/2022)
  Interval Markov Decision Processes (IMDPs) are uncertain Markov models, ...
- A Scalable Method for Solving High-Dimensional Continuous POMDPs Using Local Approximation (03/15/2012)
  Partially-Observable Markov Decision Processes (POMDPs) are typically so...
- Finding Counterfactually Optimal Action Sequences in Continuous State Spaces (06/06/2023)
  Humans performing tasks that involve taking a series of multiple depende...
