
Risk-Averse Stochastic Shortest Path Planning
We consider the stochastic shortest path planning problem in MDPs, i.e., the problem of designing policies that ensure reaching a goal state from a given initial state with minimum accrued cost. In order to account for rare but important realizations of the system, we consider a nested dynamic coherent risk total cost functional rather than the conventional risk-neutral total expected cost. Under some assumptions, we show that optimal, stationary, Markovian policies exist and can be found via a special Bellman's equation. We propose a computational technique based on difference convex programs (DCPs) to find the associated value functions and therefore the risk-averse policies. A rover navigation MDP is used to illustrate the proposed methodology with conditional-value-at-risk (CVaR) and entropic-value-at-risk (EVaR) coherent risk measures.
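To make the nested coherent-risk Bellman recursion concrete, here is a minimal illustrative sketch: value iteration on a toy SSP MDP where the one-step expectation is replaced by CVaR over the next-state value distribution. This is a simplified fixed-point iteration for intuition only, not the DCP-based method proposed in the paper; the MDP encoding (`P`, `c`, `goal`) and function names are our own assumptions.

```python
import numpy as np

def cvar(values, probs, alpha):
    """CVaR_alpha of a discrete cost distribution (mean of the worst
    alpha-fraction of outcomes). alpha = 1 recovers the expectation;
    alpha -> 0 approaches the worst-case cost."""
    order = np.argsort(values)[::-1]          # sort costs worst-first
    v, p = np.asarray(values, float)[order], np.asarray(probs, float)[order]
    tail, acc = 0.0, 0.0
    for vi, pi in zip(v, p):
        take = min(pi, alpha - acc)           # probability mass kept in the tail
        if take <= 0:
            break
        tail += take * vi
        acc += take
    return tail / alpha

def risk_averse_vi(P, c, goal, alpha, iters=500):
    """Nested-CVaR value iteration for an SSP MDP (illustrative sketch).

    P[s][a]: dict mapping next_state -> probability
    c[s][a]: immediate cost of action a in state s
    The goal state is absorbing with value 0."""
    n = len(P)
    V = np.zeros(n)
    for _ in range(iters):
        Vn = V.copy()
        for s in range(n):
            if s == goal:
                continue
            # Bellman backup with CVaR replacing the expectation
            Vn[s] = min(
                c[s][a] + cvar([V[t] for t in P[s][a]],
                               [P[s][a][t] for t in P[s][a]], alpha)
                for a in range(len(P[s]))
            )
        V = Vn
    return V

# Two-state chain: from state 0, one action costs 1 and reaches the
# goal (state 1) with probability 0.9, looping back with probability 0.1.
P = [[{0: 0.1, 1: 0.9}], [{1: 1.0}]]
c = [[1.0], [0.0]]
V_risk = risk_averse_vi(P, c, goal=1, alpha=0.5)     # risk-averse value
V_neutral = risk_averse_vi(P, c, goal=1, alpha=1.0)  # risk-neutral baseline
```

With `alpha=0.5` the fixed point is `V(0) = 1 + 0.2*V(0) = 1.25`, strictly larger than the risk-neutral value `1/0.9 ≈ 1.11`: weighting the rare loop-back outcome more heavily raises the certified cost-to-go, which is exactly the effect the nested coherent-risk functional is meant to capture. Note that for `alpha` at or below the loop-back probability the recursion diverges, reflecting that the risk measure then treats the loop as effectively certain.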