Risk-Sensitive Markov Decision Processes with Combined Metrics of Mean and Variance
This paper investigates the optimization of an infinite-horizon discrete-time Markov decision process (MDP) under a long-run average metric that combines the mean and the variance of rewards. Such a metric is important because the mean captures average return while the variance captures risk or fairness. However, since the variance metric couples the rewards across all stages, traditional dynamic programming is inapplicable: the principle of time consistency fails. We study this problem from a new perspective called sensitivity-based optimization theory. We derive a performance difference formula that quantifies the difference in the mean-variance combined metric between any two policies, and this formula can be used to generate new policies with strictly improved mean-variance performance. We also derive a necessary condition for the optimal policy and establish the optimality of deterministic policies. We further develop an iterative algorithm in the form of policy iteration, which is proved to converge to local optima in both the mixed and randomized policy spaces. In particular, when the mean reward is constant across policies, the algorithm is guaranteed to converge to the global optimum. Finally, we apply our approach to reduce the fluctuation of wind power in an energy storage system, which demonstrates the potential applicability of our optimization method.
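To make the policy-iteration-style procedure concrete, the sketch below runs a mean-variance improvement loop on a small random ergodic MDP. It is a minimal, hedged interpretation of the abstract, not the paper's definitive algorithm: it assumes the combined metric is mean minus beta times variance under the stationary distribution, and that each improvement step is greedy with respect to a pseudo-reward r(s,a) - beta*(r(s,a) - mu)^2 plus performance potentials obtained from a Poisson equation. The state space, transition data, and the weight beta are illustrative.

```python
import numpy as np

# Hedged sketch: mean-variance policy iteration on a toy ergodic MDP.
# The combined metric eta = mean - beta * variance and the pseudo-reward
# below are assumptions made for illustration.

n_states, n_actions, beta = 3, 2, 0.5
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # r(s, a)

def stationary_dist(P_pi):
    """Stationary distribution of an ergodic transition matrix."""
    evals, evecs = np.linalg.eig(P_pi.T)
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    return v / v.sum()

def evaluate(policy):
    """Long-run mean, variance, and combined metric under a deterministic policy."""
    P_pi = P[np.arange(n_states), policy]
    r_pi = R[np.arange(n_states), policy]
    d = stationary_dist(P_pi)
    mean = d @ r_pi
    var = d @ (r_pi - mean) ** 2
    return mean, var, mean - beta * var

def potentials(P_pi, f_pi):
    """Performance potentials g from the Poisson equation for pseudo-reward f."""
    d = stationary_dist(P_pi)
    eta_f = d @ f_pi
    A = np.eye(n_states) - P_pi + np.outer(np.ones(n_states), d)
    return np.linalg.solve(A, f_pi - eta_f)

policy = np.zeros(n_states, dtype=int)
for _ in range(50):
    mean, var, eta = evaluate(policy)
    # Pseudo-reward penalizes squared deviation from the *current* mean.
    F = R - beta * (R - mean) ** 2
    P_pi = P[np.arange(n_states), policy]
    g = potentials(P_pi, F[np.arange(n_states), policy])
    # Greedy improvement w.r.t. pseudo-reward plus expected potentials.
    new_policy = np.argmax(F + P @ g, axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("policy:", policy, "mean:", mean, "variance:", var, "combined:", eta)
```

Pinning the Poisson equation with the stationary distribution fixes the additive constant in the potentials; any small ergodic MDP with known transition probabilities can be substituted for the random instance used here.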