Optimizing the CVaR via Sampling

Conditional Value at Risk (CVaR) is a prominent risk measure that is used extensively in various domains. We develop a new formula for the gradient of the CVaR in the form of a conditional expectation. Based on this formula, we propose a novel sampling-based estimator for the CVaR gradient, in the spirit of the likelihood-ratio method. We analyze the bias of the estimator, and prove the convergence of a corresponding stochastic gradient descent algorithm to a local CVaR optimum. Our method allows us to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risk-sensitive controller for the game of Tetris.

1 Introduction

Conditional Value at Risk (CVaR; Rockafellar and Uryasev 2000) is an established risk measure that has found extensive use in finance, among other fields. For a random payoff R, whose distribution is parameterized by a controllable parameter θ, the α-CVaR is defined as the expected payoff over the α fraction of worst outcomes of R:

 Φ(θ) = E_θ[ R | R ≤ ν_α(θ) ],

where ν_α(θ) is the α-quantile of R. CVaR optimization aims to find a parameter θ that maximizes Φ(θ).

When the payoff is of the structure R = f_θ(X), where f_θ is a deterministic function and X is random but does not depend on θ, CVaR optimization may be formulated as a stochastic program, and solved using various approaches [Rockafellar and Uryasev2000, Hong and Liu2009, Iyengar and Ma2013]. Such a payoff structure is appropriate for certain domains, such as portfolio optimization, in which the investment strategy generally does not affect the asset prices. However, in many important domains, for example queueing systems, resource allocation, and reinforcement learning, the tunable parameters also control the distribution of the random outcomes. Since existing CVaR optimization methods are not suitable for such cases, and due to the recently increased interest in risk-sensitive optimization in these domains [Tamar, Di Castro, and Mannor2012, Prashanth and Ghavamzadeh2013], there is a strong incentive to develop more general CVaR optimization algorithms.

In this work, we propose a CVaR optimization approach that is applicable when θ also controls the distribution of R. The basis of our approach is a new formula that we derive for the CVaR gradient in the form of a conditional expectation. Based on this formula, we propose a sampling-based estimator for the CVaR gradient, and use it to optimize the CVaR by stochastic gradient descent.

In addition, we analyze the bias of our estimator, and use the result to prove convergence of the stochastic gradient descent algorithm to a local CVaR optimum. Our method allows us to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risk-sensitive controller for the game of Tetris. To our knowledge, CVaR optimization for such a domain is beyond the reach of existing approaches. Considering Tetris also allows us to easily interpret our results, and show that we indeed learn sensible policies.

We remark that in certain domains, CVaR is often not maximized directly, but used as a constraint in an optimization problem of the form max_θ E_θ[R] s.t. Φ_α(R;θ) ≥ b. Extending our approach to such problems is straightforward, using standard penalty method techniques (see, e.g., Tamar, Di Castro, and Mannor 2012 and Prashanth and Ghavamzadeh 2013 for such an approach with a variance-constrained objective), since the key component for these methods is the CVaR gradient estimator we provide here. Another appealing property of our estimator is that it naturally incorporates importance sampling, which is important when α is small, and the CVaR captures rare events.

Related Work

Our approach is similar in spirit to the likelihood-ratio method (LR; Glynn 1990), which estimates the gradient of the expected payoff. The LR method has been successfully applied in diverse domains such as queueing systems, inventory management, and financial engineering [Fu2006], and also in reinforcement learning (RL; Sutton and Barto 1998), where it is commonly known as the policy gradient method [Baxter and Bartlett2001, Peters and Schaal2008]. Our work extends the LR method to estimating the gradient of the CVaR of the payoff.

Closely related to our work are the studies of Hong and Liu (2009) and Scaillet (2004), who proposed perturbation-analysis-style estimators for the gradient of the CVaR, for the setting mentioned above, in which θ does not affect the distribution of the random outcomes. Indeed, their gradient formulae are different from ours, and do not apply in our setting.

LR gradient estimators for other risk measures have been proposed by Borkar (2001) for exponential utility functions, and by Tamar, Di Castro, and Mannor (2012) for the mean–variance criterion. These measures, however, consider a very different notion of risk than the CVaR. For example, the mean–variance measure is known to underestimate the risk of rare, but catastrophic, events [Agarwal and Naik2004].

Risk-sensitive optimization in RL has received increased interest recently. A mean–variance criterion was considered by Tamar, Di Castro, and Mannor (2012) and Prashanth and Ghavamzadeh (2013). Morimura et al. (2010) consider the expected return, with a CVaR-based risk-sensitive policy for guiding the exploration while learning. Their method, however, does not scale to large problems. Borkar and Jain (2014) optimize a CVaR-constrained objective using dynamic programming, by augmenting the state space with the accumulated reward. As such, that method is only suitable for a finite horizon and a small state space, and does not scale up to problems such as the Tetris domain we consider. A function approximation extension of [Borkar and Jain2014] is mentioned, using a three-time-scale stochastic approximation algorithm. In that work, three different learning rates are decreased to 0, and convergence is determined by the slowest one, leading to an overall slow convergence. In contrast, our approach requires only a single learning rate. Recently, Prashanth (2014) used our gradient formula of Proposition 2 (from a preliminary version of this paper) in a two-time-scale stochastic approximation scheme to show convergence of CVaR optimization. Besides providing the theoretical basis for that work, our current convergence result (Theorem 5) obviates the need for the extra time scale, and results in a simpler and faster algorithm.

2 The CVaR Gradient

In this section we present a new LR-style formula for the gradient of the CVaR. This gradient will be used in subsequent sections to optimize the CVaR with respect to some parametric family. We start with a formal definition of the CVaR, and then present a CVaR gradient formula for 1-dimensional random variables. We then extend our result to the multi-dimensional case.

Let Z denote a random variable with a cumulative distribution function (C.D.F.) F_Z(z). For convenience, we assume that Z is a continuous random variable, meaning that F_Z(z) is everywhere continuous. We also assume that Z is bounded. Given a confidence level α ∈ (0,1), the α-Value-at-Risk (VaR; or α-quantile) of Z is denoted ν_α(Z), and given by

 ν_α(Z) = F_Z^{−1}(α) ≐ inf{ z : F_Z(z) ≥ α }. (1)

The α-Conditional-Value-at-Risk of Z is denoted by Φ_α(Z) and defined as the expectation of the α fraction of the worst outcomes of Z:

 Φ_α(Z) = E[ Z | Z ≤ ν_α(Z) ]. (2)
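Definitions (1) and (2) translate directly into empirical estimates from samples. A minimal pure-Python sketch (the function names are illustrative, not from the paper):

```python
import math
import random

def empirical_var(samples, alpha):
    # Eq. (1): sort ascending and take the ceil(alpha * N)-th order statistic.
    z = sorted(samples)
    k = math.ceil(alpha * len(z))
    return z[k - 1]

def empirical_cvar(samples, alpha):
    # Eq. (2): average the alpha-fraction of worst (lowest) outcomes.
    v = empirical_var(samples, alpha)
    tail = [zi for zi in samples if zi <= v]
    return sum(tail) / len(tail)

random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(100_000)]
v = empirical_var(z, 0.05)    # true 0.05-quantile of N(0,1) is about -1.645
c = empirical_cvar(z, 0.05)   # true 0.05-CVaR of N(0,1) is about -2.063
```

For a standard normal payoff, the estimates recover ν_0.05 ≈ −1.645 and Φ_0.05 ≈ −2.063 up to sampling error.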

We next present a formula for the sensitivity of Φ_α(Z;θ) to changes in θ.

2.1 CVaR Gradient of a 1-Dimensional Variable

Consider again a random variable Z, but now let its probability density function (P.D.F.) f_Z(z;θ) be parameterized by a vector θ ∈ R^k. We let ν_α(Z;θ) and Φ_α(Z;θ) denote the VaR and CVaR of Z, respectively, as defined in Eq. (1) and (2), when the parameter is θ.

We are interested in the sensitivity of the CVaR to the parameter vector, as expressed by the gradient ∂Φ_α(Z;θ)/∂θ. In all but the most simple cases, calculating the gradient analytically is intractable. Therefore, we derive a formula in which ∂Φ_α(Z;θ)/∂θ is expressed as a conditional expectation, and use it to calculate the gradient by sampling. For technical convenience, we make the following assumption:

Assumption 1.

Z is a continuous random variable, and bounded in [−b, b] for all θ.

We also make the following smoothness assumption on ν_α(Z;θ) and Φ_α(Z;θ):

Assumption 2.

For all θ and 1 ≤ j ≤ k, the gradients ∂ν_α(Z;θ)/∂θ_j and ∂Φ_α(Z;θ)/∂θ_j exist and are bounded.

Note that since Z is continuous, Assumption 2 is satisfied whenever ∂f_Z(z;θ)/∂θ_j is bounded. Relaxing Assumptions 1 and 2 is possible, but involves technical details that would complicate the presentation, and is left to future work. The next assumption is standard in LR gradient estimates:

Assumption 3.

For all θ, z, and 1 ≤ j ≤ k, we have that (∂f_Z(z;θ)/∂θ_j) / f_Z(z;θ) exists and is bounded.

In the next proposition we present an LR-style sensitivity formula for ∂Φ_α(Z;θ)/∂θ_j, in which the gradient is expressed as a conditional expectation. In Section 3 we shall use this formula to suggest a sampling algorithm for the gradient.

Proposition 1.

Let Assumptions 1, 2, and 3 hold. Then

 ∂Φ_α(Z;θ)/∂θ_j = E_θ[ (∂ log f_Z(Z;θ)/∂θ_j) (Z − ν_α(Z;θ)) | Z ≤ ν_α(Z;θ) ].

Proof.

Define the level-set D_θ ≐ {z ∈ [−b, b] : z ≤ ν_α(Z;θ)}. By definition, D_θ ≡ [−b, ν_α(Z;θ)], and ∫_{−b}^{ν_α(Z;θ)} f_Z(z;θ) dz = α. Taking a derivative and using the Leibniz rule we obtain

 0 = (∂/∂θ_j) ∫_{−b}^{ν_α(Z;θ)} f_Z(z;θ) dz = ∫_{−b}^{ν_α(Z;θ)} (∂f_Z(z;θ)/∂θ_j) dz + (∂ν_α(Z;θ)/∂θ_j) f_Z(ν_α(Z;θ);θ). (3)

By definition (2) we have Φ_α(Z;θ) = α^{−1} ∫_{−b}^{ν_α(Z;θ)} z f_Z(z;θ) dz. Now, taking a derivative and using the Leibniz rule we obtain

 ∂Φ_α(Z;θ)/∂θ_j = α^{−1} ∫_{−b}^{ν_α(Z;θ)} (∂f_Z(z;θ)/∂θ_j) z dz + α^{−1} (∂ν_α(Z;θ)/∂θ_j) f_Z(ν_α(Z;θ);θ) ν_α(Z;θ). (4)

Rearranging, and plugging (3) in (4), we obtain ∂Φ_α(Z;θ)/∂θ_j = α^{−1} ∫_{−b}^{ν_α(Z;θ)} (∂f_Z(z;θ)/∂θ_j) (z − ν_α(Z;θ)) dz. Finally, using the likelihood-ratio trick – multiplying and dividing by f_Z(z;θ) inside the integral, which is justified due to Assumption 3 – we obtain the required expectation. ∎

Let us contrast the CVaR LR formula of Proposition 1 with the standard LR formula for the expectation [Glynn1990]: ∂E_θ[Z]/∂θ_j = E_θ[(∂ log f_Z(Z;θ)/∂θ_j)(Z − b)], where the baseline b could be any arbitrary constant. Note that in the CVaR case the baseline is specific, and, as seen in the proof, accounts for the sensitivity of the level-set D_θ. Quite surprisingly, this specific baseline turns out to be exactly the VaR, ν_α(Z;θ), which, as we shall see later, also leads to an elegant sampling-based estimator.

In a typical application, Z would correspond to the performance of some system, such as the profit in portfolio optimization, or the total reward in RL. Note that in order to use Proposition 1 in a gradient estimation algorithm, one needs access to ∂f_Z(z;θ)/∂θ: the sensitivity of the performance distribution to the parameters. Typically, the system performance is a complicated function of a high-dimensional random variable. For example, in RL and queueing systems, the performance is a function of a trajectory from a stochastic dynamical system, and calculating its probability distribution is usually intractable. The sensitivity of the trajectory distribution to the parameters, however, is often easy to calculate, since the parameters typically control how the trajectory is generated. We shall now generalize Proposition 1 to such cases. The utility of this generalization is further exemplified in Section 5, for the RL domain.

2.2 CVaR Gradient Formula – General Case

Let X = (X_1, …, X_n) denote an n-dimensional random variable with a finite support [−b, b]^n, and let Y denote a discrete random variable taking values in some countable set Y. Let f_Y(y;θ) denote the probability mass function of Y, and let f_{X|Y}(x|y;θ) denote the probability density function of X given Y. Let the reward function r be a bounded mapping from R^n × Y to R, and consider the random variable R ≐ r(X, Y). We are interested in a formula for ∂Φ_α(R;θ)/∂θ_j.

We make the following assumption, similar to Assumptions 1, 2, and 3.

Assumption 4.

The reward R is a continuous random variable for all θ. Furthermore, for all θ and 1 ≤ j ≤ k, the gradients ∂ν_α(R;θ)/∂θ_j and ∂Φ_α(R;θ)/∂θ_j are well defined and bounded. In addition, ∂f_Y(y;θ)/∂θ_j and ∂f_{X|Y}(x|y;θ)/∂θ_j exist and are bounded for all x, y, and θ.

Define the level-set D_{y;θ} ≐ {x ∈ [−b, b]^n : r(x, y) ≤ ν_α(R;θ)}. We require some smoothness of the function r, that is captured by the following assumption on D_{y;θ}.

Assumption 5.

For all y and θ, the set D_{y;θ} may be written as a finite sum of L_{y;θ} disjoint, closed, and connected components D_{y;θ}^{(i)}, each with positive measure: D_{y;θ} = ∑_{i=1}^{L_{y;θ}} D_{y;θ}^{(i)}.

Assumption 5 may be satisfied, for example, when r(x, y) is Lipschitz in x for all y ∈ Y. We now present a sensitivity formula for Φ_α(R;θ).

Proposition 2.

Let Assumptions 4 and 5 hold. Then

 ∂Φ_α(R;θ)/∂θ_j = E_θ[ ( ∂ log f_Y(Y;θ)/∂θ_j + ∂ log f_{X|Y}(X|Y;θ)/∂θ_j ) (R − ν_α(R;θ)) | R ≤ ν_α(R;θ) ].

The proof of Proposition 2 is similar in spirit to the proof of Proposition 1, but involves some additional difficulties of applying the Leibniz rule in a multidimensional setting. It is given in the full version of this paper [tamar2014cvar]. We reiterate that relaxing Assumptions 4 and 5 is possible, but is technically involved, and left for future work. In the next section we show that the formula in Proposition 2 leads to an effective algorithm for estimating ∂Φ_α(R;θ)/∂θ_j by sampling.

3 A CVaR Gradient Estimation Algorithm

The sensitivity formula in Proposition 2 suggests a natural Monte–Carlo (MC) estimation algorithm. The method, which we label GCVaR (Gradient estimator for CVaR), is described as follows. Let x_1, y_1, …, x_N, y_N be N samples drawn i.i.d. from f_{X,Y}(x, y;θ), the joint distribution of X and Y. We first estimate ν_α(R;θ) using the empirical α-quantile ṽ (algorithmically, this is equivalent to first sorting the r(x_i, y_i)'s in ascending order, and then selecting ṽ as the ⌈αN⌉-th term in the sorted list):

 ṽ = inf_z { z : F̂(z) ≥ α }, (5)

where F̂(z) is the empirical C.D.F. of R: F̂(z) ≐ N^{−1} ∑_{i=1}^N 1{r(x_i, y_i) ≤ z}. The MC estimate Δ_{j;N} of the gradient ∂Φ_α(R;θ)/∂θ_j is given by

 Δ_{j;N} = (1/(αN)) ∑_{i=1}^{N} ( ∂ log f_Y(y_i;θ)/∂θ_j + ∂ log f_{X|Y}(x_i|y_i;θ)/∂θ_j ) (r(x_i, y_i) − ṽ) 1{r(x_i, y_i) ≤ ṽ}. (6)
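The estimator (5)-(6) is short to implement. The sketch below is illustrative (the names and the Gaussian toy usage are ours, not the paper's): for R ~ N(θ, 1), the likelihood-ratio score is r − θ, and since θ only shifts the location, the true CVaR gradient with respect to θ equals 1 for every α.

```python
import math
import random

def gcvar(rewards, scores, alpha):
    """Monte-Carlo CVaR-gradient estimate, Eqs. (5)-(6).

    rewards -- r(x_i, y_i) for N i.i.d. samples
    scores  -- per-sample likelihood-ratio term
               d log f_Y(y_i)/d theta_j + d log f_{X|Y}(x_i|y_i)/d theta_j
    """
    n = len(rewards)
    v = sorted(rewards)[math.ceil(alpha * n) - 1]   # empirical alpha-quantile, Eq. (5)
    return sum(s * (r - v) for r, s in zip(rewards, scores) if r <= v) / (alpha * n)

# Toy usage: R ~ N(theta, 1) at theta = 0, where the score is r itself.
random.seed(1)
r = [random.gauss(0.0, 1.0) for _ in range(200_000)]
g = gcvar(r, r, 0.05)   # should be close to the true gradient, 1.0
```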

It is known that the empirical α-quantile is a biased estimator of ν_α(R;θ). Therefore, Δ_{j;N} is also a biased estimator of ∂Φ_α(R;θ)/∂θ_j. In the following we analyze and bound this bias. We first show that Δ_{j;N} is a consistent estimator. The proof is similar to the proof of Theorem 4.1 in [Hong and Liu2009], and given in the supplementary material.

Theorem 3.

Let Assumptions 4 and 5 hold. Then Δ_{j;N} → ∂Φ_α(R;θ)/∂θ_j w.p. 1 as N → ∞.

With an additional smoothness assumption we can explicitly bound the bias. Let f_R(·;θ) denote the P.D.F. of R, and define the function g(β;θ) ≐ E_θ[ ∂ log f_Y(Y;θ)/∂θ_j + ∂ log f_{X|Y}(X|Y;θ)/∂θ_j | R = β ].

Assumption 6.

For all θ, f_R(·;θ) and g(·;θ) are continuous at ν_α(R;θ), and f_R(ν_α(R;θ);θ) > 0.

Assumption 6 is similar to Assumption 4 of [Hong and Liu2009], and may be satisfied, for example, when f_{X|Y}(x|y;θ) is continuous and r(x, y) is Lipschitz in x. The next theorem shows that the bias is O(N^{−1/2}). The proof, given in the supplementary material, is based on separating the bias into a term that is bounded using a result of Hong and Liu (2009), and an additional term that we bound using well-known results for the bias of empirical quantiles.

Theorem 4.

Let Assumptions 4, 5, and 6 hold. Then E[Δ_{j;N}] − ∂Φ_α(R;θ)/∂θ_j is O(N^{−1/2}).

At this point, let us again contrast GCVaR with the standard LR method. One may naively presume that applying a standard LR gradient estimator to the αN worst samples would work as a CVaR gradient estimator. This corresponds to applying the GCVaR algorithm without subtracting the baseline ṽ from the reward in (6). Theorems 3 and 4 show that such an estimator would not be consistent. In fact, in the supplementary material we give an example where the gradient error of such an approach may be arbitrarily large.
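The effect of dropping the baseline can be checked numerically on a toy model (a hypothetical N(θ, 1) payoff, ours rather than the paper's), where the true CVaR gradient with respect to the location θ is exactly 1:

```python
import math
import random

random.seed(2)
alpha, n = 0.05, 200_000
theta = 0.0
z = [random.gauss(theta, 1.0) for _ in range(n)]    # score = d log f/d theta = z - theta
v = sorted(z)[math.ceil(alpha * n) - 1]             # empirical VaR
tail = [zi for zi in z if zi <= v]

with_baseline = sum((zi - theta) * (zi - v) for zi in tail) / (alpha * n)  # GCVaR, Eq. (6)
no_baseline = sum((zi - theta) * zi for zi in tail) / (alpha * n)          # baseline dropped

# with_baseline is close to the true gradient 1.0, while no_baseline
# concentrates around a value far from 1, so it is not consistent.
```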

In the sequel, we use GCVaR as part of a stochastic gradient descent algorithm for CVaR optimization. An asymptotically decreasing gradient bias, as may be established from Theorem 3, is necessary to guarantee convergence of such a procedure. Furthermore, the bound of Theorem 4 will allow us to quantify how many samples are needed at each iteration for such convergence to hold.

Variance Reduction by Importance Sampling

For very low quantiles, i.e., α close to 0, the GCVaR estimator would suffer from high variance, since the averaging is effectively only over αN samples. This is a well-known issue in sampling-based approaches to VaR and CVaR estimation, and is often mitigated using variance reduction techniques such as Importance Sampling (IS; Rubinstein and Kroese 2011; Bardou, Frikha, and Pagès 2009). In IS, the variance of an MC estimator is reduced by using samples from a different sampling distribution, and suitably modifying the estimator to keep it unbiased. It is straightforward to incorporate IS into LR gradient estimators in general, and into our GCVaR estimator in particular. Due to space constraints, and since this is fairly standard textbook material (e.g., Rubinstein and Kroese 2011), we provide the full technical details in the supplementary material. In our empirical results we show that using IS indeed leads to significantly better performance.
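As a concrete illustration (our own sketch, not the exact scheme from the supplementary material), self-normalized IS with a proposal shifted toward the left tail can estimate a rare-tail VaR and CVaR accurately:

```python
import math
import random

# Target: Z ~ N(0, 1) with alpha = 0.01; proposal shifted into the left tail: N(-2, 1).
# For these two normals, the likelihood ratio is w(z) = f(z)/g(z) = exp(2z + 2).
random.seed(4)
alpha, n = 0.01, 100_000
zs = [random.gauss(-2.0, 1.0) for _ in range(n)]
ws = [math.exp(2.0 * z + 2.0) for z in zs]

pairs = sorted(zip(zs, ws))            # ascending in z
wsum = sum(ws)
acc, v = 0.0, None
for z, w in pairs:                     # weighted empirical alpha-quantile (VaR)
    acc += w / wsum
    if acc >= alpha:
        v = z
        break
tail = [(z, w) for z, w in pairs if z <= v]
cvar = sum(w * z for z, w in tail) / sum(w for z, w in tail)
# True values for N(0,1): VaR_0.01 ~ -2.326, CVaR_0.01 ~ -2.665.
```

Most proposal samples now land near the rare region, so far fewer samples are wasted than with naive MC at α = 0.01.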

4 CVaR Optimization

In this section, we consider the setting of Section 2.2, and aim to solve the CVaR optimization problem:

 max_{θ ∈ R^k} Φ_α(R;θ). (7)

For this goal we propose CVaRSGD: a stochastic gradient descent algorithm, based on the GCVaR gradient estimator. We now describe the CVaRSGD algorithm in detail, and show that it converges to a local optimum of (7).

In CVaRSGD, we start with an arbitrary initial parameter θ^0 ∈ Θ. The algorithm proceeds iteratively as follows. At each iteration i of the algorithm, we first sample n_i i.i.d. realizations x_1, y_1, …, x_{n_i}, y_{n_i} of the random variables X and Y, from the distribution f_{X,Y}(x, y;θ^i). We then apply the GCVaR algorithm to obtain an estimate Δ_{j;n_i} of ∂Φ_α(R;θ^i)/∂θ_j, using the samples x_1, y_1, …, x_{n_i}, y_{n_i}. Finally, we update the parameter according to

 θ_j^{i+1} = Γ(θ_j^i + ε_i Δ_{j;n_i}), (8)

where ε_i is a positive step size, and Γ is a projection to some compact set Θ with a smooth boundary. The purpose of the projection is to facilitate convergence of the algorithm, by guaranteeing that the iterates remain bounded (this is a common stochastic approximation technique; Kushner and Yin 2003). In practice, if Θ is chosen large enough so that it contains the local optima of Φ_α(R;θ), the projection would rarely occur, and would have a negligible effect on the algorithm. Let Γ̂_θ denote an operator that, given a direction of change Δ to the parameter θ, returns a modified direction that keeps θ within Θ. Consider the following ordinary differential equation:

 θ̇ = Γ̂_θ(∇_θ Φ_α(R;θ)), θ(0) ∈ Θ. (9)

Let K denote the set of all asymptotically stable equilibria of (9). The next theorem shows that under suitable technical conditions, the CVaRSGD algorithm converges to K almost surely. The theorem is a direct application of Theorem 5.2.1 of Kushner and Yin (2003), and is given here without proof.

Theorem 5.

Consider the CVaRSGD algorithm (8). Let Assumptions 4, 5, and 6 hold, and assume that Φ_α(R;θ) is continuously differentiable in θ. Also, assume that ∑_i ε_i = ∞, ∑_i ε_i² < ∞, and that ∑_i ε_i |E[Δ_{j;n_i}] − ∂Φ_α(R;θ^i)/∂θ_j| < ∞ w.p. 1 for all j. Then θ^i → K almost surely as i → ∞.

Note that from the discussion in Section 3, the requirement ∑_i ε_i |E[Δ_{j;n_i}] − ∂Φ_α(R;θ^i)/∂θ_j| < ∞ implies that we must have n_i → ∞. However, the rate of n_i could be very slow; for example, using the bias bound of Theorem 4, the requirement may be satisfied with ε_i = 1/i and a sample size n_i that grows only polylogarithmically in i, since it suffices that ∑_i ε_i n_i^{−1/2} < ∞ (e.g., n_i = (log i)^4 gives ∑_i 1/(i log² i) < ∞).
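The full CVaRSGD loop can be sketched on a toy problem (ours, not the paper's setting): for R ~ N(θ, 1), Φ_α(R;θ) = θ + const, so stochastic gradient ascent on the CVaR should drive θ up toward the projection boundary:

```python
import math
import random

# CVaRSGD on R ~ N(theta, 1): the LR score is d log f/d theta = r - theta,
# and the true CVaR gradient w.r.t. theta is 1, so theta should climb.
random.seed(5)
alpha, n_i, eps = 0.05, 2_000, 0.05
theta = 0.0
for i in range(200):
    r = [random.gauss(theta, 1.0) for _ in range(n_i)]
    v = sorted(r)[math.ceil(alpha * n_i) - 1]            # empirical VaR, Eq. (5)
    delta = sum((ri - theta) * (ri - v)                  # GCVaR estimate, Eq. (6)
                for ri in r if ri <= v) / (alpha * n_i)
    theta = min(10.0, max(-10.0, theta + eps * delta))   # update (8), projection onto [-10, 10]
# The ascent direction is ~1 per step, so theta increases steadily from 0.
```

Here the projection interval [-10, 10] and the constant step size stand in for Γ and ε_i; a decreasing step size and growing n_i would be used for the convergence guarantee of Theorem 5.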

5 Application to Reinforcement Learning

In this section we show that the CVaRSGD algorithm may be used in an RL policy-gradient type scheme, for optimizing performance criteria that involve the CVaR of the total return. We first describe some preliminaries and our RL setting, and then describe our algorithm.

We consider an episodic Markov Decision Problem (MDP), also known as a stochastic shortest path problem [Bertsekas2012], in discrete time with a finite state space X and a finite action space A. At time t the state is x_t, and an action a_t is chosen according to a parameterized policy π_θ, which assigns a distribution over actions according to the observed history of states h_t ≐ x_0, …, x_t. Then, an immediate random reward r_t is received, and the state transitions to x_{t+1} according to the MDP transition probability P(x_{t+1}|x_t, a_t). We denote by ζ_0 the initial state distribution and by x* a terminal state, and we assume that for all θ, x* is reached w.p. 1.

For some policy π_θ, let x_0, a_0, r_0, x_1, a_1, r_1, …, x_τ denote a state-action-reward trajectory from the MDP under that policy, that terminates at time τ, i.e., x_τ = x*. The trajectory is a random variable, and we decompose it into a discrete part Y ≐ x_0, a_0, x_1, a_1, …, x_τ and a continuous part X ≐ r_0, r_1, …, r_{τ−1} (this decomposition is not restrictive, and used only to illustrate the definitions of Section 2; one may alternatively consider a continuous state space, or discrete rewards, so long as Assumptions 4, 5, and 6 hold). Our quantity of interest is the total reward along the trajectory, R ≐ ∑_{t=0}^{τ−1} r_t. In standard RL, the objective is to find the parameter θ that maximizes the expected return V(θ) = E_θ[R]. Policy gradient methods [Baxter and Bartlett2001, Marbach and Tsitsiklis1998, Peters and Schaal2008] use simulation to estimate ∂V(θ)/∂θ, and then perform stochastic gradient ascent on the parameters θ. In this work we are risk-sensitive, and our goal is to maximize the CVaR of the total return, Φ_α(R;θ). In the spirit of policy gradient methods, we estimate ∂Φ_α(R;θ)/∂θ from simulation, using GCVaR, and optimize it using CVaRSGD. We now detail our approach.

First, it is well known [Marbach and Tsitsiklis1998] that by the Markov property of the state transitions:

 ∂ log f_Y(Y;θ)/∂θ = ∑_{t=0}^{τ−1} ∂ log f_{a|h}(a_t|h_t;θ)/∂θ. (10)

Also, note that in our formulation we have

 ∂ log f_{X|Y}(x_i|y_i;θ)/∂θ = 0, (11)

since, given the states and actions, the rewards do not depend on θ directly.

To apply CVaRSGD in the RL setting, at each iteration i of the algorithm we simulate n_i trajectories of the MDP using policy π_{θ^i} (each x and y here together correspond to a single trajectory, as realizations of the random variables X and Y defined above). We then apply the GCVaR algorithm to obtain an estimate Δ_{j;n_i} of ∂Φ_α(R;θ^i)/∂θ_j, using the simulated trajectories, Eq. (10), and Eq. (11). Finally, we update the policy parameter according to Eq. (8). Note that due to Eq. (10), the transition probabilities of the MDP, which are generally not known to the decision maker, are not required for estimating the gradient using GCVaR. Only policy-dependent terms are required.

We should remark that for the standard RL criterion V(θ), a Markov policy that depends only on the current state suffices to achieve optimality [Bertsekas2012]. For the CVaR criterion this is not necessarily the case. Bäuerle and Ott (2011) show that under certain conditions, an augmentation of the current state with a function of the accumulated reward suffices for optimality. In our simulations, we used a Markov policy, and still obtained useful and sensible results.

Assumptions 4, 5, and 6, which are required for convergence of the algorithm, are reasonable for the RL setting, and may be satisfied, for example, when π_θ(a|h) is smooth and ∂ log π_θ(a|h)/∂θ is well defined and bounded. This last condition is standard in the policy gradient literature, and a popular policy representation that satisfies it is softmax action selection [Sutton et al.2000, Marbach and Tsitsiklis1998], given by π_θ(a|h) = exp(φ(h,a)^⊤ θ) / ∑_{a′} exp(φ(h,a′)^⊤ θ), where φ(h, a) ∈ R^k are a set of features that depend on the history and action.
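Softmax action selection and its score function ∂ log π_θ(a|h)/∂θ, the per-step term appearing in Eq. (10), can be sketched as follows (feature values and dimensions are illustrative):

```python
import math

def softmax_policy(theta, feats):
    """pi_theta(a|h), where feats[a] is the feature vector phi(h, a) of action a."""
    scores = [sum(t * f for t, f in zip(theta, feats[a])) for a in range(len(feats))]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stabilized softmax
    total = sum(exps)
    return [e / total for e in exps]

def log_policy_grad(theta, feats, a):
    """d log pi_theta(a|h)/d theta = phi(h,a) - sum_b pi_theta(b|h) phi(h,b)."""
    probs = softmax_policy(theta, feats)
    k = len(theta)
    avg = [sum(p * feats[b][j] for b, p in enumerate(probs)) for j in range(k)]
    return [feats[a][j] - avg[j] for j in range(k)]

# Toy usage: 3 actions, 2 features.
theta = [0.5, -0.2]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
probs = softmax_policy(theta, feats)
score = log_policy_grad(theta, feats, 0)
```

A useful sanity check is that the score has zero expectation under the policy, which is exactly why any constant baseline leaves the standard LR estimator unbiased.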

In some RL domains, the reward takes only discrete values. While this case is not specifically covered by the theory in this paper, one may add an arbitrarily small smooth noise to the total reward for our results to hold. Since such a modification has negligible impact on performance, this issue is of little importance in practice. In our experiments the reward was discrete, and we did not observe any problem.

5.1 Experimental Results

We examine Tetris as a test case for our algorithms. Tetris is a popular RL benchmark that has been studied extensively. The main challenge in Tetris is its large state space, which necessitates some form of approximation in the solution technique. Many approaches to learning controllers for Tetris are described in the literature, among them approximate value iteration [Tsitsiklis and Van Roy1996], policy gradients [Kakade2001, Furmston and Barber2012], and modified policy iteration [Gabillon, Ghavamzadeh, and Scherrer2013]. The standard performance measure in Tetris is the expected number of cleared lines in the game. Here, we are interested in a risk-averse performance measure, captured by the CVaR of the total game score. Our goal in this section is to compare the performance of a policy optimized for the CVaR criterion versus a policy obtained using the standard policy gradient method. As we will show, optimizing the CVaR indeed produces a different policy, characterized by risk-averse behavior. We note that at present, the best results in the literature (for the standard performance measure) were obtained using a modified policy iteration approach [Gabillon, Ghavamzadeh, and Scherrer2013], and not using policy gradients. We emphasize that our goal here is not to compete with those results, but rather to illustrate the application of CVaRSGD. We do point out, however, that whether the approach of Gabillon, Ghavamzadeh, and Scherrer (2013) could be extended to handle a CVaR objective is currently not known.

We used the regular Tetris board with the 7 standard shapes (a.k.a. tetrominos). In order to induce risk-sensitive behavior, we modified the reward function of the game as follows. The score for clearing 1, 2, 3, and 4 lines is 1, 4, 8, and 16, respectively. In addition, we limited the maximum number of steps in the game to 1000. These modifications strengthened the difference between the risk-sensitive and nominal policies, as they induce a tradeoff between clearing many 'single' lines with a low profit, or waiting for the more profitable, but less frequent, 'batches'.

We used the softmax policy, with the feature set of Thiery and Scherrer (2009). Starting from a fixed policy parameter θ^0, which was obtained by running several iterations of standard policy gradient (giving both methods a 'warm start'), we ran both CVaRSGD and standard policy gradient for enough iterations such that both algorithms (approximately) converged, keeping α and N fixed throughout. (Standard policy gradient is similar to CVaRSGD with α = 1; however, it is common to subtract a baseline from the reward in order to reduce the variance of the gradient estimate. In our experiments, we used the average return r̄ as the baseline, so the gradient estimate was N^{−1} ∑_{i=1}^N (∂ log f_Y(y_i;θ)/∂θ_j)(r(x_i, y_i) − r̄).)

In Fig. 1A and Fig. 1B we present the average return and CVaR of the return for the policies of both algorithms at each iteration (evaluated by MC on independent trajectories). Observe that for CVaRSGD, the average return has been compromised for a higher CVaR value.

This compromise is further explained in Fig. 1C, where we display the reward distribution of the final policies. It may be observed that the left-tail distribution of the CVaR policy is significantly lower than that of the standard policy. For the risk-sensitive decision maker, such results are very important, especially if the left tail contains catastrophic outcomes, as is common in many real-world domains, such as finance. To better understand the differences between the policies, we compare the final policy parameters in Fig. 1D. The most significant difference is in the parameter that corresponds to the Board Well feature. A well is a succession of unoccupied cells in a column, such that their left and right cells are both occupied. The controller trained by CVaRSGD has a smaller negative weight for this feature, compared to the standard controller, indicating that actions which create deep wells are repressed. Such wells may lead to a high reward when they get filled, but are risky as they heighten the board.

To demonstrate the importance of IS in optimizing the CVaR when α is small, we repeated the experiment with a much smaller value of α, and compared CVaRSGD against its IS version, IS_CVaRSGD, described in the supplementary material. As Fig. 1E shows, IS_CVaRSGD converged significantly faster, improving the convergence rate by more than a factor of 2. The full details are provided in the supplementary material.

6 Conclusion and Future Work

We presented a novel LR-style formula for the gradient of the CVaR performance criterion. Based on this formula, we proposed a sampling-based gradient estimator, and a stochastic gradient descent procedure for CVaR optimization that is guaranteed to converge to a local optimum. To our knowledge, this is the first extension of the LR method to the CVaR performance criterion, and our results extend CVaR optimization to new domains.

We evaluated our approach empirically in an RL domain: learning a risk-sensitive policy for Tetris. To our knowledge, such a domain is beyond the reach of existing CVaR optimization approaches. Moreover, our empirical results show that optimizing the CVaR indeed results in useful risk-sensitive policies, and motivates the use of simulation-based optimization for risk-sensitive decision making.

Acknowledgments

The authors thank Odalric-Ambrym Maillard for many helpful discussions. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Program (FP/2007-2013) / ERC Grant Agreement n. 306638.

References

• [Agarwal and Naik2004] Agarwal, V., and Naik, N. Y. 2004. Risks and portfolio decisions involving hedge funds. Review of Financial Studies 17(1):63–98.
• [Bardou, Frikha, and Pagès2009] Bardou, O.; Frikha, N.; and Pagès, G. 2009. Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods and Applications 15(3):173–210.
• [Bäuerle and Ott2011] Bäuerle, N., and Ott, J. 2011. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research 74(3):361–379.
• [Baxter and Bartlett2001] Baxter, J., and Bartlett, P. L. 2001. Infinite-horizon policy-gradient estimation. JAIR 15:319–350.
• [Bertsekas2012] Bertsekas, D. P. 2012. Dynamic Programming and Optimal Control, Vol II. Athena Scientific, 4th edition.
• [Borkar and Jain2014] Borkar, V., and Jain, R. 2014. Risk-constrained Markov decision processes. IEEE TAC PP(99):1–1.
• [Borkar2001] Borkar, V. S. 2001. A sensitivity formula for risk-sensitive cost and the actor–critic algorithm. Systems & Control Letters 44(5):339–346.
• [Boyan2002] Boyan, J. A. 2002. Technical update: Least-squares temporal difference learning. Machine Learning 49(2):233–246.
• [David1981] David, H. 1981. Order Statistics. A Wiley publication in applied statistics. Wiley.
• [Flanders1973] Flanders, H. 1973. Differentiation under the integral sign. The American Mathematical Monthly 80(6):615–627.
• [Fu2006] Fu, M. C. 2006. Gradient estimation. In Henderson, S. G., and Nelson, B. L., eds., Simulation, volume 13 of Handbooks in Operations Research and Management Science. Elsevier. 575 – 616.
• [Furmston and Barber2012] Furmston, T., and Barber, D. 2012. A unifying perspective of parametric policy search methods for Markov decision processes. In NIPS.
• [Gabillon, Ghavamzadeh, and Scherrer2013] Gabillon, V.; Ghavamzadeh, M.; and Scherrer, B. 2013. Approximate dynamic programming finally performs well in the game of Tetris. In NIPS.
• [Glynn1990] Glynn, P. W. 1990. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM 33(10):75–84.
• [Glynn1996] Glynn, P. W. 1996. Importance sampling for Monte Carlo estimation of quantiles. In Mathematical Methods in Stochastic Simulation and Experimental Design: Proceedings of the 2nd St. Petersburg Workshop on Simulation, 180–185.
• [Hong and Liu2009] Hong, L. J., and Liu, G. 2009. Simulating sensitivities of conditional value at risk. Management Science.
• [Iyengar and Ma2013] Iyengar, G., and Ma, A. 2013. Fast gradient descent method for mean-CVaR optimization. Annals of Operations Research 205(1):203–212.
• [Kushner and Yin2003] Kushner, H., and Yin, G. 2003. Stochastic approximation and recursive algorithms and applications. Springer Verlag.
• [Marbach and Tsitsiklis1998] Marbach, P., and Tsitsiklis, J. N. 1998. Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control 46(2):191–209.
• [Morimura et al.2010] Morimura, T.; Sugiyama, M.; Kashima, H.; Hachiya, H.; and Tanaka, T. 2010. Nonparametric return distribution approximation for reinforcement learning. In ICML, 799–806.
• [Peters and Schaal2008] Peters, J., and Schaal, S. 2008. Reinforcement learning of motor skills with policy gradients. Neural Networks 21(4):682–697.
• [Prashanth and Ghavamzadeh2013] Prashanth, L., and Ghavamzadeh, M. 2013. Actor-critic algorithms for risk-sensitive MDPs. In NIPS.
• [Prashanth2014] Prashanth, L. 2014. Policy gradients for CVaR-constrained MDPs. In International Conference on Algorithmic Learning Theory.
• [Rockafellar and Uryasev2000] Rockafellar, R. T., and Uryasev, S. 2000. Optimization of conditional value-at-risk. Journal of risk 2:21–42.
• [Rubinstein and Kroese2011] Rubinstein, R. Y., and Kroese, D. P. 2011. Simulation and the Monte Carlo method. John Wiley & Sons.
• [Scaillet2004] Scaillet, O. 2004. Nonparametric estimation and sensitivity analysis of expected shortfall. Mathematical Finance.
• [Sutton and Barto1998] Sutton, R. S., and Barto, A. G. 1998. Reinforcement learning: An introduction. Cambridge Univ Press.
• [Sutton et al.2000] Sutton, R. S.; McAllester, D.; Singh, S.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS.
• [Tamar, Di Castro, and Mannor2012] Tamar, A.; Di Castro, D.; and Mannor, S. 2012. Policy gradients with variance related risk criteria. In ICML.
• [Thiery and Scherrer2009] Thiery, C., and Scherrer, B. 2009. Improvements on learning tetris with cross entropy. International Computer Games Association Journal 32.
• [Tsitsiklis and Van Roy1996] Tsitsiklis, J. N., and Van Roy, B. 1996. Feature-based methods for large scale dynamic programming. Machine Learning 22(1-3):59–94.

Appendix A Proof of Proposition 2

Proof.

The main difficulty in extending the proof of Proposition 1 to this case is in applying the Leibniz rule in a multi-dimensional setting. Such an extension is given by [Flanders1973], which we now state.

We are given an $n$-dimensional $\theta$-dependent chain (field of integration) $D_\theta$ in $\mathbb{R}^n$. We also have an exterior differential form whose coefficients are $\theta$-dependent:

$$\omega = f(x;\theta)\, dx^1 \wedge \cdots \wedge dx^n.$$

The general Leibniz rule (the formula in [Flanders1973] covers the more general case where $D_\theta$ is not necessarily $n$-dimensional; it then includes an additional term $\int_{D_\theta} v \,\lrcorner\, d\omega$, where $d$ is the exterior derivative, which vanishes in our case since $\omega$ is a top-degree form) is given by

$$\frac{\partial}{\partial\theta}\int_{D_\theta}\omega \;=\; \int_{\partial D_\theta} v \,\lrcorner\, \omega \;+\; \int_{D_\theta}\frac{\partial\omega}{\partial\theta}, \qquad (12)$$

where $v$ denotes the vector field of velocities of $D_\theta$, and $v \,\lrcorner\, \omega$ denotes the interior product between $v$ and $\omega$ (see [Flanders1973] for more details).
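To see how (12) generalizes the familiar one-dimensional Leibniz rule, consider the case $n=1$ with $D_\theta = [a(\theta), b(\theta)]$. The boundary consists of the two endpoints, moving with velocities $a'(\theta)$ and $b'(\theta)$, and (12) reduces to the classical differentiation-under-the-integral-sign formula:

```latex
\frac{\partial}{\partial\theta}\int_{a(\theta)}^{b(\theta)} f(x;\theta)\,dx
  \;=\; f\bigl(b(\theta);\theta\bigr)\,b'(\theta)
  \;-\; f\bigl(a(\theta);\theta\bigr)\,a'(\theta)
  \;+\; \int_{a(\theta)}^{b(\theta)} \frac{\partial f(x;\theta)}{\partial\theta}\,dx.
```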

We now write the CVaR explicitly as

$$\Phi_\alpha(R;\theta) = \frac{1}{\alpha}\sum_{y\in Y} f_Y(y;\theta)\int_{x\in D_{y;\theta}} f_{X|Y}(x|y;\theta)\, r(x,y)\, dx = \frac{1}{\alpha}\sum_{y\in Y} f_Y(y;\theta)\sum_{i=1}^{L_{y;\theta}}\int_{x\in D^i_{y;\theta}} f_{X|Y}(x|y;\theta)\, r(x,y)\, dx,$$

therefore

$$\frac{\partial}{\partial\theta_j}\Phi_\alpha(R;\theta) = \frac{1}{\alpha}\sum_{y\in Y}\frac{\partial f_Y(y;\theta)}{\partial\theta_j}\sum_{i=1}^{L_{y;\theta}}\int_{x\in D^i_{y;\theta}} f_{X|Y}(x|y;\theta)\, r(x,y)\, dx + \frac{1}{\alpha}\sum_{y\in Y} f_Y(y;\theta)\sum_{i=1}^{L_{y;\theta}}\frac{\partial}{\partial\theta_j}\int_{x\in D^i_{y;\theta}} f_{X|Y}(x|y;\theta)\, r(x,y)\, dx. \qquad (13)$$

We now treat each term $\frac{\partial}{\partial\theta_j}\int_{x\in D^i_{y;\theta}} f_{X|Y}(x|y;\theta)\, r(x,y)\, dx$ in the last sum separately. Let $X$ denote the set over which $x$ is defined. Obviously, $D^i_{y;\theta} \subseteq X$.

We now make an important observation. By definition of the level-set $D^i_{y;\theta}$, and since it is closed by Assumption 5, for every $x \in \partial D^i_{y;\theta}$ we have that either

$$\text{(a)}\quad r(x,y) = \nu_\alpha(R;\theta), \qquad (14)$$

or

$$\text{(b)}\quad x\in\partial X, \ \text{ and } \ r(x,y) < \nu_\alpha(R;\theta). \qquad (15)$$

We accordingly write $\partial D^i_{y;\theta} = \partial D^{i,a}_{y;\theta} \cup \partial D^{i,b}_{y;\theta}$, where the two last terms correspond to the two possibilities in (14) and (15), respectively.

We now claim that for the boundary term $\partial D^{i,b}_{y;\theta}$, we have

$$\int_{\partial D^{i,b}_{y;\theta}} v \,\lrcorner\, \omega = 0. \qquad (16)$$

To see this, first note that by definition of $X$, the boundary $\partial X$ is smooth and has a unique normal vector at each point, except for a set of measure zero (the corners of $X$). Let $\partial\tilde{D}^{i,b}_{y;\theta}$ denote the set of all points in $\partial D^{i,b}_{y;\theta}$ for which a unique normal vector exists. For each $x \in \partial\tilde{D}^{i,b}_{y;\theta}$ we let $v_\perp$ and $v_\parallel$ denote the normal and tangent (with respect to $\partial X$) elements of the velocity $v$ at $x$, respectively. Thus,

$$v = v_\perp + v_\parallel.$$

For some $x \in \partial\tilde{D}^{i,b}_{y;\theta}$, let $B_\delta(\theta)$ denote the set $\{\theta' : \|\theta'-\theta\| \le \delta\}$. From Assumption 4 we have that $\partial\nu_\alpha(R;\theta)/\partial\theta$ is bounded, therefore there exists $\delta > 0$ such that for all $\theta'$ that satisfy $\theta' \in B_\delta(\theta)$ we have $r(x,y) < \nu_\alpha(R;\theta')$, and therefore $x \in \partial D^{i,b}_{y;\theta'}$. Since this holds for every $x \in \partial\tilde{D}^{i,b}_{y;\theta}$, we conclude that a small change in $\theta$ does not change $\partial\tilde{D}^{i,b}_{y;\theta}$, and therefore we have

$$v_\perp = 0, \qquad \forall x \in \partial\tilde{D}^{i,b}_{y;\theta}.$$

Furthermore, by definition of the interior product we have

$$v_\parallel \,\lrcorner\, \omega = 0$$

on $\partial\tilde{D}^{i,b}_{y;\theta}$, since $\omega$ is a top-degree form and $v_\parallel$ is tangent to the boundary.

Therefore we have

$$\int_{\partial D^{i,b}_{y;\theta}} v \,\lrcorner\, \omega = \int_{\partial\tilde{D}^{i,b}_{y;\theta}} v \,\lrcorner\, \omega = \int_{\partial\tilde{D}^{i,b}_{y;\theta}} v_\parallel \,\lrcorner\, \omega = 0,$$

and the claim follows.

Now, let $\omega_y \doteq f_{X|Y}(x|y;\theta)\, r(x,y)\, dx^1 \wedge \cdots \wedge dx^n$. Using (12), we have

$$\frac{\partial}{\partial\theta_j}\int_{x\in D^i_{y;\theta}} \omega_y = \int_{\partial D^i_{y;\theta}} v \,\lrcorner\, \omega_y + \int_{D^i_{y;\theta}} \frac{\partial\omega_y}{\partial\theta_j} = \int_{\partial D^{i,a}_{y;\theta}} v \,\lrcorner\, \omega_y + \int_{D^i_{y;\theta}} \frac{\partial\omega_y}{\partial\theta_j}, \qquad (17)$$

where the last equality follows from (16) and the definition of $\partial D^{i,a}_{y;\theta}$.

Let $\tilde{\omega}_y \doteq f_{X|Y}(x|y;\theta)\, dx^1 \wedge \cdots \wedge dx^n$. By the definition of $\nu_\alpha(R;\theta)$ we have that for all $\theta$

$$\alpha = \sum_{y\in Y} f_Y(y;\theta)\int_{D_{y;\theta}} \tilde{\omega}_y,$$

therefore, by taking a derivative, and using (16) we have

$$0 = \frac{\partial}{\partial\theta_j}\left(\sum_{y\in Y} f_Y(y;\theta)\int_{D_{y;\theta}} \tilde{\omega}_y\right) = \sum_{y\in Y}\frac{\partial f_Y(y;\theta)}{\partial\theta_j}\int_{D_{y;\theta}} \tilde{\omega}_y + \sum_{y\in Y} f_Y(y;\theta)\sum_{i=1}^{L_{y;\theta}}\left(\int_{\partial D^{i,a}_{y;\theta}} v \,\lrcorner\, \tilde{\omega}_y + \int_{D^i_{y;\theta}} \frac{\partial\tilde{\omega}_y}{\partial\theta_j}\right). \qquad (18)$$

From (14), and linearity of the interior product we have

$$\int_{\partial D^{i,a}_{y;\theta}} v \,\lrcorner\, \omega_y = \nu_\alpha(R;\theta)\int_{\partial D^{i,a}_{y;\theta}} v \,\lrcorner\, \tilde{\omega}_y,$$

therefore, plugging in (18) we have

$$\sum_{y\in Y} f_Y(y;\theta)\sum_{i=1}^{L_{y;\theta}}\int_{\partial D^{i,a}_{y;\theta}} v \,\lrcorner\, \omega_y = -\nu_\alpha(R;\theta)\sum_{y\in Y} f_Y(y;\theta)\sum_{i=1}^{L_{y;\theta}}\int_{D^i_{y;\theta}} \frac{\partial\tilde{\omega}_y}{\partial\theta_j} - \nu_\alpha(R;\theta)\sum_{y\in Y}\frac{\partial f_Y(y;\theta)}{\partial\theta_j}\int_{D_{y;\theta}} \tilde{\omega}_y. \qquad (19)$$

Now, note that from (13) and (17) we have

$$\frac{\partial}{\partial\theta_j}\Phi_\alpha(R;\theta) = \frac{1}{\alpha}\sum_{y\in Y}\frac{\partial f_Y(y;\theta)}{\partial\theta_j}\sum_{i=1}^{L_{y;\theta}}\int_{x\in D^i_{y;\theta}} \omega_y + \frac{1}{\alpha}\sum_{y\in Y} f_Y(y;\theta)\sum_{i=1}^{L_{y;\theta}}\int_{D^i_{y;\theta}} \frac{\partial\omega_y}{\partial\theta_j} + \frac{1}{\alpha}\sum_{y\in Y} f_Y(y;\theta)\sum_{i=1}^{L_{y;\theta}}\int_{\partial D^{i,a}_{y;\theta}} v \,\lrcorner\, \omega_y,$$

and by plugging in (19), and noting that $\sum_{i=1}^{L_{y;\theta}}\int_{D^i_{y;\theta}} = \int_{D_{y;\theta}}$, we obtain

$$\frac{\partial}{\partial\theta_j}\Phi_\alpha(R;\theta) = \frac{1}{\alpha}\sum_{y\in Y}\frac{\partial f_Y(y;\theta)}{\partial\theta_j}\int_{D_{y;\theta}}\bigl(r(x,y)-\nu_\alpha(R;\theta)\bigr)\, f_{X|Y}(x|y;\theta)\, dx + \frac{1}{\alpha}\sum_{y\in Y} f_Y(y;\theta)\int_{D_{y;\theta}}\bigl(r(x,y)-\nu_\alpha(R;\theta)\bigr)\, \frac{\partial f_{X|Y}(x|y;\theta)}{\partial\theta_j}\, dx.$$

Finally, using the standard likelihood-ratio trick – multiplying and dividing by $f_Y(y;\theta)$ inside the first sum, and multiplying and dividing by $f_{X|Y}(x|y;\theta)$ inside the second integral – we obtain the required expectation. ∎
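The gradient formula just derived is what the sampling-based estimator approximates: draw $N$ i.i.d. samples, replace $\nu_\alpha$ by the empirical $\alpha$-quantile, and average the score function times the excess reward over the $\alpha$-tail. As an illustrative numerical sketch (the Gaussian toy problem and all names below are ours, not from the paper): for $X \sim \mathcal{N}(\theta, 1)$ with reward $r(x) = x$, the CVaR over the lower $\alpha$-tail is $\Phi_\alpha(R;\theta) = \theta - \varphi(z_\alpha)/\alpha$, so the true gradient with respect to $\theta$ equals $1$ for any $\alpha$:

```python
import numpy as np

# Likelihood-ratio estimator of the CVaR gradient:
#   grad ~= (1/(alpha*N)) * sum_i  d/dtheta log f(x_i; theta)
#                                  * (r(x_i) - v_tilde) * 1{r(x_i) <= v_tilde},
# where v_tilde is the empirical alpha-quantile (VaR) of the sampled rewards.

def cvar_grad_estimate(theta, alpha, n_samples, rng):
    x = rng.normal(theta, 1.0, n_samples)   # samples from f(x; theta) = N(theta, 1)
    r = x                                   # toy reward r(x) = x
    v_tilde = np.quantile(r, alpha)         # empirical VaR (alpha-quantile)
    score = x - theta                       # d/dtheta log N(x; theta, 1)
    tail = r <= v_tilde                     # indicator of the alpha-tail
    return np.mean(score * (r - v_tilde) * tail) / alpha

rng = np.random.default_rng(0)
grad = cvar_grad_estimate(theta=0.0, alpha=0.05, n_samples=200_000, rng=rng)
print(grad)  # close to the analytical gradient, which equals 1 here
```

Because shifting $\theta$ shifts the entire tail of the Gaussian, the estimate should concentrate around $1$; the bias introduced by the empirical quantile is the subject of Theorem 3 below.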

Appendix B Proof of Theorem 3

Proof.

Let $\nu \doteq \nu_\alpha(R;\theta)$, and let $\tilde{v}$ denote its empirical estimate from the $N$ samples. To simplify notation, we also introduce the functions $h_1(x,y) \doteq \frac{\partial \log f_{X,Y}(x,y;\theta)}{\partial\theta_j}\, r(x,y)$, and $h_2(x,y) \doteq \frac{\partial \log f_{X,Y}(x,y;\theta)}{\partial\theta_j}$. Thus we have

$$\Delta_{j;N} = \frac{1}{\alpha N}\sum_{i=1}^{N}\bigl(h_1(x_i,y_i) - h_2(x_i,y_i)\,\tilde{v}\bigr)\mathbf{1}_{r(x_i,y_i)\le\tilde{v}} = \frac{1}{\alpha N}\sum_{i=1}^{N}\bigl(h_1(x_i,y_i) - h_2(x_i,y_i)\,\nu\bigr)\mathbf{1}_{r(x_i,y_i)\le\nu} + \frac{1}{\alpha N}\sum_{i=1}^{N}\Bigl[\bigl(h_1(x_i,y_i) - h_2(x_i,y_i)\,\tilde{v}\bigr)\mathbf{1}_{r(x_i,y_i)\le\tilde{v}} - \bigl(h_1(x_i,y_i) - h_2(x_i,y_i)\,\nu\bigr)\mathbf{1}_{r(x_i,y_i)\le\nu}\Bigr]$$