Policy Evaluation Networks

by Jean Harb et al.

Many reinforcement learning algorithms use value functions to guide the search for better policies. These methods estimate the value of a single policy while generalizing across many states. The core idea of this paper is to flip this convention and estimate the value of many policies, for a single set of states. This approach opens up the possibility of performing direct gradient ascent in policy space without seeing any new data. The main challenge for this approach is finding a way to represent complex policies that facilitates learning and generalization. To address this problem, we introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding. Our empirical results demonstrate that combining these three elements (learned Policy Evaluation Network, policy fingerprints, gradient ascent) can produce policies that outperform those that generated the training data, in a zero-shot manner.





1 Introduction

Value functions are core quantities estimated by most reinforcement learning (RL) algorithms, and used to inform the search for good policies. Usually, value functions receive as input a description of the state (or state-action pair) and estimate the expected return for some distribution of inputs, conditioned on some behavior, or policy. When the policy changes, value estimates have to keep up.

One way to view this process is that value functions are trained using large amounts of transitions, coming from the set of policies that have been used by the agent in the past, without seeing from which policy each transition came. As the policy changes, the value function estimate is still influenced by previously seen policies, possibly in an unpredictable way, because the policy is not typically represented as an input. Our goal is to explore the idea of pushing the agent to generalize its value representation among different policies, by providing a policy description as input. We hypothesize that an agent trained in this way could predict the value of a new, unseen policy with sufficient accuracy to guide further search through policy space. In particular, if we were to train a network to estimate the value of a policy from an initial start state distribution, we would have access to the gradient of the expected return with respect to the policy’s parameters and could use it to improve the policy. The value function would not need to take states as input at all, relying instead on an encoding of the policy itself in order to predict the expected return of the entire episode, from the starting state distribution.

Our first contribution is to introduce the Policy Evaluation Network (PVN), a network that learns to predict the expected return of a policy in a fully differentiable way. Using a dataset of policy networks along with their returns, the PVN is trained with supervised learning.

However, it is not trivial to embed a policy in a way that allows the embedding to be sufficiently informative for the value function, yet not too large. For example, the naive approach of flattening a policy network into a large vector loses information on the dependencies between layers, while the vector can still become intractably large. The second major contribution of this paper is a new method to fingerprint a policy network in order to create an embedding that is sufficiently small, yet retains information about the network structure.

Finally, we introduce a novel policy gradient algorithm, which performs gradient ascent through a learned PVN in order to obtain, in a zero-shot manner, policies that are superior to any of those evaluated during training.

We present small scale experiments to illustrate the behavior of our approach, then present experiments that show how fingerprinting allows us to evaluate not only linear policy networks, but also multi-layered ones.

2 Background

In RL, an agent’s goal is to learn how to maximize its return by interacting with an environment, which is usually modelled as a Markov Decision Process (MDP) $\langle \mathcal{S}, \mathcal{A}, P, \gamma, r \rangle$, where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, $P(s' \mid s, a)$ represents the environment dynamics, $\gamma \in [0, 1)$ is a discount factor and $r(s, a)$ is the reward function. A randomized policy $\pi(a \mid s)$ is a distribution over actions, conditioned on states.

In this paper, we consider the problem of finding the parameters $\theta$ of a stochastic policy $\pi_\theta$ which maximize the expected discounted return $J(\theta) = \mathbb{E}_{s_0 \sim \rho_0}[V^{\pi_\theta}(s_0)]$ from a distribution $\rho_0$ over initial states. The policy gradient theorem (Sutton et al., 2000) shows that the gradient of this objective is given by:

$$\nabla_\theta J(\theta) = \sum_s d^{\pi_\theta}(s) \sum_a \nabla_\theta \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a), \quad (1)$$

where $Q^{\pi_\theta}$ is the action-value function corresponding to policy $\pi_\theta$ and $d^{\pi_\theta}$ is a discounted weighting of states:

$$d^{\pi_\theta}(s) = \sum_{t=0}^{\infty} \gamma^t \Pr(s_0 \to s, t, \pi_\theta),$$

where $\Pr(s_0 \to s, t, \pi_\theta)$ is the probability of reaching $s$ from $s_0$ at time step $t$, given the environment dynamics and the fact that actions are chosen from $\pi_\theta$.

The gradient (1) is typically estimated with samples taken under the undiscounted distribution induced by $\pi_\theta$ in the MDP (Thomas, 2014). In actor-critic architectures (Sutton, 1984), a learned estimate of the action-value function is usually maintained in a two-timescale manner (Konda & Tsitsiklis, 2000), i.e., by allowing the iterates of the action-value estimate to converge faster than the policy parameters. This requirement is crucial for the stability of such methods in practice (Fujimoto et al., 2018).

In this paper, we propose a new gradient-based optimization method which does not rely on maintaining a value function estimate in this two-timescale fashion. Our key observation is that rather than using a stochastic gradient of the performance measure, we can learn an estimate of $J(\theta)$ directly (as a neural network) and compute a deterministic gradient from it using any automatic differentiation package. We will now introduce the core idea of this approach.

3 Policy Evaluation Networks

By definition, a value function represents the expected return associated with a given policy. So, one could expect, as in regression methods, that training could be done on some policies and generalize to others. However, value function approximation methods are not typically designed with the goal of leveraging any information about the policy itself in order to generalize to new policies. Conceptually, though, there exists a function capable of computing the expected return for any policy in a zero-shot fashion: the performance measure itself. In vector notation, we can write the performance measure explicitly as:

$$J(\theta) = \rho_0^\top (I - \gamma P_{\pi_\theta})^{-1} r_{\pi_\theta}, \quad (2)$$

where $P_{\pi_\theta}$ and $r_{\pi_\theta}$ are the transition matrix and reward model induced by $\pi_\theta$. Hence, there is a function of $\pi_\theta$ that can compute the value of $\pi_\theta$.

Policy Evaluation Networks aim to approximate this function $J$, so that the gradient of a parameterized policy can be obtained instantaneously. Furthermore, we show that this can be achieved in a completely model-free fashion, without having to estimate $P_{\pi_\theta}$ and $r_{\pi_\theta}$, or to form any of the matrices in (2).
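As a concrete illustration, the closed-form performance measure can be computed directly for a small MDP. The dynamics below are hypothetical numbers chosen for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative numbers, not from the paper).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[s, a, s']
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])                 # r[s, a]
gamma = 0.9
rho0 = np.array([0.5, 0.5])                # initial state distribution

def performance(pi):
    """Exact J(pi) = rho0^T (I - gamma * P_pi)^{-1} r_pi, for pi[s, a]."""
    P_pi = np.einsum('sa,sap->sp', pi, P)  # state-to-state transition matrix under pi
    r_pi = np.einsum('sa,sa->s', pi, r)    # expected one-step reward under pi
    v = np.linalg.solve(np.eye(len(rho0)) - gamma * P_pi, r_pi)
    return float(rho0 @ v)
```

Note that `performance` is a smooth function of the policy probabilities; this is exactly the function a PVN approximates, but without access to `P` and `r`.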

Distributional Predictions. While approximating could be directly cast as a regression problem, we view it instead as a classification problem: that of predicting the bucket index corresponding to the estimated return of a given input policy. This strategy has proven to be effective in the training of deep neural networks, both in the supervised learning regime (van den Oord et al., 2016) and in reinforcement learning (Bellemare et al., 2017a). Predicting buckets instead of exact real values provides a regularization effect, similar to the idea of learning with auxiliary tasks (Caruana, 1997; Sutton et al., 2011; Jaderberg et al., 2017c; Lyle et al., 2019).

Inspired by Bellemare et al. (2017b), we discretize the set of sampled returns into buckets of the same size. The PVN then outputs a probability distribution over the set of buckets, and the loss we use is the KL-divergence between the predicted and target distributions. In practice, the target is determined by rolling out multiple episodes per policy and discretizing the resulting returns.
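A minimal sketch of how such a target distribution might be built from sampled returns (the function name and binning details are our own illustration, not from the paper):

```python
import numpy as np

def return_histogram(returns, n_bins, lo, hi):
    """Discretize sampled returns into an empirical pmf over equal-width bins."""
    edges = np.linspace(lo, hi, n_bins + 1)
    # np.digitize returns 1-based bin indices; shift and clip into [0, n_bins - 1]
    idx = np.clip(np.digitize(returns, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    return counts / counts.sum()
```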

Maximizing Undiscounted Returns. RL algorithms have traditionally been viewed as optimizing for the discounted return, with some given discount factor $\gamma$. However, the practice of policy gradient methods has evolved towards the use of discounting as a knob to control the bias-variance tradeoff (Xi-Ren Cao & Yat-Wah Wan, 1998; Baxter & Bartlett, 2001). Controlling the variance of the gradient estimator is of paramount importance, because it can grow exponentially in the horizon (Glynn & Olvera-Cravioto, 2019). Hence, rather than seeing the discount factor as being part of the task specification, practitioners tend to view it as a hyperparameter to be optimized (Schulman et al., 2016).

We are ultimately interested in the undiscounted performance of a policy, as in Thomas (2014). Because Policy Evaluation Networks provide a deterministic gradient of an approximate performance measure, they sidestep the need to pick a discount factor and optimize a proxy objective. Our experiments show that our approach is capable of optimizing for the undiscounted performance directly, and outperforms state-of-the-art discounted methods.

Learning Objective for the PVN. When the PVN (denoted $q_w$) provides a categorical output over return bins, we need to specify how a scalar expected return estimate is obtained from this output. A straightforward approach consists in using the midpoint of each interval as a representative of the return, which we then weight by the predicted probability $q_{w,i}$ for this interval:

$$\hat{J}_w(\theta) = \sum_{i=1}^{k} q_{w,i}(\theta)\left(G_{\min} + \left(i - \tfrac{1}{2}\right)\delta\right), \qquad \delta = \frac{G_{\max} - G_{\min}}{k},$$

where $k$ is the number of bins, $\delta$ is the width of each bin, and $G_{\min}$ and $G_{\max}$ are the minimum and maximum returns observed in the dataset. However, rather than minimizing the L2 distance between $\hat{J}_w(\theta)$ and samples of the return, we use a classification loss: the KL-divergence between the output of the PVN and an empirical probability mass function over the discretized returns. We then view a PVN as a function of the form $q_w(\theta)$, where $w$ are the parameters of the network itself and $\theta$ are the policy network parameters fed as input. We then use stochastic gradient descent to minimize the expected KL loss:

$$\min_w \; \mathbb{E}_\theta\!\left[D_{\mathrm{KL}}\!\left(\hat{p}(\theta) \,\middle\|\, q_w(\theta)\right)\right],$$

where the expectation is taken under a given distribution over policy network weights (random in our experiments) and $\hat{p}(\theta)$ is obtained from a histogram of the returns induced by the policy $\pi_\theta$.
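The scalar readout and the classification loss can be sketched as follows (a minimal numpy version; the smoothing constant is our own illustrative choice):

```python
import numpy as np

def expected_return(probs, lo, hi):
    """Scalar readout: weight each bin's midpoint by its predicted probability."""
    p = np.asarray(probs, dtype=float)
    width = (hi - lo) / len(p)
    midpoints = lo + width * (np.arange(len(p)) + 0.5)
    return float(p @ midpoints)

def kl_loss(target, predicted, eps=1e-12):
    """KL(target || predicted) between the empirical and predicted bin pmfs."""
    t = np.asarray(target, dtype=float) + eps   # eps avoids log(0)
    q = np.asarray(predicted, dtype=float) + eps
    return float(np.sum(t * np.log(t / q)))
```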

4 Network Fingerprinting

While the performance measure is by definition a function of the policy network parameters, we need to figure out how to provide the policy as an input to a PVN, in a way which does not require too much space and also leads to good generalization among policies. If we try to naively flatten the policy into a vector input, we run into several issues. First, the number of weights in the policy network can be very large, which requires a weight matrix in the input layer of the PVN of at least the same size. Second, the dependencies between layers contain valuable information, which is lost by a direct concatenation of the parameters.

Figure 1: Diagram of the complete Policy Evaluation Network setup, including Network Fingerprinting (in gray). The blue color of the probing states and PVN indicates that they can be seen as one set of weights, trained in unison.

To address these issues, we propose network fingerprinting: a methodology which allows us to learn a neural network embedding in a fully differentiable way, independently of the number of parameters in the policy network. The intuitive idea is to characterize the response of a policy network by feeding it a set of $n$ learned probing states $\tilde{S} \in \mathbb{R}^{n \times d}$ as input, where $d$ is the dimensionality of a state vector. $\tilde{S}$ is a synthetic input to the policy network whose sole purpose is to elicit a representative output. This output can be a set of distributions over discrete actions or continuous action vectors, which we then concatenate and use as input to the PVN. In discrete action spaces with $m$ actions, we can view a PVN as a function receiving an $nm$-dimensional vector of probabilities over actions and returning a distribution over categories corresponding to the discretized return. The output of the PVN is then computed through the composition $q_w(\pi_\theta(\tilde{S}))$.

Motivation for Network Fingerprinting. To see why this might be a viable method, we can show an equivalence between using $n$ probing states and having $n$ hidden units in the first layer of a PVN which evaluates linear policies. Consider a linear, deterministic policy in a 1D action space, defined by $\pi_\theta(s) = \theta^\top s$. Probing this policy in $n$ states produces the fingerprint vector $\tilde{S}\theta$. On the other hand, feeding the full policy weights $\theta$ as input to a PVN with $n$ hidden units in the first layer with weight matrix $W$ will produce the hidden activations $W\theta$ at that layer. Clearly then, there is a choice of probing states and network initialisation that produces exactly equivalent results.
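A minimal sketch of the fingerprinting mechanism, assuming a small two-layer policy network over 2 discrete actions and randomly initialized probing states (all sizes are illustrative; in the paper the probing states are trained jointly with the PVN):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def policy(states, W1, W2):
    """A small MLP policy: maps states to distributions over 2 discrete actions."""
    return softmax(np.tanh(states @ W1) @ W2)

def fingerprint(probing_states, W1, W2):
    """Concatenate the policy's outputs on the probing states into one vector."""
    return policy(probing_states, W1, W2).ravel()

state_dim, hidden, n_probes = 4, 8, 20
W1 = rng.normal(size=(state_dim, hidden)) * 0.5
W2 = rng.normal(size=(hidden, 2)) * 0.5
probes = rng.normal(size=(n_probes, state_dim))  # trained jointly with the PVN in the paper

fp = fingerprint(probes, W1, W2)  # 2 * n_probes = 40-dimensional policy fingerprint
```

The fingerprint's size depends only on the number of probing states and actions, not on the number of policy parameters, which is what makes the mechanism scalable.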

Choosing Probing States. The choice of probing states is important. One possible approach is to randomly generate states. Since we know the dimensionality of inputs to the policy, one could simply choose a sampling distribution and create fake states made of noise. Alternatively, we could also sample states from the environment, because we are interested in learning to evaluate the performance of the policy on states that we actually observe.

Because the entire PVN architecture is differentiable, we can also use backpropagation to learn the probing states, as we can see in Fig. 1. These states can be viewed as weights of the PVN, helping extract information from the policy to improve prediction accuracy. In this paper, we adopt a hybrid approach. First, we initialize the probing states by sampling random noise. Then, we refine those probing states throughout learning, jointly with the PVN weights. When using the classification loss, our minimization problem becomes:

$$\min_{w, \tilde{S}} \; \mathbb{E}_\theta\!\left[D_{\mathrm{KL}}\!\left(\hat{p}(\theta) \,\middle\|\, q_w(\pi_\theta(\tilde{S}))\right)\right],$$

which we optimize by stochastic gradient descent.

5 Policy Improvement By Gradient Ascent

Using a trained PVN, it is possible to do gradient ascent in the space of parameterized policies without having to interact with the environment. Because the PVN is an approximation to the real performance measure in functional form, we can directly apply automatic differentiation to obtain an exact deterministic gradient. For example, when using network fingerprinting inside a PVN parameterized by $w$, our gradient ascent procedure computes the iterates:

$$\theta_{k+1} = \theta_k + \alpha_k \nabla_\theta \hat{J}_w(\pi_{\theta_k}(\tilde{S})),$$

where $\alpha_k$ is the learning rate at step $k$ and $\tilde{S}$ are the learned probing states.
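The mechanics of this ascent loop can be sketched with a toy differentiable surrogate standing in for the trained PVN (the quadratic surrogate and its optimum are hypothetical; with a real PVN the gradient would come from the autodiff package rather than a hand-derived formula):

```python
import numpy as np

theta_star = np.array([1.0, -2.0, 0.5])  # hypothetical maximizer of the surrogate

def surrogate_grad(theta):
    # Gradient of f(theta) = -||theta - theta_star||^2, differentiated by hand;
    # this plays the role of the PVN's gradient with respect to policy parameters.
    return -2.0 * (theta - theta_star)

theta = np.zeros(3)      # starting policy parameters
alpha = 0.1              # constant learning rate for simplicity
for k in range(200):
    theta = theta + alpha * surrogate_grad(theta)  # theta_{k+1} = theta_k + alpha * grad
```

No environment interaction happens inside the loop; every step is computed from the learned surrogate alone.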

A benefit of not having to interact with the environment to get fresh gradient estimates for every new policy is that we can do gradient ascent in parallel via our learned PVN. This feature is particularly useful to escape local maxima, as many concurrent solutions can be maintained simultaneously, as in (Jaderberg et al., 2017b). Statistical active learning techniques (Cohn et al., 1995) could make this process more efficient, but would require PVNs with epistemic uncertainty.

Because the gradient ascent procedure may lead us to regions of the policy space where fewer samples have been obtained to train the PVN, being able to maintain multiple candidate solutions in parallel is particularly useful. To further avoid falling too quickly into the out-of-distribution regime, we can also limit the number of gradient steps and periodically verify that the performance is still increasing.

6 Experiments

In order to test our proposed method, we first create a set of policies by choosing a neural network architecture and initializing it a number of times. Second, we obtain the expected returns of these policies by averaging returns from a number of Monte-Carlo rollouts. This results in a dataset, where the policies are the inputs and the expected returns are the targets. Finally, we create a PVN and regress on the dataset.
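The dataset-creation pipeline above can be sketched end-to-end on a toy MDP (the dynamics and sample sizes here are illustrative stand-ins, not the environments used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in dynamics for a tiny 2-state, 2-action MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[s, a, s']
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])                 # r[s, a]

def rollout(pi, horizon=20):
    """One Monte-Carlo rollout of policy pi[s, a]; returns the undiscounted return."""
    s, ret = 0, 0.0
    for _ in range(horizon):
        a = rng.choice(2, p=pi[s])
        ret += r[s, a]
        s = rng.choice(2, p=P[s, a])
    return ret

# Build the dataset: policies are the inputs, Monte-Carlo return estimates the targets.
policies, targets = [], []
for _ in range(50):
    pi = rng.dirichlet(np.ones(2), size=2)                  # a random stochastic policy
    policies.append(pi.ravel())
    targets.append(np.mean([rollout(pi) for _ in range(30)]))  # or a histogram of returns
```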

We ran experiments on a variety of environments of different complexities. We start with a simple 2-state MDP with known dynamics, which allows us to visualize how our algorithm works. We then move on to function approximation on the Cart Pole environment, where we analyze the effects of using Network Fingerprinting on single and multi-layered networks. Finally, we move to the Swimmer environment to show our algorithm’s performance on a continuous control task.

  Initialize PVN parameters w and probing states S̃; choose learning rate α
  for step t = 1 to T do
     Sample a training batch of policies θ with empirical return histograms p̂(θ)
     Update parameters w ← w − α ∇_w D_KL(p̂(θ) ‖ q_w(π_θ(S̃)))
     Update probing states S̃ ← S̃ − α ∇_S̃ D_KL(p̂(θ) ‖ q_w(π_θ(S̃)))
  end for
Algorithm 1 Train PVN with fingerprinting
  for ascent policies i = 1 to N do
     Initialize policy parameters θ using Glorot Initialization
     for ascent steps k = 1 to K do
        θ ← θ + α_k ∇_θ Ĵ_w(π_θ(S̃))
        Sample a Monte-Carlo return using policy π_θ (to monitor progress)
     end for
  end for
Algorithm 2 Gradient Ascent Through a Trained PVN

6.1 Polytope

(a) This polytope represents the space of policies (in green) in value space. The corners (in orange) are deterministic policies.
(b) Sampled training set (in blue) and test set (in red) policies in the value polytope.
(c) Training curves on training and test sets.
Figure 2: Visualization of a value polytope and a sampled dataset of policies. Training curves show a PVN can learn to generalize and predict the points in the test set.
(a) Exact gradient field of the value in policy space.
(b) Approximated gradient field of the value in policy space, calculated from a trained PVN.
(c) Comparison of gradient ascent through the exact and approximated value functions.
Figure 3: Comparison of gradient fields of the exact and approximated value functions. The two axes in Figures 2(a) and 2(b) are the policy spaces in each of the two states, and the arrows represent the gradient and . The blue and red dots are steps of the gradient ascent process, mapped onto the polytope in Figure 2(c). Both ascents were run for 100 steps.

For the first set of experiments, we built a 2-state, 2-action MDP (details described in Appendix), for which we know the transition matrix and reward function. Having this information allows us to calculate exact expected returns and policy gradients for any policy. Moreover, because the MDP is so small, we can visualize the value polytope (Dadashi et al., 2019), as seen in Figure 1(a). A value polytope maps the space of policies to value space, meaning that any one point in the polytope is a policy and its coordinates are its values in states 1 and 2. Note that the corners of the polytope represent deterministic policies. In this environment, a policy can be represented as a simple 2-dimensional vector $(\pi(a_1 \mid s_1), \pi(a_1 \mid s_2))$, from which we can infer the remaining action probabilities.

To run our experiments, we first obtain a random set of 40 policies by sampling from a uniform distribution over the policy space. Then we evaluate them, using eq. (2), to get their values, and split the data into a training and test set. The two sets of policies can be seen in Figure 1(b). Finally, we train a 2-layered neural network on this regression task, where the network takes policy vectors as input and outputs their expected return.

Policy Search. Once we have a learned PVN, we can perform gradient ascent through the network to search for the optimal policy. First, we can compare the exact policy gradients with those calculated from the learned PVN. In Figures 2(a) and 2(b), we show the discretized gradient fields of the true policy gradients and the learned ones. As we can see, the gradient fields are quite similar, with only a few differences around the edges of the policy space.

Finally, to perform gradient ascent, we start with an arbitrary policy, calculate its gradient through the learned PVN, take a small step in that direction, and repeat the process with the new policy. In Figures 2(a) and 2(b), we compare performing gradient ascent with the exact policy gradient and with our approximated one. The dots represent the paths taken by the gradient ascent process; while the path given by the learned function is noisier, both still converge to the optimal solution. Figure 2(c) compares the two paths in polytope space, to give a different perspective. Both ascents performed a series of 100 policy gradient steps.

These results indicate that PVNs can both approximate policy values and be used to calculate policy gradients in a practical manner.

(a) Linear Policy
(b) MLP Policy
Figure 4: Plots showing histograms of training policies’ expected returns and the performance of gradient ascent through a learned PVN. The effects of Network Fingerprinting are drastic when using MLP policies.

6.2 Cart Pole

In the next set of experiments, we move to the Cart Pole environment with policy functions that map states to action distributions. The environment was run using 'CartPole-v0' in OpenAI Gym (Brockman et al., 2016). This environment has an observation space of 4 features and 2 discrete actions, accelerating to the left or to the right. The main goal of this section is to show that Network Fingerprinting is crucial to scaling PVNs to larger networks.

Linear Policy.

First we start with the simplest case, a policy network consisting of a single linear layer with a softmax over the logits.

To generate a dataset in this setting, we start by creating a set of randomly generated linear policy networks, and getting a set of Monte-Carlo returns for each network. As the exact expected returns cannot be known, we instead have to approximate them from samples. Of course, with a large enough number of rollouts, the approximation can become accurate. This is now our dataset, where the policy networks are the inputs and the discretized distributions of sampled returns are the desired outputs to the PVN. In these experiments, we generate a set of 1000 randomly initialized policies and run each policy for 100 episodes. The return distributions are discretized into 41 evenly sized bins.

When generating randomly initialized networks, some of the networks can have near optimal performance, making it difficult to show that our method can lead to policies better than the training set. In order to deal with this, we filter out of the training set any policy with an expected return above a specified level. This allows us to see if the PVN generalizes outside of the distribution of policies seen in training and whether the gradient ascent can be effective and find substantially better policies. In our experiments, the training set consists of randomly initialized policies with an expected undiscounted return no higher than 30, while the maximum possible return is 100.

Once we have a trained PVN, we then perform gradient ascent for 100 steps on 5 randomly sampled starting policies. When comparing methods, the same 5 policies are given to all. This allows for a fairer comparison, as one policy might be easier to improve than another.

The remaining figures are designed in two parts. The left subplot is a histogram showing the distribution of expected returns achieved by the training set policies. In green is the training data used by the PVN to train. In red is the data that was initially generated but then thrown away, as it was above our set expected return limit of 30. This allows us to show the performance achievable by gradient ascent relative to the training data. The right subplot is the performance of policies as they perform a number of gradient steps, in a lighter color. The darker line is the average policy ascent path. The starting policies are randomly selected by network initialization. Finally, the red dashed line marks the training set performance limit of 30.

We compare training PVNs in 2 ways: with and without Network Fingerprinting. As discussed in Section 4, performing network fingerprinting on linear networks is equivalent to flattening the network’s weights and feeding them directly as input to a PVN. As expected, both the flattened weights and network fingerprinting worked similarly. Furthermore, gradient ascent using either approach led to policies with an expected return of around 70, well above the training set limit of 30.

MLP Policy. The main challenge of building PVNs was to build a scalable mechanism allowing us to give a multi-layered policy network (MLP) as input to a PVN. In these experiments we will show that network fingerprinting allows us to do so.

We built the network fingerprinting with 20 probing states, leading to a policy fingerprint of 40 dimensions. These probing states were randomly initialized and trained jointly with the PVN. We performed 400 steps of gradient ascent. The rest of the training procedure is exactly as described for the linear policy.

In Figure 3(b), we can clearly see that giving a flattened neural net as input to a PVN does not work. On the other hand, network fingerprinting sees no scalability issues and allows the policy to improve up to the optimal policy. Furthermore, gradient ascent was consistently successful across the different starting policies, improving to near optimal performance from all starting points.

Comparing the histograms from Figures 3(a) and 3(b), we notice that the distribution of generated linear policies has a long tail, with some randomly generated policies achieving near optimal performance. On the other hand, the distribution of randomly generated MLP policies has a much lower performance ceiling. As networks become larger, it becomes more difficult to randomly generate good policies.

6.3 MuJoCo - Swimmer

Our last set of experiments is on Swimmer, a MuJoCo (Todorov et al., 2012) continuous control environment where the agent is a small worm that has to move forward by moving its joints. We test our approach on this task to show whether PVNs can scale to larger experiments. Also, as explained in Section 3, state-of-the-art RL algorithms tend to do poorly on Swimmer, compared to other MuJoCo tasks, because of the discounting: agents optimizing for the discounted return learn to act in a myopic way, sacrificing long-term gains. Since our approach can be used without discounting, we can avoid this problem by optimizing for the true objective directly.

In these experiments, we trained PVNs with Network Fingerprinting on a dataset of 2000 deterministic policies, with 500 rollouts each to estimate their returns. The return distributions are discretized into 51 evenly sized bins. Once the PVN is trained, we do gradient ascent with 5 randomly initialized starting policies for 1000 steps each. The rest of the algorithmic details are in the Appendix. In Figure 5, we compare the performance of the 5 policy ascents with 3 baselines, DDPG (Lillicrap et al., 2015), SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018), which are state-of-the-art model-free RL algorithms. We can see that all of the ascended policies finished with an expected return above all baselines. The best of the curves achieved expected returns around 250, substantially outperforming the other algorithms.

Figure 5: Gradient ascent performed on Swimmer. We compare the improvement of 5 starting policies and plot the average improvement in bold. Horizontal dashed lines are baselines. Their scores were taken from https://spinningup.openai.com/en/latest/spinningup/bench.html

7 Related Work

Methods that aim to solve RL problems by searching directly in policy space have a long history, and often different terminology. Sometimes they are characterized as black-box optimization, as they treat the mapping from policy parameters to return (or “fitness”) as a black box, sometimes as evolutionary algorithms, with recent incarnations in (Salimans et al., 2017; Such et al., 2017; Mania et al., 2018). They are related to this work along two dimensions. First, a number of black-box methods also pursue a form of policy gradient ascent (Spall et al., 1992; Peters et al., 2010; Wierstra et al., 2014), generally by employing a noisy form of finite differences. In theory, one could use finite differences to compute the exact gradient; however, there are too many parameters to make this a tractable solution. Our method, on the other hand, is a low-variance but biased estimate of a policy’s expected return, which in turn gives us a biased gradient. The second dimension of similarity is the analogue of our PVNs, also called ‘surrogate models’ or ‘response surfaces’, which aim to generalize across the fitness landscape, for example (Booker et al., 1998; Box & Wilson, 1951; Moore & Schneider, 1996; Ong et al., 2003; Loshchilov et al., 2012); see (Jones, 2001) for an overview. In contrast to our approach, which explicitly introduces an inductive bias to make suitable generalizations across policies, these methods make fewer assumptions and model only (often local) surface-level regularities of the fitness.

Our work has some similarities to synthetic gradients (Jaderberg et al., 2017a), which are networks that learn to output another network’s gradients. However, our networks are never trained to output gradients; these are available to us as a byproduct of the architecture.

Generalized policy improvement (Barreto et al., 2017) finds a better policy as a mixture of a set of policies, which is a similar objective to ours, but the methodology is very different, as it relies on having Q-values for each policy.

Universal Value Function Approximators (UVFA, Schaul et al., 2015) are value functions that generalize across both states and goals. More specifically, since many policies can achieve the same goals, UVFAs output the value of the optimal policy for a certain goal. In contrast, our method generalizes across policies, regardless of their optimality, and is less complex because it does not depend on state.

Finally, our work can be considered a case of off-policy learning (Sutton & Barto (2018), Precup (2000), Munos et al. (2016)), a class of algorithms which allow one to evaluate and improve policies given data generated from a set of different policies. One major difference is that our method only looks at expected returns of policies, as opposed to all transitions generated by the set of policies, as is usually done.

8 Conclusion and future work

We introduced a network that can generalize in policy space, by taking policy fingerprints as inputs. These fingerprints are differentiable policy embeddings obtained by inspecting the policy’s behaviour in a set of key states. We also described a novel policy gradient algorithm which is performed directly through the Policy Evaluation Network, allowing the computation of a policy gradient estimate for any policy, even if it has never been seen by the network.

Extension to value functions. Until now, we have only looked at Policy Evaluation Networks which output the expected return of a policy, from the initial state distribution. While this has benefits over the usual way of doing policy gradient, there are also disadvantages. Traditional RL algorithms can usually learn online, i.e., as more samples are seen, whereas our method requires entire trajectories before learning. This, however, does not have to be the case. Our method is extendable to the state-dependent value function setting, in which the network also takes a state as input. This can give rise to zero-shot policy evaluation for a variety of algorithms. In actor-critic algorithms, for instance, when the policy updates, the value function is lagging behind and requires samples from the new policy before it becomes accurate. Our method would allow a value function to generalize to unseen policies, meaning that when the policy is updated, the value function would immediately update as well. This has the potential to improve data efficiency, as policies would not have to wait for value functions to catch up.

Inductive Biases.

We designed PVNs in the simplest way possible, as feed-forward neural networks. However, the structure of the network matters: inductive biases incorporated into the architecture can substantially improve data efficiency and generalization in policy space (Wolpert & Macready, 1997). There is much structure in MDPs that can be leveraged; instead of simply building an MLP, one can build a state transition model and use this information to make value predictions. Other works, such as TreeQN (Farquhar et al., 2018), the Predictron (Silver et al., 2017) and Value Prediction Networks (Oh et al., 2017), are examples of value functions built with inductive biases aimed at improving generalization.


Acknowledgements

We’d like to acknowledge Simon Osindero, Kory Mathewson, Tyler Jackson, Chantal Remillard and Sasha Vezhnevets for useful discussions and feedback on the paper. Most importantly, we’d like to thank Emma Brunskill for inviting Jean Harb to her lab, where this work was started. Pierre-Luc Bacon is supported by the Facebook CIFAR AI chair program and IVADO. Doina Precup is a CIFAR fellow and is supported by the Canada CIFAR chair program.


References

  • Barreto et al. (2017) Barreto, A., Dabney, W., Munos, R., Hunt, J. J., Schaul, T., van Hasselt, H. P., and Silver, D. Successor features for transfer in reinforcement learning. In Advances in neural information processing systems, pp. 4055–4065, 2017.
  • Baxter & Bartlett (2001) Baxter, J. and Bartlett, P. L. Infinite-horizon policy-gradient estimation. J. Artif. Int. Res., 15(1):319–350, November 2001. ISSN 1076-9757.
  • Bellemare et al. (2017a) Bellemare, M. G., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 449–458, 2017a.
  • Bellemare et al. (2017b) Bellemare, M. G., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 449–458. JMLR. org, 2017b.
  • Booker et al. (1998) Booker, A., Dennis, J. J., Frank, P., and Serafini, D. Optimization using surrogate objectives on a helicopter test example. Computational Methods in Optimal Design and Control, 1998.
  • Box & Wilson (1951) Box, G. E. P. and Wilson, K. B. On the experimental attainment of optimum conditions. Journal of the Royal Statistical Society, 13(1):1–45, 1951.
  • Brockman et al. (2016) Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. OpenAI gym. arXiv preprint arXiv:1606.01540, 2016.
  • Caruana (1997) Caruana, R. Multitask learning. Machine Learning, 28(1):41–75, Jul 1997.
  • Cohn et al. (1995) Cohn, D. A., Ghahramani, Z., and Jordan, M. I. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145, 1995.
  • Dadashi et al. (2019) Dadashi, R., Taiga, A. A., Roux, N. L., Schuurmans, D., and Bellemare, M. G. The value function polytope in reinforcement learning. arXiv preprint arXiv:1901.11524, 2019.
  • Farquhar et al. (2018) Farquhar, G., Rocktäschel, T., Igl, M., and Whiteson, S. TreeQN and ATreeC: Differentiable tree planning for deep reinforcement learning. In International Conference on Learning Representations. International Conference on Learning Representations, 2018.
  • Fujimoto et al. (2018) Fujimoto, S., van Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 1582–1591, 2018.
  • Glynn & Olvera-Cravioto (2019) Glynn, P. W. and Olvera-Cravioto, M. Likelihood ratio gradient estimation for steady-state parameters. Stochastic Systems, 9(2):83–100, June 2019.
  • Haarnoja et al. (2018) Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
  • Jaderberg et al. (2017a) Jaderberg, M., Czarnecki, W. M., Osindero, S., Vinyals, O., Graves, A., Silver, D., and Kavukcuoglu, K. Decoupled neural interfaces using synthetic gradients. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1627–1635. JMLR. org, 2017a.
  • Jaderberg et al. (2017b) Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W. M., Donahue, J., Razavi, A., Vinyals, O., Green, T., Dunning, I., Simonyan, K., et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017b.
  • Jaderberg et al. (2017c) Jaderberg, M., Mnih, V., Czarnecki, W. M., Schaul, T., Leibo, J. Z., Silver, D., and Kavukcuoglu, K. Reinforcement learning with unsupervised auxiliary tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017c.
  • Jones (2001) Jones, D. R. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21:345–383, 2001.
  • Konda & Tsitsiklis (2000) Konda, V. R. and Tsitsiklis, J. N. Actor-critic algorithms. In NIPS, pp. 1008–1014, 2000.
  • Lillicrap et al. (2015) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
  • Loshchilov et al. (2012) Loshchilov, I., Schoenauer, M., and Sebag, M. Self-adaptive surrogate-assisted covariance matrix adaptation evolution strategy. In Proceedings of the 14th annual conference on Genetic and evolutionary computation, pp. 321–328, 2012.
  • Lyle et al. (2019) Lyle, C., Bellemare, M. G., and Castro, P. S. A comparative analysis of expected and distributional reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4504–4511, 2019.
  • Mania et al. (2018) Mania, H., Guy, A., and Recht, B. Simple random search provides a competitive approach to reinforcement learning. arXiv preprint arXiv:1803.07055, 2018.
  • Moore & Schneider (1996) Moore, A. W. and Schneider, J. Memory-based stochastic optimization. In Advances in Neural Information Processing Systems, 1996.
  • Munos et al. (2016) Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1054–1062, 2016.
  • Oh et al. (2017) Oh, J., Singh, S., and Lee, H. Value prediction network. In Advances in Neural Information Processing Systems, pp. 6118–6128, 2017.
  • Ong et al. (2003) Ong, Y. S., Nair, P. B., and Keane, A. J. Evolutionary optimization of computationally expensive problems via surrogate modeling. AIAA journal, 41(4):687–696, 2003.
  • Peters et al. (2010) Peters, J., Mulling, K., and Altun, Y. Relative entropy policy search. In Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
  • Precup (2000) Precup, D. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, pp.  80, 2000.
  • Salimans et al. (2017) Salimans, T., Ho, J., Chen, X., Sidor, S., and Sutskever, I. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
  • Schaul et al. (2015) Schaul, T., Horgan, D., Gregor, K., and Silver, D. Universal value function approximators. In International Conference on Machine Learning, pp. 1312–1320, 2015.
  • Schulman et al. (2016) Schulman, J., Moritz, P., Levine, S., Jordan, M. I., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. In International Conference on Learning Representations (ICLR), 2016.
  • Silver et al. (2017) Silver, D., van Hasselt, H., Hessel, M., Schaul, T., Guez, A., Harley, T., Dulac-Arnold, G., Reichert, D., Rabinowitz, N., Barreto, A., et al. The predictron: End-to-end learning and planning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3191–3199. JMLR. org, 2017.
  • Spall et al. (1992) Spall, J. C. et al. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE transactions on automatic control, 37(3):332–341, 1992.
  • Such et al. (2017) Such, F. P., Madhavan, V., Conti, E., Lehman, J., Stanley, K. O., and Clune, J. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.
  • Sutton (1984) Sutton, R. S. Temporal credit assignment in reinforcement learning. PhD thesis, University of Massachusetts Amherst, 1984.
  • Sutton & Barto (2018) Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT press, 2018.
  • Sutton et al. (2000) Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057–1063, 2000.
  • Sutton et al. (2011) Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Taipei, Taiwan, May 2-6, 2011, Volume 1-3, pp. 761–768, 2011.
  • Thomas (2014) Thomas, P. Bias in natural actor-critic algorithms. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 441–448, 2014.
  • Todorov et al. (2012) Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.
  • van den Oord et al. (2016) van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 1747–1756, 2016.
  • Wierstra et al. (2014) Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., and Schmidhuber, J. Natural evolution strategies. The Journal of Machine Learning Research, 15(1):949–980, 2014.
  • Wolpert & Macready (1997) Wolpert, D. H. and Macready, W. G. No free lunch theorems for optimization. IEEE transactions on evolutionary computation, 1(1):67–82, 1997.
  • Xi-Ren Cao & Yat-Wah Wan (1998) Xi-Ren Cao and Yat-Wah Wan. Algorithms for sensitivity analysis of Markov systems through potentials and perturbation realization. IEEE Transactions on Control Systems Technology, 6(4):482–494, July 1998. ISSN 2374-0159.

Appendix A Polytope Experiment Details

a.1 Markov Decision Process Specifications

To describe the two-state MDP used in our polytope experiments, we use the following convention (Dadashi et al., 2019):

where our MDP has the following properties.

a.2 Visualization of Predictions

In the polytope experiments, 40 points were randomly sampled and split into equally sized training and test sets of 20 points each. Figure A.1(a) shows the sampled points projected in the value polytope; Figure A.1(b) shows the values of the same points as predicted by a trained PVN.

(a) Exact policy values
(b) Policy values predicted by the trained PVN
Figure A.1: Results of training a network to predict policy values, projected in a value polytope. The training set is in blue and the test set is in red.
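The exact MDP parameters are not reproduced above, but the convention is straightforward to illustrate: given transition and reward tables (the ones below are hypothetical placeholders, not the paper's), every tabular policy's value is available in closed form, and sampling many random policies traces out the value polytope.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (placeholder parameters).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # P[s, a, s']
              [[0.3, 0.7], [0.95, 0.05]]])
r = np.array([[0.1, 1.0],                   # r[s, a]
              [0.5, -0.2]])
gamma = 0.8  # matches the polytope discount in Table B.1

def policy_value(pi):
    """Exact V^pi = (I - gamma * P_pi)^{-1} r_pi for a tabular
    policy pi[s, a] whose rows sum to 1."""
    P_pi = np.einsum('sa,sap->sp', pi, P)
    r_pi = np.einsum('sa,sa->s', pi, r)
    return np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

# Sampling random policies yields points inside the value polytope.
rng = np.random.default_rng(0)
values = np.array([policy_value(rng.dirichlet([1, 1], size=2))
                   for _ in range(40)])
```

Plotting the two components of `values` against each other reproduces the kind of scatter shown in Figure A.1(a), with exact values in place of PVN predictions.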

Appendix B Hyperparameters

Parameter | Polytope | Cartpole Linear | Cartpole MLP | Swimmer
Policy architecture (hidden layer sizes) | N/A | [] | [30] | [30]
PVN architecture (hidden layer sizes) | [50] | [80] | [80] | [80]
NN activations | ReLU | ReLU | ReLU | ReLU
Temperature | N/A | 3 | 3 | 3
# Bins | N/A | 41 | 41 | 51
Grad ascent learning rate | 0.1 | 0.001 | 0.001 | 0.002
Grad ascent optimizer | SGD | Adam | Adam | Adam
Grad ascent steps | 100 | 100 | 400 | 1000
Discount factor (γ) | 0.8 | 1 | 1 | 1
Batch size | 32 | 32 | 32 | 32
PVN learning rate | 0.01 | 0.003 | 0.003 | 0.003
PVN optimizer | RMSProp | Adam | Adam | Adam
Training steps | 20000 | 3000 | 3000 | 5000
# Policies | 20 | 1000 | 1000 | 2000
# Returns per policy | N/A | 100 | 100 | 500
Training set performance limit | N/A | 30 | 30 | N/A
Train probing states | N/A | True | True | True
Randomly generate probing states | N/A | True | True | True
# Probing states | N/A | 20 | 20 | 20
Table B.1: Hyperparameters used in various experiments. The last three rows are only applicable when using probing states.

Appendix C Dataset collection algorithm

  Choose the number of policies N and the number of rollouts per policy K
  for policy i = 1 to N do
     Initialize policy parameters θ_i using Glorot initialization
     for rollout j = 1 to K do
        Sample a Monte-Carlo return G_j by running policy θ_i
     end for
  end for
Algorithm 3 Collect dataset of policies for PVN training
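The collection loop can be sketched as follows. The `env_rollout` helper, the linear policy shape, and the choice to store mean returns are assumptions for illustration; the algorithm itself only requires sampling Glorot-initialized policies and recording their Monte-Carlo returns.

```python
import numpy as np

def glorot(shape, rng):
    # Glorot/Xavier uniform initialization.
    limit = np.sqrt(6.0 / (shape[0] + shape[1]))
    return rng.uniform(-limit, limit, size=shape)

def collect_dataset(env_rollout, n_policies, n_rollouts,
                    obs_dim, act_dim, seed=0):
    """Sketch of Algorithm 3: sample randomly initialized (here linear)
    policies and record Monte-Carlo returns for each. `env_rollout(theta)`
    is an assumed helper returning one episode's return under theta."""
    rng = np.random.default_rng(seed)
    dataset = []
    for _ in range(n_policies):
        theta = glorot((obs_dim, act_dim), rng)
        returns = [env_rollout(theta) for _ in range(n_rollouts)]
        dataset.append((theta, np.mean(returns)))
    return dataset
```

The resulting (policy parameters, return) pairs are exactly the supervised training set the PVN consumes: fingerprints are computed from the stored parameters, and returns serve as regression targets.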