Policy Prediction Network: Model-Free Behavior Policy with Model-Based Learning in Continuous Action Space

09/15/2019, by Zac Wellmer, et al.

This paper proposes a novel deep reinforcement learning architecture inspired by previous tree-structured architectures that were only usable in discrete action spaces. Policy Prediction Network offers a way to improve sample complexity and performance on continuous control problems in exchange for extra computation at training time, at no additional cost in computation at rollout time. Our approach integrates model-free and model-based reinforcement learning. Policy Prediction Network is the first to introduce implicit model-based learning to Policy Gradient algorithms for continuous action spaces, made possible by an empirically justified clipping scheme. Our experiments focus on the MuJoCo environments so that they can be compared with similar work in this area.


1 Introduction

Reinforcement learning algorithms can be model-free or model-based. Model-free reinforcement learning attempts to find a policy through interacting with the environment and improving the policy based on previous states and rewards. Model-based reinforcement learning attempts to learn the dynamics of the environment and uses the model to improve the policy through various methods such as planning, exploration, and even training on generated data [21, 5]. Although model-based methods capable of predicting near-perfect observations [10, 2] usually have the benefit of reduced sample complexity, they still struggle to perform as well as model-free methods [17, 9, 8]. It is therefore appealing to pursue the best of both worlds, as collecting the large amount of experience required by model-free methods is often expensive or infeasible.

Model-based agents traditionally learn a model of the environment that predicts future observations conditioned on previous actions and observations. This approach is sometimes referred to as an observation-prediction model [11]. Recreating the original observation can be a questionable objective because the original observation may be dominated by irrelevant information. For example, if the observation is an image with a complex background, a large part of the model's capacity can be spent on modeling the background even though it is irrelevant to the information needed for planning.

As a response to the issues faced by observation-prediction models, several implicit model-based methods [11, 4, 18] were introduced; these learn an implicit transition module that predicts the value/reward of future states without being subject to observation reconstruction. Value Prediction Networks (VPN) [11] and TreeQN [4] operate by expanding a tree of predicted reward and value estimates. However, this is feasible only because each branch is linked to a discrete action. ATreeC [4] introduces a policy gradient method but is still not applicable to continuous action spaces because its policy is a multinomial distribution parameterized by the Q-values associated with each branch. Implicit models have seen success in Q-learning approaches, but they are not straightforward to apply to policy gradient methods. Many real-world problems, such as robotics or autonomous vehicle applications, lie in continuous action spaces, and Q-learning approaches do not extend to continuous action spaces as naturally as policy gradient methods do.

Policy gradient methods are of primary interest in this paper because of their inherent flexibility: they apply to both discrete and continuous action spaces. In particular, we will focus on a model-free policy gradient algorithm called Proximal Policy Optimization (PPO) [17]. PPO is of particular interest because of its strong performance on popular benchmarks and its simplicity.

We propose Policy Prediction Networks (PPN), where the value, reward, policy, and abstract state are predicted by leveraging a transition model. PPN uses an implicit model-based approach at training time but a model-free approach at rollout time. The implicit model-based approach at training time helps accelerate feature learning by predicting future policies, values, and rewards, all of which encourage the dynamics model to learn features that are well aligned with our objective of finding a policy that maximizes returns. To the best of our knowledge, this is the first work on developing implicit model-based learning for policy gradient methods.

Our contribution is a training procedure that leverages model-based learning for policy gradient algorithms to improve performance and does not trade off computational costs at rollout time. This work introduces implicit transition models for Policy Gradient methods, depth-based objectives, auxiliary reward objectives, and an empirically justified clipping scheme. Furthermore, our work lays down the foundation for future research on using implicit transition models to perform decision-time planning. Empirical results demonstrate the advantage of PPN over the model-free baseline (PPO), which suggests that PPN finds a better state embedding and reduces sample complexity.

2 Background and Related Work

In this section, we will give a brief review of related work on model-based and model-free reinforcement learning. We also introduce terminologies to differentiate between two general approaches in model-based reinforcement learning.

Notation: $s_t$ denotes the (abstract) state, $a_t$ the action, $r_t$ the reward from taking $a_t$ at $s_t$, $\gamma$ the discount factor, $V_\theta(s_t)$ the value of state $s_t$ with respect to parameters $\theta$, $\hat{A}_t$ the advantage, and $\mathcal{H}$ the policy entropy; the subscript $t$ denotes the timestep.

2.1 Policy Gradient Methods

Policy Gradient Methods [20] are a type of reinforcement learning algorithm that directly optimizes policy parameters to maximize expected returns. Policy gradient methods are more naturally applied to environments with continuous action spaces than Q-learning approaches. Generally, the policy gradient loss takes the form:

$$L^{PG}(\theta) = -\hat{\mathbb{E}}_t\!\left[\log \pi_\theta(a_t \mid s_t)\,\hat{A}_t + \beta\,\mathcal{H}[\pi_\theta](s_t)\right]$$

where $\hat{A}_t$ is an advantage estimate [20, 19, 16], $\pi_\theta$ is the policy with parameters $\theta$, $\mathcal{H}$ is the policy entropy, $s_t$ is the state at time $t$, and $a_t$ is the action taken in state $s_t$.
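
As a brief illustration (not the authors' code), the entropy-regularized policy gradient loss above can be sketched in a few lines of PyTorch; the distribution object, tensor shapes, and coefficient value are placeholder assumptions.

```python
import torch

def policy_gradient_loss(dist, actions, advantages, entropy_coef=0.01):
    """Minimal sketch of an entropy-regularized policy gradient loss.

    dist:       a torch.distributions.Distribution over actions (e.g. Normal)
    actions:    actions actually taken, shape (batch, action_dim)
    advantages: advantage estimates, shape (batch,)
    """
    log_probs = dist.log_prob(actions).sum(-1)   # log pi_theta(a_t | s_t)
    entropy = dist.entropy().sum(-1)             # policy entropy bonus
    # Negate because optimizers minimize; we want to maximize E[log pi * A + beta * H]
    return -(log_probs * advantages + entropy_coef * entropy).mean()
```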

2.2 Trust Region Policy Optimization

Trust Region Policy Optimization (TRPO) attempts to generate monotonically improving policies, following inspiration from conservative policy iteration [7]. However, TRPO requires a few theoretical relaxations to arrive at a practical algorithm. What remains is a hard constraint that the mean KL divergence be less than or equal to a threshold $\delta$. This hard constraint can be seen as a trust region on the mean KL divergence.

Let $\theta_{old}$ be the old parameters, $\theta$ the new proposed parameters, and $\hat{A}_t$ the generalized advantage estimate [16], which is defined as

$$\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l} \qquad (1)$$

where $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$, $\lambda$ is a hyperparameter controlling the bias-variance trade-off, and $\gamma$ is the discount factor. TRPO's optimization problem is then formulated as:

$$\max_{\theta}\; L^{IS}(\theta) \quad \text{subject to} \quad \bar{D}_{KL}(\theta_{old}, \theta) \le \delta$$

where $L^{IS}(\theta) = \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}\,\hat{A}_t\right]$ is the objective using importance sampling to estimate the expected advantage under the new policy, and $\bar{D}_{KL}$ is the mean KL divergence between the new and old policies. At this point, TRPO offers theoretical inspiration but does not actually provide guarantees of monotonically improving policies.
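
A minimal sketch of computing the generalized advantage estimate in Equation (1) over a finite n-step trajectory, using the standard backward recursion; function and variable names are ours, not the paper's.

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over an n-step trajectory.

    rewards: array of shape (n,)   -- r_t ... r_{t+n-1}
    values:  array of shape (n+1,) -- V(s_t) ... V(s_{t+n}); last entry bootstraps
    """
    n = len(rewards)
    advantages = np.zeros(n)
    last = 0.0
    for t in reversed(range(n)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual delta_t
        last = delta + gamma * lam * last                       # A_t = delta_t + gamma*lam*A_{t+1}
        advantages[t] = last
    return advantages
```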

2.3 Proximal Policy Optimization

Proximal Policy Optimization (PPO) [17] was introduced as offering similar benefits to TRPO [15], but via a simpler approach. PPO replaces TRPO's KL divergence constraint with a clipped policy gradient loss:

$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(\rho_t(\theta)\hat{A}_t,\; \mathrm{clip}(\rho_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\right)\right] \qquad (2)$$

where

$$\rho_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)} \qquad (3)$$

Clipping no longer guarantees that the mean KL divergence stays within a trust region; instead, it serves to approximate such a constraint (see [6] for further details).

PPO2 [3] is a GPU-enabled implementation from OpenAI with one key difference from PPO: the critic is also clipped. More specifically, the critic loss ($L^{V}$) is now:

$$L^{V}(\theta) = \hat{\mathbb{E}}_t\!\left[\max\!\left(\left(V_\theta(s_t) - R_t\right)^2,\; \left(\bar{V}_t - R_t\right)^2\right)\right] \qquad (4)$$

where $\bar{V}_t = V_{\theta_{old}}(s_t) + \mathrm{clip}\!\left(V_\theta(s_t) - V_{\theta_{old}}(s_t), -\epsilon, \epsilon\right)$ is the clipped value estimate, $R_t$ is the bootstrapped $n$-step return at time $t$, and $n$ is the number of steps in the bootstrapped estimate. At this point, the theoretical guarantees of Conservative Policy Iteration have been dropped to make TRPO a practical algorithm, and the theoretical justifications in TRPO have in turn been weakened to make the more versatile and empirically superior PPO.
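
The clipped policy and value objectives of Equations (2)-(4) can be sketched as follows; this is an illustrative reimplementation under our own naming and not the authors' code.

```python
import torch

def ppo_losses(new_log_prob, old_log_prob, advantages,
               new_value, old_value, returns, eps=0.2):
    """Clipped PPO policy loss (Eq. 2-3) and PPO2-style clipped value loss (Eq. 4)."""
    ratio = torch.exp(new_log_prob - old_log_prob)          # rho_t(theta)
    clipped_ratio = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    policy_loss = -torch.min(ratio * advantages,
                             clipped_ratio * advantages).mean()

    # The value estimate is clipped to stay near the old parameters' estimate.
    value_clipped = old_value + torch.clamp(new_value - old_value, -eps, eps)
    value_loss = torch.max((new_value - returns) ** 2,
                           (value_clipped - returns) ** 2).mean()
    return policy_loss, value_loss
```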

2.4 Model-based Reinforcement Learning

The essence of model-based reinforcement learning revolves around using a model or learning a model of the environment, and using this to improve a policy. We will be focused on the challenging class of problems where the environment dynamics are unknown and must be learned. In this case, the problem can be broken down further into dynamics models that are learned implicitly and dynamics models that are learned explicitly. It was not until recent years that implicitly learned model-based algorithms received attention [11, 4, 18].

Explicit Model-based Methods

Explicit model-based methods involve some form of directly predicting future observations and including this in the loss function, as in:

$$L^{obs} = \left\|\hat{x}_{t+1} - x_{t+1}\right\|^2$$

where $\hat{x}_{t+1}$ is the predicted observation at time $t+1$ and $x_{t+1}$ is the ground-truth observation. Several variations exist that involve learning to predict in an abstract state space [2, 5, 14, 13] or predicting the grounded observation over multiple time steps [10]. This has seen some success and can be useful for learning, planning, and exploration.
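
As a small hedged sketch (model interface and shapes are our own assumptions), a multi-step observation-prediction objective of this kind might look like:

```python
import torch

def observation_prediction_loss(model, obs, actions):
    """L2 loss for an explicit observation-prediction model.

    model(o, a) -> predicted next observation (illustrative interface).
    obs has shape (T+1, obs_dim); actions has shape (T, action_dim).
    """
    loss = 0.0
    for t in range(actions.shape[0]):
        pred_next = model(obs[t], actions[t])            # predicted x_{t+1}
        loss = loss + ((pred_next - obs[t + 1]) ** 2).sum()
    return loss / actions.shape[0]
```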

These methods are particularly useful when the observation consists entirely of useful information. However, they can be misled in the class of problems where parts of the observation carry no useful information. For example, when generating an image frame, a large part of the network's capacity could be dedicated to learning less useful information such as the background or objects that are not well aligned with the agent's interest [13].

Implicit Model-based Methods

Implicit model-based methods are interesting because they are not explicitly tied to reproducing original observations or an encoded observation. Rather, the dynamics model is indirectly learned by finding parameters that allow for an agent to perform optimally. This is done by learning to predict future characteristics such as the value or reward [11, 4], but without having a constraint on predicting the ground truth observation. Unfortunately, a downside to implicit approaches is that it is difficult to know what is actually taking place during planning since it is hard to reconstruct the predicted observations.

VPN [11] involves expanding a Q-tree, performing a linear backup along the maximal path, and selecting the maximal backed-up path. At training time, the loss is computed along the tree path followed by the rollout actions. The Predictron [18] is similar to VPN except that learning is done entirely in an abstract space, whereas VPN is grounded to transitions experienced by the rollout policy. Predictron also offers a meta-objective called consistency, which lines up individual estimates with respect to backed-up estimates. We do not explore this in our work, but note that it could serve as an orthogonal improvement. TreeQN and ATreeC [4] introduce a Q-learning approach (TreeQN) and a policy gradient approach (ATreeC) that make use of a differentiable tree structure. ATreeC involves expanding a pseudo Q-tree; it is pseudo because the nested value predictions are not directly constrained to represent the value. The backed-up pseudo Q-values are treated as logits and are used to parameterize and sample from a multinomial distribution. These samples are then used as the actions. VPN, ATreeC, and TreeQN are limited to operating only in discrete action spaces. Predictron was applied to a continuous action space, the MuJoCo pool environment [22], but only by discretizing the action space.

3 Policy Prediction Network

Policy Prediction Network uses a combination of model-free and model-based techniques. Actions are taken with a model-free approach by the behavior policy at rollout time. However, learning is done with a model-based approach that follows the rollout trajectory. A latent-space transition model is embedded into the architecture so that we can backpropagate from predictions multiple simulation steps into the future back to a grounded observation. Backpropagating from predictions through the dynamics model and back to a grounded observation enables the dynamics model to learn features that align with accurate reward predictions, accurate value predictions, and maximizing advantage, as opposed to maximizing observation reconstruction as is traditionally done in explicit model-based reinforcement learning.

Our novel contribution is a training scheme that integrates model-free and model-based reinforcement learning to improve sample complexity and performance in exchange for extra computation at training time, at no extra cost in computation at rollout time. Additionally, our work offers a foundation for decision-time planning with policy gradient methods and implicit transition models. Our empirical results in Section 4 demonstrate the advantage of PPN over the model-free baseline (PPO), which suggests that PPN finds a better state embedding and reduces sample complexity.

3.1 Architecture

Figure 1: PPN learns to predict policies, rewards, abstract states, and the value of the abstract states.

PPN is composed of several modules, parameterized by $\theta$ and described below. In the following, a hat over a variable indicates that it is an estimate as opposed to a grounded observation or reward. The superscript $i$ denotes a prediction $i$ steps forward. The depth rollout is expanded to a depth $d$. For example, $\hat{s}^i_t$ is the predicted abstract state $i$ steps (where $i \le d$) forward in time from $s_t$.

Encoding function ($f^{enc}_\theta$) embeds the observation ($x_t$) in an abstract state ($s_t$).

Value function ($f^{value}_\theta$) estimates the value ($\hat{v}$) of an abstract state.

Policy function ($f^{policy}_\theta$) parameterizes a distribution over actions to take given an abstract state $s$. The policy module has two parts: the first produces an estimate of the mean ($\mu \in \mathbb{R}^{n_a}$, where $n_a$ is the dimensionality of the action space) and the second produces an estimate of a diagonal covariance matrix ($\Sigma$) used to parameterize a normal distribution for the policy ($\pi_\theta(a \mid s) = \mathcal{N}(\mu, \Sigma)$). This is further described in Section 3.2.

Reward function ($f^{reward}_\theta$) predicts the reward ($\hat{r}$) for executing the action $a$ at abstract state $s$.

Transition function ($f^{trans}_\theta$) transforms the abstract state, given an action, into the next abstract state by predicting $\hat{s}'$.

We adopt a convention similar to VPN [11], which defines a core module. Figure 1 shows the core module, which performs a depth-1 rollout by composing the modules above: the policy and value heads are applied to the current abstract state, the reward head is applied to the abstract state and action, and the transition module produces the next abstract state.
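
A minimal PyTorch sketch of such a core module; layer counts, activations, and the unit-length projection detail are our assumptions based on Section 4.1 rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPNCore(nn.Module):
    """Sketch of a PPN core: encode once, then predict policy, value, reward,
    and the next abstract state from (abstract state, action)."""

    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, hidden), nn.Tanh())
        self.policy_mu = nn.Linear(hidden, act_dim)       # mean of the Gaussian policy
        self.value = nn.Linear(hidden, 1)                 # value head
        self.reward = nn.Linear(hidden + act_dim, 1)      # reward head r(s, a)
        self.trans = nn.Linear(hidden + act_dim, hidden)  # implicit transition

    def encode(self, obs):
        return self.encoder(obs)

    def step(self, state, action):
        """One depth step: predictions at `state`, plus the next abstract state."""
        mu = self.policy_mu(state)
        value = self.value(state).squeeze(-1)
        sa = torch.cat([state, action], dim=-1)
        reward = self.reward(sa).squeeze(-1)
        next_state = state + self.trans(sa)           # residual-style transition
        next_state = F.normalize(next_state, dim=-1)  # unit-length projection
        return mu, value, reward, next_state
```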

There are 4 subtle but important differences between PPN depth rollouts and Value Prediction Network depth rollouts.

  1. PPN estimates the policy from the abstract state at each step of the core module, while in VPN there is no need to predict a policy because it is a Q-learning method; however, this means VPN does not naturally apply to continuous action spaces.

  2. PPN produces a value estimate at the base of the depth rollout (i.e., at $s_t$) and uses this as the critic. In VPN, this is not necessary because it is not an actor-critic method.

  3. The actions used in PPN come from samples generated by the behavior policy, seen later in Equation (5), while in VPN the actions are chosen by exhaustively simulating all possible actions. Simulating all possible actions is only feasible in a discrete action space.

  4. PPN only uses the depth-based rollout at training time. VPN's behavior policy can use decision-time planning [19]. However, this is not straightforward to apply to continuous action spaces, and we leave it for future work.

If $d > 1$, PPN recursively calls the core module to generate a trajectory of simulated rewards, policies, values, and abstract states conditioned on an initial abstract state ($s_t$) and an action trajectory ($a_t, \ldots, a_{t+d-1}$). Each recursive call passes on the predicted abstract state ($\hat{s}^i_t$).
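
Using the core sketched above, a depth-d rollout that follows the behavior policy's actions could look like this; again a sketch under our own naming, not the authors' implementation.

```python
def depth_rollout(core, obs, actions, depth):
    """Roll the implicit model forward along the actions actually taken.

    obs:     grounded observation at time t, shape (batch, obs_dim)
    actions: actions a_t ... a_{t+depth-1} taken by the behavior policy,
             shape (depth, batch, act_dim)
    Returns per-depth predicted policy means, values, and rewards.
    """
    state = core.encode(obs)               # grounded abstract state s_t
    mus, values, rewards = [], [], []
    for i in range(depth):
        mu, value, reward, state = core.step(state, actions[i])
        mus.append(mu)
        values.append(value)
        rewards.append(reward)
    # One extra value estimate at the final predicted abstract state
    values.append(core.value(state).squeeze(-1))
    return mus, values, rewards
```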

3.2 Planning

Here we introduce our approach to background planning [19] in continuous action spaces, performed at training time. PPN can predict future abstract states and, based on these predictions, make additional predictions of future rewards, values, and policies. We use a basic planning method that simulates up to a certain depth, collecting reward, value, and policy estimates along the way.

Background planning is done by following the actions performed by the behavior policy and recursively calling the core module with the predicted abstract state. Action generation by the rollout policy is done by sampling from a normal distribution defined as follows:

$$a_t \sim \mathcal{N}\!\left(\mu_{\theta_{old}}(s_t),\, \sigma^2 I\right) \qquad (5)$$

where $\sigma$ is a function of the number of samples seen since the beginning of training and does not depend on the model parameters. In our experiments, the standard deviation used to parameterize the diagonal covariance matrix is exponentially decayed with respect to the number of samples seen, as is done in PPO [17].
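
A sketch of the behavior-policy sampling in Equation (5), reusing the PPNCore sketch from Section 3.1; the decay constants below are placeholders rather than the paper's values.

```python
import math
import torch

def sample_action(core, obs, samples_seen,
                  sigma_start=0.5, sigma_min=0.05, decay_rate=1e-6):
    """Model-free action selection at rollout time: one forward pass, no planning."""
    # Std depends only on the number of environment samples seen, not on theta.
    sigma = max(sigma_min, sigma_start * math.exp(-decay_rate * samples_seen))
    with torch.no_grad():
        state = core.encode(obs)       # embed the grounded observation
        mu = core.policy_mu(state)     # mean of the diagonal Gaussian policy
    return torch.normal(mu, sigma * torch.ones_like(mu))
```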

3.3 Learning

PPN is trained in a similar manner to typical policy gradient algorithms. The novel differences we introduce are depth-based losses, a latent transition model embedded in the architecture, auxiliary reward objectives, and a clipping scheme for the depth-based losses. Depth-based losses are necessary to train the implicit transition model. The implicit transition model and auxiliary reward help with feature learning via background planning.

PPN seeks to optimize auxiliary objectives and perform multiple updates on a batch. Trust regions as used in TRPO cannot be directly applied in either of these cases [17], and thus we introduce a clipping approach. Clipping all the network heads is crucial because the reward network and value network share parameters ($\theta$) with the policy network; for a visual reference of the parameter sharing, see Figure 1. This means that if any of the heads is updated in an uncontrolled fashion, it can also cause dramatic changes to the policy.

Algorithm 1 shows that PPN follows a learning procedure similar to that of PPO.

  Initialize parameters $\theta$, $\theta_{old}$
  for iteration = 1, 2, ... do
     Run policy $\pi_{\theta_{old}}$ in the environment for $n$ time steps
     Compute advantage estimates $\hat{A}_1, \ldots, \hat{A}_n$
     for epoch = 1, ..., K do
        Shuffle samples into mini-batches of size $M$
        for each mini-batch do
           $B$ is the set of samples selected for the mini-batch
           $L(\theta) = \frac{1}{M}\sum_{t \in B} L_t(\theta)$
           Optimize $L(\theta)$ w.r.t. $\theta$
        end for
     end for
     $\theta_{old} \leftarrow \theta$
  end for
Algorithm 1 Policy Prediction Network (PPN), PPO style.

The major differences in training between PPN and PPO come from the loss formulation ($L_t(\theta)$). The behavior policy, with parameters $\theta_{old}$, generates an $n$-step trajectory. The depth-$d$ predictions are grounded based on the generated $n$-step action trajectories. The loss at time $t$ accumulates error over the planned trajectory up to a depth $d$ and is defined as:

$$L_t(\theta) = L^{\pi}_t(\theta) + c_v L^{v}_t(\theta) + c_r L^{r}_t(\theta) \qquad (6)$$

where minimizing $L^{\pi}_t$ corresponds to maximizing expected advantage, minimizing $L^{v}_t$ results in an accurate critic, and minimizing $L^{r}_t$ leads to reward predictions that represent the environment's actual reward for a state-action pair. $c_v$ and $c_r$ are the penalty coefficients for the value loss and reward loss, respectively.

Specifically, we define

$$L^{\pi}_t(\theta) = \frac{1}{d}\sum_{i=0}^{d-1}\left[\max\!\left(-\rho^i_t \hat{A}_{t+i},\; -\bar{\rho}^i_t \hat{A}_{t+i}\right) - \beta\,\mathcal{H}\!\left[\pi_\theta\right](\hat{s}^i_t)\right] \qquad (7)$$

where $\rho^i_t$ is the importance sampling ratio between the new policy and the old policy at depth $i$, and $\bar{\rho}^i_t$ is the clipped ratio used to keep the new parameters' estimate near the old parameters' estimate. We offer two possible formulations of clipping in Section 3.4. $\hat{A}_{t+i}$ is the generalized advantage estimate defined in (1), and $\beta$ is a hyperparameter for the entropy coefficient.

As for the critic objective, we have the critic loss

$$L^{v}_t(\theta) = \frac{1}{d+1}\sum_{i=0}^{d}\max\!\left(\left(\hat{v}^i_t - R_{t+i}\right)^2,\; \left(\bar{v}^i_t - R_{t+i}\right)^2\right) \qquad (8)$$

which encourages the current value estimate to be close to the bootstrapped return at each depth without moving closer to the target than the clipped estimate ($\bar{v}^i_t$). The clipped estimate is guaranteed to be near the old parameters' estimate. Notice that the summation in (8) runs for one extra iteration. This is because value estimates are made at every state ($\hat{s}^0_t, \ldots, \hat{s}^d_t$) in the forward plan.

Similarly, the reward loss

$$L^{r}_t(\theta) = \frac{1}{d}\sum_{i=1}^{d}\max\!\left(\left(\hat{r}^i_t - r_{t+i-1}\right)^2,\; \left(\bar{r}^i_t - r_{t+i-1}\right)^2\right) \qquad (9)$$

encourages the reward estimate to be close to the environment reward at each depth without moving closer to the target than the clipped estimate ($\bar{r}^i_t$).

The maximum in Equations (7)-(9) is taken between the unclipped surrogate objective and the clipped surrogate objective. In the case of the critic and reward losses ($L^{v}_t$ and $L^{r}_t$), this means that updates only take place when the estimate from the new parameters ($\theta$) is farther from the target ($R_{t+i}$ in Equation 8 and $r_{t+i-1}$ in Equation 9) than the clipped estimate. When the new parameters' estimate is closer, the max in Equations 8 and 9 selects the clipped surrogate. The gradient of the clipped surrogate with respect to the parameters ($\theta$) is then zero, and thus no parameters change. This is desirable because it attempts to prevent destructive updates that push estimates made by $\theta$ far from estimates made by $\theta_{old}$.
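
A hedged sketch of the depth-based losses in Equations (7)-(9), assuming the clipped quantities have already been formed as in Section 3.4; the tensor shapes and default coefficients are our reading of the text rather than the released code.

```python
import torch

def ppn_losses(ratio, clipped_ratio, advantages,
               pred_values, clipped_values, returns,
               pred_rewards, clipped_rewards, env_rewards,
               c_v=0.5, c_r=0.5):
    """Depth-based PPN loss: clipped policy surrogate, clipped critic, clipped reward.

    ratio, clipped_ratio, advantages:           shape (d, batch)   -- per-depth policy terms
    pred_values, clipped_values, returns:       shape (d+1, batch) -- one value per plan state
    pred_rewards, clipped_rewards, env_rewards: shape (d, batch)
    Entropy bonus omitted for brevity.
    """
    # Policy: maximum of the unclipped and clipped (negated) surrogates at each depth.
    policy_loss = torch.max(-ratio * advantages,
                            -clipped_ratio * advantages).mean()
    # Critic and reward: the max only updates when the new estimate is farther
    # from the target than the clipped estimate; otherwise the clipped term,
    # whose clamp has saturated, contributes no gradient.
    value_loss = torch.max((pred_values - returns) ** 2,
                           (clipped_values - returns) ** 2).mean()
    reward_loss = torch.max((pred_rewards - env_rewards) ** 2,
                            (clipped_rewards - env_rewards) ** 2).mean()
    return policy_loss + c_v * value_loss + c_r * reward_loss
```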

Remark 1

PPN can be reduced to PPO2 if the depth is one ($d = 1$), the reward coefficient $c_r = 0$, and the value loss is applied only at the root of the rollout.

It is possible to use different values of depth for the different objectives, but unless otherwise noted they all share a single depth $d$.

3.4 Clipping

We present two approaches to clipping, called grounded and ungrounded clipping. Here, grounded and ungrounded refer to whether the clipping target has access to the ground-truth observation ($x_{t+i}$). Grounded clipping offers a less strict clipping region, while ungrounded clipping is more aligned with the theoretical justifications found in Conservative Policy Iteration [7], TRPO [15], and PPO [17]. Our clipped objectives are advantageous for two reasons. First, they allow for auxiliary reward and depth-based updates. Second, they allow us to share parameters between the transition, policy, value, reward, and embedding networks. Both are essential for learning the implicit transition model and are helpful for feature learning.

3.4.1 Grounded Clipping

The clipping region is grounded with respect to both the action trajectory ($a_t, \ldots, a_{t+d-1}$) and the latent state ($s_{t+i}$, obtained by encoding the ground-truth observation). The three grounded clipped estimates are shown in Equations (10), (11), and (12):

$$\bar{\rho}^i_t = \mathrm{clip}\!\left(\frac{\pi_\theta(a_{t+i} \mid \hat{s}^i_t)}{\pi_{\theta_{old}}(a_{t+i} \mid s_{t+i})},\, 1-\epsilon,\, 1+\epsilon\right) \qquad (10)$$
$$\bar{v}^i_t = V_{\theta_{old}}(s_{t+i}) + \mathrm{clip}\!\left(\hat{v}^i_t - V_{\theta_{old}}(s_{t+i}),\, -\epsilon,\, \epsilon\right) \qquad (11)$$
$$\bar{r}^i_t = r_{\theta_{old}}(s_{t+i-1}, a_{t+i-1}) + \mathrm{clip}\!\left(\hat{r}^i_t - r_{\theta_{old}}(s_{t+i-1}, a_{t+i-1}),\, -\epsilon,\, \epsilon\right) \qquad (12)$$

The clipping region is based on the grounded estimates from the old parameters ($\theta_{old}$) rather than on predicted estimates from the old parameters.
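
For concreteness, the grounded clipped estimates might be formed as below; pairing each prediction with the old parameters' grounded estimate is our interpretation of Equations (10)-(12), not a verbatim transcription.

```python
import torch

def grounded_clip_estimate(pred, old_grounded, eps=0.2):
    """Clip a new value/reward prediction to stay near the OLD parameters'
    estimate computed from the grounded observation (PPO2-style clipping)."""
    return old_grounded + torch.clamp(pred - old_grounded, -eps, eps)

def grounded_clip_ratio(new_log_prob, old_grounded_log_prob, eps=0.2):
    """Clip the ratio between the new policy (evaluated at the predicted abstract
    state) and the old policy (evaluated at the grounded abstract state)."""
    ratio = torch.exp(new_log_prob - old_grounded_log_prob)
    return torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
```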

3.4.2 Ungrounded Clipping

The clipping region in this case is grounded with respect to only the action trajectory, but ungrounded with respect to the latent state. The ungrounded clipping estimates are defined as:

$$\bar{v}^i_t = \hat{v}^i_{t,\theta_{old}} + \mathrm{clip}\!\left(\hat{v}^i_t - \hat{v}^i_{t,\theta_{old}},\, -\epsilon,\, \epsilon\right) \qquad (13)$$
$$\bar{r}^i_t = \hat{r}^i_{t,\theta_{old}} + \mathrm{clip}\!\left(\hat{r}^i_t - \hat{r}^i_{t,\theta_{old}},\, -\epsilon,\, \epsilon\right) \qquad (14)$$

where $\hat{v}^i_{t,\theta_{old}}$ and $\hat{r}^i_{t,\theta_{old}}$ are the old parameters' ungrounded (predicted) estimates. Notice the change in how $\bar{\rho}^i_t$ is defined: we first clip the ratio between the new and old ungrounded policies to lie within $[1-\epsilon, 1+\epsilon]$, and then perform importance sampling to account for the advantage being calculated with respect to the rollout policy.

Environment                    observation dimensions    action dimensions
Hopper-v2                      11                        3
Walker2d-v2                    17                        6
Swimmer-v2                     8                         2
HalfCheetah-v2                 17                        6
InvertedPendulum-v2            4                         1
InvertedDoublePendulum-v2      11                        1
Humanoid-v2                    376                       17
Ant-v2                         111                       8
Table 1: Summary of the MuJoCo environments used.

4 Experiments

Our experiments seek to answer the following questions: (1) Is clipping necessary? If so, which type of clipping performs best (Section 4.2)? (2) Does PPN outperform model-free baselines (Section 4.3)? (3) What effect does depth have on performance (Section 4.4)? (4) Is the implicit transition module actually predicting abstract states that are useful to the policy (Section 4.5)?

4.1 Experimental Setup

Our experiments compare PPN to PPO2 [3] on the OpenAI Gym MuJoCo environments [22, 1]. Preprocessing was done similarly to that of PPO [17]. Both PPO2 and PPN were implemented in PyTorch [12].

The comparison against PPO2 is run over all 8 environments listed in Table 1. Because returns are subject to high variance, we run tests over 15 seeds (3-5 times more than the related works [17, 11]); due to computational constraints, InvertedDoublePendulum-v2 uses only 5 seeds. Also due to computational constraints, the other experiments are run over 5 seeds on the Walker2d-v2 and Ant-v2 environments.

Our PPO2 implementation uses the same hyperparameters as the OpenAI Baselines implementation [3]. The largest difference in our PPO2 implementation is that we do not perform orthogonal initialization. We did not include orthogonal initialization because it was not mentioned in the original PPO paper and we did not notice any clear performance benefit. For example, our implementation achieves roughly double the returns on the HalfCheetah-v2 environment compared with the results reported by the Baselines [3] implementation of PPO2 using orthogonal initialization.

Our PPN implementation uses similar hyperparameters: 2 fully connected layers for the embedding, 2 fully connected residual layers with unit-length projections of the abstract state [4] for the transition module, and 1 fully connected layer each for the policy mean, value, and reward. All hidden layers have 128 hidden units and nonlinear activations. In practice we use Huber losses instead of L2 losses, as was done in related implicit model-based work [11].

4.2 Clipping

We first look into the effect that (grounded) clipping of the network heads has on the returns gathered by PPN agents. Here we test whether a strong policy can still be learned if the other network heads (value and reward) are not clipped.

(a) Walker2d-v2
(b) Ant-v2
Figure 2: Comparison of returns with and without (grounded) clipping of reward and critic.

Similarly to Ilyas et al. [6], we fit a normal distribution to the returns achieved across the random seeds. We then compare points on the cumulative distribution functions (CDF) that correspond to a chosen return threshold for each of Ant-v2 and Walker2d-v2.

In Figures 2(a) and 2(b) we can see that clipping all the network heads is imperative for learning a useful policy; as stated in Section 3.3, this is because all the heads share parameters ($\theta$) with the policy. Additionally, we look into which type of clipping performs best. For the most part, Table 2 shows that grounded clipping offers the most robust returns. For all other PPN experiments we use the grounded clipping scheme.

Figure 3: Results on 1 million step MuJoCo benchmark. Dark lines represent the mean return and the shaded region is a standard deviation above and below the mean.
Environment                    Grounded     Ungrounded
Hopper-v2                      2172.28      1356.14
Walker2d-v2                    2937.20      1717.23
Swimmer-v2                     83.22        85.27
HalfCheetah-v2                 3509.34      3485.59
InvertedPendulum-v2            996.44       998.47
InvertedDoublePendulum-v2      4336.93      4071.19
Humanoid-v2                    574.15       676.31
Ant-v2                         1602.15      1566.06
Table 2: Returns using grounded and ungrounded clipping.

4.3 Baseline Comparison

To test our model, we benchmark against PPO2 on the environments in Table 1. As is done in related works [4], we include two depth settings in our baseline comparison. However, we note that it is possible that larger depth values could perform better on other environments.

As can be seen in Figure 3, PPN finds a better, or at least comparable, policy in all of the environments. We notice that PPN tends to do well in complex environments such as Ant-v2. Humanoid-v2 is an exception to this observation; perhaps this is because Humanoid-v2's observation dimensionality (376) is far larger than the latent space (128). Additionally, we notice that the optimal depth is environment dependent, which is studied further in Section 4.4.

4.4 Depth

In this section, we explore the effect depth has on the returns gathered by PPN agents. Increasing the depth forces the agent to learn an abstract state that contains information relevant to longer-term environment dynamics.

(a) Walker2d-v2
(b) Ant-v2
Figure 4: Returns with respect to depth values of 1, 2, 5, and 10.

As seen in Figure 4, increasing the depth ($d$) offers performance improvements, but only up to a certain point. As the depth grows, we become more reliant on having a good transition function, which eventually leads to a worse policy.

In Walker2d-v2 (Figure 4(a)) we can clearly see that a depth of 2 offers performance gains over a depth of 1. However, beyond this point returns decrease as depth increases. We suspect that the optimal depth for Walker2d-v2 may be smaller than for Ant-v2 because the implicit transition module is less accurate; a similar conclusion can be drawn from our observations in Section 4.5. Optimal depth is a recurring issue in implicit model-based approaches [11, 4].

4.5 Transition Ablation

(a) Walker2d-v2
(b) Ant-v2
Figure 5: Returns with respect to three different action selection approaches.

We are curious whether the implicit transition module is actually predicting abstract states that resemble reality closely enough to be useful to the policy. To test this we perform an ablation study of 3 different action-selection schemes for a trained Policy Prediction Network. The first, Model Predictive Control (MPC), represents a perfect implicit transition module, as the policy in this case always has access to the ground-truth observation; this is the standard MPC approach where only the first action of each plan is followed and the rest are replanned. The second, "trajectory", represents the strength of the transition module: every $d$ steps a new trajectory is generated by recursively calling the core module with the predicted abstract states and actions sampled from the predicted policy. The third, "repeat", represents a meaningless transition module: every $d$ steps a new action is generated by the policy and repeated for $d$ steps. If the implicit transition module is bad, we expect the returns from trajectory and repeat to be more or less the same. If the implicit transition module is good, we expect returns somewhere between the MPC and repeat curves. Note that all 3 approaches are trained in the same manner and have exactly the same parameters.
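
The three action-selection schemes can be summarized with the PPNCore sketch from Section 3.1; this is our own summary of the ablation, with hypothetical helper names, and it uses the deterministic policy mean for brevity.

```python
def act(core, env_obs, plan, step_in_plan, d, mode):
    """One action per call for the three ablation modes; `plan` caches decisions.

    'mpc':        always act from the current grounded observation
    'trajectory': every d steps, re-plan d actions through the implicit model
    'repeat':     every d steps, pick one action and repeat it for d steps
    """
    if mode == "mpc":
        return core.policy_mu(core.encode(env_obs))
    if step_in_plan == 0:                        # time to build a new plan
        state = core.encode(env_obs)             # grounded abstract state
        action = core.policy_mu(state)
        plan.clear()
        for _ in range(d):
            plan.append(action)
            if mode == "trajectory":
                _, _, _, state = core.step(state, action)  # predicted next state
                action = core.policy_mu(state)             # predicted policy
            # 'repeat' keeps the same action for every slot in the plan
    return plan[step_in_plan]
```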

In Figures 5(a) and 5(b) we see that the trajectory approach performs much better than repeat but not quite as well as MPC. This is interesting because the trajectory approach only has access to the grounded observation at the start and must simulate $d$ steps into the future, whereas in the MPC approach the action taken at time $t$ always has access to the observation from time $t$. These results show that the implicit transition module is indeed useful and could be used in future work for decision-time planning.

5 Conclusion

This work introduced a learning scheme for Policy Gradient methods that integrates model-free and model-based learning and reduces sample complexity at no extra computational cost at rollout time. Additionally, PPN's implicit transition model acts as a first step towards decision-time planning with tree-structured architectures in continuous action spaces. It is interesting to note that while we only explored continuous action spaces in this work, it is also possible to extend the approach to discrete action spaces.

For future work, we would like to adapt PPNs to be less sensitive to planning depth and to leverage the transition model for decision-time planning. Decision-time planning is interesting but not straightforward to apply because it changes the behavior policy distribution in ways that are hard to measure.

References

  • [1] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba (2016) Openai gym. arXiv preprint arXiv:1606.01540. Cited by: §4.1.
  • [2] S. Chiappa, S. Racaniere, D. Wierstra, and S. Mohamed (2017) Recurrent environment simulators. arXiv preprint arXiv:1704.02254. Cited by: §1, §2.4.
  • [3] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, Y. Wu, and P. Zhokhov (2017) OpenAI baselines. GitHub. Note: https://github.com/openai/baselines Cited by: §2.3, §4.1, §4.1.
  • [4] G. Farquhar, T. Rocktaeschel, M. Igl, and S. Whiteson (2018) TreeQN and ATreeC: differentiable tree planning for deep reinforcement learning. In International Conference on Learning Representations, External Links: Link Cited by: §1, §2.4, §2.4, §2.4, §4.1, §4.3, §4.4.
  • [5] D. Ha and J. Schmidhuber (2018) Recurrent world models facilitate policy evolution. In Advances in Neural Information Processing Systems, pp. 2455–2467. Cited by: §1, §2.4.
  • [6] A. Ilyas, L. Engstrom, S. Santurkar, D. Tsipras, F. Janoos, L. Rudolph, and A. Madry (2018) Are deep policy gradient algorithms truly policy gradient algorithms?. arXiv preprint arXiv:1811.02553. Cited by: §2.3, §4.2.
  • [7] S. Kakade and J. Langford (2002) Approximately optimal approximate reinforcement learning. In ICML, Vol. 2, pp. 267–274. Cited by: §2.2, §3.4.
  • [8] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937. Cited by: §1.
  • [9] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529. Cited by: §1.
  • [10] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh (2015) Action-conditional video prediction using deep networks in atari games. In Advances in neural information processing systems, pp. 2863–2871. Cited by: §1, §2.4.
  • [11] J. Oh, S. Singh, and H. Lee (2017) Value prediction network. In Advances in Neural Information Processing Systems, pp. 6118–6128. Cited by: §1, §1, §2.4, §2.4, §2.4, §3.1, §4.1, §4.1, §4.4.
  • [12] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §4.1.
  • [13] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell (2017) Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning (ICML), Vol. 2017. Cited by: §2.4, §2.4.
  • [14] J. Schmidhuber (2015) On learning to think: algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249. Cited by: §2.4.
  • [15] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz (2015) Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897. Cited by: §2.3, §3.4.
  • [16] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel (2016) High-dimensional continuous control using generalized advantage estimation. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §2.1, §2.2.
  • [17] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. CoRR abs/1707.06347. External Links: Link, 1707.06347 Cited by: §1, §1, §2.3, §3.2, §3.3, §3.4, §4.1, §4.1.
  • [18] D. Silver, H. van Hasselt, M. Hessel, T. Schaul, A. Guez, T. Harley, G. Dulac-Arnold, D. P. Reichert, N. C. Rabinowitz, A. Barreto, and T. Degris (2017) The predictron: end-to-end learning and planning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 3191–3199. External Links: Link Cited by: §1, §2.4, §2.4.
  • [19] R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. MIT press. Cited by: §2.1, item 4, §3.2.
  • [20] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour (2000) Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057–1063. Cited by: §2.1.
  • [21] R. S. Sutton (1990) Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine Learning Proceedings 1990, pp. 216–224. Cited by: §1.
  • [22] E. Todorov, T. Erez, and Y. Tassa (2012) Mujoco: a physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. Cited by: §2.4, §4.1.