1 Introduction
Model-based reinforcement learning (RL) approaches attempt to learn a model that predicts future observations conditioned on actions; such a model can be used to simulate the real environment and perform multi-step lookaheads for planning. We will call such a model an observation-prediction model to distinguish it from another form of model introduced in this paper. Building an accurate observation-prediction model is often very challenging when the observation space is large [oh2015action; Finn2016UnsupervisedLF; Kalchbrenner2016VideoPN; Chiappa2017RecurrentES] (e.g., high-dimensional pixel-level image frames), and even more difficult when the environment is stochastic. Therefore, a natural question is whether it is possible to plan without predicting future observations.
In fact, raw observations may contain information unnecessary for planning, such as dynamically changing backgrounds in visual observations that are irrelevant to their value/utility. The starting point of this work is the premise that what planning truly requires is the ability to predict the rewards and values of future states. An observation-prediction model relies on its predictions of observations to predict future rewards and values. What if we could predict future rewards and values directly, without predicting future observations? Such a model could be easier to learn for complex domains and more flexible in dealing with stochasticity. In this paper, we address the problem of learning and planning with a value-prediction model that can directly generate/predict the value/reward of future states without generating future observations.
Our main contribution is a novel neural network architecture we call the Value Prediction Network (VPN). The VPN combines model-based RL (i.e., learning the dynamics of an abstract state space sufficient for computing future rewards and values) and model-free RL (i.e., mapping the learned abstract states to rewards and values) in a unified framework. To train a VPN, we propose a combination of temporal-difference search [Silver2012TemporaldifferenceSI] (TD search) and n-step Q-learning [mnih2016asynchronous]. In brief, VPNs learn to predict values via Q-learning and rewards via supervised learning. At the same time, VPNs perform lookahead planning to choose actions and compute bootstrapped target Q-values.
Our empirical results on a 2D navigation task demonstrate the advantage of VPN over model-free baselines (e.g., Deep Q-Network [mnih2015human]). We also show that VPN is more robust to stochasticity in the environment than an observation-prediction model approach. Furthermore, our VPN outperforms DQN on several Atari games [bellemare2012arcade] even with short-lookahead planning, which suggests that our approach can be useful for learning better abstract-state representations and reducing sample complexity.
2 Related Work
Model-based Reinforcement Learning.
Dyna-Q [Sutton1990IntegratedAF; Sutton2008DynaStylePW; yao2009multi] integrates model-free and model-based RL by learning an observation-prediction model and using it to generate samples for Q-learning in addition to the model-free samples obtained by acting in the real environment. Gu et al. [Gu2016ContinuousDQ] extended these ideas to continuous control problems. Our work is similar to Dyna-Q in the sense that planning and learning are integrated into one architecture. However, VPNs perform a lookahead tree search to choose actions and compute bootstrapped targets, whereas Dyna-Q uses a learned model to generate imaginary samples. In addition, Dyna-Q learns a model of the environment separately from a value function approximator. In contrast, the dynamics model in VPN is combined with the value function approximator in a single neural network and is indirectly learned from reward and value predictions through backpropagation.
Another line of work [oh2015action; Chiappa2017RecurrentES; Guo2016DeepLF; stadie2015incentivizing] uses observation-prediction models not for planning, but for improving exploration. A key distinction from these prior works is that our method learns abstract-state dynamics not to predict future observations, but to predict future rewards/values. For continuous control problems, deep learning has been combined with model predictive control (MPC) [Finn2016DeepVF; Lenz2015DeepMPCLD; raiko2009variational], a specific way of using an observation-prediction model. In cases where the observation-prediction model is differentiable with respect to continuous actions, backpropagation can be used to find the optimal action [Mishra2017PredictionAC] or to compute value gradients [Heess2015LearningCC]. In contrast, our work focuses on learning and planning using lookahead for discrete control problems.

Our VPNs are related to Value Iteration Networks [Tamar2016ValueIN] (VINs), which perform value iteration (VI) by approximating the Bellman update through a convolutional neural network (CNN). However, VINs perform VI over the entire state space, which in practice requires that 1) the state space is small and representable as a vector with each dimension corresponding to a separate state and 2) the states have a topology with local transition dynamics (e.g., a 2D grid). VPNs do not have these limitations and are thus more generally applicable, as we show empirically in this paper.
VPN is close to and in part inspired by the Predictron [Silver2016ThePE] in that a recurrent neural network (RNN) acts as a transition function over abstract states. VPN can be viewed as a grounded Predictron in that each rollout corresponds to a transition in the environment, whereas each rollout in the Predictron is purely abstract. In addition, Predictrons are limited to uncontrolled settings and thus to policy evaluation, whereas our VPNs can learn an optimal policy in controlled settings.

Model-free Deep Reinforcement Learning.
Mnih et al. [mnih2015human] proposed the Deep Q-Network (DQN) architecture, which learns to estimate Q-values using deep neural networks. Many variants of DQN have been proposed for learning better state representations [Wang2016DuelingNA; Kulkarni2016DeepSR; Hausknecht2015DeepRQ; Oh2016ControlOM; Vezhnevets2016StrategicAW; Parisotto2017NeuralMS], including the use of memory-based networks for handling partial observability [Hausknecht2015DeepRQ; Oh2016ControlOM; Parisotto2017NeuralMS], estimating both state-values and advantage-values as a decomposition of Q-values [Wang2016DuelingNA], learning successor state representations [Kulkarni2016DeepSR], and learning several auxiliary predictions in addition to the main RL values [Jaderberg2016ReinforcementLW]. Our VPN can be viewed as a model-free architecture that 1) decomposes the Q-value into reward, discount, and the value of the next state and 2) uses multi-step reward/value predictions as auxiliary tasks to learn a good representation. A key difference from the prior work listed above is that our VPN learns to simulate future rewards/values, which enables planning. Although STRAW [Vezhnevets2016StrategicAW] can maintain a sequence of future actions using an external memory, it cannot explicitly perform planning by simulating future rewards/values.

Monte-Carlo Planning.
Monte-Carlo Tree Search (MCTS) methods [kocsis2006bandit; browne2012survey] have been used for complex search problems, such as the game of Go, where a simulator of the environment is already available and thus does not have to be learned. Most recently, AlphaGo [silver2016mastering] introduced a value network that directly estimates the value of a state in Go in order to better approximate the values of leaf-node states during tree search. Our VPN takes a similar approach by predicting the values of abstract future states during tree search using a value function approximator. Temporal-difference search [Silver2012TemporaldifferenceSI] (TD search) combined TD learning with MCTS by computing target values for a value function approximator through MCTS. Our algorithm for training VPN can be viewed as an instance of TD search, but it learns the dynamics of future rewards/values instead of being given a simulator.
3 Value Prediction Network
The value prediction network is developed for semi-Markov decision processes (SMDPs). Let x_t be the observation, or a history of observations for partially observable MDPs (henceforth referred to as just observation), and let o_t be the option [sutton1999between; Stolle2002LearningOI; precup2000temporal] at time t. Each option maps observations to primitive actions, and the following Bellman equation holds for all policies π: Q^π(x_t, o_t) = E[Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V^π(x_{t+k})], where γ is a discount factor, r_t is the immediate reward at time t, and k is the number of time steps taken by the option before terminating in observation x_{t+k}.

A VPN not only learns an option-value function Q_θ(x, o) through a neural network parameterized by θ, like model-free RL, but also learns the dynamics of the rewards/values to perform planning. We describe the architecture of VPN in Section 3.1. In Section 3.2, we describe how to perform planning using VPN. Section 3.3 describes how to train VPN in a Q-learning-like framework [watkins1992q].
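The option-level (SMDP) Bellman equation referenced above can be written out as follows; the grouping below is a reformulation (not an additional result) that makes explicit the option-reward and option-discount the VPN's outcome module will learn to predict:

```latex
Q^{\pi}(x_t, o_t)
  = \mathbb{E}\!\left[\sum_{i=0}^{k-1} \gamma^{i} r_{t+i}
      + \gamma^{k}\, V^{\pi}(x_{t+k})\right]
  = \mathbb{E}\!\left[\, r + \bar{\gamma}\, V^{\pi}(x_{t+k}) \,\right],
\quad \text{where } r = \sum_{i=0}^{k-1} \gamma^{i} r_{t+i}
\ \text{ and } \ \bar{\gamma} = \gamma^{k}.
```

Here r is the discounted sum of immediate rewards accrued by the option and γ̄ is the discount induced by the option's duration; the value module predicts V of the state at option termination.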
3.1 Architecture
The VPN consists of the following modules, parameterized by θ = {θ^enc, θ^value, θ^out, θ^trans}:

Encoding: f^enc_θ : x → s      Value: f^value_θ : s → V_θ(s)
Outcome: f^out_θ : s, o → r, γ      Transition: f^trans_θ : s, o → s′
- Encoding module maps the observation x to the abstract state s = f^enc_θ(x) using neural networks (e.g., a CNN for visual observations). Thus, s is an abstract-state representation learned by the network (not an environment state, nor necessarily an approximation to one).

- Value module estimates the value of the abstract state: V_θ(s) = f^value_θ(s). Note that the value module is a function of the abstract state, not of the observation.

- Outcome module predicts the option-reward r for executing the option o at abstract state s. If the option takes k primitive actions before termination, the outcome module should predict the discounted sum of the k immediate rewards as a scalar. The outcome module also predicts the option-discount γ induced by the number of steps taken by the option.

- Transition module transforms the abstract state to the next abstract state s′ = f^trans_θ(s, o) in an option-conditional manner.
Figure 0(a) illustrates the core module, which performs a 1-step rollout by composing the above modules: f^core_θ : s, o → r, γ, V_θ(s′), s′, where s′ = f^trans_θ(s, o) and (r, γ) = f^out_θ(s, o). The core module takes an abstract state and option as input and makes separate option-conditional predictions of the option-reward (henceforth, reward), the option-discount (henceforth, discount), and the value of the abstract state at option termination. By combining the predictions, we can estimate the Q-value as follows: Q_θ(s, o) = r + γ V_θ(s′). In addition, the VPN recursively applies the core module to predict the sequence of future abstract states as well as rewards and discounts given an initial abstract state and a sequence of options, as illustrated in Figure 0(b).
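A minimal sketch of the core module's 1-step rollout, using hypothetical linear stand-ins (W_trans, W_value, W_reward and a fixed discount) in place of the paper's learned CNN and fully-connected modules:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NUM_OPTIONS, GAMMA = 8, 4, 0.9

# Hypothetical stand-in parameters for the VPN modules; in the paper
# these are learned CNN / fully-connected layers.
W_trans = rng.normal(size=(NUM_OPTIONS, DIM, DIM)) * 0.1
W_value = rng.normal(size=DIM)
W_reward = rng.normal(size=(NUM_OPTIONS, DIM))

def value(s):
    """Value module: abstract state -> scalar value estimate V(s)."""
    return float(W_value @ s)

def outcome(s, o):
    """Outcome module: option-reward and option-discount for option o.
    The discount is fixed here for simplicity."""
    return float(W_reward[o] @ s), GAMMA

def transition(s, o):
    """Transition module: option-conditional weights plus a residual
    connection, so the network models the change of the abstract state."""
    return s + np.tanh(W_trans[o] @ s)

def core(s, o):
    """1-step rollout: (s, o) -> (r, gamma, s', V(s'))."""
    r, gamma = outcome(s, o)
    s_next = transition(s, o)
    return r, gamma, s_next, value(s_next)

def q_value(s, o):
    """Q(s, o) = r + gamma * V(s')."""
    r, gamma, _, v_next = core(s, o)
    return r + gamma * v_next
```

Recursively feeding `s_next` back into `core` yields the multi-step rollout of Figure 0(b).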
3.2 Planning
VPN has the ability to simulate the future and plan based on the simulated future abstract states. Although many existing planning methods (e.g., MCTS) can be applied to the VPN, we implement a simple planning method which performs rollouts using the VPN up to a certain depth d, henceforth denoted as the planning depth, and aggregates all intermediate value estimates as described in Algorithm 1 and Figure 2. More formally, given an abstract state s and an option o, the Q-value calculated from d-step planning is defined as:

Q^d_θ(s, o) = r + γ V^d_θ(s′)    (1)
V^d_θ(s) = V_θ(s)  if d = 1;    V^d_θ(s) = (1/d) V_θ(s) + ((d−1)/d) max_o′ Q^{d−1}_θ(s, o′)  if d > 1

where s′ = f^trans_θ(s, o) and (r, γ) = f^out_θ(s, o). Our planning algorithm is divided into two steps: expansion and backup. At the expansion step (see Figure 1(a)), we recursively simulate options up to a depth of d by unrolling the core module. At the backup step, we compute the weighted average of the direct value estimate V_θ(s) and max_o′ Q^{d−1}_θ(s, o′) to compute V^d_θ(s) (i.e., the value from d-step planning) in Equation 1. Note that V^d_θ(s) is the average over d possible value estimates. We compute the uniform average over all possible returns by using weights proportional to 1 and d−1 for V_θ(s) and max_o′ Q^{d−1}_θ(s, o′), respectively. Thus, V^d_θ(s) is the uniform average of expected returns along the path of the best sequence of options, as illustrated in Figure 1(b).
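The expansion/backup recursion of Equation 1, together with a b-best branching heuristic, can be sketched as follows; the module stand-ins below are hypothetical linear placeholders, not the paper's learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NUM_OPTIONS, GAMMA = 8, 4, 0.9

# Hypothetical stand-ins for the outcome/transition/value modules.
W_trans = rng.normal(size=(NUM_OPTIONS, DIM, DIM)) * 0.1
W_value = rng.normal(size=DIM)
W_reward = rng.normal(size=(NUM_OPTIONS, DIM))

def core(s, o):
    """1-step rollout: predict reward, discount, and next abstract state."""
    r = float(W_reward[o] @ s)
    s_next = s + np.tanh(W_trans[o] @ s)   # residual transition
    return r, GAMMA, s_next

def value(s):
    return float(W_value @ s)

def plan_q(s, o, d):
    """Expansion: Q^d(s, o) = r + gamma * V^d(s')."""
    r, gamma, s_next = core(s, o)
    return r + gamma * plan_value(s_next, d)

def plan_value(s, d, b=2):
    """Backup: V^d(s) = V(s) if d == 1, else the (1/d, (d-1)/d)-weighted
    average of V(s) and the best Q^{d-1}, i.e. the uniform average of the
    d expected returns along the best option sequence.  Only the b best
    options (ranked by 1-step Q) are expanded, per the pruning heuristic."""
    if d == 1:
        return value(s)
    best_b = sorted(range(NUM_OPTIONS), key=lambda o: -plan_q(s, o, 1))[:b]
    best = max(plan_q(s, o, d - 1) for o in best_b)
    return value(s) / d + (d - 1) * best / d
```

`plan_q(s, o, d)` is then maximized over the root options to choose an action, or used directly as a bootstrapped target.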
To reduce the computational cost, we simulate only the b best options at each expansion step, ranked by their 1-step Q-value estimates Q^1_θ(s, o). We also find that choosing only the best option after a certain depth does not compromise performance much, which is analogous to using a default policy in MCTS beyond a certain depth. This heuristic visits reasonably good abstract states during planning, though a more principled method such as UCT [kocsis2006bandit] could also be used to balance exploration and exploitation. This planning method is used for choosing options and computing target Q-values during training, as described in the following section.

3.3 Learning
VPN can be trained through any existing value-based RL algorithm for the value predictions, combined with supervised learning for the reward and discount predictions. In this paper, we present a modification of n-step Q-learning [mnih2016asynchronous] and TD search [Silver2012TemporaldifferenceSI]. The main idea is to generate trajectories by following an ε-greedy policy based on the planning method described in Section 3.2. Given an n-step trajectory x_1, o_1, r_1, γ_1, x_2, ..., x_{n+1} generated by the ε-greedy policy, k-step predictions are defined recursively as follows:

s_t^k = f^enc_θ(x_t)  if k = 0;    s_t^k = f^trans_θ(s_{t−1}^{k−1}, o_{t−1})  if k > 0.

Intuitively, s_t^k is the VPN's k-step prediction of the abstract state at time t predicted from x_{t−k} by following the options o_{t−k}, ..., o_{t−1} in the trajectory, as illustrated in Figure 3. By applying the value and outcome modules, the VPN can compute the k-step predictions of the value (v_t^k), the reward (r_t^k), and the discount (γ_t^k). The k-step prediction loss at step t is defined as:

L_t = Σ_{l=1}^{k} [ (R_t − v_t^l)^2 + (r_t − r_t^l)^2 + (log_γ γ_t − log_γ γ_t^l)^2 ]

where R_t = r_t + γ_t R_{t+1} is the target value, bootstrapped at the end of the trajectory by the Q-value computed by the d-step planning method described in Section 3.2. Intuitively, L_t accumulates losses over the 1-step to k-step predictions of values, rewards, and discounts. We find that applying log_γ for the discount prediction loss helps optimization, which amounts to computing the squared loss with respect to the number of steps.
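A toy sketch of the target computation and the accumulated multi-step prediction loss; the linear stand-in modules, the fixed discount, and names such as `n_step_returns` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, K, GAMMA = 6, 3, 0.9

# Hypothetical linear stand-ins for the VPN modules.
W_enc = rng.normal(size=(DIM, DIM)) * 0.3
W_trans = rng.normal(size=(DIM, DIM)) * 0.1
W_val = rng.normal(size=DIM) * 0.1
W_rew = rng.normal(size=DIM) * 0.1

encode = lambda x: np.tanh(W_enc @ x)            # f_enc
trans = lambda s, o: s + np.tanh(W_trans @ s)    # f_trans (option ignored here)
value = lambda s: float(W_val @ s)               # f_value
reward = lambda s, o: float(W_rew @ s)           # f_out (reward head only)

def n_step_returns(rs, bootstrap, gamma=GAMMA):
    """Targets R_t = r_t + gamma * R_{t+1}; the final bootstrap is, in the
    paper, the d-step planning Q-value computed with the target network."""
    R, out = bootstrap, []
    for r in reversed(rs):
        R = r + gamma * R
        out.append(R)
    return out[::-1]

def k_step_loss(x0, opts, rs, returns, k=K):
    """Accumulate squared losses over the 1-step ... k-step predictions of
    value and reward rolled forward from x0 (the discount term is omitted
    by assuming a fixed gamma)."""
    s, loss = encode(x0), 0.0
    for l in range(k):
        r_pred = reward(s, opts[l])   # (l+1)-step reward prediction
        s = trans(s, opts[l])         # (l+1)-step abstract state
        loss += (returns[l] - value(s)) ** 2 + (rs[l] - r_pred) ** 2
    return loss
```

In the full algorithm this loss would be differentiated with respect to the module parameters; here only the forward computation is shown.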
Our learning algorithm introduces two hyperparameters: the number of prediction steps (k) and the planning depth (d_train) used for choosing options and computing bootstrapped targets. We also make use of a target network, parameterized by θ⁻, which is synchronized with θ after a certain number of steps to stabilize training, as suggested by [mnih2016asynchronous]. The loss is accumulated over n steps, and the parameters are updated by computing the gradient of the accumulated loss: Δθ ∝ −∇_θ Σ_t L_t. The full algorithm is described in the Appendix.

3.4 Relationship to Existing Approaches
VPN is model-based in the sense that it learns an abstract-state transition function sufficient to predict rewards/discounts/values. Meanwhile, VPN can also be viewed as model-free in the sense that it learns to directly estimate the value of the abstract state. From this perspective, VPN exploits several auxiliary prediction tasks, such as reward and discount prediction, to learn a good abstract-state representation. An interesting property of VPN is that its planning ability is used both to compute the bootstrapped target and to choose options during Q-learning. Therefore, as VPN improves the quality of its future predictions, it can not only perform better during evaluation through its improved planning ability, but also generate more accurate target Q-values during training, which encourages faster convergence compared to conventional Q-learning.
4 Experiments
Our experiments investigated the following questions: 1) Does VPN outperform model-free baselines (e.g., DQN)? 2) What is the advantage of planning with a VPN over observation-based planning? 3) Is VPN useful for complex domains with high-dimensional sensory inputs, such as Atari games?
4.1 Experimental Setting
Network Architecture.
A CNN was used as the encoding module of VPN, and the transition module consists of one option-conditional convolution layer, which uses different weights depending on the option, followed by a few more convolution layers. We used a residual connection [He2016DeepRL] from the previous abstract state to the next abstract state so that the transition module learns the change of the abstract state. The outcome module is similar to the transition module except that it does not have a residual connection, and two fully-connected layers are used to produce the reward and discount. The value module consists of two fully-connected layers. The number of layers and hidden units vary depending on the domain; these details are given in the Appendix.

Implementation Details.
Our algorithm is based on asynchronous n-step Q-learning [mnih2016asynchronous] with n = 10 and 16 threads. The target network is synchronized every 10K steps. We used the Adam optimizer [Kingma2014AdamAM], and the best learning rate and its decay were chosen by grid search. The learning rate is multiplied by the decay factor every 1M steps. Our implementation is based on TensorFlow [Abadi2015TensorFlowLM].¹

VPN has four more hyperparameters: 1) the number of prediction steps (k) during training, 2) the plan depth (d_train) during training, 3) the plan depth (d_eval) during evaluation, and 4) the branching factor (b), which indicates the number of options to be simulated at each expansion step during planning. Unless otherwise stated, we used the same value for k, d_train, and d_eval throughout the experiments. VPN(d) denotes our model which learns to predict and simulate up to d-step futures during training and evaluation. The branching factor was set to b = 4 up to a depth of 3 and to b = 1 afterwards, which means that VPN simulates the 4 best options up to depth 3 and only the best option beyond that.

¹The code is available at https://github.com/junhyukoh/valuepredictionnetwork.
Baselines.
We compared our approach to the following baselines.

- DQN: This baseline directly estimates Q-values as its output and is trained through asynchronous n-step Q-learning. Unlike the original DQN, however, our DQN baseline takes an option as additional input and applies an option-conditional convolution layer on top of the last encoding convolution layer, which is very similar to our VPN architecture. (This architecture outperformed the original DQN architecture in our preliminary experiments.)

- VPN(1): This is identical to our VPN, with the same training procedure, except that it performs only a 1-step rollout to estimate the Q-value, as shown in Figure 0(a). It can be viewed as a variation of DQN that predicts the reward, discount, and value of the next state as a decomposition of the Q-value.

- OPN(d): We call this the Observation Prediction Network (OPN); it is similar to VPN except that it directly predicts future observations. More specifically, we train two independent networks: a model network, which predicts the reward, discount, and next observation, and a value network, which estimates the value from the observation. The training scheme is similar to our algorithm except that a squared loss over observation predictions is used to train the model network. This baseline performs d-step planning like VPN(d).
4.2 Collect Domain
Task Description.
We defined a simple but challenging 2D navigation task in which the agent should collect as many goals as possible within a time limit, as illustrated in Figure 5. In this task, the agent, goals, and walls are randomly placed for each episode. The agent has four options: move left/right/up/down to the first crossing branch or the end of the corridor in the chosen direction. The agent is given 20 steps for each episode and receives a positive reward when it collects a goal by moving on top of it and a time penalty for each step. Although it is easy to learn a suboptimal policy which collects nearby goals, finding the optimal trajectory in each episode requires careful planning because the optimal solution cannot be computed in polynomial time.

An observation is represented as a 3D tensor with binary values indicating the presence/absence of each object type. The time remaining is normalized to [0, 1] and is concatenated to the 3rd convolution layer of the network as a channel. We evaluated all architectures first in a deterministic environment and then investigated robustness in a stochastic environment separately. In the stochastic environment, each goal moves by one block with probability 0.3 at each step. In addition, each option can be repeated multiple times with probability 0.3. This makes it difficult to predict and plan the future precisely.
Overall Performance.
The results are summarized in Figure 6. To understand the quality of different policies, we implemented a greedy algorithm, which always collects the nearest goal first, and a shortest-path algorithm, which finds the optimal solution through exhaustive search assuming a deterministic environment. Note that even a small gap in reward can be qualitatively substantial, as indicated by the small gap between the greedy and shortest-path algorithms.
The results show that most architectures learned a better-than-greedy policy in both the deterministic and stochastic environments, except that the OPN baselines perform poorly in the stochastic environment. In addition, the performance of VPN improves as the plan depth increases, which implies that deeper predictions are reliable enough to provide more accurate value estimates of future states. As a result, VPN with 5-step planning, denoted 'VPN(5)', performs best in both environments.
Comparison to Model-free Baselines.
Our VPNs outperform the DQN and VPN(1) baselines by a large margin, as shown in Figure 6. Figure 5 (b-c) shows example trajectories of DQN and VPN(5) from the same initial state. Although DQN's behavior is reasonable, it ended up collecting one fewer goal than VPN(5). We hypothesize that the 6 convolution layers used by DQN and VPN(1) are not expressive enough to find the best route in each episode, because finding an optimal path requires a combinatorial search in this task. On the other hand, VPN can perform such a combinatorial search to some extent by simulating future abstract states, which gives it an advantage over model-free approaches on tasks that require careful planning.
Comparison to Observation-based Planning.
Compared to OPNs, which plan based on predicted observations, VPNs perform slightly better or equally well in the deterministic environment. We observed that OPNs can predict future observations very accurately because observations in this task are simple and the environment is deterministic. Nevertheless, VPNs learn faster than OPNs in most cases. We conjecture that it takes additional training steps for OPNs to learn to predict future observations. In contrast, VPNs learn to predict only the minimal but sufficient information for planning: the reward, discount, and value of future abstract states, which may be why VPNs learn faster than OPNs.
In the stochastic Collect domain, VPNs significantly outperform OPNs. We observed that OPNs tend to predict the average of possible future observations, E[x], because OPN is deterministic. Estimating values on such blurry predictions leads to estimating V(E[x]), which is different from the true expected value E[V(x)]. On the other hand, VPN is trained to approximate the true expected value because there is no explicit constraint or loss on the predicted abstract state. We hypothesize that this key distinction allows VPN to represent different modes of possible future states more flexibly in the abstract-state space. This result suggests that a value-prediction model can be more beneficial than an observation-prediction model when the environment is stochastic and building an accurate observation-prediction model is difficult.
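The V(E[x]) versus E[V(x)] distinction can be made concrete with a two-outcome toy example; the observations and value function below are illustrative, not from the paper:

```python
import numpy as np

# Two equally likely future observations (e.g., a goal that moved left or
# right); a deterministic observation-prediction model tends to predict
# their average, a "blurry" frame.
x_left, x_right = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_avg = 0.5 * (x_left + x_right)

# A nonlinear value function of the observation (e.g., proximity to the
# nearest goal).
V = lambda x: float(np.max(x))

true_expected_value = 0.5 * V(x_left) + 0.5 * V(x_right)  # E[V(x)]
value_of_average = V(x_avg)                               # V(E[x])

assert true_expected_value != value_of_average
```

Here E[V(x)] = 1.0 while V(E[x]) = 0.5, so estimating values on the averaged prediction systematically misjudges the state; a value-prediction model can instead regress E[V(x)] directly.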
Table 1: Average reward in the original and unseen Collect environments (FGs: fewer goals, MWs: more walls).

           |        Deterministic        |         Stochastic
           | Original |  FGs  |  MWs    | Original |  FGs  |  MWs
 Greedy    |    –     |   –   |   –     |    –     |   –   |   –
 Shortest  |    –     |   –   |   –     |    –     |   –   |   –
 DQN       |    –     |   –   |   –     |    –     |   –   |   –
 VPN(1)    |    –     |   –   |   –     |    –     |   –   |   –
 OPN(5)    |   9.30   | 5.45  |  8.36   |    –     |   –   |   –
 VPN(5)    |   9.29   | 5.43  |  8.31   |   8.11   | 4.45  |  7.46
Generalization Performance.
One advantage of the model-based RL approach is that it can generalize well to unseen environments as long as the dynamics of the environment remain similar. To see whether our VPN has this property, we evaluated all architectures on two types of previously unseen environments, with either a reduced number of goals (from 8 to 5) or an increased number of walls. It turns out that VPN is much more robust to the unseen environments than the model-free baselines (DQN and VPN(1)), as shown in Table 1. The model-free baselines perform worse than the greedy algorithm on unseen environments, whereas VPN still performs well. In addition, VPN generalizes as well as OPN, which can learn a near-perfect model in the deterministic setting, and VPN significantly outperforms OPN in the stochastic setting. This suggests that VPN has the good generalization property of model-based RL methods and is robust to stochasticity.
Effect of Planning Depth.
To further investigate the effect of planning depth in a VPN, we measured the average reward in the deterministic environment by varying the planning depth (d_eval) from 1 to 10 during evaluation, after training the VPN with a fixed number of prediction steps and planning depth (k and d_train), as shown in Figure 7. Since VPN does not learn to predict observations, there is no guarantee that it can plan deeper during evaluation (d_eval > d_train) than the planning depth used during training. Interestingly, however, the results in Figure 7 show that if d_train > 1, VPN achieves better performance during evaluation through deeper tree search (d_eval > d_train). We also tested a VPN trained with a larger number of prediction steps and found that an evaluation planning depth deeper than d_train achieved the best performance. Thus, with a suitably large number of prediction steps during training, our VPN is able to benefit from deeper planning during evaluation relative to the planning depth used during training. Figure 5 shows examples of good plans longer than the training planning depth found by a VPN. Another observation from Figure 7 is that the performance at planning depth 1 (d_eval = 1) degrades as the planning depth during training (d_train) increases. This means that a VPN can improve its value estimates through long-term planning at the expense of the quality of short-term planning.
4.3 Atari Games
To investigate how VPN handles complex visual observations, we evaluated it on several Atari games [bellemare2012arcade]. Unlike in the Collect domain, in Atari games most primitive actions have only small value consequences, and it is difficult to hand-design useful extended options. Nevertheless, we explored whether VPNs are useful in Atari games even with short-lookahead planning, using simple options that repeat the same primitive action over extended time periods via a frame-skip of 10.³ We preprocessed the game screen into grayscale images. All architectures take the last 4 frames as input. We doubled the number of hidden units of the fully-connected layer for DQN to approximately match the number of parameters. VPN learns to predict rewards and values but not discounts (since the discount is fixed), and was trained to make 3-option-step predictions for planning, which means that the agent predicts up to 0.5 seconds ahead in real time.

³Much of the previous work on Atari games has used a frame-skip of 4. Although a larger frame-skip generally makes training easier, it may make training harder in games that require more fine-grained control [Lakshminarayanan2017DynamicAR].
Table 2: Scores on Atari games.

        Frostbite  Seaquest  Enduro  Alien  Q*Bert  Ms. Pacman  Amidar  Krull  Crazy Climber
 DQN       3058      2951     326    1804   12592      2804       535   12438      41658
 VPN       3811      5628     382    1429   14517      2689       641   15930      54119
As summarized in Table 2 and Figure 8, our VPN outperforms the DQN baseline on 7 out of 9 Atari games and learned significantly faster than DQN on Seaquest, Q*Bert, Krull, and Crazy Climber. One possible reason why VPN outperforms DQN is that even 3-step planning is helpful for learning a better policy. Figure 9 shows an example of VPN's 3-step planning in Seaquest. Our VPN predicts reasonable values given different sequences of actions, which can potentially help choose better actions by looking into the short-term future. Another hypothesis is that the architecture of VPN itself, with its several auxiliary prediction tasks for multi-step future rewards and values, is useful for learning a good abstract-state representation even as a model-free agent. Finally, our algorithm, which performs planning to compute the target Q-value, can potentially speed up learning by generating more accurate targets, as it performs value backups multiple times from the simulated futures, as discussed in Section 3.4. These results show that our approach is applicable to complex visual environments without needing to predict observations.
5 Conclusion
We introduced Value Prediction Networks (VPNs), a new deep RL approach that integrates planning and learning while simultaneously learning the dynamics of abstract states, making option-conditional predictions of future rewards/discounts/values rather than future observations. Our empirical evaluations showed that VPNs outperform model-free DQN baselines in multiple domains and outperform traditional observation-based planning in a stochastic domain. An interesting future direction would be to develop methods that automatically learn the options that allow good planning in VPNs.
Acknowledgement
This work was supported by NSF grant IIS-1526059. Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and do not necessarily reflect the views of the sponsor.
References
 (1) M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Józefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
 (2) M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. arXiv preprint arXiv:1207.4708, 2012.
 (3) C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
 (4) S. Chiappa, S. Racaniere, D. Wierstra, and S. Mohamed. Recurrent environment simulators. In ICLR, 2017.
 (5) D.A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
 (6) C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.
 (7) C. Finn and S. Levine. Deep visual foresight for planning robot motion. In ICRA, 2017.
 (8) S. Gu, T. P. Lillicrap, I. Sutskever, and S. Levine. Continuous deep qlearning with modelbased acceleration. In ICML, 2016.
 (9) X. Guo, S. P. Singh, R. L. Lewis, and H. Lee. Deep learning for reward design to improve monte carlo tree search in atari games. In IJCAI, 2016.
 (10) M. Hausknecht and P. Stone. Deep recurrent qlearning for partially observable MDPs. arXiv preprint arXiv:1507.06527, 2015.
 (11) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
 (12) N. Heess, G. Wayne, D. Silver, T. P. Lillicrap, Y. Tassa, and T. Erez. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.
 (13) M. Jaderberg, V. Mnih, W. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In ICLR, 2017.
 (14) N. Kalchbrenner, A. van den Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and K. Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
 (15) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
 (16) L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In ECML, 2006.
 (17) T. D. Kulkarni, A. Saeedi, S. Gautam, and S. Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016.
 (18) A. S. Lakshminarayanan, S. Sharma, and B. Ravindran. Dynamic action repetition for deep reinforcement learning. In AAAI, 2017.
 (19) I. Lenz, R. A. Knepper, and A. Saxena. DeepMPC: Learning deep latent features for model predictive control. In RSS, 2015.
 (20) N. Mishra, P. Abbeel, and I. Mordatch. Prediction and control with temporal segment models. In ICML, 2017.
 (21) V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.
 (22) V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Humanlevel control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
 (23) J. Oh, V. Chockalingam, S. Singh, and H. Lee. Control of memory, active perception, and action in Minecraft. In ICML, 2016.
 (24) J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.
 (25) E. Parisotto and R. Salakhutdinov. Neural map: Structured memory for deep reinforcement learning. arXiv preprint arXiv:1702.08360, 2017.
 (26) D. Precup. Temporal abstraction in reinforcement learning. PhD thesis, University of Massachusetts, Amherst, 2000.
 (27) T. Raiko and M. Tornio. Variational Bayesian learning of nonlinear hidden state-space models for model predictive control. Neurocomputing, 72(16):3704–3712, 2009.
 (28) D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
 (29) D. Silver, R. S. Sutton, and M. Müller. Temporal-difference search in computer Go. Machine Learning, 87:183–219, 2012.
 (30) D. Silver, H. van Hasselt, M. Hessel, T. Schaul, A. Guez, T. Harley, G. Dulac-Arnold, D. Reichert, N. Rabinowitz, A. Barreto, and T. Degris. The predictron: End-to-end learning and planning. In ICML, 2017.
 (31) B. C. Stadie, S. Levine, and P. Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
 (32) M. Stolle and D. Precup. Learning options in reinforcement learning. In SARA, 2002.
 (33) R. S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In ICML, 1990.
 (34) R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999.
 (35) R. S. Sutton, C. Szepesvári, A. Geramifard, and M. H. Bowling. Dyna-style planning with linear function approximation and prioritized sweeping. In UAI, 2008.
 (36) A. Tamar, S. Levine, P. Abbeel, Y. Wu, and G. Thomas. Value iteration networks. In NIPS, 2016.
 (37) A. Vezhnevets, V. Mnih, S. Osindero, A. Graves, O. Vinyals, J. Agapiou, and K. Kavukcuoglu. Strategic attentive writer for learning macro-actions. In NIPS, 2016.
 (38) Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016.
 (39) C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
 (40) H. Yao, S. Bhatnagar, D. Diao, R. S. Sutton, and C. Szepesvári. Multi-step Dyna planning for policy evaluation and control. In NIPS, 2009.
Appendix A Comparison between VPN and DQN in the Deterministic Collect
Appendix B Comparison between VPN and OPN in the Stochastic Collect
Appendix C Examples of Planning on Atari Games
Appendix D Details of Learning
Algorithm 2 describes our algorithm for training the value prediction network (VPN). We observed that training the outcome module (reward and discount prediction) on additional data collected from a random policy slightly improves performance because it reduces a bias towards the agent's behavior. More specifically, we fill a replay memory with transitions from a random policy before training and sample transitions from the replay memory to train the outcome module. This procedure is described in Line 4 and Lines 20–24 of Algorithm 2. In our experiments, this method was used only for the Collect domain (not for Atari), generating 1M transitions from a random policy.
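The replay-prefill procedure described above can be sketched as follows. This is an illustrative sketch, not the authors' code; the environment interface (`reset`/`step`) and the function names are assumptions for the example.

```python
# Sketch of the outcome-module pretraining data collection: fill a
# replay memory with transitions from a uniform-random policy before
# training, then sample minibatches from it for the outcome module.
import random
from collections import deque


class ReplayMemory:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)


def prefill_with_random_policy(env, memory, num_transitions, num_actions):
    """Collect (s, a, r, s_next, done) transitions under a random policy."""
    s = env.reset()
    for _ in range(num_transitions):
        a = random.randrange(num_actions)
        s_next, r, done = env.step(a)
        memory.add((s, a, r, s_next, done))
        s = env.reset() if done else s_next
```

During training, minibatches sampled from this memory would be fed only to the outcome (reward/discount) module, which is what reduces the bias toward the agent's own behavior.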
Appendix E Details of Hyperparameters
Figure 13: Transition module used for the Collect domain. The first convolution layer uses different weights depending on the given option. A sigmoid activation function is applied to the last 1x1 convolution so that its output forms a mask, which is multiplied by the output of the 3rd convolution layer. Note that there is a residual connection from the input abstract state to the output abstract state; thus, the transition module learns the change between consecutive abstract states.

E.1 Collect
The encoding module of our VPN consists of Conv(32-3x3-1)-Conv(32-3x3-1)-Conv(64-4x4-2), where Conv(N-KxK-S) represents N filters of size KxK with a stride of S. The transition module is illustrated in Figure 13. It consists of OptionConv(64-3x3-1)-Conv(64-3x3-1)-Conv(64-3x3-1) and a separate Conv(64-1x1-1) for the mask, which is multiplied by the output of the 3rd convolution layer of the transition module. 'OptionConv' uses different convolution weights depending on the given option. We also used a residual connection from the previous abstract state to the next abstract state so that the transition module learns the difference between the two states. The outcome module consists of OptionConv(64-3x3-1)-Conv(64-3x3-1)-FC(64)-FC(2), where FC(N) represents a fully-connected layer with N hidden units. The value module consists of FC(64)-FC(1). The exponential linear unit (ELU) [5] was used as the activation function in all architectures.

Our DQN baseline consists of the encoding module followed by the transition module followed by the value module. Thus, the overall architecture is very similar to VPN except that it does not have the outcome module. To match the number of parameters, we used 256 hidden units for DQN's value module. We found that this architecture outperforms the original DQN architecture [22] on the Collect domain and several Atari games.
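As an aid for reading the Conv(N-KxK-S) notation used above, the helper below (our own sketch, not from the paper) tracks output spatial sizes and parameter counts through such a convolution stack, assuming no padding:

```python
# Helper (illustrative, not from the paper) for the Conv(N-KxK-S)
# notation: track output spatial size and parameter count through a
# stack of no-padding convolution layers.

def conv_out(size, k, s):
    """Spatial output size of a no-padding convolution."""
    return (size - k) // s + 1


def stack_params(layers, in_channels, in_size):
    """layers: list of (num_filters, kernel, stride) tuples.
    Returns (total_params, (channels, height, width))."""
    params, c, hw = 0, in_channels, in_size
    for n, k, s in layers:
        params += n * (c * k * k + 1)  # weights + one bias per filter
        c, hw = n, conv_out(hw, k, s)
    return params, (c, hw, hw)
```

For example, `stack_params([(32, 3, 1), (32, 3, 1), (64, 4, 2)], 3, 10)` works out the size and parameter count of a Collect-style encoder for a hypothetical 3-channel 10x10 input; the input resolution of the Collect domain is not stated in this section.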
The model network of the OPN baseline has the same architecture as VPN except that it has an additional decoding module, which consists of Deconv(64-4x4-2)-Deconv(32-3x3-1)-Deconv(32-3x3-1). This module is applied to the predicted abstract state so that it can predict future observations. The value network of OPN has the same architecture as our DQN baseline.
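The decoding module mirrors the encoding module in reverse: a no-padding transposed convolution ("deconv") with the same kernel and stride undoes the spatial downsampling of the corresponding convolution layer, so the decoder maps abstract states back to observation-sized outputs. A small sanity check of this size arithmetic (our own sketch, not from the paper):

```python
# Illustrative check: for sizes where the convolution divides evenly,
# a no-padding transposed convolution with matching kernel/stride
# restores the input spatial size of the corresponding convolution.

def conv_out(size, k, s):
    return (size - k) // s + 1


def deconv_out(size, k, s):
    return (size - 1) * s + k
```

Note the inversion is exact only when `(size - k)` is divisible by the stride; otherwise output padding would be needed.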
A discount factor of was used, and the target network was synchronized after every 10K steps. The ε for the ε-greedy policy was linearly decreased from to over the first 1M steps.
E.2 Atari Games
The encoding module consists of Conv(16-8x8-4)-Conv(32-4x4-2), and the transition module consists of OptionConv(32-3x3-1)-Conv(32-3x3-1) with a mask and a residual connection as described above. The outcome module consists of OptionConv(32-3x3-1)-Conv(32-3x3-1)-FC(128)-FC(1), and the value module consists of FC(128)-FC(1). The DQN baseline has the same encoding module followed by the transition module and the value module, and we used 256 hidden units for the value module of DQN to approximately match the number of parameters. The other hyperparameters are the same as those used in the Collect domain except that a different discount factor was used.
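Reading the encoder as Conv(16-8x8-4)-Conv(32-4x4-2) and assuming the common 84x84 Atari frame preprocessing (the input size is not stated in this section) with no-padding convolutions, the abstract-state shape can be worked out:

```python
# Sketch (assumes 84x84 input frames and no-padding convolutions) of
# the abstract-state shape produced by the Atari encoding module
# Conv(16-8x8-4)-Conv(32-4x4-2).

def conv_out(size, k, s):
    return (size - k) // s + 1


h = conv_out(84, 8, 4)  # after Conv(16-8x8-4)
h = conv_out(h, 4, 2)   # after Conv(32-4x4-2)
abstract_state_shape = (32, h, h)
```

The transition module's 3x3 OptionConv/Conv layers would then need to preserve this spatial size (e.g., via padding) so that abstract states can be rolled forward repeatedly during planning.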