Model-based planning involves proposing sequences of actions, evaluating them under a model of the world, and refining these proposals to optimize expected rewards. Several key advantages of model-based planning over model-free methods are that models support generalization to states not previously experienced, help express the relationship between present actions and future rewards, and can resolve states which are aliased in value-based approximations. These advantages are especially pronounced in problems with complex and stochastic environmental dynamics, sparse rewards, and restricted trial-and-error experience. Yet even with an accurate model, planning is often very challenging because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan.
Existing techniques for model-based planning are most effective in small-scale problems, often require background knowledge of the domain, and use pre-defined solution strategies. Planning in discrete state and action spaces typically involves exploring the search tree for sequences of actions with high expected value, applying fixed heuristics to control complexity (e.g., A*, beam search, Monte Carlo Tree Search coulom2006efficient ). In problems with continuous state and action spaces the search tree is effectively infinite, so planning usually involves sampling sequences of actions to evaluate according to assumptions about smoothness and other regularities in the state and action spaces busoniu2010reinforcement ; hren2008optimistic ; munos2011optimistic ; weinstein2012bandit . While most modern methods for planning exploit the statistics of an individual episode, few can learn across episodes and be optimized for a given task. And even fewer attempt to learn the planning strategy itself: the transition model, the policy for choosing how to sample sequences of actions, and the procedures for aggregating the proposed actions and evaluations into a useful plan.
Here we introduce the Imagination-based Planner (IBP), a model-based agent which learns from experience all aspects of the planning process: how to construct, evaluate, and execute a plan. The IBP learns when to act versus when to imagine, and if imagining, how to select states and actions to evaluate which will help minimize its external task loss and internal resource costs. Through training, it effectively develops a planning algorithm tailored to the target problem. The learned algorithm allows it to flexibly explore, and exploit regularities in, the state and action spaces. The IBP framework can be applied to both continuous and discrete problems. In two experiments we evaluated a continuous IBP implementation on a challenging continuous control task, and a discrete IBP in a maze-solving problem.
Our novel contributions are:
A fully learnable model-based planning agent for continuous control.
An agent that learns to construct a plan via model-based imagination.
An agent which uses its model of the environment in two ways: for imagination-based planning and gradient-based policy optimization.
A novel approach for learning to build, navigate, and exploit “imagination trees”.
1.1 Related work
Planning with ground truth models has been studied heavily and led to remarkable advances. AlphaGo silver2016mastering , the world champion computer Go system, trains a policy to decide how to expand the search tree using a known transition model. Planning in continuous domains with fixed models must usually exploit background knowledge to sample actions efficiently busoniu2010reinforcement ; hren2008optimistic ; munos2011optimistic . Several recent efforts watter2015embed ; lenz2015deepmpc ; finn2017deep addressed planning in complex systems; however, the planning itself used classical methods, e.g., stochastic optimal control, trajectory optimization, and model-predictive control.
There have also been various efforts to learn to plan. The classic “Dyna” algorithm learns a model, which is then used to train a policy sutton1991dyna . Vezhnevets et al. vezhnevets2016strategic proposed a method that learns to initialize and update a plan, but which does not use a model and instead directly maps new observations to plan updates. The value iteration network Tamar2016 and the predictron silver2016predictron both train deep networks to implicitly plan via iterative rollouts. However, the former does not use a model, and the latter uses an abstract model which does not capture the world dynamics and was only applied to learning Markov reward processes rather than solving Markov decision processes (MDPs).
Our work also relates to classic research on meta-reasoning russell1991principles ; horvitz1988reasoning ; hay2012selecting , in which an internal MDP schedules computations, which carry costs, in order to solve a task. More recently, neural networks have been trained to perform “conditional” and “adaptive” computation Bengio2013 ; Bengio2015 ; graves2016adaptive , which results in a dynamic computational graph.
Recently, Fragkiadaki2015 trained a “visual imagination” model to control simulated billiards systems, though their system did not learn to plan. Our IBP was most inspired by Hamrick et al.’s “imagination-based metacontroller” (IBMC) hamrick2017metacontrol , which learned an adaptive optimization policy for one-shot decision-making in contextual bandit problems. Our IBP, however, learns an adaptive planning policy in the more general and challenging class of sequential decision-making problems. Also similar to our work is the I2A agent weber2017imagination , which examines in detail how to deal with imperfect, complex models of the world, operating on pixels in discrete sequential decision-making problems.
The definition of planning we adopt here involves an agent proposing sequences of actions, evaluating them with a model, and following a policy that depends on these proposals and their predicted results. Our IBP implements a recurrent policy capable of planning via four key components (Figure 1). On each step, the manager chooses whether to imagine or act. If acting, the controller produces an action which is executed in the world. If imagining, the controller produces an action which is evaluated by the model-based imagination. In both cases, data resulting from each step are aggregated by the memory and used to influence future actions. The collective activity of the IBP’s components supports various strategies for constructing, evaluating, and executing a plan.
On each iteration, $i$, the IBP either executes an action in the world or imagines the consequences of a proposed action. The actions executed in the world are indexed by $j$, and the sequence of imagination steps the IBP performs before an action are indexed by $k$. Through the planning and acting processes, two types of data are generated: external and internal. External data includes observable world states, $s_j$, executed actions, $a_j$, and obtained rewards, $r_j$. Internal data includes imagined states, $\hat{s}_{j,k}$, actions, $\hat{a}_{j,k}$, and rewards, $\hat{r}_{j,k}$, as well as the manager’s decision about whether to act or imagine (and how to imagine), termed the “route”, $\rho_i$, the number of actions and imaginations performed, and all other auxiliary information from each step. We summarize the external and internal data for a single iteration as $d_i$, and the history of all external and internal data up to, and including, the present iteration as $h_i = (d_0, \dots, d_i)$. The set of all imagined states since the previous executed action are $\hat{s}_{j,0}, \dots, \hat{s}_{j,k}$, where $\hat{s}_{j,0}$ is initialized as the current world state, $s_j$.
The manager, $\pi_M$, is a discrete policy which maps a history, $h$, to a route, $\rho$. The route determines whether the agent will execute an action in the environment, or imagine the consequences of a proposed action. If imagining, the route can also select which previously imagined, or real, state to imagine from. We define $\rho \in \{\text{act}, \hat{s}_{j,0}, \dots, \hat{s}_{j,k}\}$, where $\text{act}$ is the signal to act in the world, and the $\hat{s}_{j,0}, \dots, \hat{s}_{j,k}$ are the signals to propose and evaluate an action from the corresponding imagined state.
The controller, $\pi_C$, is a contextualized action policy which maps a state and a history to an action, $a$. The state which is provided as input to the controller is determined by the manager’s choice of $\rho$. If executing, the actual state, $s_j$, is always used. If imagining, the imagined state selected by the route is used, as mentioned above. There are different possible imagination strategies, detailed below, which determine which state is used for imagination.
The imagination, $I$, is a model of the world, which maps states, $s$, and actions, $a$, to consequent states, $s'$, and scalar rewards, $r$.
The memory, $\mu$, recurrently aggregates the external and internal data generated from one iteration, $d_i$, to update the history, i.e., $h_i = \mu(h_{i-1}, d_i)$.
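The interplay of the four components can be summarized as an act-or-imagine loop. The following Python sketch is purely illustrative: the function names, data layout, and toy interfaces are our own assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the IBP's act-or-imagine loop. The manager,
# controller, imagination, and memory are passed in as plain callables.
def ibp_episode(world, manager, controller, imagination, memory,
                history, s, max_steps=10):
    """Run the act-or-imagine loop for a fixed number of iterations."""
    imagined = [s]            # imagined states since the last real action
    trace = []                # record of route choices, for inspection
    for _ in range(max_steps):
        route = manager(history)              # 'act' or an index into `imagined`
        if route == "act":
            a = controller(s, history)
            s, r = world(s, a)                # execute in the real environment
            imagined = [s]                    # reset imagination to the new state
        else:
            base = imagined[route]            # state chosen to imagine from
            a = controller(base, history)
            s_hat, r = imagination(base, a)   # model-based evaluation
            imagined.append(s_hat)
        history = memory(history, (route, a, r))
        trace.append(route)
    return trace
```

With a manager that alternates between imagining from the current state and acting, the loop produces the expected interleaved pattern of imagination and execution.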
Constructing a plan involves the IBP choosing to propose and imagine actions, and building up a record of possible sequences of actions’ expected quality. If a sequence of actions predicted to yield high reward is identified, the manager can then choose to act and the controller can produce the appropriate actions.
We explored three distinct imagination-based planning strategies: “1-step”, “n-step”, and “tree” (see Figure 2). They differ only in how the manager selects, from the actual state and all imagined states since the last action, the state from which to propose and evaluate an action. For 1-step imagination, the IBP must imagine from the actual state, $s_j$. This induces a depth-1 tree of imagined states and actions (see Figure 2, first row of graphs). For n-step imagination, the IBP must imagine from the most recently imagined state, $\hat{s}_{j,k-1}$. This induces a depth-$n$ chain of imagined states and actions (see Figure 2, second row of graphs). For trees, the manager chooses whether to imagine from the actual state or any previously imagined state since the last actual action. This induces an “imagination tree”, because imagined actions can be proposed from any previously imagined state (see Figure 2, third row of graphs).
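The difference between the three strategies reduces to which states are available as roots for the next imagination step. A minimal illustrative helper (the function name and interface are assumptions, not the paper's code):

```python
# Candidate states the manager may imagine from, under each strategy.
def candidate_states(strategy, actual_state, imagined_states):
    """Return the states available as roots for the next imagination step."""
    if strategy == "1-step":
        return [actual_state]                      # always the real state
    if strategy == "n-step":
        # chain: the most recent imagined state, or the real state if none yet
        return [imagined_states[-1]] if imagined_states else [actual_state]
    if strategy == "tree":
        return [actual_state] + list(imagined_states)  # any node in the tree
    raise ValueError("unknown strategy: %s" % strategy)
```

This makes explicit why the tree strategy subsumes the other two: its candidate set contains both the 1-step and n-step choices.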
3 Experiment 1: Continuous control
3.1 Spaceship task
We evaluated our model in a challenging continuous control task adapted from the “Spaceship Task” in hamrick2017metacontrol , which we call “Spaceship Task 2.0” (see Figure 3 and videos at https://drive.google.com/open?id=0B3u8dCFTG5iVaUxzbzRmNldGcU0). The agent must pilot an initially stationary spaceship in 2-D space, from a random initial position and with a random mass, to its mothership, which is always at a fixed position. The agent can fire its thrusters with a force of its choice, which changes the ship’s velocity in proportion to the force and inversely to the ship’s mass. There are five stationary planets in the scene, at random positions and with varying masses. The planets’ gravitational fields accelerate the spaceship, which induces complex, non-linear dynamics. A single action entailed the ship firing its thrusters on the first time step, then traveling under ballistic motion for 11 time steps under the gravitational dynamics.
There are several other factors that influence possible solutions to the problem. The spaceship pilot must pay a fuel cost that increases linearly with the force magnitude once it exceeds a threshold value. This incentivizes the pilot to choose small thruster forces. We also included multiplicative noise in the control, which further incentivizes small controls and also bounds the resolution at which the future states of the system can be accurately predicted.
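The thresholded fuel cost can be written down directly. The sketch below is a minimal version; the threshold and scale constants are placeholders, not the task's actual values.

```python
import math

# Fuel cost that is zero below a threshold and grows linearly beyond it.
def fuel_cost(force, threshold=0.1, scale=1.0):
    """force: (fx, fy) thruster vector; cost grows linearly past the threshold."""
    magnitude = math.hypot(force[0], force[1])
    return scale * max(0.0, magnitude - threshold)
```

Small burns below the threshold are free, so the optimal pilot prefers gentle corrective thrusts over one large burn.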
3.2 Neural network implementation and training
We implemented our continuous IBP for the spaceship task using standard deep learning building blocks. The memory, $\mu$, was an LSTM hochreiter1997long . Crucially, the continuous IBP encodes a plan by embedding the history into a “plan context”, $c$, using $\mu$. The inputs to $\mu$ were the concatenation of a subset of the iteration’s data, $d_i$: for imagining, the imagined states, actions, and rewards; for acting, the actual states, actions, and rewards. The manager, $\pi_M$, and controller, $\pi_C$, were multi-layer perceptrons (MLPs). The $\pi_M$ took the plan context and a state as input, and output the route, $\rho$. The $\pi_C$ took the plan context and a state as input, and output an imagined or real action, for imagining or acting, respectively. And the imagination-based model of the environment, $I$, was an interaction network (IN) battaglia2016interaction , a powerful neural architecture for modeling graph-like data. It took a state and action as input and returned the consequent state and reward, whether applied to imagined or real state-action pairs.
The continuous IBP was trained to jointly optimize two loss terms, the external task and internal resource losses. The task loss reflects the cost of executing an action in the world, including the fuel cost and final distance to the mothership. The resource loss reflects the cost of using the imagination on a particular time step and only affects the manager. It could be fixed across an episode, or vary with the number of actions taken so far, expressing the constraint that imagining early is less expensive than imagining on-the-fly. The total loss that was optimized was the sum of the task and resource losses.
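The joint objective can be summarized in a few lines. The sketch below assumes a flat per-imagination resource cost for simplicity; as noted above, the actual cost could also vary with the number of actions taken so far.

```python
# Minimal sketch of the joint objective: external task loss (fuel costs plus
# final distance to the mothership) plus an internal resource cost that is
# charged for each imagination step. Constants are illustrative.
def total_loss(final_distance, fuel_costs, num_imaginations,
               resource_cost_per_step=0.05):
    task_loss = final_distance + sum(fuel_costs)
    resource_loss = resource_cost_per_step * num_imaginations
    return task_loss + resource_loss
```

Because only the manager is penalized by the resource term, raising `resource_cost_per_step` pushes the manager, but not the controller, toward acting instead of imagining.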
The training consisted of optimizing, by gradient descent (Adam kingma2014adam ), the parameters of the agent with respect to the task loss and internal resource costs. The computational graph of the IBP architecture was divided into three distinct subgraphs: 1) the model, 2) the manager, and 3) the controller and memory. Each subgraph’s learnable parameters were trained simultaneously, on-policy, but independently from one another.
The model was trained to make next-step predictions of the state in a supervised fashion, with error gradients computed by backpropagation. The data was collected from the observations the agent made when acting in the real world. Because the manager’s outputs were discrete, we used the REINFORCE algorithm williams1992simple to estimate its gradients. Its policy was stochastic, so we also applied entropy regularization during training to encourage exploration. The reward for the manager was the negative sum of the internal and external losses (the resource cost of each step is paid at each step, while the task loss is paid at the end of the sequence).
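For concreteness, here is a hand-derived score-function (REINFORCE) gradient with an entropy bonus for a toy two-route softmax policy. This is a didactic scalar example, not the manager's actual parameterization.

```python
import math

# Score-function gradient for a two-route policy pi = softmax([logit, 0]),
# maximizing  E[log pi(route) * return] + entropy_weight * H(pi).
def reinforce_grad(logit, route, ret, entropy_weight=0.01):
    p0 = 1.0 / (1.0 + math.exp(-logit))        # probability of route 0
    # d log pi(route) / d logit: (1 - p0) for route 0, -p0 for route 1
    glogp = (1.0 - p0) if route == 0 else -p0
    # entropy H(p0); with p0 = sigmoid(logit), dH/dlogit = -p0*(1-p0)*logit
    gent = -p0 * (1.0 - p0) * logit
    return ret * glogp + entropy_weight * gent
```

At a uniform policy (logit 0) the entropy term vanishes and the gradient simply pushes probability toward whichever route earned the higher return.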
The controller and memory were trained jointly. Because the manager’s output was non-differentiable, we treated the routes it chose for a given episode as constants during learning. This induced a computational graph which varied as a function of the route value, where the route acted as a switch that determined the gradients’ backward flow. To approximate the error gradient with respect to an action executed in the world, we used stochastic value gradients (SVG) heess2015learning . We unrolled the full recurrent loop of imagined and real controls, and computed the error gradients using backpropagation through time. The controller and memory were trained only to minimize the external loss, i.e., the fuel cost and the final L2 distance to the mothership, not including the imagination cost. The training regime is similar to the one used in hamrick2017metacontrol .
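The essence of the SVG-style update is that a differentiable model lets reward gradients flow back through the predicted dynamics into the action. A hand-derived scalar toy (our own illustration, not the paper's model): with dynamics $s' = s + a$ and reward $r = -(s')^2$, the chain rule gives $\partial r / \partial a = -2 s'$.

```python
# Toy stochastic-value-gradient-style update in one dimension, no noise:
# the reward gradient w.r.t. the action is obtained by differentiating
# through the (known, differentiable) model s' = s + a, r = -(s')**2.
def action_gradient(s, a):
    s_next = s + a            # model rollout
    return -2.0 * s_next      # dr/da = dr/ds' * ds'/da = -2*s_next * 1

def improve_action(s, a, lr=0.1, steps=50):
    """Gradient ascent on the model-predicted reward."""
    for _ in range(steps):
        a += lr * action_gradient(s, a)
    return a
```

Ascending this gradient drives the action toward $a = -s$, the thrust that exactly cancels the state, which is the analytic optimum of this toy problem.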
In terms of strategies, besides 1-step and n-step we used only a restricted version of the imagination-tree strategy, in which the manager selected from a small fixed set of states, as depicted in Figure 2. While this reduced the range of trees the model could construct, it was presumably easier to train because it had a fixed number of discrete route choices, independent of the number of imagination steps. Future work should explore using RNNs or local navigation within the tree to handle variable-sized sets of route choices for the manager.
Our first results show that the 1-step lookahead IBP can learn to use its model-based imagination to improve its performance in a complex, continuous, sequential control task. Figure 3’s first row depicts several example trajectories from a 1-step IBP that was granted three external actions and up to two imagined actions per external action (a maximum of nine imagined and external actions). The IBP learned to often use its first two external actions to navigate to open regions of space, presumably to avoid the complex nonlinear dynamics imposed by the planets’ gravitational fields, and then use its final action to seek the target location. It used its imagined actions to try out potential actions, which it refined iteratively. Figure 4a shows the performance of different IBP agents, which were granted one, two, or three external actions, and different maximum numbers of imagined actions. The task loss always decreased as a function of the maximum number of imagined actions. The version which was allowed only one external action (blue line) roughly corresponded to Hamrick et al.’s hamrick2017metacontrol IBMC. The IBP agents that could not use their imagination (left-most points, 0 imaginations per action) represent the SVG baselines on this domain: they could use their model to compute policy gradients during training, but could not use their model to evaluate proposed actions. These results provide clear evidence of the value of imagination for sequential decision-making, even with only 1-step lookahead.
We also examined the 1-step lookahead IBP’s use of imagination as greater resource costs were imposed (Figure 4b), and found that it smoothly decreased the total number of imagination steps to avoid this additional penalty, eventually using no imagination at all under high resource costs. The reason the low-cost agents (toward the left side of Figure 4b) did not use the full six imaginations allowed is the small entropy reward, which incentivized them to learn a policy with increased variance in its route choices. Figure 4c shows that the result of increased imagination cost, and the consequent decreased use of imagination, was an increase in task loss.
We also trained an n-step IBP in order to evaluate whether the IBP could construct longer-term plans, as well as a restricted imagination-tree IBP to determine whether it could learn more complex plan construction strategies. Figure 3’s second row shows n-step IBP trajectories, and the third and fourth rows show two imagination-tree IBP trajectories. In this particular task, after each execution the cost of an imagination step increases, making it more advantageous to plan early. Our results (Figure 4d-e) show that n-step IBP performance is better than 1-step, and that an imagination-tree IBP outperforms both, especially when more maximum imagination steps are allowed, for the two values of fuel cost we applied. Note that the imagination-tree IBP becomes more useful as the agent has more imagination steps to use.
4 Experiment 2: Discrete mazes
In the second, preliminary experiment, a discrete 2D maze-solving task, we implemented a discrete IBP to probe its use of different imagination strategies. For simplicity, the imagination used a perfect world model. The controller is given by a tabular representation, and the history represents incremental changes to it, caused by imagination steps. The memory’s aggregation functionality was implemented by accumulating the updates induced by imagination steps into the history. The controller’s policy was determined by the sum of the learned Q-value and the history tensor. The manager was a convolutional neural network (CNN) that took as input the current Q table, history tensor, and map layout.
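The discrete controller's behavior policy, as described above, is greedy over the sum of the learned Q-values and the accumulated history of imagination-induced updates. A minimal sketch (the list-based representation and function name are our own simplifications of the tabular/tensor setup):

```python
# Greedy policy over learned Q-values plus the history tensor of
# imagination-induced updates, for a single maze position.
def greedy_action(q_values, history_updates):
    """q_values, history_updates: per-action scores for the current state."""
    scores = [q + h for q, h in zip(q_values, history_updates)]
    return max(range(len(scores)), key=scores.__getitem__)
```

A large history update can thus override the default Q-greedy choice, which is exactly how imagination rollouts redirect the agent within a single episode.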
In this particular instance we explored IBP as a search strategy. We created mazes for which states would be aliased in the tabular Q representation. Each maze could have multiple candidate goal locations, and in each episode a goal location was selected at random from the candidates. An agent was instantiated to use a single set of Q-values for each maze and all its different goal locations. This would also allow a model-based planning agent to use imagined rollouts to disambiguate states and generalize to goal locations unseen during training.
More results on these maze tasks can be found in the supplementary material.
Single Maze Multiple Goals: We start by exploring state aliasing and generalization to out-of-distribution states. Figure 6 top row shows the prototypical setup we consider for this exploration, along with imagination rollout examples. During training, the agent only saw the first three goal positions, selected at random, but never the fourth. The reward was -1 for every step, except at the end, when the agent received as reward the number of steps left in its total budget of 20 steps.
The number of available actions at each position in the maze was small, so we limited the manager to a fixed policy that always expanded the leaf node in the search tree with the highest value. Figure 6 also shows the effect of different numbers of imagination steps on disambiguating the evaluated maze. With sufficient imagination steps, not only could the agent find all possible goals, but, most importantly, it could resolve new goal positions not experienced during training.
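The fixed manager policy described above is a best-first expansion loop. The sketch below is illustrative; `expand` and `value` stand in for the maze's successor function and the agent's value estimates, and are assumptions rather than the paper's code.

```python
import heapq

# Best-first imagination: repeatedly expand the open leaf with the
# highest value, up to a fixed imagination budget.
def best_first_imagination(root, expand, value, budget):
    """Return the nodes expanded, in order, over `budget` steps."""
    frontier = [(-value(root), 0, root)]   # max-heap via negated values
    counter = 1                            # tie-breaker so the heap never compares nodes
    visited = []
    for _ in range(budget):
        if not frontier:
            break
        _, _, node = heapq.heappop(frontier)
        visited.append(node)
        for child in expand(node):
            heapq.heappush(frontier, (-value(child), counter, child))
            counter += 1
    return visited
```

On a tiny tree, the loop skips low-value branches and dives into the most promising one first.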
Multiple Mazes Multiple Goals: Here we illustrate how a learned manager adapts to different environments and proposes different imagination strategies. We used different mazes, some with more possible goal positions than others, and with different optimal strategies. We introduced a regularity that the manager could exploit: we made the wall layouts be correlated with the number of possible goal positions. The IBP learned a separate tabular control policy for each maze, but used a shared manager across all mazes.
Figure 6’s bottom row shows several mazes and one example run of a learned agent. Note that the maze with two goals cannot be disambiguated using just 1-step imagination; longer imagination sequences are needed. And the n-step IBP will have trouble resolving the maze with four goals in a small number of steps, because it cannot jump back and check for a different hypothesized goal position. The imagination-tree IBP could adapt its search strategy to the maze, and thus achieved an overall average reward that was roughly 25% closer to the optimum than either the 1-step or n-step IBPs.
This experiment also exposes a limitation of the simple imagination-tree strategy, in which the manager can only choose between continuing to imagine from the most recent imagined state or resetting back to the current world state. So if a different path should be explored, but part of it overlaps with a previously explored path, the agent must waste imagination steps reconstructing the overlapping segment.
One preliminary way to address this is to allow the manager to work with fixed-length macro-actions produced by the control policy, which effectively increases the imagination trees’ branching factor while making the trees more shallow. In the Supplementary Material we show the benefits of using macro-actions on a scaled-up (7×7) version of the tasks considered in Figure 6. Figure 5 shows the most common imagination trees constructed by the agent, highlighting the diversity of the IBP’s learned planning strategies.
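A macro-action simply composes several primitive steps into a single tree edge. A minimal sketch, where `step` stands in for the environment or model transition function (an assumed interface):

```python
# A fixed-length macro-action: a sequence of primitive actions applied as
# one imagination-tree edge, accumulating the intermediate rewards.
def apply_macro(state, macro, step):
    """Roll a list of primitive actions forward through `step`."""
    total_reward = 0.0
    for action in macro:
        state, reward = step(state, action)
        total_reward += reward
    return state, total_reward
```

Each edge now covers several primitive moves, so the same imagination budget reaches states that would otherwise require a much deeper tree.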
We introduced a new approach for model-based planning that learns to construct, evaluate, and execute plans. Our continuous IBP represents the first fully learnable method for model-based planning in continuous control problems, using its model for both imagination-based planning and gradient-based policy optimization. Our results in the spaceship task show how the IBP’s learned planning strategies can strike favorable trade-offs between external task loss and internal imagination costs, by sampling alternative imagined actions, chaining together sequences of imagined actions, and developing more complex imagination-based planning strategies. Our results on a 2D maze-solving task illustrate how it can learn to build flexible imagination trees, which inform its plan construction process and improve overall performance.
In the future, the IBP should be applied to more diverse and natural decision-making problems, such as robotic control and higher-level problem solving. Other environment models should be explored, especially those which operate over raw sensory observations. The fact that the imagination is differentiable can be exploited for another form of model-based reasoning: online gradient-based control optimization.
Our work demonstrates that the core mechanisms for model-based planning can be learned from experience, including implicit and explicit domain knowledge, as well as flexible plan construction strategies. By implementing essential components of a planning system with powerful, neural network function approximators, the IBP helps realize the promise of model-based planning and opens new doors for learning solution strategies in challenging problem domains.
-  Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pages 4502–4510, 2016.
-  Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. arXiv:1511.06297, 2015.
-  Yoshua Bengio. Deep learning of representations: Looking forward. arXiv:1305.0445, 2013.
-  Lucian Busoniu, Robert Babuska, Bart De Schutter, and Damien Ernst. Reinforcement learning and dynamic programming using function approximators, volume 39. CRC press, 2010.
-  Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In International Conference on Computers and Games, pages 72–83. Springer, 2006.
-  Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In IEEE International Conference on Robotics and Automation (ICRA), 2017.
-  Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning Visual Predictive Models of Physics for Playing Billiards. Proceedings of the International Conference on Learning Representations (ICLR 2016), pages 1–12, 2015.
-  Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
-  Jessica B. Hamrick, Andrew J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W. Battaglia. Metacontrol for adaptive imagination-based optimization, 2017.
-  Nicholas Hay, Stuart J. Russell, David Tolpin, and Solomon Eyal Shimony. Selecting computations: Theory and applications. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, 2012.
-  Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pages 2944–2952, 2015.
-  Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
-  Eric J. Horvitz. Reasoning about beliefs and actions under computational resource constraints. In Uncertainty in Artificial Intelligence, Vol. 3, 1988.
-  Jean-Francois Hren and Rémi Munos. Optimistic planning of deterministic systems. In European Workshop on Reinforcement Learning, pages 151–164. Springer Berlin Heidelberg, 2008.
-  Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  Ian Lenz, Ross A Knepper, and Ashutosh Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems, 2015.
-  Rémi Munos. Optimistic optimization of a deterministic function without the knowledge of its smoothness. In NIPS, pages 783–791, 2011.
-  Stuart Russell and Eric Wefald. Principles of metareasoning. Artificial Intelligence, 49(1):361 – 395, 1991.
-  David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
-  David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810, 2016.
-  Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, 1991.
-  Aviv Tamar, Sergey Levine, and Pieter Abbeel. Value Iteration Networks. Advances in Neural Information Processing Systems, 2016.
-  Alexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals, Koray Kavukcuoglu, et al. Strategic attentive writer for learning macro-actions. arXiv preprint arXiv:1606.04695, 2016.
-  Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2746–2754, 2015.
-  Theophane Weber, Sebastien Racaniere, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, David Silver, and Daan Wierstra. Imagination-augmented agents for deep reinforcement learning. arXiv preprint arXiv:1707.06203, 2017.
-  Ari Weinstein and Michael L Littman. Bandit-based planning and learning in continuous-action markov decision processes. In ICAPS, 2012.
-  Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Appendix: More About the Maze Tasks
Single maze multiple goals: The 4 mazes used in the single-maze, multiple-goals study are shown in Figure 7. Under the reward scheme introduced in the main paper, each of the 4 mazes has a different optimal reward (normalized by 20, the episode length).
For an agent that does not use imagination, the optimal strategy would be to go to the closest goal all the time. However, given enough imagination steps, the agent would be able to disambiguate different environments, and act accordingly.
Figure 8 shows the imagination rollouts along with the corresponding updates to the planning context, $c$. It is clear from the updates that imagination helps adjust the agent’s policy to better adapt to the particular episode.
Multiple mazes multiple goals: Figure 9 shows the configuration of the 5×5 mazes we used for this exploration. Here each maze may have more than one goal position, and the agent has one set of Q-values for each of the mazes. Along with the maze configurations we also show the default policy learned for these mazes, which essentially points to the closest possible goal location.
Figure 11 (left) shows the comparison between different imagination strategies, where we compare the learned neural-network imagination strategy with fixed strategies including 1-step, n-step, and no imagination. For this comparison, all imagination-based methods have the same imagination budget at each step, but the learned strategy can adapt better to different scenarios.
Figure 10 shows the 7×7 mazes we used as a scaled-up version of the 5×5 mazes. These are more challenging, as the agent needs to explore a much bigger space to obtain useful information. We propose to use “macro-actions”: rollouts of length greater than 1 used as basic units. Figure 11 (right) shows the comparison between different imagination strategies for this task. Using macro-actions consistently improves performance, while not using them does not perform as well, potentially because this is a much harder learning problem; in particular, the action sequences are much longer and the useful reward information is delayed even further.
Figure 11 caption: On the 7×7 maze, all methods have the same budget of 8 imagination steps. To make the overall trend more visible, the solid lines show the exponential moving average and the shaded regions correspond to the exponential moving standard deviation. On the 7×7 maze, it is beneficial to operate on 2- or 4-step “macro-actions” instead of exploring simple actions only.
Appendix: More About the Spaceship Task
The setup we used followed closely that of hamrick2017metacontrol , and most details are identical to the ones described in that paper. To reiterate, we used Adam as the optimizer, where the learning rate was decreased by 5% every time the validation error increased. Validation error was computed at fixed intervals. Each iteration consists of evaluating the gradient over a minibatch of episodes. We used different initial learning rates for the different components: one for the model of the world, one for the controller, and one for the manager. We relied on gradient clipping, with a fixed maximum allowed gradient norm. The agents were trained for a fixed total number of iterations. The pseudocode of our most complex agent is given below: