1 Introduction
Reinforcement learning (RL) has quickly become one of the most promising branches of machine learning in the pursuit of artificial general intelligence
[10, 12, 2, 16]. There have been several breakthroughs in reinforcement learning in recent years for relatively simple environments [14, 15, 6, 21], but no algorithm is yet capable of human-level performance in situations where complex policies must be learned. A number of open research questions therefore remain in reinforcement learning. It is possible that many of these problems can be resolved by algorithms that adequately account for planning, exploration, and memory at different time horizons. In current state-of-the-art RL algorithms, long-horizon tasks are difficult to master because there is as of yet no optimal exploration algorithm capable of proper state-space pruning. Exploration strategies such as
ε-greedy are widely used in RL, but cannot strike an adequate exploration/exploitation balance without significant hyperparameter tuning. Environment modeling is a promising exploration technique in which a model learns to imitate the behavior of the target environment. This limits the required interaction with the target environment, enabling nearly unlimited exploration without the cost of exhausting the target environment. In addition to environment modeling, a balance between exploration and exploitation must still be maintained, and it is therefore essential for the environment model to receive feedback from the RL agent.
By combining the ideas of variational autoencoders with deep RL agents, we find that it is possible for agents to learn optimal policies using only generated training samples. We present this approach as the Dreaming Variational Autoencoder (DVAE). We also introduce a new learning environment, Deep Maze, which poses a broad set of challenges for reinforcement learning algorithms and serves as the test environment for the DVAE algorithm.
This paper is organized as follows. Section 3 briefly introduces the reader to preliminaries. Section 4 proposes the Dreaming Variational Autoencoder for environment modeling to improve exploration in RL. Section 5 introduces the Deep Maze learning environment for research on exploration, planning, and memory in reinforcement learning. Section 6 presents results in the Deep Line Wars environment and shows that RL agents can be trained to navigate the Deep Maze environment using only artificial training data.
2 Related Work
In machine learning, the goal is to construct an algorithm capable of accurately modeling some environment. There is, however, little research on game environment modeling at the scale we propose in this paper. The primary focus of recent RL research has been on the value and policy aspects of RL algorithms, while less attention has been paid to perfecting environment modeling methods.
In 2016, the work in [3] proposed a method of deducing the Markov Decision Process (MDP) by introducing an adaptive exploration signal (pseudo-reward), obtained using a deep generative model. Their method computed the Jacobian of each state and used it as the pseudo-reward when training deep neural networks to learn state generalization.
Xiao et al. proposed in [22] the use of generative adversarial networks (GANs) for model-based reinforcement learning. The goal was to utilize a GAN to learn the dynamics of the environment over a short-horizon time span and to combine this with the strength of far-horizon value iteration RL algorithms. The proposed GAN architecture produced near-authentic generated images, giving results comparable to [14].
In [9], Higgins et al. proposed DARLA, an architecture for modeling the environment using a VAE [8]. The trained model was used to extract the optimal policy of the environment using algorithms such as DQN [15], A3C [13], and Episodic Control [4]. DARLA is, to the best of our knowledge, the first algorithm to properly introduce learning without access to the target environment during training.
Buesing et al. [5] recently compared several methods of environment modeling, showing that it is far better to model the state-space than to use Monte-Carlo rollouts (RAR). Their proposed architecture, state-space models (SSM), was significantly faster and produced acceptable results compared to auto-regressive (AR) methods.
Ha and Schmidhuber proposed in [7] World Models, a novel architecture for training RL algorithms using variational autoencoders. This paper showed that agents could successfully learn the environment dynamics and use this as an exploration technique requiring no interaction with the target domain.
3 Background
We base our work on the well-established theory of reinforcement learning and formulate the problem as an MDP [20]. An MDP is a tuple (S, A, T, R) that defines the environment as a model. The state-space S represents all possible states, while the action-space A represents all available actions the agent can perform in the environment. T denotes the transition function T(s_t, a_t) = s_{t+1}, which is a mapping from state s_t and action a_t to the future state s_{t+1}. After each performed action, the environment dispatches a reward signal, r_t = R(s_t, a_t).
We call a sequence of states and actions a trajectory, denoted τ = (s_0, a_0, s_1, a_1, …, s_T). The sequence is sampled through the use of a stochastic policy that predicts the optimal action in any state: a_t ∼ π_θ(a_t | s_t), where π is the policy and θ are its parameters. The primary goal of reinforcement learning is to reinforce good behavior: the algorithm should learn the policy that maximizes the total expected discounted reward, E[Σ_{t=0}^{T} γ^t r_t] [15].
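As a concrete illustration (our own example, not from the original text), the total discounted reward for a short trajectory can be computed directly from the definition above:

```python
def discounted_return(rewards, gamma=0.99):
    """Total discounted reward: the sum over t of gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A sparse-reward trajectory: reward only at the final step.
rewards = [0.0, 0.0, 0.0, 1.0]
print(discounted_return(rewards, gamma=0.9))  # 0.9^3 * 1.0 = 0.729
```

With γ close to 1, distant rewards retain most of their value; with small γ, the agent is pushed toward short-horizon behavior.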
4 The Dreaming Variational Autoencoder
The Dreaming Variational Autoencoder (DVAE) is an end-to-end solution for generating probable future states ŝ_{t+1} from an arbitrary state-space using the state-action pairs explored prior to s_t and a_t. The DVAE algorithm, seen in Figure 1, works as follows. First, the agent collects experiences for experience replay in the RunAgent function. At this stage, the agent explores the state-space guided by a Gaussian distributed policy. The agent acts, observes, and stores the observations into the experience-replay buffer D. After the agent reaches a terminal state, the DVAE algorithm encodes state-action pairs from the replay buffer into probable future states, which are stored in the replay buffer for artificial future states, D̂.

[Table 1: binary grid encodings of four consecutive real states s_1, …, s_4 and the corresponding generated states; the first generated entry is N/A because generation starts from the observed s_1.]
Table 1 illustrates how the algorithm can generate sequences of artificial trajectories using ŝ_{t+1} = ψ(φ(s_t, a_t)), where φ is the encoder and ψ is the decoder. With state s_1 and action a_1 as input, the algorithm generates the state ŝ_2, which in the table can be observed to be similar to the real state s_2. With the next input, (ŝ_2, a_2), the DVAE algorithm generates the next state ŝ_3, which again can be observed to be equal to s_3. Note that this happens without ever observing the real state s_2. Hence, the DVAE algorithm only needs to be initialized with a state, e.g. s_1, and a sequence of actions; it then generates (dreams) the subsequent states.
The requirement is that the environment must be partially discovered so that the algorithm can learn to behave similarly to the target environment. To predict a trajectory of three timesteps, the algorithm nests its own predictions to generate the whole sequence, feeding each generated state back in with the next action to produce ŝ_{t+1}, ŝ_{t+2}, and ŝ_{t+3}. The algorithm does this well early on, but has difficulties with sequences longer than eight timesteps in continuous environments.
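The nested "dreaming" rollout can be sketched as follows. This is our own minimal illustration, not the paper's code: the trained encoder/decoder pair is replaced by a simple lookup table standing in for the learned transition model.

```python
def make_model(transitions):
    """transitions: dict of (state, action) -> next_state, a stand-in
    for the trained encoder/decoder pair of the environment model."""
    def model(state, action):
        return transitions[(state, action)]
    return model

def dream(model, s0, actions):
    """Generate a trajectory from an initial state and a list of actions
    by feeding each predicted state back into the model."""
    states, s = [s0], s0
    for a in actions:
        s = model(s, a)     # predicted next state stands in for the real one
        states.append(s)
    return states

# Toy 1-D corridor: states 0..3, action +1 moves one tile right.
transitions = {(s, +1): s + 1 for s in range(3)}
model = make_model(transitions)
print(dream(model, 0, [+1, +1, +1]))  # [0, 1, 2, 3]
```

Note that only the initial state is observed; every later input to the model is itself a prediction, which is why errors compound over long dreamed sequences.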
5 Environments
The DVAE algorithm was tested on two game environments. The first is Deep Line Wars [1], a simplified real-time strategy game. The second is Deep Maze, a flexible environment that we introduce here, with a wide range of challenges suited for reinforcement learning research.
5.1 The Deep Maze Environment
The Deep Maze is a flexible learning environment for controlled research in exploration, planning, and memory for reinforcement learning algorithms. Maze solving is a well-known problem that is used heavily throughout the RL literature [20], but is often limited to small, fully-observable scenarios. The Deep Maze environment extends the maze problem to over 540 unique scenarios, including Partially-Observable Markov Decision Processes (POMDPs). Figure 2 illustrates a small subset of the available environments, ranging from small-scale MDPs to large-scale POMDPs. Deep Maze further features custom game mechanics such as relocated exits and dynamically changing mazes.
The game engine is modularized and has an API that provides a flexible toolset for third-party scenarios. This extends the capabilities of Deep Maze to support nearly all possible scenario combinations in the realm of maze solving. The Deep Maze is open-source and publicly available at https://github.com/CAIR/deepmaze.
5.1.1 State Representation
RL agents depend on sensory input to evaluate and predict the best action at the current timestep. Preprocessing of the data is essential so that agents can extract features from the input. For this reason, Deep Maze has built-in state representations for RGB images, grayscale images, and raw state matrices.
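As a rough illustration of such preprocessing (our own sketch, not the environment's actual API), an RGB observation can be reduced to grayscale with the standard luminance weighting:

```python
def rgb_to_grayscale(image):
    """Convert an H x W x 3 nested-list RGB image to H x W grayscale
    using the ITU-R BT.601 luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in image]

# A 1x2 image: one pure-red pixel and one pure-white pixel.
img = [[(255, 0, 0), (255, 255, 255)]]
gray = rgb_to_grayscale(img)
print([round(v, 3) for v in gray[0]])  # [76.245, 255.0]
```

Grayscale halves neither width nor height but collapses the channel dimension, shrinking the input an agent's network must process by a factor of three.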
5.1.2 Scenario Setup
The Deep Maze learning environment ships with four scenario modes: (1) Normal, (2) POMDP, (3) Limited POMDP, and (4) Timed Limited POMDP.
The first mode exposes a seed-based, randomly generated maze where the state-space is modeled as an MDP. The second mode narrows the state-space observation to a configurable area around the player. In addition to radius-based vision, the POMDP mode features ray-traced vision that better mimics the sight of a physical agent. The third and fourth modes are intended for memory research, where the agent must find the goal in a limited number of timesteps. In addition, the agent is initially presented with the solution, which fades after a few timesteps. The objective is for the agent to remember the solution in order to find the goal. All scenario setups have a variable map size.
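The radius-based POMDP observation described above can be sketched as a simple mask over the full state matrix. This is a hypothetical stand-in for the environment's own implementation, with -1 marking hidden tiles:

```python
def pomdp_observation(grid, px, py, radius):
    """Mask a full maze grid so only tiles within `radius` (Chebyshev
    distance) of the player at (px, py) remain visible; hidden tiles
    are marked -1."""
    h, w = len(grid), len(grid[0])
    return [[grid[y][x] if max(abs(x - px), abs(y - py)) <= radius else -1
             for x in range(w)]
            for y in range(h)]

# 3x3 maze (1 = wall), player in the top-left corner, vision radius 1.
grid = [[0, 0, 1],
        [0, 1, 0],
        [1, 0, 0]]
obs = pomdp_observation(grid, 0, 0, 1)
print(obs)  # [[0, 0, -1], [0, 1, -1], [-1, -1, -1]]
```

Keeping the observation the same shape as the full grid, rather than cropping it, lets the same network architecture handle both the MDP and POMDP modes.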
5.2 The Deep Line Wars Environment
The Deep Line Wars environment was first introduced in [1]. Deep Line Wars is a real-time strategy environment that makes an extensive state-space reduction to enable swift research in reinforcement learning for RTS games.
The objective of Deep Line Wars is to invade the enemy player with mercenary units until all their health points are depleted (see Figure 3). For every friendly unit that enters the far edge of the enemy base, the enemy health pool is reduced by one. When a player purchases a mercenary unit, it spawns at a random location inside the edge area of the buyer's base. Mercenary units automatically move towards the enemy base. To protect the base, players can construct towers that shoot projectiles at the opponent's mercenaries. When a mercenary dies, a percentage of its gold value is awarded to the opponent. When a player sends a unit, the player's income is increased by a percentage of the unit's gold value. As part of the income system, players gain gold at fixed intervals.
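The economy described above can be sketched as follows. The specific rates used here (10% income gain per sent unit, a 50% kill bounty, and the starting gold) are illustrative assumptions, not values from the game:

```python
class Player:
    """Minimal stand-in for a Deep Line Wars player economy."""
    def __init__(self, gold=100.0, income=10.0):
        self.gold = gold
        self.income = income

    def send_unit(self, unit_cost):
        """Buying a unit costs gold but raises recurring income
        (assumed rate: 10% of the unit's gold value)."""
        self.gold -= unit_cost
        self.income += 0.10 * unit_cost

    def award_kill(self, unit_cost):
        """Killing an enemy unit awards part of its gold value
        (assumed rate: 50%)."""
        self.gold += 0.50 * unit_cost

    def tick(self):
        """Fixed-interval income payout."""
        self.gold += self.income

p = Player()
p.send_unit(20)   # gold drops to 80, income rises to 12
p.tick()          # income payout brings gold back up to 92
```

The interesting tension for an agent is visible even in this sketch: spending gold on units weakens immediate defense but compounds through the income mechanic.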
6 Experiments
6.1 Deep Maze Environment Modeling using DVAE
The DVAE algorithm must be able to generalize over many similar states to model a vast state-space. DVAE aims to learn the transition function that brings the state from s_t to s_{t+1}. We use the Deep Maze environment because it provides simple rules with a controllable state-space complexity. Also, we can omit the importance of the reward in some scenarios.
We trained the DVAE model on two no-wall Deep Maze scenarios of different sizes. For the encoder and decoder, we used the same convolutional architecture as proposed by [17], training for 5,000 and 1,000 epochs on the two scenarios, respectively. For the encoding of actions and states, we concatenated the flattened state-space and action-space, applying a fully-connected layer with ReLU activation before calculating the latent-space. We used the Adam optimizer [11] with a learning rate of 1e-8 to update the parameters.

Figure 4 illustrates the loss of the DVAE algorithm in the no-wall Deep Maze scenario. When DVAE is trained on only 50% of the state-space, noticeable graphical artifacts appear in the predictions of future states; see Figure 5. When the environment is instead fully explored, we see in Figure 6 that the artifacts are greatly reduced.
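The state-action encoding step described above can be sketched in plain Python as follows. This is a hypothetical forward pass for illustration only; the actual model uses the convolutional architecture of [17]:

```python
def one_hot(index, size):
    """One-hot encode a discrete action index."""
    v = [0.0] * size
    v[index] = 1.0
    return v

def encode_input(state_matrix, action, n_actions):
    """Concatenate the flattened state with a one-hot action vector,
    as done before the fully-connected ReLU layer of the encoder."""
    flat_state = [x for row in state_matrix for x in row]
    return flat_state + one_hot(action, n_actions)

def relu_layer(x, weights, bias):
    """A single fully-connected layer with ReLU activation."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(weights, bias)]

# 2x2 state and 4 actions give an input vector of length 8.
state = [[1.0, 0.0], [0.0, 1.0]]
x = encode_input(state, action=2, n_actions=4)
# One hidden unit with all-ones weights: ReLU of the sum of x.
h = relu_layer(x, weights=[[1.0] * 8], bias=[0.0])
print(len(x), h)  # 8 [3.0]
```

Concatenating the action with the state lets a single network condition its latent-space, and hence the decoded next state, on which action was taken.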
Algorithm | Avg. Performance (first maze) | Avg. Performance (second maze)
DQN       | @ 9314                        | @ N/A
TRPO      | @ 5320                        | @ 7401
PPO       | @ 3151                        | @ 7195
DQN       | @ 4314                        | @ 8241
TRPO      | @ 3320                        | @ 4120
PPO       | @ 2453                        | @ 2904
6.2 Using the Generated Replay Buffer for RL Agents in Deep Maze
The goal of this experiment is to observe the performance of RL agents using the generated experience-replay buffer from Figure 1 in Deep Maze environments of two sizes. In Table 2, we compare the performance of DQN [14], TRPO [18], and PPO [19] when using the DVAE-generated replay buffer to tune their parameters.
Figure 7 illustrates three maze variations in which the agent has learned the optimal path. We see that the best-performing algorithm, PPO [19], beats DQN and TRPO using either the real or the generated replay buffer. The DQN agent did not converge in one of the environments; it is likely that value-based algorithms struggle with the graphical artifacts generated by the DVAE algorithm. These artifacts significantly increase the state-space, so direct-policy algorithms could perform better.
6.3 Deep Line Wars Environment Modeling using DVAE
The DVAE algorithm also works well in more complex environments, such as the Deep Line Wars game environment [1]. Here, we expand the DVAE algorithm with an LSTM to improve the interpretation of animations, as illustrated in Figure 1.
Figure 8 illustrates the state quality during the training of DVAE over a total of 6,000 episodes (epochs). Both players draw actions from a Gaussian distributed policy. After only 50 epochs, the algorithm understands that player units can be located in any tile, and at epoch 1,000 we observe that it makes a more accurate statement of the probability of unit locations (i.e., some units have increased intensity). At the end of the training, the DVAE algorithm is to some degree capable of determining both tower and unit locations at any given timestep during a game episode.
7 Conclusion and Future Work
This paper introduces the Dreaming Variational Autoencoder (DVAE), a neural-network-based generative modeling architecture that enables exploration in environments with sparse feedback. The DVAE shows promising results in modeling simple non-continuous environments. For continuous environments, such as Deep Line Wars, DVAE performs better with a recurrent neural network architecture (LSTM), while a sequential feed-forward architecture is sufficient for non-continuous environments such as Chess, Go, and Deep Maze.
There are, however, several fundamental issues that limit DVAE from fully modeling environments. In some situations, exploration may be so costly that it is impossible to explore the environment in its entirety. DVAE cannot accurately predict the outcome of unexplored areas of the state-space, making those predictions blurry or false.
Reinforcement learning has many unresolved problems, and our hope is that the Deep Maze learning environment will become a useful tool for future research. For future work, we plan to extend the model to also learn the reward function using inverse reinforcement learning. DVAE remains ongoing research, and the goal is for reinforcement learning algorithms to utilize this form of dreaming to reduce the need for exploration in real environments.
References

[1]
Andersen, P.A., Goodwin, M., Granmo, O.C.: Towards a deep reinforcement learning approach for tower line wars. In: Bramer, M., Petridis, M. (eds.) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). vol. 10630 LNAI, pp. 101–114 (2017)
 [2] Arulkumaran, K., Deisenroth, M.P., Brundage, M., Bharath, A.A.: Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine 34(6), 26–38 (2017)
 [3] Bangaru, S.P., Suhas, J., Ravindran, B.: Exploration for Multitask Reinforcement Learning with Deep Generative Models. arxiv preprint arXiv:1611.09894 (nov 2016)
 [4] Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J.Z., Rae, J., Wierstra, D., Hassabis, D.: ModelFree Episodic Control. arxiv preprint arXiv:1606.04460 (jun 2016)
 [5] Buesing, L., Weber, T., Racaniere, S., Eslami, S.M.A., Rezende, D., Reichert, D.P., Viola, F., Besse, F., Gregor, K., Hassabis, D., Wierstra, D.: Learning and Querying Fast Generative Models for Reinforcement Learning. arxiv preprint arXiv:1802.03006 (feb 2018)
 [6] Chen, K.: Deep Reinforcement Learning for Flappy Bird. cs229.stanford.edu p. 6 (2015)
 [7] Ha, D., Schmidhuber, J.: World Models. arxiv preprint arXiv:1803.10122 (mar 2018)
 [8] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., Lerchner, A.: betaVAE: Learning Basic Visual Concepts with a Constrained Variational Framework. International Conference on Learning Representations (nov 2016)
 [9] Higgins, I., Pal, A., Rusu, A., Matthey, L., Burgess, C., Pritzel, A., Botvinick, M., Blundell, C., Lerchner, A.: DARLA: Improving ZeroShot Transfer in Reinforcement Learning. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 70, pp. 1480–1490. PMLR, International Convention Centre, Sydney, Australia (2017)
 [10] Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research (apr 1996)
 [11] Kingma, D.P., Ba, J.L.: Adam: A Method for Stochastic Optimization. Proceedings, International Conference on Learning Representations 2015 (2015)
 [12] Li, Y.: Deep Reinforcement Learning: An Overview. arxiv preprint arXiv:1701.07274 (jan 2017)
 [13] Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., Kavukcuoglu, K.: Asynchronous Methods for Deep Reinforcement Learning. In: Balcan, M.F., Weinberger, K.Q. (eds.) Proceedings of The 33rd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 48, pp. 1928–1937. PMLR, New York, New York, USA (2016)
 [14] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with Deep Reinforcement Learning. Neural Information Processing Systems (dec 2013)
 [15] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Humanlevel control through deep reinforcement learning. Nature 518(7540), 529–533 (feb 2015)
 [16] Mousavi, S.S., Schukat, M., Howley, E.: Deep Reinforcement Learning: An Overview. In: Bi, Y., Kapoor, S., Bhatia, R. (eds.) Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016. pp. 426–440. Springer International Publishing, Cham (2018)

[17]
Pu, Y., Gan, Z., Henao, R., Yuan, X., Li, C., Stevens, A., Carin, L.: Variational Autoencoder for Deep Learning of Images, Labels and Captions. In: Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems. pp. 2352–2360. Curran Associates, Inc. (2016)
 [18] Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust Region Policy Optimization. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 37, pp. 1889–1897. PMLR, Lille, France (2015)
 [19] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal Policy Optimization Algorithms. arxiv preprint arXiv:1707.06347 (jul 2017)
 [20] Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, vol. 9. MIT Press (1998)
 [21] Van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., Tsang, J.: Hybrid Reward Architecture for Reinforcement Learning. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems 30, pp. 5392–5402. Curran Associates, Inc. (2017)
 [22] Xiao, T., Kesineni, G.: Generative Adversarial Networks for Model Based Reinforcement Learning with Tree Search. Tech. rep., University of California, Berkeley (2016)